Google announced the dissolution of its AI ethics advisory panel on April 4, barely more than a week after announcing its formation on March 26. The panel was to be tasked with providing perspective on some of Google’s toughest ethical questions, including the fair use of machine learning and the use of sensitive technologies such as facial recognition. However, Google employees and members of the public quickly raised concerns over the makeup of the panel. Specifically, Kay Coles James’ position on the panel was questioned due to her “vocally anti-trans, anti-LGBTQ, and anti-immigrant” views. This is not the first time Google employees have acted as a conscience for the organisation. Last summer, employees successfully protested Project Maven, which would have used Google’s AI and facial recognition capabilities to improve US military drone targeting.

Google’s recent stumble reflects the wider struggle facing big tech companies as they try to find the best way to enshrine ethical standards and governance. Facebook has backed an external research body at the Technical University of Munich. Meanwhile, Amazon has drawn recent criticism for bias in its Rekognition facial recognition software, which it sells to law enforcement.

As investment in AI technologies and their importance continue to rise, establishing ethical regulations is becoming a more urgent global concern. While the U.S. accounted for nearly 75% of equity deals in AI companies in 2014, its share shrank to less than 40% of deals in 2018. The rest of the world, particularly China, has been catching up. While the U.S. is still home to the headquarters of the private companies leading investment in AI, planned U.S. government spending on AI is dwarfed by that of the Chinese government, which is expected to invest $30 billion by 2030. These government programs will also support technological development for social and political control systems, including facial recognition software and credit scores that take into account “social credit.” Naturally, this raises concerns about ethical and appropriately regulated applications.

Last week the European Commission took a significant step toward regulation, rolling out a pilot phase built around its detailed assessment list. This deliverable comes from a group of 52 industry and academic experts who have been working since April of last year. They outline a list of seven essentials for keeping AI trustworthy and beneficial to society: human agency and oversight, robustness and safety, privacy and data governance, transparency, diversity/non-discrimination and fairness, societal and environmental wellbeing, and accountability. The Commission is now launching a large-scale pilot with member states to collect feedback, with the ultimate goal of moving towards international consensus on ethical standards and regulation.