Google has appointed Luciano Floridi, the Oxford Internet Institute’s Professor of Philosophy and Ethics of Information, to its new Advanced Technology External Advisory Council (ATEAC), a body intended to ensure the responsible development of artificial intelligence.
Floridi has stated that he is “deeply engaged with emerging policy initiatives on the socio-ethical value and implications of digital technologies and their applications” and that he has “worked closely on data ethics…including the ethics of algorithms and AI”.
The Oxford professor, who has been commended as “a leading philosopher and expert in digital ethics” by Google, will work alongside a team of respected names in the field, including William Joseph Burns, the former U.S. deputy secretary of state, and Joanna Bryson, the University of Bath’s Associate Professor in the Department of Computer Science.
Floridi exclusively told The Oxford Student that “Google has drawn together an External Advisory Council on Advanced Technology, with a range of different backgrounds and expertise, and I’m honoured to be part of it. There will clearly be a variety of opinions in the room and I look forward to debating robustly and constructively with fellow members of the Council.”
Beginning in April 2019, the council will hold a number of meetings focusing on the ethical issues surrounding Google’s ventures into facial recognition, as well as discussing the company’s approach to “fairness” in machine learning.
The inauguration of the ATEAC follows the publication of Google’s “AI principles” in 2018, in which the tech giant pledged to “avoid creating or reinforcing unfair bias” and to “avoid unintended results that create risks of harm”.
The principles also addressed the controversy surrounding Google’s involvement with the U.S. military, after it was revealed that the company held a contract to aid in the development of facial recognition technology for use in warfare. CEO Sundar Pichai used the document to clarify that Google would “not pursue… technologies that cause or are likely to cause overall harm”, including “developing AI for use in weapons”.
Kent Walker, Google’s senior vice president for global affairs and chief legal officer, stated that the ATEAC will “complement the internal governance structure and processes that help us implement the [AI] principles”. Google had previously founded an internal artificial intelligence ethics board in 2014, at the request of Demis Hassabis and Mustafa Suleyman. The pair sold their AI startup DeepMind to Google in 2014, on the proviso that an ethical oversight group would be established to prevent the abuse of AI technology. Although such a board was eventually set up, it was publicly perceived to be unsatisfactory due to its lack of transparency.
Image Credit: Mike MacKenzie/Flickr