2017 Evidence Meeting 5 – Governance, Social and Organisational Perspective for AI – Overview

Evidence Meeting 5 | Monday, 11 September 2017 | 5:30 – 7:00pm
Committee Room 4A, House of Lords

Main Focus:

  • What is the social impact of AI?
  • AI and new cultural systems (humans and machines working together)
  • The sensationalised image of AI vs. the reality of how it is used in practice.
  • New forms of organisational structures, corporate governance and wealth in an AI-focused economy (forms of living, machines and people working together – collaboration, co-creation).

Panellists:

  • Krishna Sood – Technology Lawyer, Microsoft
  • Miles Brundage – AI Policy Research Fellow, Future of Humanity Institute, University of Oxford
  • Dr Joanna Bryson – Reader at the Department of Computer Science, University of Bath, and Affiliate at the Center for Information Technology Policy at Princeton University
  • Dr Stephen Cave – Executive Director of the Leverhulme Centre for the Future of Intelligence, University of Cambridge
  • Dr Kate Devlin – Senior Lecturer, Department of Computing, Goldsmiths, University of London
  • Dr Julian Huppert – Director of the Intellectual Forum, Jesus College, University of Cambridge
  • Rodolfo Rosini – Co-Founder and CEO, Weave.Ai
  • Dr Sandra Wachter – Postdoctoral Researcher in Data Ethics and Algorithms, Oxford Internet Institute


The fifth APPG AI Evidence Meeting was chaired by Lord Tim Clement-Jones and focused on social and organisational perspectives on AI, exploring the social purpose of AI technologies. The conversation engaged with the debate over how AI should be governed.

Krishna Sood, the first of eight panellists, discussed specific principles that stakeholders in AI should follow: AI products should be designed to assist humanity; they should be transparent; they should aim to maximize efficiency without sacrificing human dignity; they should respect privacy; they should address algorithmic accountability; and they should guard against biases and stereotypes. Companies should hold the responsibility to uphold these principles and form collaborations such as the Partnership on AI to encourage discussion of AI's impacts. Krishna recommended that the UK government promote the free movement of data, implement the GDPR, continue to invest in research and development, and promote relevant skills across all generations.

Miles Brundage, from the Future of Humanity Institute, took the microphone and began by reminding the group that technological progress is exponential. He argued that we cannot really anticipate how fast technology will develop or in what way; consequently, the UK government should not base its policies on a specific time frame. He also noted some likely near-term and long-term implications. Near-term impacts will be seen in the job market and in the security sector. Long-term impacts include safety concerns (e.g. countries racing to compete in an era of advanced AI might not pay as much attention to safety standards, and the consequences could be catastrophic).

Joanna Bryson highlighted the fact that AI and humans are different. She said: “What sets AI apart from Natural Intelligence is that Artefacts are made deliberately, by humans.” We should not over-personify machines, and we should remember the importance of the human factor in decision-making. Furthermore, she noted that many of the ethical implications we now link to AI have existed in society for centuries. For instance, biases already exist within our cultures; AI only surfaces these issues and urges stakeholders to react. She advised the government to enable the on-demand and routine auditing of AI and algorithmic systems.

Stephen Cave, the fourth speaker, discussed the commonalities among the issues addressed in the meeting. Grouping ethical and governance issues into three broad categories, he explained how autonomy, data, and intelligence are linked. For example, the issue of transparency is directly linked to autonomy and data, and indirectly linked to intelligence; hence, when considering holistic solutions to these problems it is important to recognize their interplay. He called for the UK government to create an advisory body responsible for AI governance and suggested that this body be closely tied to the Royal Society’s proposed data governance agency.

Kate Devlin spoke about the growing trend of humanising technology. She shed light on the fact that AI is impacting all sectors and industries, even those we might not feel comfortable discussing (e.g. sex robotics). The sex tech industry is a $30 billion market, she stated, and the impact of AI on it has been tremendous. There is much ethical debate around technologies like sex robots that stakeholders must consider. Government must encourage more research to understand AI’s impact across every sector, even those that might be taboo. It is important to regulate based on evidence, she emphasized.

Julian Huppert agreed with the others: AI is a hugely exciting field with potential for much good but also for harm. The challenge is that regulating too tightly might mean losing some of the benefits, while regulating too loosely might mean social harms. He focused his talk on three major implications of AI: power, governance, and work. The UK should be concerned about the overconcentration of power in the hands of any one entity, and the government should promote open standards, open interactions, and a competitive landscape. Stakeholders should adopt governance models, like that of DeepMind Health, in which they are audited by external reviewers and held accountable for the results. Furthermore, stakeholders need to recognize the impact AI will have on jobs but also consider the positive outcomes of this disruption: perhaps automation will lead us to rethink our purpose and the typical 40-hour working week. He urged the community not to think in simple trade-offs.

Rodolfo Rosini spoke next, highlighting technological acceleration in the global context. Applied AI is changing the world and is a force multiplier for other technologies, he stated. The UK has the opportunity to lead not only technologically but also in governance. A national strategy on AI needs to address the country’s lack of entrepreneurship, job destruction, migration policy, and educational challenges. Throughout history, the UK has performed well in developing new technologies but less well in exploiting them commercially. We have the chance to change that.

Sandra Wachter was the last to speak and focused her talk on how the GDPR, which comes into force in May 2018, will affect society. She argued that the GDPR has flaws: it only gives citizens the right to contest a decision if it is made through a fully automated process, and it does not provide a right to explanation. She urged the UK to lead in closing this accountability gap by (1) ensuring the GDPR applies to decisions based solely or predominantly on automated processing, and (2) making the right to explanation legally binding. She encouraged stakeholders to work together towards a future in which AI and humans can work side by side.

Lord Tim Clement-Jones thanked the panel and opened the discussion to the wider audience. Several questions zoomed in on specific ethical implications such as algorithmic bias, privacy, and the use of autonomous weapons. A key question surfaced: even if we can technically build something, should we? The evidence meeting concluded with a positive message encouraging stakeholders to build AI with social purpose that can be used to make us better humans.

