2018 Evidence Meeting 3 – ACCOUNTABILITY – Overview


Please download the PDF of Evidence Meeting 3 – Accountability – Overview here.

I. Details

  • Date: 12 March 2018
  • Time: 5:30 – 7:00 pm
  • Location: Committee Room 2, House of Lords
  • Participants: 108 registered attendees

II. Purpose

The All-Party Parliamentary Group on Artificial Intelligence (APPG AI) was set up by co-chairs Stephen Metcalfe MP and Lord Clement-Jones CBE to explore the impact and implications of Artificial Intelligence.

In 2018, the APPG AI decided to focus on building a roadmap of practical steps for addressing key AI implications. The group has prioritised six policy areas: data, skills, accountability, innovation & entrepreneurship, infrastructure, and trade. Each meeting will explore the economic, social, and ethical implications of one of these six policy areas.

Evidence Meeting 3 concentrated on: Accountability.

III. Speakers

  • Tracey Groves: Founder and Director, Intelligent Ethics
  • Aldous Birchall: Head of Financial Services AI, PwC
  • Robbie Stamp: Chief Executive, Bioss International
  • Sofia Olhede: Professor and Director of Centre for Data Science, UCL
  • Tom Morrison-Bell: Government Affairs Manager, Microsoft

IV. Questions for Inspiration

  • What is the path towards accountable automation?
  • How do we make ethics part of business decision-making processes?
  • How do we assign responsibility around algorithms?
  • What auditing bodies can monitor the ecosystem?
  • Can AI systems be transparent?
  • How do we ensure the explainability of AI-enabled decisions?

V. Background: Setting the Scene

Technological advances in AI have the potential to improve accountability, making (i) governments more accountable to citizens, (ii) corporations more accountable to their shareholders, customers, and society, and (iii) individuals more accountable for their own actions.

For example, AI tools can be used to facilitate democratic consultation, enabling elected officials to engage with citizens when passing policy on matters ranging from health to education to national security. Predictive algorithms can be used by corporations to recognise opportunities or threats that might arise when implementing a new product or service. Visual recognition systems can be used by society to hold individuals responsible for potentially illegal activity.

Yet, although AI is in many ways improving accountability, it is simultaneously challenging its definition, which rests largely on existing notions of transparency and explainability.

Accountability /əˌkaʊn.təˈbɪl.ə.ti/:

The fact of being responsible for your decisions or actions and expected to explain them when you are asked.

Related words: answerability, transparency, explainability, responsibility, ethics, values

The increasing deployment of AI in the lives of individuals is posing socio-ethical questions about what it means to be accountable in the modern era and where responsibility lies if something goes wrong. Specifically, AI is challenging how transparent and explainable a decision-making process can be and ought to be.

At the moment, the way AI systems move from input to output is complex and opaque. It is difficult (and sometimes impossible) to explain the logic underpinning how an AI reaches a specific decision, especially in language that can be easily understood by the wider public.

As AI progresses in technological capacity (moving from narrow, or weak, applications towards strong applications, often referred to as general AI), the process becomes even harder to understand. Ultimately, this proves true not just for the average citizen but also for the experts who build the systems.

For instance, one of the most commonly used forms of machine learning today relies on deep neural networks. Scientists have a good grasp of the data and conditions (the input) fed into a neural network, as well as of the end results (the output). However, due to the complex nature of the network, they lack understanding of the in-between steps it takes to reach a given end-point.

This is known as the “black box problem”.
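To make the black box problem concrete, the minimal sketch below (written in Python with NumPy, using invented weights and data rather than any real trained model) shows a tiny neural network whose input and output are easy to read, while its intermediate activations carry no obvious human meaning.

```python
# A minimal, hypothetical sketch of the "black box problem": the input and
# output of a small neural network are easy to read, but the intermediate
# activations that connect them are not readily interpretable.
# Weights and data are random placeholders, not a real trained model.
import numpy as np

rng = np.random.default_rng(0)

# Input: e.g. three features describing a loan applicant (illustrative only).
x = np.array([0.42, 0.77, 0.13])

# Two hidden layers with randomly initialised weights stand in for a trained model.
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
W3, b3 = rng.normal(size=(1, 8)), rng.normal(size=1)

h1 = np.tanh(W1 @ x + b1)               # intermediate representation 1
h2 = np.tanh(W2 @ h1 + b2)              # intermediate representation 2
y = 1 / (1 + np.exp(-(W3 @ h2 + b3)))   # output: e.g. a probability of approval

print("input:", x)                # meaningful to a human
print("output:", y)               # meaningful to a human
print("hidden activations:", h2)  # numbers with no obvious real-world meaning
```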

To add to the black box problem, corporations developing and deploying AI systems worry that making those systems transparent risks exposing sensitive material to competitors and other stakeholders.

Even if an AI-enabled decision-making process can be explained scientifically, and even if a company is willing to be transparent, much of the population lacks the capacity to make sense of the explanation. This limited understanding among average citizens of how AI systems work makes the explainability issue even more complex.

Consequently, limited transparency and explainability of decisions made via AI systems have made accountability a pressing concern for policymakers globally.

If AI is increasingly being applied to help make life-changing decisions, then it stands to reason that those decisions should be subject to the highest scrutiny. This is particularly true in areas of high social impact such as health, security, and education.

Ethical implications are largely intertwined within this debate, triggering us to ask complex and multidimensional questions. Should we use AI systems in our decisions if we cannot explain their processes? Who is accountable for an output if the decision-making process is inexplainable and/or lacks transparency? Who is responsible for ensuring the wider society has the skills and capacity to adequately understand an explanation?

Ultimately, the accountability of this technology is essential to build public trust and confidence in its development and deployment within society.

Over the last couple of months, there has been growing dialogue, and growing uncertainty, around whether, when, and how AI is being (or should be) used to make decisions. Breaking news such as the Facebook and Cambridge Analytica data misuse scandal and the death of a pedestrian caused by an autonomous vehicle has only further spotlighted the urgency of addressing the AI and accountability issue.

AI systems have been let loose, and if policymakers want society to benefit from the vast opportunities they offer, they must take action to address the current lack of accountability in these systems, tackling issues of transparency and explainability.

 

VI. Meeting Overview

On 12 March 2018, the APPG AI convened an Evidence Meeting to discuss these critical matters around AI and accountability. Chaired by Lord Janvrin, the meeting had a total of 108 registered attendees, five of whom were selected experts from industry and academia invited to provide oral evidence.

The first to speak was Tracey Groves, Founder and Director of Intelligent Ethics, an independent consulting practice advising clients on the topics of AI and Ethics.

When looking at the topic of accountability, Tracey highlighted three key factors for ensuring ethics become part of decision-making processes: education, empowerment, and excellence. First, businesses should inspire curiosity across departments and levels, she said, and implement a leadership development programme that educates individuals to assess ethical dilemmas critically. Second, to build trust in an organisation, Tracey stressed empowerment: two-way engagement programmes between leaders and employees that create intelligible accountability for decisions are essential. Third, businesses must design and monitor key performance indicators of ethical culture and behaviour.

Tracey argued that it is not only the responsibility of government to address the issues of accountability. Industry, academics, and regulators have a huge role to play. She said: “The Government should place pressure on corporates to evidence and demonstrate ethical business conduct and accountable decision-making, through deeds not just words.”

PwC’s Aldous Birchall was next to speak, providing a practical approach to addressing some of the issues around AI and accountability. He stressed that both business managers and data scientists need greater awareness of the impact of algorithms on society. As a potential solution, he suggested a ground-up approach in which software engineers are trained and educated on ethical consequences.

Organisations need qualified senior representatives who can be held accountable for AI deployment, he argued. Each technology (or project) should have a chain of causality from the AI agent back to the person (or organisation) that could reasonably be held responsible for its actions.
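One way to picture the chain of causality Aldous describes is as a simple record linking each automated decision back through the system and project to a named accountable owner. The sketch below is purely illustrative; the field names and example values are assumptions, not part of any framework presented at the meeting.

```python
# A purely illustrative sketch of a "chain of causality" record linking an
# AI-driven decision back to an accountable person or organisation.
# Field names and example values are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AccountabilityLink:
    decision_id: str        # the individual automated decision
    ai_system: str          # the deployed model or agent that produced it
    project: str            # the project under which the system was built
    accountable_owner: str  # the named senior person (or organisation) responsible

link = AccountabilityLink(
    decision_id="loan-2018-000123",
    ai_system="credit-scoring-model-v2",
    project="retail-lending-automation",
    accountable_owner="Head of Retail Credit Risk",
)

print(f"{link.decision_id} -> {link.ai_system} -> {link.project} -> {link.accountable_owner}")
```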

Organisations should “reflect the risks of deploying AI in their governance structures and link ethical policy to AI software development and deployment.”

Aldous urged existing regulatory bodies to become experts in their areas, developing the AI-specific capabilities required to monitor their sectors. Government should intervene when there are potential risks in areas such as health and security.

The third presenter was Robbie Stamp. Referring to the fictional character Marvin the Paranoid Android, Robbie illustrated many of the social concerns surfacing around AI recently.

He asked the group to hold two contradictory ideas in mind when looking at issues of accountability related to AI. First, AI is not human, but humans will have a new kind of “working relationship” with it. Second, humans will anthropomorphise AI at every turn.

Robbie called for Ethical AI Governance to establish the ethical boundaries for how AI is introduced into a wider ecosystem. These governance structures should have monitoring and feedback loops to constantly review whether things are working as planned.

A potential Ethical Governance framework is the Bioss AI Protocol, established by Robbie, which asks organisations to consider five key questions when deploying AI:

  1. Is the work Advisory, leaving space for human judgement and decision making?
  2. Has the AI been granted any Authority over people?
  3. Does the AI have Agency (the ability to act in a given environment)?
  4. What skills and responsibilities are we at risk of Abdicating?
  5. Are lines of Accountability clear, in what are still organisations run by human beings?

Sofia Olhede was next in line, providing the perspective of a technologist. Sofia stressed the need for collaboration to develop standards that will enable accountability.

She asked the group to recognise that there is not only one measure of performance. Often, algorithms are optimised for an average measure of performance, but this average does not reflect the spread of outcomes an individual may encounter. In some cases, a different measure may be preferable.
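Her point can be illustrated with a small, hypothetical example: two models with identical average accuracy can still deliver very different outcomes across groups. The numbers below are invented purely for illustration.

```python
# Hypothetical illustration: two models with the same *average* accuracy can
# deliver very different outcomes to different groups.
# All numbers are invented for illustration.
import statistics

# Per-group accuracy for two hypothetical models over groups A, B, C.
model_1 = {"A": 0.90, "B": 0.90, "C": 0.90}   # uniform performance
model_2 = {"A": 0.99, "B": 0.96, "C": 0.75}   # same mean, much wider spread

for name, scores in [("model_1", model_1), ("model_2", model_2)]:
    values = list(scores.values())
    print(name,
          "mean:", round(statistics.mean(values), 2),
          "spread (stdev):", round(statistics.stdev(values), 3))

# Both models report a mean accuracy of 0.90, but model_2 performs far worse
# for group C; the average alone does not reveal this.
```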

Furthermore, Sofia highlighted the urgency to address algorithmic bias as it threatens AI credibility and fuels inequalities.

Ethics boards, both public and private, are one way to set and develop standards. Organisations are developing internal ethics boards, and bodies such as the new Centre for Data Ethics and Innovation could be ideally placed to connect them all.

Lastly, Sofia highlighted the importance of an international perspective when looking at many of these AI issues. To fully address some of them, regulations and standards will have to be set at an international level.

Last to speak was Microsoft’s Tom Morrison-Bell. He provided the group with three examples of how Microsoft is using AI projects and technologies to deliver social impact. The first was an application called Seeing AI, which narrates the world for the low-vision community. The second was AI for Earth, an initiative to enable organisations to explore available AI tools, learn how to use them, and discover how these tools can help solve environmental problems. The third was Inner Eye, which applies machine learning to build innovative tools for the automatic, quantitative analysis of three-dimensional radiological images.

Tom’s use cases are just a few of the many ways AI can be deployed to improve society, but he noted that not all organisations have the resources to build such impactful projects. Nonetheless, he urged industry to adopt key principles to underpin their work, in order to ensure their processes are ethical both internally and externally. Microsoft, for instance, has chosen six ethical principles to guide the cross-disciplinary development and use of AI: fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability.

Lord Janvrin thanked the panel for their thought-provoking remarks and asked the Officers, Advisers, and wider audience to pose any questions.

Will Hutton asked the panel to provide their comments on the impact Brexit would have on all these complicated issues. The group stressed that many of these issues cross national borders and the UK would need to collaborate with foreign governments to ensure data moves responsibly from one country to another.

Many in the room commented on the need to engage the wider public in the conversations around ethics and accountability. The panel highlighted the importance of including the entire organisation in the discussion and not just the engineers.

Both the development and deployment of AI should be inclusive, but so should the conversations around the socio-ethical concerns and implications.

Lord Janvrin thanked the panel and audience for their participation, inviting them to build on the conversation and carry it forward in their own communities, in the hope of encouraging public engagement and awareness.

 

VII. Written Evidence
