2018 Evidence Meeting 7 – Next Steps – Overview

I. Details

  • Date: 5 November 2018
  • Time: 5:30 – 7:00 pm
  • Location: Committee Room 4A, House of Lords
  • Participants: 135 registered attendees

II. Purpose

The All-Party Parliamentary Group on Artificial Intelligence (APPG AI) was set up by co-chairs Stephen Metcalfe MP and Lord Clement-Jones CBE to explore the impact and implications of Artificial Intelligence.

In 2018, the APPG AI decided to focus on building a roadmap of practical steps for addressing key AI implications. The group has prioritised six policy areas: data, skills, accountability, innovation & entrepreneurship, infrastructure, and trade. Each meeting will explore the economic, social, and ethical implications of one of these six policy areas.

Evidence Meeting 7 concentrated on: Next Steps

III. Agenda

IV. Questions for Inspiration

  • What are the practical steps of setting international rules, norms and standards?
  • What is our vision of the new AI and data-driven world?
  • What does it mean to be human in 2025 and 2052?
  • Is the roadmap national or international?

V. Background: Setting the Scene

Artificial intelligence (AI) has quickly become the most powerful narrative of our century. Its impact has been seen across regions, industries, and sectors.

Although AI offers many opportunities for both our economies and our societies, it simultaneously raises many concerns – related to matters including security, inequality, privacy, employment, and education.

Recently, countries worldwide have been launching national AI strategies to reap the benefits of AI technologies and protect their nations from potential harms. However, it has become very clear that many issues related to AI cross national borders, and no nation can address them alone. While each country must consider its specific needs, we need a global framework to help us solve complex and pressing global issues.

Global coordination is essential to truly address the heart of these concerns. Policymakers and other stakeholders need to coordinate in order to champion AI and address its implications for society.

Speaking at the World Economic Forum in Davos, the UK’s Prime Minister said: “When technology platforms work across geographical boundaries, no one country and no one government alone can deliver the international norms, rules and standards for a global digital world.”

Organisations like the IEEE Standards Association and the British Standards Institution are bringing together stakeholders across industries and sectors to build standards guiding the development and deployment of these technologies. International organisations including the OECD and the United Nations are forming high-level groups to further explore AI’s impact on an international scale. Governments are forming partnerships, committing to collaborate on issues such as cybersecurity, taxation, and data regulation.

Together, stakeholders worldwide are trying to agree on a vision our policies and strategies should aim to move towards. As AI transforms nearly everything around us, we need to convene individuals from different backgrounds to discuss our vision for a new AI and data-driven world. We are at a pivotal point in history at which we can decide what our future looks like – what good looks like and what it does not.

The very essence of what it means to be human is changing as AI becomes an increasingly large part of our daily lives. Children are now being brought up in a society in which technologies affect them from the very day they are born. AI technologies are part of the homes they grow up in as well as the teaching environments they learn in. Data is being collected from the toys children play with, the learning material they engage with, the social platforms their parents are subscribed to, the health providers they visit, and much more. As this generation enters adult life, it follows that their lives will be dramatically different from those of today. What it means to be human will be completely different.

VI. Meeting Overview

On the 5th of November, the APPG AI brought together policymakers, industry representatives, academics, philanthropists, and members of the public to discuss next steps in our journey towards an AI and data-driven world. There were 135 individuals who registered to attend the evidence meeting. Five speakers were invited to provide their insights on questions around international norms and standards, what it will mean to be human in the future, and the debate between national and international roadmaps.

Stephen Metcalfe MP welcomed the attendees. After giving a short overview of the APPG AI programme for 2019, he asked the panel to share with the Parliamentarians and the wider audience their views on what our next steps should be.

Scott Steedman, Director of Standards at BSI, spoke first on the work the British Standards Institution is doing to provide the infrastructure for the standards needed around AI technologies. He emphasised the need for standards in AI ethics before the technology becomes ubiquitous. Furthermore, he urged stakeholders to prioritise global standards first, regional standards second, and national standards third. Addressing the Parliamentarians, Scott urged the government to engage with standards bodies and the wider community to ensure the new standards developed reflect the interests of society.

Professor John McDermid, Director of the Assuring Autonomy International Programme at the University of York, spoke next. He argued that rules, norms, and standards (especially at a technical level) tend to be domain specific. Government must collaborate with organisations like BSI and ISO to set these, but should also be conscious that technology moves fast while standards take longer to create. He therefore spoke about the importance of mechanisms such as Publicly Available Specifications (PAS), which allow material to be developed quickly and evolved as the technology changes.

The third speaker was the President of The Law Society, Christina Blacklaws. Christina’s evidence reflected the work of the Technology and the Law Policy Commission examining the ethical implications of artificial intelligence. She summarised most of the work they have done in three points: impact, fairness, and expert approval. She reminded the audience that, moving forward, we need a multidisciplinary approach that reflects a diverse range of voices.

Ernst & Young’s Adrian Joseph spoke fourth on the panel, agreeing with the others on the need to approach these issues through an international lens. He focused on four myths around AI, five predictions for the future, three challenges for the economy and society, and three recommendations for policymakers. He suggested policymakers help build trust around AI, focusing on the values of Performance, Bias, Resiliency, Explainability, and Transparency. Furthermore, Adrian called for the creation of an ethics code of conduct and for investment in building modern skills for both younger and older generations.

Last to speak was Dr. Spiros Denaxas, Associate Professor of Biomedical Informatics at University College London. Spiros introduced two big challenges stakeholders must urgently address: increasing data access and building a talent pipeline of individuals empowered to safely develop and deploy AI. Focusing on healthcare, he asked the Parliamentarians to invest in a robust national infrastructure which empowers patients to see what data are recorded, when, by whom, and how they are being used for their care. He also asked stakeholders to promote capacity building in the social-science skills required to understand the ethical, social, and political challenges of AI in healthcare.

VII. Written Evidence
