2018 Evidence Meeting 1 – DATA – Overview
- Date: 22 January 2018
- Time: 5:30 – 7:00 pm
- Location: Committee Room 1, House of Lords
- Participants: 162 registered attendees
The All-Party Parliamentary Group on Artificial Intelligence (APPG AI) was set up by co-chairs Stephen Metcalfe MP and Lord Clement-Jones CBE to explore the impact and implications of Artificial Intelligence.
In 2018, the APPG AI has decided to focus on building a roadmap to understand the practical steps for addressing key AI implications. The group has prioritised six policy areas: data, skills, accountability, innovation & entrepreneurship, infrastructure, and trade. Each meeting will explore one of the six policy areas’ respective economic, social, and ethical implications.
Evidence Meeting 1 concentrated on Data. The speakers providing evidence were:
- Elizabeth Denham, UK Information Commissioner, Information Commissioner’s Office
- Professor J. Mark Bishop, Director of The Centre for Intelligent Data Analytics and Professor of Cognitive Computing at Goldsmiths, University of London
- Stephen Chandler, European General Counsel, NVIDIA
- Tim Pullan, Founder & CEO, ThoughtRiver
IV. Questions for Inspiration
- How do we ensure consumers are protected when it comes to data?
- What do businesses/people need to do to prepare for the GDPR (General Data Protection Regulation)?
- How can the ‘right to explanation’ be enforced? Should it?
- What should data governance structures look like?
- How can access to data be democratised? How can potential monopolies be prevented?
- How can the public sector take greater advantage of the data revolution?
- How can we deal with unintended stereotypes and prejudices in data?
V. Background: Setting the Scene
Prime Minister Theresa May, in her recent speech at the 2018 World Economic Forum Annual Meeting, highlighted the need for the UK to lead in AI and to deploy it in a safe and ethical manner. The Prime Minister said:
- “Harnessing the power of technology is not just in all our interests – but fundamental to the advance of humanity. But this technological progress also raises new and profound challenges which we need to address.” – Theresa May, UK Prime Minister, World Economic Forum Annual Meeting, 25 Jan 2018
The unprecedented impact AI has (and will continue to have) on society has been made clear throughout the APPG AI’s Evidence Meetings in 2017. Accenture estimates that AI could add an additional £630 billion to the UK economy by 2035, increasing the annual growth rate of GVA from 2.5% to 3.8%. PwC refers to AI as “the biggest commercial opportunity in today’s fast changing economy,” predicting UK GDP to be 10.3% higher in 2030 as a result of AI. The evidence gathered thus far has mirrored Hall and Pesenti’s review of AI, urging the UK to seize the opportunities of these emerging technologies and to deliver on their economic potential.
The dependent relationship between AI and data was also emphasised in Evidence Meeting 3 on Data Capitalism. Without data, AI systems would lack the raw material essential for their creation, development, and sustainability. The amount of data generated, collected, used, and managed has now reached previously unimaginable levels. A research report by SAS and the Centre for Economics and Business Research estimates that Big Data could benefit the UK economy by up to £241 billion between 2015 and 2020.
To unlock these game-changing opportunities, however, government must work with industry and the wider public to build an environment in which the gains are widely distributed and potential harms are mitigated. As the Prime Minister suggests, these new technologies and socio-economic forces are disrupting much of what we know today and creating a new set of challenges to address (e.g. algorithmic accountability, data privacy, technological unemployment, skill shortages, AI bias, and cybercrime).
Data issues play a large role in any AI debate.
Studies by the Information Commissioner’s Office (ICO) show that only one in four UK adults currently trusts businesses with their personal information.
- IF THE UK IS TO BENEFIT FULLY FROM THE ECONOMIC AND SOCIAL GAINS OF DATA AND AI, THE PUBLIC NEEDS TO KNOW THAT THEIR PERSONAL DATA IS SAFE AND USED RESPONSIBLY.
The General Data Protection Regulation (GDPR), adopted in April 2016 and due to take full effect in the UK from May 2018, is a step in this direction. Its purpose is to protect the personal data of individuals and to guarantee them a ‘right to explanation’ for decisions made solely by automated systems.
However, this is only one step. The government acknowledges that more has to be done and has therefore announced the establishment of a new Centre for Data Ethics and Innovation to advise on the measures needed to enable and ensure safe, ethical, and innovative uses of data-driven technologies. The building of “Data Trusts” across the UK regions has also been prioritised, in order to educate individuals on their rights and to facilitate the sharing of data between organisations holding data and organisations looking to use data to develop AI.
Business, academia, and the wider public must take their share of responsibility, working closely with government in a combined effort to help harness technology as a force for good.
VI. Meeting Overview
The APPG on AI met on 22 January 2018 to further explore the topic of ‘Data and AI’ and to start discussing practical steps for how to reap the benefits and mitigate the risks.
As the UK prepares for the full enforcement of the General Data Protection Regulation (GDPR) in May, it is no surprise that much of the conversation centred on its implementation. However, the group also zoomed in on topics related to children’s data, accountability, user rights, and public sector data.
Elizabeth Denham, UK Information Commissioner, was first to provide evidence. She reminded the Committee Room that algorithms have existed for years; what is new is the amount of data that now goes into them. Our responsibility, she noted, is to ensure data policies help distribute gains fairly while protecting human values such as privacy. She went on to say that the UK is well positioned to handle the challenge because various stakeholders, including government and business, are determined to get it right.
- “We all recognise the enormous socio-economic value that AI can bring us – as long as we get the public policy right. We are lucky in the UK because businesses and government want to get this right. The social will needed to push for solutions is there.” -Elizabeth Denham
She considers the GDPR an important step in sharpening the UK’s regulatory toolkit and in encouraging automated decision-making that is fair, accountable, and accurate. However, she stressed that the implementation and administration of the GDPR are just as important.
Next to provide evidence was Professor J. Mark Bishop from The Centre for Intelligent Data Analytics. Commenting on Britain’s historical leadership role in technological applications and social practices, he called for the government to continue to lead in this technological revolution, “as an educator of the public, as an exemplar of good data practice, as a facilitator of the generation and manipulation of data through cutting-edge, industry-driven innovation, and as a protector for individuals in their interaction with an increasingly complex and interconnected world.” He highlighted two practical challenges for the ‘right to explanation’:
- First, modern machine learning is so complex that providing a subject with a simple explanation will be difficult.
- Second, the public is poorly educated about how their data is used and what their rights are.
- “The public is often unaware that their personal data is being used in a decision-making process and, hence, doesn’t know they are entitled to make a demand for an explanation. Government must invest in ICO and other organisations to educate the people.” -Professor Bishop
He asked policymakers to consider both hard-Brexit and soft-Brexit scenarios when making important decisions post-GDPR, and suggested that government encourage the sharing of trusted data amongst organisations by facilitating services through bodies such as the NHS or HMRC.
Lastly, Professor Bishop made two recommendations for the UK to position itself at the forefront of digital rights worldwide.
- Keep the right to human intervention on the part of the controller in cases where legal consequences follow.
- Extend the right to explanation of the purpose of processing to all cases.
The third to speak was Stephen Chandler, European General Counsel for NVIDIA. Commenting on consumer protection in the UK, he noted that AI has no direct interest in the personal nature of data; it is concerned only with recognising the correlation between a mathematical pattern and an output. Nonetheless, personal data can be retrieved from AI systems, and this poses potential security concerns.
He argued that the two main issues with the GDPR are (1) the strain between data retention and flawed output, and (2) the inability to seek full transparency in algorithms. He noted it is important to rethink concepts of transparency and explainability by focusing on extracting only the key factors for how an AI reached a decision.
- “Data governance is needed to ensure accuracy. When you use flawed data, your output is also flawed. Governance structures need to ensure input is accurate.” – Stephen Chandler
Most importantly, he suggested that the UK needs to address accountability matters and decide how to audit AI by following principles of purpose, transparency, and accuracy.
The last speaker to provide evidence was Tim Pullan, CEO of ThoughtRiver, a leading legal technology company that uses AI to automatically risk-profile contracts.
Acknowledging data-driven knowledge as a driver for both the UK economy and “a liberal enlightened society,” he urged policymakers to ensure data protection law guards society from potential drawbacks. Furthermore, he reminded the audience that action does not necessarily have to be just legislative.
- “Encouraging the growth of data is not incompatible with protecting citizens.” – Tim Pullan
Tim noted government’s vital role in enabling a regulatory environment for industry-level data trust schemes that make datasets widely available to innovators. Referring to the increasing use of corporate data as a marketing tool, he also suggested that policymakers consider regulating algorithms that target vulnerable groups such as gamblers and children.
Individuals shouldn’t be expected to protect themselves through mechanisms such as consent, he argued, citing an estimate that it would take the average individual 70 years to read every consent statement they came across. In fact, Tim continued, most consumers lack awareness of what consent means, and that in itself is a fundamental problem. In addition, large companies will have a difficult time complying with the GDPR, and this could create future economic risk.
He called for the government to look at practical measures to remove any regulatory barriers to the pursuit of AI, including the use of data for these purposes.
One possible solution is to establish an environment in which individuals control their own data via personal data stores and allow corporations to access it through contracts. He suggested that policymakers look at the model first proposed by Doc Searls at Harvard in the late 1990s.
After thanking all four panellists for their remarks, Stephen Metcalfe and Lord Clement-Jones opened the floor to questions.
- “How would you go about educating the public to understand the value of their data and to get them to understand the use of that data?” – Stephen Metcalfe
Many in the Committee Room raised questions about education. The Right Reverend Dr Steven Croft commented on how fast technological capability is advancing compared to public awareness and asked the panel who they think should be responsible for educating the public about the value of their data and their user rights.
The panel agreed that various stakeholders share responsibility. Elizabeth Denham commented that the law is now catching up to technology; hence, government, business, regulators, the community, and the family all need to work together to raise public awareness. “It will take a village to raise that child,” she said, but added that she was confident the UK was well placed to address the challenge.
Next, Birgitte Andersen suggested the UK start thinking about “user rights” and asked how the public sector can take advantage of the large amounts of data available to improve its own services and, consequently, bring about social value. Ms Denham replied that the current regulatory framework encourages the use of public data, but regulatory changes are needed before the public sector can start using private (anonymised) data.
Focusing on the GDPR’s right to explanation and AI transparency, Lord Clement-Jones asked whether the law should be applied to semi-automated decisions. Furthermore, he questioned whether we should apply explainability just to an AI system’s output or also to its outcome.
- “The right of explanation of an algorithm is hard to deliver and perhaps it is worth thinking of the right of explainability of an output or an outcome.” – Lord Clement-Jones
A debate arose amongst the group on whether AI systems could be fully transparent in the first place. Professor Bishop and Tim Pullan argued that it is not as simple as it sounds to articulate how a model actually reaches a decision. However, Roberto Desimone disagreed and framed transparency as a mandate for businesses to apply AI itself to help make other AI systems more transparent. Stephen Chandler and Adam Grzywaczewski described the problem as one that should be approached through engineering design.
Towards the end of the meeting, Maria Axente (PwC) asked a critical question: “Does GDPR protect the most vulnerable groups of society, including children?” Elizabeth Denham agreed that this is a massively important aspect of data and, although the GDPR has special rules for children, more still must be done. The ICO is currently writing a code of practice for services directed at children, but the problem cannot be solved unless more advocates fight for this issue, Ms Denham said.
Navin Ramachandran shared a vision in which AI and blockchain technology are used together to address complex issues through a decentralised approach. Tim Pullan agreed and stressed the need to look at the interplay of new technologies. Adam Grzywaczewski also commented on the numerous experiments under way that look at just this issue.
Geoff Hudson-Searle expressed his worry that UK companies would not be able to implement the GDPR and, simultaneously, respond to changes to the Privacy and Electronic Communications Regulations (PECR). Ms Denham said that companies will have time to prepare for the GDPR and let it settle before any new PECR provisions are enforced.
The session came to an end with a brief discussion of the international landscape with regard to data. Abhijit Akerkar, from Lloyds Bank, commented on how the East is approaching data issues through a very different lens from the West. Although the panel agreed that more and more countries are taking data protection seriously, they also agreed that the safety and ethical implications remain a growing concern.
Stephen Metcalfe and Lord Clement-Jones thanked the Committee Room for their engagement and asked them to continue the conversation both inside and outside the UK Parliament. Data, AI, and the interplay of the two are affecting all of us, and we should all be involved in shaping what our futures look like.
VII. Written Evidence