2017 Evidence Meeting 3 – Ethics and Legal: Data Capitalism – Overview
Evidence Meeting 3 | Monday, 26 June 2017 | 5:30 – 7:00pm
House of Lords, Committee Room 4A
- Who owns our data?
- What can be done with our data?
- How should the legal systems adjust?
- A data charter?
- Data and the social contract (‘opt in’ unless you ‘opt out’, ‘fair use’)
- Cybersecurity and hacking
- Slides: Evidence Meeting 3 – June 26 – Ethics and Legal: Data Capitalism
- Theme Report 3: APPG AI Evidence Meeting 3 – June 26 – Ethics and Legal: Data Capitalism
- Sir Gordon Duff – Principal, St. Hilda’s College, University of Oxford
- Mark Skilton – Professor of Practice, Information Systems Management and Innovation, University of Warwick Business School
- Nicola Eschenburg – Global Head of Analyst Relations, BAE Systems
- Stewart Room – Partner and Global Data Protection Leader, PwC
- Clive Gringras – Head of Technology, Media, and Telecommunications, CMS
- Maria Ioannidou – Lecturer in Competition Law, Queen Mary University of London
Following a short break leading up to the General Election, the APPG on Artificial Intelligence held its third Evidence Meeting at the end of June. Co-chair Stephen Metcalfe MP welcomed back the group and focused the day’s agenda on data capitalism and the ethical and legal issues associated with it.
Sir Gordon Duff was first to speak amongst the six experts on the panel. Former Chairman of the Medicines and Healthcare products Regulatory Agency, his particular interest and experience lie in innovative technology in the health sector. He was responsible for implementing some of the largest well-curated clinical databases worldwide, holding records of over 20 million people. He argued that data has the potential to have a huge positive impact on the health industry, and that the majority of people are willing to donate their data for social purposes. The question government needs to address is how to raise public confidence in the safety and security of personal data.
Mark Skilton, Professor at the University of Warwick Business School and author of three research books on the impact of AI, was next to speak. He discussed the key findings of his latest book, which examines how the Fourth Industrial Revolution (4IR) is affecting business. He identified three main trends in AI: (1) large data sets feeding neural networks that model human behaviour, (2) automated machine behaviour augmenting some types of work, and (3) AI being used to monitor and respond to cyber-attacks. He called for a data charter with sensible guidelines to help stakeholders manage data and work out “where effective jurisprudence is going on.”
The third panellist was Nicola Eschenburg of BAE Systems Applied Intelligence. She observed that ethical and legal concepts are both self-enforced and politically enforced, and that they are flexible and changeable depending on external variables. There is growing concern, she said, about who collects, owns, and uses data, because we are shifting from a period of financial capitalism to one of data capitalism. In her view, regulation should be kept simple to reflect this fast-moving environment.
Dr Maria Ioannidou of Queen Mary University of London agreed with Eschenburg, suggesting stakeholders should first analyse the laws currently in place before creating new ones. Data is considered one of the most valuable resources and is changing business models worldwide, yet consumers lack understanding of how and when their data is being used. It is important to educate the public and empower them to understand how their data is used, shared, and managed in exchange for a specific product or service. We should not forget, she reminded the group, that there is collective responsibility in every transaction.
Fifth to speak was Stewart Room of PwC. He has a background in law and now focuses mostly on what businesses are doing with their data. He argued that government should be cautious when amending existing legislation or creating new law, and he believes that soft structures are the solution to the ethical issues associated with data capitalism. Data needs to be placed on the broader agenda to raise public awareness of the issue.
Lastly, Clive Gringras – Head of Technology, Media, and Telecommunications at CMS – took the floor. He agreed with Room that “law should be incrementally slow when it comes to technology.” He argued that changes to legislation would only create definitional and juridical arbitrage, as well as threaten the rights of human beings. He questioned whether GDPR would prevent society from reaping the full benefits of emerging technology. In his view, data is the fuel that drives the engines of AI, and current intellectual property systems in the UK restrict the ability to access this data; hence, he also called for a loosening of intellectual property rights.
Co-chair Lord Tim Clement-Jones thanked the panel and summarised the conversation, identifying two main outlooks: the need for more law to reflect AI and big data, versus the need for soft structures to promote ethical conversation. Justin Madders MP asked the panel about the role of privacy and how knowledgeable they believe the public is when consenting to the use of their data. The panel agreed that this is an awareness and education issue, and that government can play a key role in informing citizens of how their data is collected, used, and managed.
Overall, the majority of the group concluded that we need to be careful with the legislation we craft moving forward. AI can bring huge benefits to society, and ill-considered laws might prevent society from reaping those benefits in full. Hence, we need to collect more evidence and build use cases in order to act responsibly and strategically. New soft structures that educate the public and build trust in society are one way of addressing the issue.