Teenager’s Response to Lords Report “AI in the UK: ready, willing and able?”
Sara Conejo Cervantes, aged 16, attended our Lords AI debate to discuss the future of AI in the UK, in the context of the AI in the UK: ready, willing and able? report produced by the Lords Select Committee on AI.
Below are her comments.
Please note the article was originally published on Medium on 26 April 2018.
Having read the above report on artificial intelligence in the UK produced by the House of Lords, I have the following comments:
As a young person aged 16, I have come to a clear realisation that artificial intelligence will have a profound impact on me, my family and everyone around me. Only recently have we really begun to see the impact of AI on the world. Some examples include our phones (we interact with virtual personal assistants such as Siri, Alexa, Cortana and Google Assistant), banking (AI analyses large amounts of data and searches for patterns) and medicine (AI is able to predict and diagnose lung cancer, heart disease and more). However, although some people understand the concept of AI and its use in our day-to-day lives, the majority are scared of what life will be like with these so-called “fake humans” running it.
In my opinion, the first step, and the one with the greatest effect, is to put the right laws in place before AI becomes widespread. Although some laws already stand, more discussion needs to take place around ethics and the use of data, as well as guidance for those developing AI systems and algorithms. I note that TechUK states that “concerns regarding the use of AI technologies … are focused around how data is being used in these systems” and that it is “important to remember that the current data protection legal framework is already sufficient”. However, I believe they have not taken into account the full future potential of AI, which is a key part of its advancement.
Leading on from that, I would like to discuss the ethical issues behind holding data for use in AI. Many people, as I have mentioned, are scared of this new technology and of how quickly it could advance without them knowing or understanding it. As stated in the report, one example of a large data set in the UK is held by the National Health Service, which holds data on 54.3 million people (over 80% of the British population); that is a lot of personal data. This type of data is especially beneficial to companies that want to advance healthcare using AI and machine learning. Companies such as Babylon Health and Your.MD, both located in London, are currently using artificial intelligence; their platforms give doctors the opportunity to see patients through video in order to diagnose and treat conditions. This is great progress for the whole population; however, there is the issue of whether these platforms are private or public.
We all want open data, and companies working on AI now need data to make progress with their technology, but they should also publish their outcomes to allow further development in the sector. It needs to be emphasised that if there are failures, they should come at a low cost and must not endanger anyone's personal data, especially when it relates to the NHS. The report does cover this issue in “The value of data” section, where it talks specifically about the NHS and how other organisations such as the Wellcome Trust and DeepMind are pushing development further. However, the House of Lords has not explicitly spoken about the actions that might be taken if a large data set, such as the NHS's, falls into the hands of those who would misuse it; it should consider that outcome and what would be done to return to normal if such events do occur.
The UK is economically capable of developing good and valuable artificial intelligence for this century; DeepMind is a great example of this. The question I am concerned about, however, is this: are the people ready, particularly in light of Brexit? With China and the US leading the way in AI, can the UK really compete, given the drastic state of our educational system, which cannot even get the basics right and does not teach the fundamental skills of the 21st century: learning how to learn and how to solve real problems, of which there remain many?
The House of Lords and I both know well that the knowledge and technology exist to produce good AI, but the main drawback is a lack of understanding of what AI is and what it can do for the world at large. Is AI accessible to ordinary people, or is it really only the realm of PhDs and academia?
To solve this issue we need to start “educating” people about AI very early on: its ethics and how it will impact the world. Most importantly, those who need to be taught how AI works and the effects it has are the young generation, such as myself and those even younger. This is because we, and those coming after us, are going to be the ones who create AI and decide how artificial intelligence actually gets incorporated into our lives. We need to reinvent how this is taught in schools, so that everyone can familiarise themselves with this technology. A topic like this should not be examined like Maths and English at GCSE or A-level; instead it should be taught to all children (and not just during ICT or Computer Science classes) in a similar manner to personal development and PSHE. The aim is not to overwhelm students with a new and irrelevant curriculum, but to teach them something worthwhile that will definitely be useful in their futures, whether they go on to work in the technology field or simply use it in their day-to-day lives. The House of Lords did touch upon this topic in the report; however, I believe it should have been considered in more depth.
Thus, I strongly believe that the Government's role now is to take action in the educational system to modify how computer science, and AI in particular, is taught in schools for all. This can be done by changing the method of teaching, educating teachers, and making lessons more interactive, beneficial and innovative. The aim should be to give everyone aged 11–18 a chance to learn about the technology we use and will use in the future, but most importantly to be able to create technology, and AI in particular. This is what the Teens In AI initiative promotes and encourages through innovative events for teenagers, such as hackathons, bootcamps and accelerator programmes, where we get a chance to solve real problems with the help of top expert mentors in AI and to apply AI for the benefit of the wider world.