Are robots responsible for themselves?


A blog by Nicola Eschenburg, Global Head of Analyst Relations – Cyber Security at BAE Systems Applied Intelligence

(Or the need to worry about the right things)

I had the pleasure of being invited to join the Advisory Board of the Artificial Intelligence All Party Parliamentary Group (AI APPG) earlier this year. It was convened by Stephen Metcalfe (Conservative MP) and Lord Clement-Jones CBE (Liberal Democrat Peer) to consider the future and implications of artificial intelligence, and is expected to run for two years. This is the second in a blog series following the discussions of the Group.

In the ‘Flesh and Blood’ episodes of Star Trek Voyager[1], the crew meet a rebel group of holograms modified to make them more formidable prey for an alien species, the Hirogen. The holograms had exceeded their parameters and, tired of experiencing pain and death many times over, turned to guerrilla warfare to free themselves and other holograms across the galaxy. These episodes encapsulate the next thorny question that we asked ourselves as a group – what are the ethical and moral lines, and how do they apply to the systems themselves?

If we think about AI as ‘augmented’ rather than ‘artificial’ intelligence, designed to help people think and make better decisions based on better analysis of data, we are also drawing an important distinction between a ‘smart’ robot and a sentient one. The discussion expanded on this logic: given that we build the systems AI is based on, and given that they are designed for a specific purpose, AI does not – cannot – have genuine ‘agency’. AI may need to understand human emotions to do its job, but that does not by default make it human. Furthermore, if these systems are designed and built, they must be owned by someone – and as our recent history has shown through the slave trade, no human should ever be owned by another. Logic then says we should not want a robot to be sentient or human-like without truly considering how to treat it humanely.

The other extension of this argument says robots cannot take responsibility for their actions or decisions. Programmers choose their goals and behaviours, and the algorithms they create make the decisions. The challenge is that it is nigh impossible to think through or predict every outcome; as the WEF points out[2], imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer – by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended. Context is clearly critical to these decisions – but who is responsible for them if neither the robot nor its designer?

The leader of the rebel group of holograms, Iden, is completely thrown when they liberate three mining holograms who turn out to be mindless machines, designed for a very specific job and without the capacity for growth. Yet arguably, this is technology being used in the right way – to improve their owners’ lives without causing anyone else to suffer. Technological progress has the potential to vastly improve the lives of everyone, but responsible implementation will be critical – and answering questions around responsibility and ethics will play a major role. We felt companies are ultimately responsible for their technology and its outcomes, not the individual engineer, and that ethics and cybersecurity need to be built in from the ground up to avoid any abuse. Would you disagree?

Note: I will update this article with the official meeting summary once it is published.

[1] Star Trek: Voyager, ‘Flesh and Blood’, Parts I and II.

[2] World Economic Forum.

