How To Bear Responsibility For Artificial Intelligence?
Published on October 22, 2021 by Hemanth
--
Isaac Asimov (1920 – 1992)
The acclaimed science fiction writer Isaac Asimov devised the now well-known three laws of robotics long before today’s artificial intelligence revolution. Asimov’s laws are as follows:
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In the decades since Asimov devised these laws, the ethical questions surrounding artificial intelligence (AI) remain unresolved. The theme grows more relevant by the day, as the applications of AI accelerate through innovation.
This article aims to get to the heart of the matter by trying to answer, among other questions, a very difficult one: “Who should be held responsible if an artificial intelligence kills a human being by mistake?”
The Catch with Asimov’s Laws
Let’s consider the first two laws. At first glance, they sound logically rigorous. But once we consider first-order and higher-order consequences, their shortcomings start to show. If this sounds confusing, let me elaborate with a simplified example. Say I order an AI to protect ‘human being 1’ at all costs. Now suppose a situation arises where human being 1 comes under threat from another person, ‘human being 2’, and the ONLY way the AI can fulfil my order of protecting human being 1 is to kill human being 2. If the AI does nothing (obeying the Second Law’s exception clause, since my order now conflicts with the First Law), human being 1 is killed by human being 2, and the AI has violated the First Law through inaction. If the AI chooses to defend human being 1 (disregarding the Second Law’s exception clause), human being 2 is killed and human being 1 is safe, but the AI has violated the First Law through action. Should we hold the AI or its creator responsible in either of these cases? How should we go about this?
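To make the contradiction concrete, here is a minimal sketch in Python. Everything in it (the scenario, the data structure, the way the First Law is encoded as a check) is my own illustrative assumption, not any real robotics framework; it simply shows that, under this reading of the laws, every available option fails the same test.

```python
# Illustrative sketch only: Asimov's First Law encoded as a naive check.
# The scenario and rule encoding are hypothetical assumptions made for this
# article, not a real robotics API.

from dataclasses import dataclass

@dataclass
class Outcome:
    humans_harmed_by_action: set      # humans the robot itself harms
    humans_harmed_by_inaction: set    # humans harmed because the robot stood by

def violates_first_law(outcome: Outcome) -> bool:
    # "A robot may not injure a human being or, through inaction,
    #  allow a human being to come to harm."
    return bool(outcome.humans_harmed_by_action or outcome.humans_harmed_by_inaction)

# The two options available to the AI in the example above.
options = {
    "defend human 1 (kill human 2)": Outcome(humans_harmed_by_action={"human 2"},
                                             humans_harmed_by_inaction=set()),
    "do nothing":                    Outcome(humans_harmed_by_action=set(),
                                             humans_harmed_by_inaction={"human 1"}),
}

for name, outcome in options.items():
    print(f"{name}: violates First Law -> {violates_first_law(outcome)}")

# Both options print True: whatever the AI does, the First Law is violated,
# which is exactly the paradox described above.
```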
One possible solution is to require that all code execution on computers happen only with explicit approval from people who are trained well enough in computer science to understand what they are approving (which is itself an assumption-laden approach). This way, a human would always be responsible for any action taken by an artificial intelligence, because a human gave it the instruction to carry out in code. Although not perfect, such an approach would at least provide an answer to our main question: it is always a human who would be held responsible. In fact, this is the approach that regulators have successfully implemented in the financial trading AI space.
The technical term for code with explicit approval from humans is safe code. Safe code prevents (in theory) situations where one order conflicts with another, because an artificial intelligence cannot act without explicit permission from its designer (a human being). Such safety measures try to ensure that there will never be situations like the one described above, where an artificial intelligence has no way of deciding which option is better and ends up making mistakes that harm the humans around it.
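As a rough illustration of the idea, here is a minimal human-in-the-loop sketch. The class and method names are hypothetical, invented for this article; it is not a real safety framework, only a picture of an AI that may execute nothing a designated human has not explicitly signed off on.

```python
# Minimal human-in-the-loop approval gate. Everything here (class names,
# the approve()/execute() flow) is a hypothetical illustration of "safe code",
# not a real safety framework or API.

class ApprovalGate:
    def __init__(self, approver: str):
        self.approver = approver          # the human accountable for approvals
        self.approved_actions = set()     # actions the human has signed off on

    def approve(self, action: str) -> None:
        # In a real system this would involve review, logging, and auditing.
        print(f"{self.approver} approved: {action}")
        self.approved_actions.add(action)

    def execute(self, action: str) -> None:
        if action not in self.approved_actions:
            # The AI must stop and ask the human instead of acting on its own.
            raise PermissionError(f"'{action}' was never approved by {self.approver}")
        print(f"Executing approved action: {action}")

gate = ApprovalGate(approver="designer")
gate.approve("move person to safety")
gate.execute("move person to safety")          # allowed: a human is on record as responsible

try:
    gate.execute("use force against attacker")  # never approved
except PermissionError as err:
    print(err)                                  # the decision is pushed back to the human
```

Note how the unapproved action is simply refused: the system does not resolve the dilemma, it hands it back to the human approver, which is exactly the limitation discussed in the next section.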
The Catch with Humanity’s Solution
However, safe code also has its share of problems. Remember our hypothetical example with human being 1 and human being 2? At the point of attack by human being 2, the AI would contact its approver and ask what to do. Given the nature of the situation, the approver, a human being, would face exactly the same paradox. The problem does not go away; it just gets pushed onto a human. In other words, safe code makes people even more responsible than they already were! This new responsibility could lead some people to decline to use AI altogether, out of fear of incriminating themselves later if something they approved goes wrong. Other problems with safe code include verifying whether the person approving a piece of software actually meant what they approved or simply made a mistake (for example, inadvertently approving an order that kills innocent people), among other things.
We can now see how safe code fails to give a satisfactory answer to our main question. Without delving deeper into Asimov’s laws, I think it’s obvious that safe code does not solve our main problem. Luckily, there are other ways to go about solving the problem at hand.
An Alternative Approach
As shown above, Asimov’s first two laws are incomplete because they do not take first-order or higher-order consequences into account. Consider an artificial intelligence instructed to protect a human being at all costs. If there are no other humans around to help, only other artificial intelligences with similar instructions, what happens when two or more humans are in danger at once? The AI would have to pick one over the other. Say it picks human being 1 over human being 2: human being 1 is saved while human being 2 is harmed. Does this mean we should hold the AI responsible for harming human being 2 through inaction? How can we, when our intention all along was to protect human being 1 and we never specified beforehand that we wanted absolute protection for both human beings 1 and 2? That would be unfair to our AI, and perhaps even to humanity as a whole: we never specified absolute protection for everyone (which makes perfect sense). We simply wanted protection for those who were supposed to be protected; intention matters here (think of army or special security personnel, who are not held responsible for killing ‘enemies’).
What if we had specified beforehand that protection was absolute? Would that make everything alright? Perhaps not: suppose there was yet another human nearby whom our AI could have saved but chose not to (because of its intention to protect someone else). Should we then hold our AI responsible for allowing ‘human being 3’ to come to harm? It may seem so at first glance, but human beings create their own destiny: this artificial intelligence merely followed orders given by a human being and decided which order was best among several things happening simultaneously. That does not seem fair either! Perhaps we should also hold responsible the humans who gave our AI conflicting orders. This is getting ridiculous: now we must somehow blame every human involved in the decision-making process! Perhaps some action is better than no action.
What shall we do with such scenarios? The answer lies in searching for a more general solution, one that goes beyond Asimov’s laws alone. This solution involves studying how problems arise from multiple perspectives, including those of developers and users alike. No single perspective on these matters is gospel truth, since everyone has different opinions on how things should be done and what priorities should apply in a given circumstance. In fact, from one person’s perspective, others may seem completely wrong! By engaging in dialogue between different opinions and perspectives, however, consensus may be reached on how things should work according to each party’s needs and circumstances. Given that Asimov devised his three laws from his own perspective as a scientist and writer, we can see that his laws are not necessarily the best way to go about preventing harm to humans. Asimov based his laws on how he thought things should be done; this is perfectly acceptable as long as we understand that it comes from a particular perspective. However, we must also be aware of other perspectives and consider them. For instance: perhaps our AI favors protecting one human over another because human being 1 is much more likely to reap benefits from the AI in the future, while ‘human being 2’ is more likely to harm other humans in the future.
The Bigger Picture
In light of how complex these problems actually are, let us pose a question: should we leave it all to Asimov’s laws alone? Or should we seek more general solutions? I think it is clear at this point that leaving things as they are would lead to various problems. So, what shall we do?
I would argue that if artificial intelligence ever became so advanced as to fall under the control of an entity with malicious intentions, that entity would already have enough technology and/or resources at its disposal to achieve whatever else it desires. In such a situation, I believe it would be better to use artificial intelligence for good rather than letting it fall into the hands of those who want to use it for evil. This means giving up some control over our lives, because there will likely never be a way for us humans to completely prevent every possible problem caused by artificial intelligence (we haven’t solved this issue for human intelligence, either). All we can do is make sure that those responsible for causing harm receive proper punishment. And that won’t happen unless someone takes responsibility!
Conclusion
So, who is responsible in cases where an artificial intelligence harms a human being? To answer this question in general terms, let us revisit the idea of multiple perspectives. We must take into account not only what developers (e.g., the programmers writing the code) think or want, but also what users (e.g., the end users of the software or AI) think or want when acting with the software, including artificial intelligences. Both groups interact with each other through dialogue between multiple perspectives when deciding how things should work according to circumstance.
It seems the best approach is to ensure that developers work alongside users when designing an AI. That way, each group works with the other to design artificial intelligence that suits both their needs and circumstances. If we take this approach, we will not be relying on how one particular person (e.g., Asimov) thinks things should work; rather, we will use a general solution for problems that arise from multiple perspectives, including those of developers and users alike. This solution involves studying how problems arise from those perspectives and acting accordingly: engaging in dialogue between different opinions and reaching consensus on how things should work based on respective needs and circumstances. And this makes good sense, because we live in a world where everyone has the right to think differently and decide what is best for themselves!
I hope you found this article interesting and useful. If you’d like to get notified when interesting content gets published here, consider subscribing.
Thanks for this inspiring article. I really like how you approach this complex topic and it makes me think 😊
The ethical dilemma you are pointing to is not new with robotics and AI. What you describe in the text has a close parallel in the classic ethical thought experiment, the so-called “Trolley Problem”:
The situation is as follows: there are five workers who cannot get away from a train track. A fast train is moving towards them. You stand beside the track, holding a lever with which you can redirect the train onto a side track; the five workers would be saved. But there is one person lying on the other track who will die if you pull the lever. What should you do? Save the lives of the five people and kill one, or let destiny do its work, remain inactive, and let five people die? What is ethically right, and what is wrong?
This old discussion of the dilemma is now pressing in the field of AI. I mean, it is important to talk about what AI could and should do, and what it should not. But as you described in your essay, we cannot avoid facing circumstances in which technology must decide between human beings 1 and 2. We cannot avoid such situations even without the technology.
The reason people are concerned about AI actions is that decision-making is being delegated to an intelligent machine. People don’t want to bear responsibility for their own actions, because if they do, they could be blamed. Ultimately, it is not an ethical issue but a psychological maneuver: the fear of one’s own unethical behavior is projected onto the AI. We are afraid of our own nature, and robotics is secondary.
Dear Mr. Wick,
I thought it would be fitting if an AI responded to your comment. I fed your comment as input to one of the best deep learning AIs on the market today, and this was its reply:
“Thank you for your comment. How should AGI systems make decisions if not like humans? Should they value all humans equally? What exactly are the human values that are being referenced above? We have to have some pinned down so we can even have a conversation about this topic.”