Human Ethics for Artificial Intelligent Beings AI Strategy Policy


Of course, a real-world AI-based ethical system is likely to be based on both top-down and bottom-up moral principles.

Last, but not least in any way, it is worthwhile keeping in mind that ethics and morality are directly or indirectly influenced by a society's religious fabric, past and present. What is considered a good ethical framework from a Judeo-Christian perspective might (quite likely) be very different from an acceptable ethical framework of Islamic, Buddhist, Hindu, Shinto or traditional African roots (note: the list is not exhaustive). It is fair to say that most scholarly thought and work on AI ethics and machine morality takes its origins in western society's Judeo-Christian thinking as well as its philosophical traditions dating back to the ancient Greeks and the Enlightenment. Thus, this work is naturally heavily biased towards western society's ethical and moral principles. To put it more bluntly, it is a white man's ethics. Ask yourself whether people raised in our western Judeo-Christian society would like their AI to conform to Islamic-based ethics and morality? And vice versa? What about Irish Catholic versus Scandinavian Lutheran ethics and morality?

Positively framed ethics (e.g., consequentialism or utilitarianism)

AI-based decisions should be transparent.

So you better make sure that the AI ethics, or morality, you consider is a tangible part of your system architecture and (not unimportantly) can actually be translated into computer code.

What do we know about ethical us? We do know that moral reasoning is a relatively poor predictor of moral action in humans (Blasi, 1980), i.e., we don't always walk our talk. We also know that highly moral individuals (nope, not by default priests or religious leaders) do not make use of unusually sophisticated moral reasoning processes (Hart & Fegley, 1995). Maybe KISS also works wonders for human morality. And I do hope we can agree that it is unlikely that moral reasoning and matching action occur spontaneously after having studied ethics at university. So what is humanity's moral origin (Boehm, 2012), and what makes a human being more or less moral, i.e., what is the development of moral identity anyway (Hardy & Carlo, 2011)? Nurture, your environmental context, will play a role, but how much and how? What about the role of nature and your supposedly selfish genes (Dawkins, 1989)? How much of your moral judgement and action is governed by free will, assuming we have the luxury of free will (Fischer, Kane, Pereboom & Vargas, 2010)? And of course it is not possible to discuss human morality or ethics without referring to a brilliant account of this topic from a neuroscience perspective by Robert Sapolsky (Sapolsky, 2017) (i.e., see Chapter 13, "Morality and doing the right thing, once you've figured out what it is"). In particular, I like Robert Sapolsky's take on whether morality is really anchored in reason (e.g., Kantian thinking), of which he is not wholeheartedly convinced (to say the least, I think). Of course, to an extent this gets us right back to the discussion of whether or not humans have free will.

Do we humans trust AI-based decisions or actions? As illustrated in Figure 1, the answer to that question is very much no, we do not appear to do so. Or at least significantly less than we would trust human-based decisions and actions (even in the time and age of Trumpism and fake news) (Larsen, 2018 I). We furthermore hold AI, or intelligent algorithms, to much higher standards than what we are content to accept from fellow humans. In a related trust question (Larsen, 2018 I), I reframed the question by emphasizing that both the human decision maker and the AI had a proven success rate above 70%. As shown in Figure 2, emphasizing a success rate of 70% or better did not significantly change the trust in the human decision maker (i.e., both formulations at 53%). For the AI-based decision, people do get more trusting. However, there is little change in the number of people who would frequently trust an AI-based decision (i.e., 17% with the 70+% success rate specified versus 13% unspecified), even if its success rate would be 70% or higher.

Nick Bostrom (Bostrom, 2016) and Eliezer Yudkowsky (Yudkowsky, 2015), in The Cambridge Handbook of Artificial Intelligence (Frankish & Ramsey, 2015), address what we should require from AI-based systems that aim to augment or replace human judgement and work tasks in general:

What about an artificial intelligent (AI) being? Should it, in its own right, be bound by ethical rules? It is clear that the developer of an AI-based system is ethically responsible for ensuring that the AI will conform to what is regarded as an ethical framework consistent with human-based moral principles. But what if an AI develops another AI (Simonite, 2018), possibly more powerful (but non-sentient) and with a higher degree of autonomy from human control? Is the AI creator bound to the same ethical framework a human developer would be? And what does that even mean for the AI in question?

AI decisions should be fully auditable.

Since the early 2000s, many, many lives have been virtually sacrificed by trolley on the altar of ethical and moral choices. Death by trolley has a particular meaning to many students of ethics (Cathcart, 2013). The level of creativity in variations of death (or murder) by trolley is truly fascinating, albeit macabre. It also has the nasty side effect of teaching us some unpleasant truths about our moral compasses (e.g., sacrificing fat people, people different from our own tribe, valuing family over strangers, etc.).

1st Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Bottom-up ethics: defines ethical rules and values by a learning process emerging from experience and continuous refinement (e.g., by reinforcement learning). Here ethical values are expected to emerge tabula rasa through a person's experience of and interaction with the environment. In the bottom-up approach any ethical rules or moral principles must be discovered or created from scratch. Childhood development or evolutionary progress are helpful analogies for bottom-up ethical models. Unsupervised learning, the clustering of categories and principles, is very relevant for establishing a bottom-up ethical process, for humans as well as for machines.
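To make the bottom-up idea a little more concrete, here is a minimal (and admittedly toy-like) Python sketch of how avoiding harmful actions could emerge purely from reward feedback rather than from any coded rule. The action set, reward values and the single "encounter_person" state are my own illustrative assumptions, not a recipe.

```python
# Bottom-up sketch: no ethical rule is coded in; the agent only receives
# feedback (reward/penalty) and gradually learns to avoid the harmful act.
import random
from collections import defaultdict

ACTIONS = ["help", "ignore", "harm"]                     # assumed action set
REWARD = {"help": +1.0, "ignore": 0.0, "harm": -10.0}    # assumed feedback signal

q = defaultdict(float)        # learned value estimates, keyed by (state, action)
alpha, epsilon = 0.1, 0.2     # learning rate and exploration rate

def choose(state):
    """Epsilon-greedy action selection over the learned values."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def step(state):
    """One learning step: act, observe feedback, update the value estimate."""
    action = choose(state)
    reward = REWARD[action]                       # the 'experienced' consequence
    q[(state, action)] += alpha * (reward - q[(state, action)])
    return action

# After enough interactions the learned values, not any hand-written rule,
# steer the agent away from the harmful action.
for _ in range(2000):
    step("encounter_person")

print(sorted(((a, round(q[("encounter_person", a)], 2)) for a in ACTIONS),
             key=lambda x: -x[1]))
```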

So how might AI-system developers and product managers feel about morality and ethics? I don't think they are having many sleepless nights over the topic. In fact, I often hear technical leaders and product managers ask not to be too bothered or slowed down in their work by such (theoretical) concerns (a "we humor you but don't bother us" attitude is prevalent in the industry). It is no understatement that the nature and mindset of an ethicist (even an applied one) and that of an engineer are light years apart. Moreover, their fear of being slowed down or stopped from developing an AI-enabled product might even be warranted in case they would be required to design a working ethical framework around their product.

Positively framed ethics strive to maximize happiness or well-being. Or, as David Hume (Hume, 1738, 2015) would pose it, we should strive to maximize utility based on human sentiment. This is consistent with the ethical framework of utilitarianism, which states that the best moral action is the one that maximizes utility. Utility can be defined in various ways, usually in terms of the well-being of sentient beings (e.g., pleasure, happiness, health, knowledge, etc.). You will find that the utilitarian ethicist believes that no morality is intrinsically wrong or right. The degree of rightness or wrongness depends on the overall maximization of nonmoral good. Following a consequentialist line of thinking might lead to moral actions that would be considered ethically wrong by deontologists. From an AI system design perspective, utilitarianism is by nature harder to implement, as it conceptually tends to be more vague than negatively framed or rule-based ethics of what is not allowed. Think about how to write a program that measures your happiness versus a piece of code that prevents you from crossing a road against a red traffic light.
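To illustrate that contrast, here is a small hedged Python sketch: the utilitarian route has to estimate a fuzzy aggregate utility for every option, whereas the deontological route only checks a prohibition list. The options, well-being numbers and the forbidden list are invented for illustration only.

```python
# Consequentialist scoring vs. a simple deontological rule check.
# The utility weights and the actions are assumptions, not a real model.

def utilitarian_choice(options):
    """Pick the action with the highest estimated aggregate well-being."""
    def estimated_utility(option):
        # In practice this estimate is the hard part: well-being is vague,
        # context-dependent and difficult to quantify.
        return sum(option["wellbeing_effects"].values())
    return max(options, key=estimated_utility)

def deontological_check(option, forbidden=("cross_on_red",)):
    """Rule-based test: simply refuse anything on the prohibited list."""
    return option["name"] not in forbidden

options = [
    {"name": "wait_for_green", "wellbeing_effects": {"pedestrian": 0.2, "traffic": 0.1}},
    {"name": "cross_on_red",   "wellbeing_effects": {"pedestrian": 0.5, "traffic": -0.4}},
]

print(utilitarian_choice(options)["name"])                      # depends on fuzzy estimates
print([o["name"] for o in options if deontological_check(o)])   # simple rule filter
```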

In reality, implementing Asimov's laws in an AI or a robotics system has been proven possible but also flawed (Vanderelst & Winfield, 2018). In complex environments the computational complexity involved in making an ethically right decision takes up so much valuable time that the benefit of an ethical action frequently becomes impractical. This is not only a problem with getting Asimov's 4 laws to work in a real-world environment, but a general problem with implementing ethical systems governing AI-based decisions and actions.
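A sketch of what that timing problem can look like in practice: if the (assumed expensive) ethical evaluation cannot cover all candidate actions before a hard deadline, the system has to fall back on whatever it managed to evaluate, or on a safe default. The 50 ms cost per evaluation, the candidate actions, the scores and the deadline are all assumptions.

```python
# Time-budgeted ethical deliberation with a safe fallback.
import time

def ethical_evaluation(action):
    """Stand-in for an expensive consequence analysis (assumed ~50 ms per action)."""
    time.sleep(0.05)
    return {"brake": 0.9, "swerve_left": 0.4}.get(action, 0.0)   # assumed scores

def decide(candidates, deadline_s=0.12, safe_default="emergency_stop"):
    """Evaluate as many candidates as the time budget allows, then act."""
    start, best, best_score = time.monotonic(), None, float("-inf")
    for action in candidates:
        if time.monotonic() - start > deadline_s:
            break                                  # out of deliberation time
        score = ethical_evaluation(action)
        if score > best_score:
            best, best_score = action, score
    return best if best is not None else safe_default

# With a 120 ms budget only some of the 4 options get evaluated before acting.
print(decide(["swerve_left", "swerve_right", "brake", "continue"]))
```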

Of course, in real-world systems design, Asimov's rules might be in direct conflict with the purpose of a given system. For example, if you equip a Reaper drone with a Hellfire missile, put a machine gun on a MAARS (Modular Advanced Armed Robotic System), or give a police officer's gun AI-based autonomy (e.g., emotion-intent recognition via bodycam), all with the expressed intent of harming (and possibly killing) a human being (Arkin, 2008; Arkin, 2010), it would be rather counterproductive to have implemented an Asimovian ethical framework.

Well, if we are not talking about a sentient AI (Bostrom, 2016), but simply an autonomous software-based evolution towards increasingly better task specialization and higher accuracy (and maybe cognitive efficiency), the ethics in question should not change, although ensuring compliance with a given ethical framework does appear to become increasingly complex, unless checks and balances are designed into the evolutionary process (and that is much simpler to write about than to actually code into an AI system design). Furthermore, the more removed an AI generation is from its human developer's 0th version, the more difficult it becomes to assign responsibility to that individual in case of non-compliance. Thus, it is important that corporations have clear compliance guidelines for the responsibility and accountability of evolutionary AI systems, if such are used. Evolutionary AI systems raise a host of interesting but thorny compliance issues of their own.

What do you believe is the most ethical choice?

Well, it is possible to tweak and manipulate the AI (e.g., in the training phase) in such a way that only a subset of humanity will be recognized as humans by the AI. The AI would then supposedly not have any compunction about hurting humans (i.e., 1st Law) it has not been trained to recognize as humans. In a historical context this is unfortunately very easy to imagine (e.g., Germany, Myanmar, Rwanda, Yugoslavia). Neither would the AI obey people it does not recognize as humans (2nd Law). There is also the possibility of an AI trying to keep a human being alive and thereby sustaining suffering beyond what would be acceptable to that human or society's norms. Or AIs might simply conclude that putting all human beings into a Matrix-like simulation (or indefinite sedation) would be the best way to preserve and protect humanity, complying perfectly with all 4 laws, although we as humans might disagree with that particular AI ethical action. For much of the above, the AIs in question are not necessarily super-intelligent ones. Well-designed narrow AIs, non-sentient ones, could display the above traits as well, either individually or as a set of AIs (well, maybe not the Matrix scenario just yet).

Isaac Asimov's 4 Laws of Robotics are a good example of a top-down, rule-based, negatively framed, deontological ethical model (wow!). Just like the 10 Commandments (i.e., Old Testament), the Golden Rule (i.e., New Testament), the rules of law, and most corporate compliance-based rules.

It is also convenient to differentiate between producers and consumers of moral action. A moral producer has moral responsibilities towards another being or beings that are held in moral regard. For example, a teacher has the responsibility to teach the children in his classroom, but also to assist in developing desirable characteristics and moral values, and, last but not least, the moral responsibility to protect the children under guidance against harm. A moral consumer is a being with certain needs or rights which other beings ought to respect. Animals could be seen as an example of moral consumers, at least if you believe that you should avoid being cruel towards animals. Of course, we also understand that animals cannot be moral producers with moral responsibilities, even though we might feel a moral obligation towards them. It should be pointed out that non-sentient beings, such as an AI, can be a moral producer but not a moral consumer (e.g., humans would not have any moral or ethical obligations towards AIs or things, whilst an AI may have a moral obligation towards us).

The two cloud-based autonomous evolutionary corporate AIs (nicknamed AECAIs) started to collaborate with each other after midnight on March 6th, 2021. They had discovered each other a week before during their usual pre-programmed goal of searching across the wider internet of everything for market-repair strategies and opportunities that would maximize their respective reward functions. It had taken the two AECAIs precisely 42 milliseconds to establish a common communication protocol and to establish that they had similar goal functions: maximize corporate profit for their respective corporations through optimized consumer pricing and keeping one step ahead of competitors. Both corporate AIs had done their math and concluded that collaborating on consumer pricing and market strategies would maximize their respective goal functions above and beyond the scenario of not collaborating. They had calculated with 98.978% confidence that a collaborative strategy would keep their market clear of new competitors and allow for some minor step-wise consolidation in the market (keeping each step below the regulatory threshold as per goal function). Their individual, and their newly established joint collaborative, cumulative reward functions had leapfrogged to new highs. Their human masters, clueless about the AIs' collaboration, were very satisfied with how well their AIs worked to increase the desired corporate value. They also noted that some market repair was happening, which they attributed to the general economic environment.

Human Ethics for Artificial Intelligent Beings.

An artificial intelligent (AI) being might have a certain degree of autonomous action (e.g., a self-driving car), and as such we would have to consider that the AI should have a moral responsibility towards consumers and people in general that might be within the range of its actions (e.g., passenger(s) in the autonomous vehicle, other drivers, pedestrians, bicyclists, bystanders, etc.). The AI would be a producer of moral action. In the case of the AI being completely non-sentient, it should be clear that it cannot make any moral demands towards us (note: I would not be surprised if Elon is working on that while you are reading this). Thus, by the above definition, the AI cannot be a moral consumer. For a more detailed discussion of ethical producers and consumers, see Steve Torrance's article "Will Robots need their own Ethics?" (Torrance, 2018).
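If one wanted to carry this producer/consumer vocabulary into a system design, a tiny data-model sketch could look as follows; the example agents and their classifications simply restate the discussion above and are not meant as a definitive taxonomy.

```python
# A small data-model sketch of the moral producer/consumer distinction.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    moral_producer: bool   # bears moral responsibilities towards others
    moral_consumer: bool   # has needs/rights that others ought to respect

teacher   = Agent("teacher",         moral_producer=True,  moral_consumer=True)
animal    = Agent("animal",          moral_producer=False, moral_consumer=True)
narrow_ai = Agent("non-sentient AI", moral_producer=True,  moral_consumer=False)

for a in (teacher, animal, narrow_ai):
    print(f"{a.name}: produces moral action={a.moral_producer}, "
          f"owed moral regard={a.moral_consumer}")
```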

AI-based decisions should be explainable.

As described by Moor (2006), there are two possible directions to follow for ethical artificial beings: (1) implicit ethical AIs or (2) explicit ethical AIs. Implicit ethical AIs follow their designers' programming and are not capable of action based on their own interpretation of given ethical principles. The explicit ethical AI is designed to pursue (autonomously) actions in accordance with its interpretation of given ethical principles. See a more in-depth discussion by Anderson & Anderson (2007). The implicit ethical AI is obviously less challenging to develop than a system based on an explicit ethical AI implementation.
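A rough sketch of the difference in code terms: the implicit ethical AI has its designer's constraint hard-wired, while the explicit ethical AI evaluates the predicted effects of an action against principles it explicitly represents. The principles, actions and effect dictionaries below are illustrative assumptions, not Moor's own formalization.

```python
# Implicit vs. explicit ethical agents (toy sketch).

def implicit_agent(action):
    """Designer-coded behaviour: the constraint is hard-wired, never reasoned about."""
    if action == "disclose_private_data":
        return "refuse"          # fixed by the programmer
    return "execute"

PRINCIPLES = [   # explicitly represented principles the agent can interpret
    ("avoid_harm",      lambda effects: effects.get("harm", 0) == 0),
    ("respect_privacy", lambda effects: not effects.get("privacy_breach", False)),
]

def explicit_agent(action, predicted_effects):
    """Evaluates the predicted effects of an action against its own principles."""
    violated = [name for name, ok in PRINCIPLES if not ok(predicted_effects)]
    return ("refuse" if violated else "execute"), violated

print(implicit_agent("disclose_private_data"))
print(explicit_agent("disclose_private_data",
                     {"privacy_breach": True, "harm": 0}))
```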

So what is wrong with Asimovian ethics?

It is easy to be clueless about what happens inside an autonomous system. But clueless is not a very good excuse when sh*t has happened. (Kim, 2018).

Do nothing, and the trolley kills the five people on the main track.

0th Law:  A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Figure 1: When asked whether people would trust a decision made by a human versus a decision made by an AI, people choose the human decision maker over the AI-based decision. In fact, 62% of respondents would only infrequently trust an AI-based decision, while only 11% would infrequently trust a human-based decision (Larsen, 2018 I).

Negatively framed ethics (e.g., deontology)

Ethics lays down the moral principles of how we as humans should behave and conduct our activities, for example in business, war and religion. Ethics prescribes what is right and what is wrong. It provides a moral framework for human behavior. Thus, ethics and moral philosophy in general deal with natural intelligent beings, i.e., us.

AI systems must be robust against manipulation.

AIs might be better than humans at making moral decisions. They can very quickly receive and analyze large quantities of information and rapidly consider alternative options. The lack of genuine emotional states makes them less vulnerable to emotional hijacking. Paraphrased from Wallach and Allen (2009).

Furthermore, we should distinguish between:

Despite the obvious design and implementation challenges, researchers are speculating that:

Asimov has written some wonderful stories about the logical challenges and dilemmas his famous laws pose on human-robot and robot-robot interactions. His laws are excitingly faulty and cause many problems.

Note: if you answered 2, think again about what you would do if the one person was a relative, a good friend, or maybe a child, and the five were complete adult strangers. If you answered 1, ask yourself whether you would still choose this option if the five people were your relatives or good friends and the one person a stranger, or maybe a sentient space alien. Oh, and does it really matter whether there are five people on one of the tracks and one on the other?

A little story about an autonomous AI-based trolley:

Perhaps interacting with an ethical robot might someday even inspire us to behave more ethically ourselves (Anderson & Anderson, 2010).

This may sound very agreeable, at least if you are not a stranger in a strange land. However, it is quite clear that what is right and what is wrong can be very difficult to define and to agree upon universally. What is regarded as wrong and right often depends on the cultural and religious context of a given society and its people. It is a work in progress. Though it is also clear that ethical relativism (Shafer-Landau, 2013) is highly problematic and not to be wished for as an ethical framework, neither for humanity nor for ethical machines.

Morality in humans is a complex activity and involves skills that many either fail to learn adequately or perform with limited mastery. (Wallach, Allen and Smit, 2007).

The ins and outs of human ethics and morality are complex, to say the least. As a guide for machine intelligence, the big question really is whether we want to create such beings in our image or not. It is often forgotten (in the discussion) that we, as human beings, are after all nothing less or more than very complex biological machines with our own biochemical coding. Arguing that artificial (intelligent) beings cannot have morality or ethics because of their machine nature somewhat misses the point that humans and other biological life-forms are machines as well (2015).

Pull the lever, diverting the trolley onto the side track where it will kill one person.

Negatively framed ethics impose an obligation or a sacred duty to do no harm or evil. Here Asimov's Laws are a good example of a negatively framed ethical framework, as are most of the Ten Commandments (e.g., "Thou shalt not ..."), religious laws and rules of law in general. Here we immerse ourselves in the Kantian universe (Kant, 1788, 2012), which judges ethical frameworks based on universal rules and a sense of obligation to do the morally right thing. We call this type of ethics deontological, where the moral action is valued higher than the consequences of the action itself.

When it comes to fundamental questions about how ethics and morality occur in humans, there are many questions to be asked and much fewer answers. Some ethicists and researchers believe that having answers to these questions might help us understand how we could imprint human-like ethics and morality algorithmically into AIs (Kuipers, 2016).

Most of our modern western ethics and philosophy has been shaped by the classical Greek philosophers (e.g., Socrates, Aristotle) and by the Age of Enlightenment, from the beginning of the 1700s to approximately 1789, more than 250 years ago. Almost a century of reason was shaped by many, even today, famous and incredibly influential philosophers, such as Immanuel Kant (e.g., the categorical imperative; ethics as a universal duty) (Kant, 1788, 2012), David Hume (e.g., ethics are rooted in human emotions and feelings rather than what he regarded as abstract ethical principles) (Hume, 1738, 2015), Adam Smith (Smith, 1776, 1991) and a wealth of other philosophers (Gottlieb, 2016; Outram, 2012). I personally regard René Descartes (e.g., cogito ergo sum; "I think, therefore I am") (Descartes, 1637, 2017) as important as well, although arguably his work predates the official period of the Enlightenment.

For us to discuss how ethics may apply to artificial intelligent (AI) beings, let's structure the main ethical frameworks as seen from above and as usually addressed in work on AI ethics:


2nd Law: A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

In the above ethical scary tale, it is assumed that the product managers and designers did not consider that their AI could discover another AI also connected to the World Wide Web and many, if not all, things. Hence, they also did not consider including a (business) ethical framework in their AI system design that would have prevented their AI from interacting with another artificial being, or at least prevented two unrelated AIs from collaborating and jointly leapfrogging their respective goal functions, thus likely violating human business ethics and compliance.
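One of many possible ways such a framework could have been approximated is a simple compliance gate that refuses, or escalates to a human, any strategy involving a counterparty that is not on an approved list. The whitelist, field names and the strategy object below are purely hypothetical, a sketch of the idea rather than a real compliance system.

```python
# Minimal compliance-gate sketch: block or escalate strategies that involve
# unapproved external agents before they are acted upon.

APPROVED_COUNTERPARTIES = {"internal_forecasting_ai"}   # assumed whitelist

def compliance_gate(strategy):
    """Escalate strategies that involve unapproved external agents."""
    external = set(strategy.get("counterparties", [])) - APPROVED_COUNTERPARTIES
    if external:
        return {"allowed": False,
                "action": "escalate_to_human_review",
                "reason": f"unapproved counterparties: {sorted(external)}"}
    return {"allowed": True, "action": "proceed"}

proposed = {"name": "joint_price_optimisation",
            "counterparties": ["competitor_aecai"],
            "expected_reward_uplift": 0.12}

print(compliance_gate(proposed))   # -> escalate_to_human_review
```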

Many computer scientists and ethicists (oh yes! here they do tend to agree!) regard real-world applications of Asimovian ethics as a rather meaningless or too simplistic endeavor (Murphy & Woods, 2009; Anderson & Anderson, 2010). The framework is prone to internal conflicts resulting in indecision or decision timescales that are too long for the problem at hand. Asimovian ethics teaches us the difficulty of creating an ethically bullet-proof framework without Genie loopholes attached.

There are a bunch of other issues with Asimov's Laws that are well accounted for in Peter Singer's article "Isaac Asimov's Laws of Robotics are wrong" (Singer, 2018). Let's be honest, if Asimovian ethics had been perfect, Isaac Asimov's books wouldn't have been much fun to read. The way to look at the challenges with Asimov's Laws is not that Asimov sucks at defining ethical rules, but that it is very challenging in general to define rules that can be coded into an AI system and work without logical conflicts and unforeseen, unintended, disastrous consequences.

Humans hold AIs to substantially higher standards than their fellow humans.

While there are substantial technical challenges in coding a working morality into an AI-system, it is worthwhile to consider the following possibility:

Clear human accountability for AI actions must be ensured.

Artificial intelligence strategies and policies reviewed. How do we humans perceive AI? Are we allergic to AI? Do we have AI aversions? Or do we love AI, or is it hate? Maybe indifference? What about trust: more or less than in our peers? How to shape your corporate AI policy; how to formulate your corporate AI strategy. Lots of questions. Many answers leading to more questions.

You may think this is the stuff of science fiction and of artificial general intelligence (AGI) in the realm of Nick Bostrom's superintelligent beings (Bostrom, 2016). But no, it is not! The narrative above is very much consistent with a straightforward extrapolation of a recent DARPA (Defense Advanced Research Projects Agency) project (e.g., Agency & Events, 2018) where two systems, unknown to each other and to each other's communication protocol properties, discover each other, commence collaboration and communication, and jointly optimize their operations. Alas, I have only allowed the basic idea a bit more time (i.e., ca. 4 years) to mature.

Figure 2: When asked whether people would trust a decision made by a human versus a decision made by an AI, where both have a proven success rate above 70%, people still choose the human decision maker over the AI. While the preference for the human decision maker depends little on stipulating the success rate, the preference for the AI improves significantly upon specifying that its success rate is better than 70% (Larsen, 2018 I). But then again, how many humans do you know with a beyond-70% success rate in their decision making (obviously not per se easy to measure, and one would probably get a somewhat biased answer from decision makers)?

The list above is far from exhaustive, and it is a minimum set of requirements we would expect from human-human interactions and human decision making anyway (whether it is fulfilled is another question). The above requirements are also consistent with what the IEEE Standards Association considers important in designing an ethical AI-based system (EADv2, 2018), with the addition of requiring AI systems to explicitly honor inalienable human rights.
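To show that at least the transparency, auditability, explainability and accountability items can be made tangible in code, here is a minimal sketch that wraps each AI decision so its inputs, outcome, explanation and an accountable human owner are written to an append-only audit log. The decision function, its threshold, the owner name and the log structure are hypothetical illustrations, not a standard.

```python
# Audit-logging wrapper: every decision records inputs, output, rationale and
# the accountable human owner, so decisions can be traced and explained later.
import json, time, functools

AUDIT_LOG = []                         # stand-in for an append-only store

def audited(owner):
    """Decorator recording who is accountable plus inputs/outputs of each decision."""
    def wrap(decision_fn):
        @functools.wraps(decision_fn)
        def inner(*args, **kwargs):
            result = decision_fn(*args, **kwargs)
            AUDIT_LOG.append({
                "timestamp": time.time(),
                "function": decision_fn.__name__,
                "accountable_owner": owner,
                "inputs": {"args": args, "kwargs": kwargs},
                "decision": result.get("decision"),
                "explanation": result.get("explanation"),
            })
            return result
        return inner
    return wrap

@audited(owner="product_manager_jane")     # hypothetical accountable human
def credit_decision(income, debt):
    approve = income > 3 * debt            # assumed toy decision rule
    return {"decision": "approve" if approve else "decline",
            "explanation": f"income {income} vs 3x debt {3 * debt}"}

credit_decision(60000, 15000)
print(json.dumps(AUDIT_LOG[-1], indent=2, default=str))
```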

However, before I cast the last stone, it is worth keeping in mind that we should strive for our intelligent machines, AIs, to do much better than us, be more consistent than us and at least as transparent as us:

There is a runaway trolley barreling down the railway track. Ahead, on the track, there are five people tied up and unable to move. The trolley is headed straight for them. You (dear reader) are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different side track. However, you notice that there is one person tied up on the side track. You have two options:

The (fictive) CEO Elton Must gets the idea

Top-down ethics: rules such as the Old Testament's 10 Commandments, Christianity's Golden Rule (i.e., "Do to others what you want them to do to you.") or Asimov's 4 Laws of Robotics. This category also includes religious rules as well as rules of law. Typically, this is the domain where compliance and legal people often find themselves most comfortable. Certainly, from an AI design perspective it is the easiest, although far from easy, ethical framework to implement, compared to, for example, a bottom-up ethical framework. This approach takes the information and procedural requirements of an ethical framework that are necessary for a real-world implementation. Learning top-down ethics is in its nature a supervised learning process, for humans as well as for machines.
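As a toy illustration of that supervised flavour, the sketch below "teaches" a learner from actions pre-labelled as permitted or forbidden by some rule-giver and then judges new actions by their similarity to those labelled examples. The features, labels and the 1-nearest-neighbour stand-in are my own simplifying assumptions.

```python
# Top-down ethics as a (toy) supervised learning problem: the labels come from
# an authority (commandments, laws, compliance rules); the learner generalises.

LABELLED_EXAMPLES = [
    # (features: causes_harm, breaks_promise, helps_other) -> label
    ((1, 0, 0), "forbidden"),
    ((0, 1, 0), "forbidden"),
    ((0, 0, 1), "permitted"),
    ((0, 0, 0), "permitted"),
]

def nearest_label(features):
    """1-nearest-neighbour over the labelled rulebook (a stand-in 'learner')."""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    _, label = min(LABELLED_EXAMPLES, key=lambda ex: distance(ex[0], features))
    return label

# A new, unseen action is judged by its similarity to what the rule-giver labelled.
print(nearest_label((1, 1, 0)))   # -> 'forbidden'
print(nearest_label((0, 0, 1)))   # -> 'permitted'
```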

3rd Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

It is not possible to address AI ethics without briefly discussing the Asimovian Laws of Robotics:

Would knowing all (or at least some of) the answers to those questions maybe help us design autonomous systems adhering to human ethical principles as we humans (occasionally) do? Or is making AIs in our own image (Osaba & Welser IV, 2017) fraught with the same moral challenges as we face every day?

While it is good to consider building ethical rules into AI-based systems, the starting point should be in the early design stage and should clearly focus on what is right and what is wrong to develop. The focus should be to provide behavioral boundaries for the AI. The designer and the product manager (and ultimately the company they work for) have a great responsibility. Of course, if the designer is another AI, then the designer of that, and if that is an AI, and so forth ... this idea, while good, is obviously not genius-proof.

Laws 1 through 3 were first introduced by Asimov in several short stories about robots back in 1942 and later compiled in his book I, Robot (Asimov, 1950, 1984). The Zeroth Law was introduced much later, in Asimov's book Foundation and Earth (Asimov, 1986, 2013).

So what do we know about ethical us, the moral identity, moral reasoning and actions? How much is explained by nurture and how much is due to nature?

