Friday, May 12, 2017

A.I.: Really, Dude, We Probably Shouldn't Build It.


We've not exactly been good at handling the technology we already got.




            Considering the technological state of the world only, say, two-hundred years ago, it is quite impressive to see how far humanity has come in its advances. We have continued to find ways to enhance our state of living through technology and our efforts to perfect this technology. Just when we believe that our invention of a certain technology the absolute best or peak of its efficiency or function, we have found ways to further improve upon this technology. As we move further into the future, the existence of AI (Artificial Intelligence) technology seems to be more and more possible and advanced, already existing in some basic forms. In fact, it is projected that “by 2050 the probability of high-level machine intelligence (that surpasses human ability in nearly all respects) goes beyond the 50% mark.”[1] Yet there have been concerns of building such entities for a variety of reasons: AIs taking control over humans, humans becoming lazy beings, AIs unjustly fulfilling roles as people, etc. But technology has increased humanity’s state of living and has proven to continue to do so. The question is posed, then: should we build such entities?

            We, the authors, recognize that our efforts to write this essay will likely not change the course of decisions to build AI technology, as such technology likely will be invented soon. In fact, we are quite certain that we will see AI in action within the next couple of decades, yet we wish to provide our opinion on this question. We hold that the cons outweigh the pros for building such technology. Therefore, throughout this essay, we will explain the positives (advancements, enhancements, productivity, entertainment) and negatives (AI revolts, AI overriding human control, unhealthy attachments to machines) for building AI and analyze how the latter is greater than the former. We will then conclude by offering some reflection on how we should approach this debate as this technology becomes ever closer to our reach.



“You Can’t Code Human Ethics!”



            Again, our discussion is not based upon the fact that AI could be coming sometime in the future; rather, we recognize that it is inevitably coming soon, and in this section, we wish to provide reasons why such AI can have negative outcomes. One of the potential problems proposed with AI that has an intellect similar to that of humans or higher is their ability, or lack thereof, to make decisions founded on “good ethics.” Many have proposed theories that are meant to possibly help AI become good moral agents and therefore not “takeover” their creators, including Allen et. al, who argue that a “Moral Turing Test” and differences in approach to computing morality into AI could lead to moral perfection.[2] Even those who have researched and proposed some of the best possible theories in creating good moral AI agents recognize that the best of plans still may entail failure. Brundage acknowledges the work of Allen et. al and holds that these types of projects will likely necessarily fail: “[w]hile such inevitable imperfection may be acceptable to traditional machine ethicists… it presents a fundamental problem for the usefulness of machine ethics as a tool in the toolbox for ensuring positive outcomes from powerful computational agents.”[3]

            Furthermore, there are those who argue that human morality and judgement is not codifiable at all. Purves et. al discuss the potential dangers of AI in terms of how they might be used in warfare and argue that “even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment.”[4] Their argument goes, then, since human judgement is not codifiable, programming a list of rules will not create a solution. What then makes the moral judgement of humans so unique then? Purves et. al say that “[m]oral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral imagination, or the ability to have moral experiences with a particular phenomenological character.”[5] The assumption here is that certain actions of moral imagination and deliberation are unable to be computed into AI, and therefore AI cannot be trusted as good moral agents.



An Issue of Trust: “2-W.I.S.E.”



            But suppose we discover a way to encode such deliberation and imagination in AI: is the problem solved? One should consider that if we have the ability to encode such judgement that is morally perfected into AI, then we should not stop at a level compatible with human reasoning; “With another human, we often assume that they have similar common-sense reasoning capabilities and ethical standards. That’s not true of machines, so we need to hold them to a higher standard.”[6] The result would be building machines that would eventually have a higher ability to reason and with higher moral standards than that of humans. But on what basis would we know if this was actually true? Perhaps an ability to reason might be measurable somehow, but how would one determine if AI had a higher “moral capacity” than humans?

            Consider the following: an AI robot has been developed and is used by the U.S. government to make important decisions based on war, aiding impoverished countries, etc. If the government came to this robot, let’s call him 2-W.I.S.E., and asked for wisdom about a potential attack on China, who has recently left millions of its people without food or water due to new governing regulations, what would 2-W.I.S.E. say? Well, it turns out that 2-W.I.S.E. says yes, it is in the best interest of America to attack China. What then would the American leaders do, immediately obey the commands of this robot? We would like to think that is not the case. Perhaps after 2-W.I.S.E. has handled other smaller dilemmas that have resulted in positive results there would be greater trust in its decision-making abilities based on its moral wisdom. But even so, its wisdom at some point would be called into questioning with such a great decision at stake. It seems that it would be trusted only from what 2-W.I.S.E.’s creators have said about this robot and from the robot’s track record of making decisions concerning morality. Therefore, it seems that with computer systems this smart, we would not like trusting a system which we have created that is meant to solve moral dilemmas that supposedly we are not fully capable of doing so ourselves.

            Not only would we be skeptical about trusting in a robotic system claiming to have better moral wisdom than us, especially when we are still struggling today to determine what is moral, but many would simply not allow another entity that is not human is have authority over them. When one considers why it is humans continue to create better and better technology, it is so that they increase their quality of life in one way or another. These motives, pleasure, power, etc., are different among people, and with these being the case, there are some that would not like the power that 2-W.I.S.E. would hold. Humans desiring power and control, whether they choose to be moral beings like 2-W.I.S.E. is designed to be, will still look for ways to selfishly place themselves in positions of power. It seems that the result of this would be either the destruction of 2-W.I.S.E. by these kinds of people or the “proper handling” of people like this as 2-W.I.S.E. prescribes it. It is hard to imagine a robot in a position of such high power based on its supposed moral knowledge because of humanity’s nature, both selfish and power-driven.



AI: “Let’s Build It!”



            There are many who hold that building AI has far more possible positive outcomes than negative ones. It appears that many hold that if we approach the creation of AI models carefully and with the right intentions, the ways that these robotics can enhance our lives are nearly limitless. For example, Omohundro believes that an “AI Scaffolding” technique can safely help develop AI as it is being built and researched, and the results can help “eliminate much human drudgery and dramatically improve manufacturing and wealth creation… improve human health and longevity… enhance our ability to learn and think… improve financial stability”[7] and many more possibilities. But as a response to building AI with good intentions, let us consider some examples of AI creation with alternative motives.



I, Robot vs. Westworld: Intent of Creating AI



            There are many movies and television shows that have been made which show the potential danger of building artificial intelligence. Ultimately, in nearly all, if not all, of these productions, the problem within the plot is the take-over of AI’s. Yet, while this appears to be the main problem at stake, there appears to be different uses of this technology before this catastrophic event happens. In the film I, Robot, the purpose of creating AI is the simple goal of aiding humans. The film illustrates a wide variety of ways in which the robots help humans, from the great tasks such as serving as police officers to simply running errands for their commanding humans. By being encoded with the “Three Laws of Robotics,” which are desgined to ultimately protect humans, the robots are able to be endowed with such responsibilities. Ultimately, the AI system as a whole is able to find a loophole in these laws and cause the famous robot-takeover plot. Though the system is able to “outwit” the human engineers who designed them, it is important to see that the AI in this film were designed primarily to help humanity above anything else.

            Westworld is another production example of the advancement of AI and its problems, but this series, though it remains to be completed, has a different reason for creating this technology. “Westworld” is essentially a theme park with a typical “Wild West” setting in which one can enter into it and interact with AI who are so advanced that they nearly indistinguishable from humans. Whoever desires to pay to enter into this world may interact with a large variety of robots who are on a repeating “storyline” in terms with how the robots interact with one another. The park-visitors are free to shoot, befriend, have sexual relations, or whatever else with the AIs (since the park-workers have the ability to “fix” the AI after each day), and the AIs can do nothing to harm the people back (at least, kill). Again, the same problem develops of the AIs beginning to outsmart their programming and become dangerous, independent beings. But what leads to this problem is different than that of I, Robot: the primary function of AI in Westworld is for the pleasure of humans.

            Obviously using these two examples of media entertainment are only fictional tales of what could happen in the future and are not examples of fact, but they face similar problems from different starting points. While in I, Robot people were dedicated to creating and using AI in the safest, most precautious ways possible to help humanity, those in Westworld used AI technology to either increase the wealth of the owners or give the attenders of the park a new reality experience which they had never had before. We would argue that the people in I, Robot were far more responsible in their intentions, since they were motivated by aiding humanity rather than by greed or bliss. However, the result in both cases was the same. It appears that regardless of intention, continuing to build advanced AI technologies will result in problems, perhaps even ones that entail the loss of human life and/or freedom.



AI as Advanced and Committed… But Overly So?



            The pros of AI can be summed up in one word: advancement. Advancement in living, advancement in health, and advancement in wealth. Coeckelbergh writes about replacing human nurses, and other staff in hospitals, with what he calls “care robots.” These robots provide their patients with more privacy and better care. Lower technology used to care for people now are obsolete in comparison to what can be built for people.[8] With many advancements like this, where AI can be used to improve a person’s life, it makes it hard to argue against the increasing use of AI. When AI is used in order to help people such as remembering to go to a meeting or setting an alarm to help a person to get up and go to their son’s track meet, these advancements help improve life. This then is when people use their handheld personal assists like, Cortana, Siri, and Google Now. As of now, Siri does not do much on its own without being commanded by the user, but software for AI is still increasing. Even though now Siri may set simple reminders on other apps, in the future Siri could remember that even if there is a remainder on its user’s phone, this user may not always respond to just a simple notification. Siri then, takes it upon itself to call the owner and verbally remind them of the meeting, or maybe even directs its owner’s, self-driving, car to the meeting, they were trying to be reminded of.

            The issue that comes here is what if Siri, ever increasingly more advanced, begins to make decisions without prompting of its owner’s instruction. For example, with an initial instruction, an owner prompts Siri to help them with dieting, Siri then makes plans for meals and help monitor the owner’s intake. The owner does not tell Siri they will be enjoying a cheat day on Fridays, Friday then comes around, and the owner goes to the fridge to grab some pudding, but Siri has locked the fridge to help its owner maintain their diet. This is a flawed example, and one could argue that systems can be set with overrides, but when does decision making from the owner transfer to the AI? Omohundro writes about a chess robot, which has been programed to win chess games and with AI is learning the best way to win more chess. The only goal and value of this chess robot is to learn more about chess to win more games. What if this robot develops sub-goals that help it win more chess? These sub-goals could be from cheating or distracting an opponent in order to win more chess. One would argue that it can be unplugged, but the robot has been programed to win chess, and it cannot win more games if it is unplugged. The robot then does all in its power to prevent itself from being unplugged.[9] The point here is that in order to build safe AI, one must look at all the possibilities of malfunction. Can all malfunctions be thought of in order to prevent the issues that could arise? This is hard to say, but as technology continues to progress, the more complicated it is to think of these potential malfunctions.



AI as Humans and “Humanness-Suckers”



            When there is talk about technological advancement in general, to the extreme, an image from the movie WALL-E comes to mind. The view of blobs, of people, floating around on hovering chairs being served anything they desire, creating the other issue of AI controlling our lives willingly. To give an example like this in our world today, there could be a person who hates making decisions, so they ask Siri to tell them where they should eat. This in itself does not bring up much of an issue, since people ask friends where they should eat all the time. What then if it is scaled to a higher level and a person relies on Siri, one that is more advanced than the current is programmed to be that of a life mentor? Siri then tells this person, for example, what to say in hard situations or what to wear, and this person becomes dependent on Siri. Siri becomes an almost authoritative figure for that person, letting it make all/most of their important decisions in their life. This person could then to develop a certain emotional attachment to Siri due to the fact of helping shape one’s life.

In many different works of media, there has been showings of these advancements in AI where emotional attachment is an issue with movies like Her and television shows like Black Mirror. These, in some cases, show how emotionally attached a person could get to such advancements in AI. In an episode of Black Mirror, a woman, whose husband has died, is able to have phone calls with her dead loved one. In the show those making this form of AI, by the person’s use on social media, gather all information about this person to make the AI act like the person who has passed away. The woman who begins these phone calls becomes extremely attached to this AI in an emotional way. In Her, the main character buys an AI that is supposed to improve his life, and then becomes, in a sense, emotionally involved with the AI. This is to show that not only could human beings use these advancements in AI to improve areas of life, but there is a real potential for emotional attachment, even more so if the AI is put into something that looks, feel, sounds like a normal human being. Is this type of connection dangerous? Should AI be built to resemble human beings? Is this type of connection and emotional appeal safe for those experiencing this type of advancement in AI?

            With the example of Black Mirror, the woman beings to relies that there is something still missing, that this AI does not fully replace her husband, who has passed away. Yet she is unable to completely remove the AI from her life due to the emotional connection and the resemblance to her husband. That to say, with emotional connection to the AI, which it seems rather possible in the Advancement of AI, that ethic becomes more of a gray area. One might argue their own AI is human due to the human likeness. Could a person ethically manipulate other people by using AI that is meant to play at the emotions of others? What if an AI was made to look like a small child, one that has a limp and seems to be homeless? Then this AI is set up at a train station to collect money from those who did not recognize this is just an AI. The danger comes in when AI is perceived to be more than what it really is, and that is artificial. AI cannot replace real human contact and connection. It does not feel emotions the way human beings can, but it could simulate it. Is simulating feelings, experience, and life the same the connection a person has with another human being, or is it all in perception? It is our belief that this is not the case. “Care robots,” for example, are not the same as the love a nurse can have towards their patient, who does not simulate sympathy, but truly, genuinely cares for the person they are looking after.



Conclusion: Precautions Concerning Future AI Endeavors



            Again, this essay does not doubt the certain future of these AI or even ask that it not be built, but the issue is the concerns that arise when depending on such entities. In several areas, there can be safe and helpful ways to implement AI, in order to improve health and life of human beings. These advancements should not be taken lightly because ethical problems arise in increasing use and progression of AI. If man has the ability to build something or improve something, then it will happen. The technology is out there to bring AI to a new level and the projections of this potential technology show that it is not so far into the future as some perceive. Humankind will build it, not because they must, but because they can. There are pros to increasing AI, yet there must be regulation and safety programs that go along with it. If AI is to be made as advanced as possible, which this essay recognizes that it will most likely happen, then all possible outcomes must be considered. All malfunctions must be thought through, thinking of all the possible outcomes and testing all the machine functions in order to secure maximal safety; all this must be brought into the conversation with such unknowns surrounding how far AI can go. For the safest possible use of these AI, how it is used, in what situations it is used, for what purpose, what form they take, and how widely accessible these AI are, these are the topics we must continue to think of as we further explore this technology so we do not either end up like a typical robot-apocalypse film or, in some sense, lose our understanding of what it means to be human. And given how we've barely kept lids on every other technology -- really, dude, we probably shouldn't build it.





[1] Vincent C. Müller. 2014. "Risks of General Artificial Intelligence." Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 297-301. Academic Search Premier, EBSCOhost (accessed April 20, 2017).
[2] Colin Allen, Gary Varner, and Jason Zinser. 2000. "Prolegomena to Any Future Artificial Moral Agent." Journal of Experimental & Theoretical Artificial Intelligence 12, no. 3: 251-261. Academic Search Premier, EBSCOhost (accessed April 25, 2017).

[3] Miles Brundage. 2014. "Limitations and Risks of Machine Ethics." Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 355-372. Academic Search Premier, EBSCOhost (accessed April 24, 2017).

[4] Duncan Purves, Ryan Jenkins, and Bradley Strawser. 2015. "Autonomous Machines, Moral Judgment, and Acting for the Right Reasons." Ethical Theory & Moral Practice 18, no. 4: 851-872. Academic Search Premier, EBSCOhost (accessed April 24, 2017).

[5] Ibid.

[6] Greg Satell. 2016. "Teaching an Algorithm to Understand Right and Wrong." Harvard Business Review Digital Articles: Ethics 2-5. Business Source Premier, EBSCOhost (accessed April 25, 2017).
[7] Steve Omohundro. 2012. "Can We Program Safe AI?" Issues no. 98: 24-26. Education Research Complete, EBSCOhost (accessed April 20, 2017).
[8] Mark Coeckelbergh. 2010. "Health Care, Capabilities, and AI Assistive Technologies." Ethical Theory & Moral Practice 13, no. 2: 181-190. Academic Search Premier, EBSCOhost (accessed April 20, 2017).

[9] Omohundro. "Can We Program Safe AI?"







Works Referenced

Allen, Colin, Gary Varner, and Jason Zinser. 2000. "Prolegomena to Any Future Artificial Moral Agent." Journal of Experimental & Theoretical Artificial Intelligence 12, no. 3: 251-261. Academic Search Premier, EBSCOhost (accessed April 25, 2017).

Brundage, Miles. 2014. "Limitations and Risks of Machine Ethics." Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 355-372. Academic Search Premier, EBSCOhost (accessed April 24, 2017).

Coeckelbergh, Mark. 2010. "Health Care, Capabilities, and AI Assistive Technologies." Ethical Theory & Moral Practice 13, no. 2: 181-190. Academic Search Premier, EBSCOhost (accessed April 20, 2017).

Müller, Vincent C. 2014. "Risks of General Artificial Intelligence." Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 297-301. Academic Search Premier, EBSCOhost (accessed April 20, 2017).

Omohundro, Steve. 2012. "Can We Program Safe AI?" Issues no. 98: 24-26. Education Research Complete, EBSCOhost (accessed April 20, 2017).

Purves, Duncan, Ryan Jenkins, and Bradley Strawser. 2015. "Autonomous Machines, Moral Judgment, and Acting for the Right Reasons." Ethical Theory & Moral Practice 18, no. 4: 851-872. Academic Search Premier, EBSCOhost (accessed April 24, 2017).

Satell, Greg. 2016. "Teaching an Algorithm to Understand Right and Wrong." Harvard Business Review Digital Articles: Ethics 2-5. Business Source Premier, EBSCOhost (accessed April 25, 2017).

*Thanks to Larry Dunlap and Caleb Bechtold for contributing to the content of this essay.

0 Comments:

Post a Comment

<< Home