Friday, May 12, 2017

The envelope of your death, hand delivered.


Opening it wouldn't be a straightforward decision, and receiving it isn't a far-out possibility.

Stipulate that God, or perhaps one of his unimpeachable representatives, sends you a very special, albeit, sealed envelope.  You are aware of this special envelope's origin, and upon observing its delivery information, no doubt written in fine Gothic calligraphy, you see a subject line which reads, "The date of your death."  Mercifully, someone has informally written, "Don't worry, it's not necessarily that soon."  Perhaps you're even told that opening the envelope is completely optional. Question --

Would you open the envelope?

As I think about this, there are couple of issues that come immediately to mind. First, people might vary on whether such information about the exact time of one's death would contribute or detract from the meaning of life.

For example, perhaps an Oklahoman has always dreamed of traveling to Maine, but upon opening the envelope, finds that s/he will be dead within four hours. Here a life-long wish has been decisively subverted, as one cannot physically accomplish the desired goal within the short time left in one's life. But to vary this example a bit, suppose upon opening the envelope, s/he finds that death will occur a month later. S/he then decides to immediately pack up the family and take the trip. Here the parameters of a life-long wish would be just as decisively fulfilled, since what might have been delayed, and hence at risk for never occurring, comes instead to immediate fruition. In the first case, the knowledge detracts; but, in the second case, the knowledge contributes to life's meaning.

As another example, perhaps a father of three children has always worried of dying in middle age of heart-attack or stroke, since there is an overwhelming medical history of these events in his family. But upon opening the envelope, finds that he will live to the ripe old age of 91. Here such knowledge would greatly contribute to serenity of mind and, no doubt, to financial stability, since term or whole life insurance costs would seem to be of a far less pressing matter in the face of such knowledge. One can imagine varying this case for a physically-fit, even athletic young mother who, to her horror, discovers she will die two months hence. Such knowledge would hardly contribute to serenity of mind, though some might argue it's better to know than not, since some amount of planning for such a catastrophe is better than none. Here, in the first case of the father, the knowledge enhances; but, in the second case of the mother, the knowledge detracts from life's meaning.

These kind of cases cases where absolute knowledge is available about one's death are not as far-fetched as one might think, since for some diseases there are very well-established statistics for how long one can expect to live. If one contracts disease X, and studies have shown that less than 5% of patients survive for three years or more with X, and that the median survival time is one year, then one of God's most reliable representatives, Science, seemingly has already delivered plenty of bulk mail to many persons who already face terminal medical conditions. Indeed, many 90+-year-old people who carefully reason about their future, and who fortunately experience a relatively stable health situation, are already in this bulk-mail situation. So, receiving such mail is not as far-out a possibility as one might think.


O.



REFERENCES

[image] me

Labels: , , ,

A.I.: Really, Dude, We Probably Shouldn't Build It.


We've not exactly been good at handling the technology we already got.




            Considering the technological state of the world only, say, two-hundred years ago, it is quite impressive to see how far humanity has come in its advances. We have continued to find ways to enhance our state of living through technology and our efforts to perfect this technology. Just when we believe that our invention of a certain technology the absolute best or peak of its efficiency or function, we have found ways to further improve upon this technology. As we move further into the future, the existence of AI (Artificial Intelligence) technology seems to be more and more possible and advanced, already existing in some basic forms. In fact, it is projected that “by 2050 the probability of high-level machine intelligence (that surpasses human ability in nearly all respects) goes beyond the 50% mark.”[1] Yet there have been concerns of building such entities for a variety of reasons: AIs taking control over humans, humans becoming lazy beings, AIs unjustly fulfilling roles as people, etc. But technology has increased humanity’s state of living and has proven to continue to do so. The question is posed, then: should we build such entities?

            We, the authors, recognize that our efforts to write this essay will likely not change the course of decisions to build AI technology, as such technology likely will be invented soon. In fact, we are quite certain that we will see AI in action within the next couple of decades, yet we wish to provide our opinion on this question. We hold that the cons outweigh the pros for building such technology. Therefore, throughout this essay, we will explain the positives (advancements, enhancements, productivity, entertainment) and negatives (AI revolts, AI overriding human control, unhealthy attachments to machines) for building AI and analyze how the latter is greater than the former. We will then conclude by offering some reflection on how we should approach this debate as this technology becomes ever closer to our reach.



“You Can’t Code Human Ethics!”



            Again, our discussion is not based upon the fact that AI could be coming sometime in the future; rather, we recognize that it is inevitably coming soon, and in this section, we wish to provide reasons why such AI can have negative outcomes. One of the potential problems proposed with AI that has an intellect similar to that of humans or higher is their ability, or lack thereof, to make decisions founded on “good ethics.” Many have proposed theories that are meant to possibly help AI become good moral agents and therefore not “takeover” their creators, including Allen et. al, who argue that a “Moral Turing Test” and differences in approach to computing morality into AI could lead to moral perfection.[2] Even those who have researched and proposed some of the best possible theories in creating good moral AI agents recognize that the best of plans still may entail failure. Brundage acknowledges the work of Allen et. al and holds that these types of projects will likely necessarily fail: “[w]hile such inevitable imperfection may be acceptable to traditional machine ethicists… it presents a fundamental problem for the usefulness of machine ethics as a tool in the toolbox for ensuring positive outcomes from powerful computational agents.”[3]

            Furthermore, there are those who argue that human morality and judgement is not codifiable at all. Purves et. al discuss the potential dangers of AI in terms of how they might be used in warfare and argue that “even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment.”[4] Their argument goes, then, since human judgement is not codifiable, programming a list of rules will not create a solution. What then makes the moral judgement of humans so unique then? Purves et. al say that “[m]oral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral imagination, or the ability to have moral experiences with a particular phenomenological character.”[5] The assumption here is that certain actions of moral imagination and deliberation are unable to be computed into AI, and therefore AI cannot be trusted as good moral agents.



An Issue of Trust: “2-W.I.S.E.”



            But suppose we discover a way to encode such deliberation and imagination in AI: is the problem solved? One should consider that if we have the ability to encode such judgement that is morally perfected into AI, then we should not stop at a level compatible with human reasoning; “With another human, we often assume that they have similar common-sense reasoning capabilities and ethical standards. That’s not true of machines, so we need to hold them to a higher standard.”[6] The result would be building machines that would eventually have a higher ability to reason and with higher moral standards than that of humans. But on what basis would we know if this was actually true? Perhaps an ability to reason might be measurable somehow, but how would one determine if AI had a higher “moral capacity” than humans?

            Consider the following: an AI robot has been developed and is used by the U.S. government to make important decisions based on war, aiding impoverished countries, etc. If the government came to this robot, let’s call him 2-W.I.S.E., and asked for wisdom about a potential attack on China, who has recently left millions of its people without food or water due to new governing regulations, what would 2-W.I.S.E. say? Well, it turns out that 2-W.I.S.E. says yes, it is in the best interest of America to attack China. What then would the American leaders do, immediately obey the commands of this robot? We would like to think that is not the case. Perhaps after 2-W.I.S.E. has handled other smaller dilemmas that have resulted in positive results there would be greater trust in its decision-making abilities based on its moral wisdom. But even so, its wisdom at some point would be called into questioning with such a great decision at stake. It seems that it would be trusted only from what 2-W.I.S.E.’s creators have said about this robot and from the robot’s track record of making decisions concerning morality. Therefore, it seems that with computer systems this smart, we would not like trusting a system which we have created that is meant to solve moral dilemmas that supposedly we are not fully capable of doing so ourselves.

            Not only would we be skeptical about trusting in a robotic system claiming to have better moral wisdom than us, especially when we are still struggling today to determine what is moral, but many would simply not allow another entity that is not human is have authority over them. When one considers why it is humans continue to create better and better technology, it is so that they increase their quality of life in one way or another. These motives, pleasure, power, etc., are different among people, and with these being the case, there are some that would not like the power that 2-W.I.S.E. would hold. Humans desiring power and control, whether they choose to be moral beings like 2-W.I.S.E. is designed to be, will still look for ways to selfishly place themselves in positions of power. It seems that the result of this would be either the destruction of 2-W.I.S.E. by these kinds of people or the “proper handling” of people like this as 2-W.I.S.E. prescribes it. It is hard to imagine a robot in a position of such high power based on its supposed moral knowledge because of humanity’s nature, both selfish and power-driven.



AI: “Let’s Build It!”



            There are many who hold that building AI has far more possible positive outcomes than negative ones. It appears that many hold that if we approach the creation of AI models carefully and with the right intentions, the ways that these robotics can enhance our lives are nearly limitless. For example, Omohundro believes that an “AI Scaffolding” technique can safely help develop AI as it is being built and researched, and the results can help “eliminate much human drudgery and dramatically improve manufacturing and wealth creation… improve human health and longevity… enhance our ability to learn and think… improve financial stability”[7] and many more possibilities. But as a response to building AI with good intentions, let us consider some examples of AI creation with alternative motives.



I, Robot vs. Westworld: Intent of Creating AI



            There are many movies and television shows that have been made which show the potential danger of building artificial intelligence. Ultimately, in nearly all, if not all, of these productions, the problem within the plot is the take-over of AI’s. Yet, while this appears to be the main problem at stake, there appears to be different uses of this technology before this catastrophic event happens. In the film I, Robot, the purpose of creating AI is the simple goal of aiding humans. The film illustrates a wide variety of ways in which the robots help humans, from the great tasks such as serving as police officers to simply running errands for their commanding humans. By being encoded with the “Three Laws of Robotics,” which are desgined to ultimately protect humans, the robots are able to be endowed with such responsibilities. Ultimately, the AI system as a whole is able to find a loophole in these laws and cause the famous robot-takeover plot. Though the system is able to “outwit” the human engineers who designed them, it is important to see that the AI in this film were designed primarily to help humanity above anything else.

            Westworld is another production example of the advancement of AI and its problems, but this series, though it remains to be completed, has a different reason for creating this technology. “Westworld” is essentially a theme park with a typical “Wild West” setting in which one can enter into it and interact with AI who are so advanced that they nearly indistinguishable from humans. Whoever desires to pay to enter into this world may interact with a large variety of robots who are on a repeating “storyline” in terms with how the robots interact with one another. The park-visitors are free to shoot, befriend, have sexual relations, or whatever else with the AIs (since the park-workers have the ability to “fix” the AI after each day), and the AIs can do nothing to harm the people back (at least, kill). Again, the same problem develops of the AIs beginning to outsmart their programming and become dangerous, independent beings. But what leads to this problem is different than that of I, Robot: the primary function of AI in Westworld is for the pleasure of humans.

            Obviously using these two examples of media entertainment are only fictional tales of what could happen in the future and are not examples of fact, but they face similar problems from different starting points. While in I, Robot people were dedicated to creating and using AI in the safest, most precautious ways possible to help humanity, those in Westworld used AI technology to either increase the wealth of the owners or give the attenders of the park a new reality experience which they had never had before. We would argue that the people in I, Robot were far more responsible in their intentions, since they were motivated by aiding humanity rather than by greed or bliss. However, the result in both cases was the same. It appears that regardless of intention, continuing to build advanced AI technologies will result in problems, perhaps even ones that entail the loss of human life and/or freedom.



AI as Advanced and Committed… But Overly So?



            The pros of AI can be summed up in one word: advancement. Advancement in living, advancement in health, and advancement in wealth. Coeckelbergh writes about replacing human nurses, and other staff in hospitals, with what he calls “care robots.” These robots provide their patients with more privacy and better care. Lower technology used to care for people now are obsolete in comparison to what can be built for people.[8] With many advancements like this, where AI can be used to improve a person’s life, it makes it hard to argue against the increasing use of AI. When AI is used in order to help people such as remembering to go to a meeting or setting an alarm to help a person to get up and go to their son’s track meet, these advancements help improve life. This then is when people use their handheld personal assists like, Cortana, Siri, and Google Now. As of now, Siri does not do much on its own without being commanded by the user, but software for AI is still increasing. Even though now Siri may set simple reminders on other apps, in the future Siri could remember that even if there is a remainder on its user’s phone, this user may not always respond to just a simple notification. Siri then, takes it upon itself to call the owner and verbally remind them of the meeting, or maybe even directs its owner’s, self-driving, car to the meeting, they were trying to be reminded of.

            The issue that comes here is what if Siri, ever increasingly more advanced, begins to make decisions without prompting of its owner’s instruction. For example, with an initial instruction, an owner prompts Siri to help them with dieting, Siri then makes plans for meals and help monitor the owner’s intake. The owner does not tell Siri they will be enjoying a cheat day on Fridays, Friday then comes around, and the owner goes to the fridge to grab some pudding, but Siri has locked the fridge to help its owner maintain their diet. This is a flawed example, and one could argue that systems can be set with overrides, but when does decision making from the owner transfer to the AI? Omohundro writes about a chess robot, which has been programed to win chess games and with AI is learning the best way to win more chess. The only goal and value of this chess robot is to learn more about chess to win more games. What if this robot develops sub-goals that help it win more chess? These sub-goals could be from cheating or distracting an opponent in order to win more chess. One would argue that it can be unplugged, but the robot has been programed to win chess, and it cannot win more games if it is unplugged. The robot then does all in its power to prevent itself from being unplugged.[9] The point here is that in order to build safe AI, one must look at all the possibilities of malfunction. Can all malfunctions be thought of in order to prevent the issues that could arise? This is hard to say, but as technology continues to progress, the more complicated it is to think of these potential malfunctions.



AI as Humans and “Humanness-Suckers”



            When there is talk about technological advancement in general, to the extreme, an image from the movie WALL-E comes to mind. The view of blobs, of people, floating around on hovering chairs being served anything they desire, creating the other issue of AI controlling our lives willingly. To give an example like this in our world today, there could be a person who hates making decisions, so they ask Siri to tell them where they should eat. This in itself does not bring up much of an issue, since people ask friends where they should eat all the time. What then if it is scaled to a higher level and a person relies on Siri, one that is more advanced than the current is programmed to be that of a life mentor? Siri then tells this person, for example, what to say in hard situations or what to wear, and this person becomes dependent on Siri. Siri becomes an almost authoritative figure for that person, letting it make all/most of their important decisions in their life. This person could then to develop a certain emotional attachment to Siri due to the fact of helping shape one’s life.

In many different works of media, there has been showings of these advancements in AI where emotional attachment is an issue with movies like Her and television shows like Black Mirror. These, in some cases, show how emotionally attached a person could get to such advancements in AI. In an episode of Black Mirror, a woman, whose husband has died, is able to have phone calls with her dead loved one. In the show those making this form of AI, by the person’s use on social media, gather all information about this person to make the AI act like the person who has passed away. The woman who begins these phone calls becomes extremely attached to this AI in an emotional way. In Her, the main character buys an AI that is supposed to improve his life, and then becomes, in a sense, emotionally involved with the AI. This is to show that not only could human beings use these advancements in AI to improve areas of life, but there is a real potential for emotional attachment, even more so if the AI is put into something that looks, feel, sounds like a normal human being. Is this type of connection dangerous? Should AI be built to resemble human beings? Is this type of connection and emotional appeal safe for those experiencing this type of advancement in AI?

            With the example of Black Mirror, the woman beings to relies that there is something still missing, that this AI does not fully replace her husband, who has passed away. Yet she is unable to completely remove the AI from her life due to the emotional connection and the resemblance to her husband. That to say, with emotional connection to the AI, which it seems rather possible in the Advancement of AI, that ethic becomes more of a gray area. One might argue their own AI is human due to the human likeness. Could a person ethically manipulate other people by using AI that is meant to play at the emotions of others? What if an AI was made to look like a small child, one that has a limp and seems to be homeless? Then this AI is set up at a train station to collect money from those who did not recognize this is just an AI. The danger comes in when AI is perceived to be more than what it really is, and that is artificial. AI cannot replace real human contact and connection. It does not feel emotions the way human beings can, but it could simulate it. Is simulating feelings, experience, and life the same the connection a person has with another human being, or is it all in perception? It is our belief that this is not the case. “Care robots,” for example, are not the same as the love a nurse can have towards their patient, who does not simulate sympathy, but truly, genuinely cares for the person they are looking after.



Conclusion: Precautions Concerning Future AI Endeavors



            Again, this essay does not doubt the certain future of these AI or even ask that it not be built, but the issue is the concerns that arise when depending on such entities. In several areas, there can be safe and helpful ways to implement AI, in order to improve health and life of human beings. These advancements should not be taken lightly because ethical problems arise in increasing use and progression of AI. If man has the ability to build something or improve something, then it will happen. The technology is out there to bring AI to a new level and the projections of this potential technology show that it is not so far into the future as some perceive. Humankind will build it, not because they must, but because they can. There are pros to increasing AI, yet there must be regulation and safety programs that go along with it. If AI is to be made as advanced as possible, which this essay recognizes that it will most likely happen, then all possible outcomes must be considered. All malfunctions must be thought through, thinking of all the possible outcomes and testing all the machine functions in order to secure maximal safety; all this must be brought into the conversation with such unknowns surrounding how far AI can go. For the safest possible use of these AI, how it is used, in what situations it is used, for what purpose, what form they take, and how widely accessible these AI are, these are the topics we must continue to think of as we further explore this technology so we do not either end up like a typical robot-apocalypse film or, in some sense, lose our understanding of what it means to be human. And given how we've barely kept lids on every other technology -- really, dude, we probably shouldn't build it.





[1] Vincent C. Müller. 2014. "Risks of General Artificial Intelligence." Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 297-301. Academic Search Premier, EBSCOhost (accessed April 20, 2017).
[2] Colin Allen, Gary Varner, and Jason Zinser. 2000. "Prolegomena to Any Future Artificial Moral Agent." Journal of Experimental & Theoretical Artificial Intelligence 12, no. 3: 251-261. Academic Search Premier, EBSCOhost (accessed April 25, 2017).

[3] Miles Brundage. 2014. "Limitations and Risks of Machine Ethics." Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 355-372. Academic Search Premier, EBSCOhost (accessed April 24, 2017).

[4] Duncan Purves, Ryan Jenkins, and Bradley Strawser. 2015. "Autonomous Machines, Moral Judgment, and Acting for the Right Reasons." Ethical Theory & Moral Practice 18, no. 4: 851-872. Academic Search Premier, EBSCOhost (accessed April 24, 2017).

[5] Ibid.

[6] Greg Satell. 2016. "Teaching an Algorithm to Understand Right and Wrong." Harvard Business Review Digital Articles: Ethics 2-5. Business Source Premier, EBSCOhost (accessed April 25, 2017).
[7] Steve Omohundro. 2012. "Can We Program Safe AI?" Issues no. 98: 24-26. Education Research Complete, EBSCOhost (accessed April 20, 2017).
[8] Mark Coeckelbergh. 2010. "Health Care, Capabilities, and AI Assistive Technologies." Ethical Theory & Moral Practice 13, no. 2: 181-190. Academic Search Premier, EBSCOhost (accessed April 20, 2017).

[9] Omohundro. "Can We Program Safe AI?"







Works Referenced

Allen, Colin, Gary Varner, and Jason Zinser. 2000. "Prolegomena to Any Future Artificial Moral Agent." Journal of Experimental & Theoretical Artificial Intelligence 12, no. 3: 251-261. Academic Search Premier, EBSCOhost (accessed April 25, 2017).

Brundage, Miles. 2014. "Limitations and Risks of Machine Ethics." Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 355-372. Academic Search Premier, EBSCOhost (accessed April 24, 2017).

Coeckelbergh, Mark. 2010. "Health Care, Capabilities, and AI Assistive Technologies." Ethical Theory & Moral Practice 13, no. 2: 181-190. Academic Search Premier, EBSCOhost (accessed April 20, 2017).

Müller, Vincent C. 2014. "Risks of General Artificial Intelligence." Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 297-301. Academic Search Premier, EBSCOhost (accessed April 20, 2017).

Omohundro, Steve. 2012. "Can We Program Safe AI?" Issues no. 98: 24-26. Education Research Complete, EBSCOhost (accessed April 20, 2017).

Purves, Duncan, Ryan Jenkins, and Bradley Strawser. 2015. "Autonomous Machines, Moral Judgment, and Acting for the Right Reasons." Ethical Theory & Moral Practice 18, no. 4: 851-872. Academic Search Premier, EBSCOhost (accessed April 24, 2017).

Satell, Greg. 2016. "Teaching an Algorithm to Understand Right and Wrong." Harvard Business Review Digital Articles: Ethics 2-5. Business Source Premier, EBSCOhost (accessed April 25, 2017).

*Thanks to Larry Dunlap and Caleb Bechtold for contributing to the content of this essay.

On this matter of head transplants



Victor Frankenstein? Bah -- More Like Sergio Canavero!

Transplanting an organ- or body part- from one body into another body is arguably one of medicine’s most intriguing methods of intervention. There have been many cases when a medical doctor attempts to transplant a specific organ into another body to either save somebody’s life, or to help it last a little bit longer. For example, there was a patient who suffered from end-stage systemic right ventricular dysfunction, which basically means the patient’s heart was failing to pump the necessary amount of blood to the body[1], and was dying. Her doctor performed a heart transplant in 2002, and the patient recovered fairly quickly; in her last checkup in 2005, the doctor reported that she was, “doing well, and fully ambulatory.”[2]  However, one transplant that has never even been attempted, is a full head transplant; until now. Dr. Sergio Canavero believes that he can pull it off, and transplant an entire, fully-functioning human head onto another fully-functioning body with a not-so-functioning head.
This essay will be split into two parts. The first part will explain the procedure and provide reasons why these doctors believe that this is even possible. The second part of this essay will lay out the arguments against the procedure, the arguments for the procedure, and, whether the procedure is a success or a failure, what the procedure will entail for the future of ethics and philosophy. The argument of this essay will ultimately argue for the procedure, and why it can likely be done.  Nevertheless, it is best considered as unethical.


I. The Procedure


The Case of Valery Spiridonov
        The plausibility of a procedure of this nature to take place in the near future is rapidly becoming more and more likely. This past August, CBS reporter Ashley Welch presented breaking news of the an announcement made by Dr. Sergio Canavero, an Italian neurosurgeon. Dr. Canavero, with help from Dr. Xiaoping Ren, plans to become the first people to successfully undertake and complete a full head transplant.[3] The man who hopes to find himself on this seemingly fictional operating table is 30 year old Russian computer programmer Valery Spiridonov.[4] Spiridonov suffers from Werdnig-Hoffman Disease, a very rare genetic disorder which diminishes bone tissue, causes muscular atrophy, and kills off brain cells and spinal cells. Because of his condition, Spiridonov’s fully functional brain is stuck inside a shriveled and dysfunctional body, confined to a wheelchair, and capable only of functionality above the neck as well as hand and arm movement.[5] 
After hearing about Canavero’s work, Spiridonov contacted the Italian doctor in order to volunteer himself. Tired of being unable to care for himself, and of being deprived of many of the pleasures of a functional body, Spiridonov explained, “If I have a chance of full body replacement I will get rid of the limits and be more independent.”[6] He also added that, “Removing all the sick parts but the head would do a great job in my case … I couldn’t see any other way to treat myself.”[7] Although Spiridonov’s functionality and capabilities above the neck paired with his optimism and willingness may make him seem like the perfect candidate for this procedure, the whole thing is much more complicated than a simple Dr. Frankenstein cut and paste job. In fact it seems that the amount of detail which would go into the operation is more than we might have ever imagined.

The Proposal
        Of course, if the operation were to take place, the attention to detail would be so great and the margin for error so small, that the plan which Dr. Canavero carefully laid out in his proposal would need to be followed precisely.
Canavero explains that they would begin with finding a proper “young brain-dead male patient,” with which to transplant Spiridonov’s head.[8] The process would start with Spiridonov going under anesthesia and both bodies being lowered to a colder temperature in order to slow the rate of brain cell and spinal cell death caused by the lack of blood flow and oxygen to the head in the absence of connection to a functioning heart and lungs, giving the surgeon’s more time to work.[9] The surgeon team would then begin to remove both patient’s heads including a spinal cord cut, while simultaneously attaching the blood vessels of Spiridonov’s head to the new body through tubes.
Next, “A custom-made crane would be used to shift Spiridonov’s head – hanging by Velcro straps – onto the donor body’s neck” before Canavero and his team would then begin to reattach the spinal cord using a specially made blend of PEG, or polyethylene glycol, which Dr. Canavero and Dr. Ren have been perfecting through testing of animals in order to better promote the regrowth of spine cells.[10][11] Once the spine is reattached, the next step of the process would be to begin connecting nerves, airways, blood pathways, and muscles. Essentially, anything and everything that needs contact with the brain for its function, or that the brain needs for its own function must be operational and working through the neck area. Once this has been completed, the final touches on the exterior neck area will be finished, and Spiridonov’s new body will be left to recover.


The Time in a Coma
Canavero explains that in order to allow the nerves, muscles, and blood passageways to have the proper time and environment to heal without Spiridonov’s movement hindering or creating new problems, the newly combined body will be left for three-to-four weeks in a comatose state. During this time, Canavero says that the surgeon team would “implant electrodes” in the spinal cord in order to “stimulate” nerves, nerve endings, and muscles to nerve connections in order to prep the body for the consciousness of its operator. If the project is to be approved, Dr. Canavero and Dr. Ren estimate that although the surgery would need around 80 surgeons and would “cost tens of millions of dollars.” Canavero himself even predicted an overwhelmingly confident “90 percent plus” success rate for the transplant.[12] 
Now, of course, this all sounds great in theory. But there is so much more that goes into every aspect of this entire operation than even the step by step process listed above. From every single tiny muscle fiber, to every nerve ending, to every vein and artery, not to mention the reattachment of the spinal cord, the minute detail work required in this surgery is not only overwhelming, but is extremely important to consider if one wants to legitimately understand all that is at stake, and all that must go correctly.

Reflex Arc
        The nervous system is perhaps one of the most complicated system in the study of Anatomy. Not only are neurons only connected with a synapse, or gap, between two neurons, but the nerve cells communicate using an electric and chemical factor.[13] For example, if a person were to step on a nail, then the pain stimulation will start at the afferent sensory neurons in the foot, use different neurotransmitters- acetylcholine, nicotinic cholinergic receptors, adrenergic receptors etc.- to travel up the peripheral nervous system neurons of the leg. Once the pain signal reaches the spinal cord, it crosses over through the dorsal root to travel up to the brain, where the parietal lobe then perceives the signal as pain and sends a signal back down to the foot. However, this time the signal travels down the efferent motor neurons, and crosses over through the ventral root of the spinal cord to travel back down to the foot, telling the foot to jerk back or move.[14]

Spinal Cord Repair
        Now, that paragraph is a rough summary of what happens during a reflex arc of the foot; that was the shortened version. One of the current situations of the nervous system, is that the neurons have very little healing or regeneration on their own if damaged. Neurons actually form cysts, or gliosis at the site of injury if there is any type of tissue damage, blocking regeneration.[15] That is why there are so many people who are paralyzed are likely paralyzed for life. However, there has been research that points to a way in which the spinal cord can be stimulated with biodegradable polymer scaffolds (which can be compared to a template that can be used for regeneration[16]), which when coupled with various different growth-promoting cells- like Schwann cells, neural progenitor cells, and mesenchymal stem cells- and growth factors- like GDNF (Glial Cell Derived Neurotrophic Factor, which encodes a gene that makes up a superfamily of protein that stimulates growth)[17] to bridge the gap of the damage by promoting axonal regeneration.[18] This method has been tested in different rats and mice, and have been successful.
        As mentioned before, Dr. Canavero is planning on using a solution known as polyethylene glycol. Now, the use of PEG 3350 is a laxative, but this is not the same medication being used for the spinal cord repair. PEG has actually been tested, and successful in resealing the axonal membranes of the spinal cord in multiple in vitro and in vivo models. PEG has also shown to decrease the amount of mitochondria-derived oxidative stress on intracellular components. These results point to a theory that PEG can help repair mechanical injury in two different ways: to reseal the plasma membrane, and mitochondrial protection.[19] 

Spinal Cord Fusion
        The main problem that Dr. Canavero is facing is the spinal cord fusion. According to a TED Talk at which Dr. Canavero spoke, he states that “spinal cord injury releases 26,000 Newton’s of force. It’s unrecoverable.” He then went on to explain that HEAVEN; “Gemini” (the procedure) will be using an ultra-sharp blade that will only release 10 Newton’s of force. He believes that the spinal cord can be brought back to life using the ultra-sharp blade, coupled with the use of PEG.[20] 
        PEG is the key ingredient in this procedure. According to Dr. Canavero, PEG is the “magic” ingredient that can rejoin severed peripheral nerves without the need for “perfect” alignment of the nerve fibers. The first case that PEG was applied to the spinal cord dates back to 1986. He also stated that a group of German doctors severed the spinal cord of a rat, used PEG, and the rat was up and walking around within weeks. Dr. Canavero is convinced that HEAVEN is going to work without a doubt.
        This is the procedure, and why Dr. Canavero believes that it will work. However, we can sit and talk about if it will work in theory, but until it happens, we will never know if the patient will survive or not. Now that the possibility question has been answered, an even greater question must be asked. Should this procedure take place? That is the central focus of the second half of this essay.


II. The Ethical Dilemmas of the Procedure
        The procedure has caused an uproar in the ethics and medicine field, because a good number of people do not believe the procedure should be done. Even if the procedure seems to be possible, they do not believe that a head transplant is something that humans should be able to do. Stephen Latham, a bioethicist at Yale University is not even convinced that the procedure will work, for he states while laughing in an interview, “If you’d have the technology to attach spinal columns, you’d have certainly developed the technology to repair somebody’s broken spinal column.”[21] There are a good number of students on campus who would also agree.
        However, there are also arguments for the procedure. Dr. Canavero is of course the loudest voice in trying to convince the public that the transplant is ethical, and that there is nothing immoral about the procedure. There are some scientists who would support Dr. Canavero’s claim that there is nothing immoral about the procedure, and that even if we deem it unethical in the present, humans will always move closer and closer to God-like control of health.[22] 


Arguments Against the Procedure
        As has been mentioned, the proposal of this first ever head transplant has received a fair amount of negative attention from ethicists, scientists, doctors, and even average citizens alike. From scientific and medical reasoning to ethical understandings of life, and the philosophical understanding of the significance of an individual’s brain, countless people have come forth expressing their disapproval in the operation since Canavero’s initial announcement of intent in early 2015.


It Can Not Be Done


Spinal Difficulties
        The first genre of arguments against Dr. Canavero’s work with Spiridonov is the large amount of critique that the operation has received for the legitimacy of the medical science involved. The most specific portion of the operation which seems to garner the most criticism is the reconnection process for the spinal cord. Scientist are arguing that our capability to reattach a spinal cord just simply is not a reality yet. Dr. Eduardo Rodriguez, the doctor behind the world’s most complete facial reconstruction surgery on Richard Norris, is among those in disbelief. After more than 16 years of experience in plastic surgery and spinal research, Rodriguez explains, “I don’t think it’s possible,” and that he does not believe our research is quite there yet. Because we still have yet to find full success refusing the spinal cord of an injured human, Rodriguez tells LiveScience, that combining two separate sections of spinal cord from two completely different individuals only poses a more difficult challenge.
Rodriguez feels that the central nervous system of the human body is far more complicated than Canavero is implying. Even the surgery which Rodriquez and his team completed in 2012 for a retired firefighter and burn victim, although being labelled as the most comprehensive and functional facial transplant surgery done to date, did not restore 100% functionality, feeling, and maneuverability because of the difficulty in restoring nerve connections.[23] 


Organ Preserving
On top of the difficulty of attaching the spinal cord, Dr. Canavero has also caught much controversy in his treatment of the head as a basic organ. Dr. Canavero’s plan for preserving the head after its detachment from the body is to lower its temperature to between 10-15 degrees Celsius, and although this is typical protocol for organs such as kidneys, livers and hearts, according to Business Insider's Erin Brodwin, the head proves to be a much more difficult and complex organ. The head poses all sorts of new issues that have never been fully handled before, including the preservation of “not just the brain, but your eyes, ears, nose, mouth, and skin, as well as two separate gland systems: the pituitary, which controls the hormones that circulate throughout the body, and the salivary, which are responsible for producing saliva.”
This of course is assuming that the head is able to make it to the preservation process. Successfully detaching the head itself is a major endeavor. After decapitation, a major issue that will present itself will be the sudden loss of blood pressure and oxygen rich blood, which because the effects of a lack of oxygen to the brain can be so serious and so quick, this may pose to be a huge obstacle. Another challenge which may prove to be impassible for Dr. Canavero and his team is the battle that the new body will wage against the donor head.


Immune System Complications and Time
When a body comes into contact with a foreign substance, whether that be a bacteria, a virus, a new organ or, in this case, a new head, the body’s immune system will do everything in its power to protect the body from this foreign substance, and although organs are rejected by bodies during and after surgery all of the time, the repercussions of the body rejecting its own head have yet to be seen, but could prove to be fatal for Spiridonov. The icing on the cake for the whole operation, is the fact that Dr. Canavero himself has predicted that he will only have around an hour for the operation to take place.[24] Even as his history making, and science shattering predictions stood, the odds did not seem to be in Canavero’s favor, however such a small time limit upon such an intricate and precise operation happening in under an hour is one final stipulation which puts the entire procedure outside of the faith and believe a large majority of the science and medical community.[25]

It Should Not Be Done
        The second main argument against the procedure comes from the ethical side of the conversation. One specific person who has really taken it upon himself to interact with and debunk the entire operation for the sake of ethics is bioethicist and New York University professor Arthur Caplan. Caplan exclaims that “I think it's ludicrously stupid, [and you should] ... be charged with homicide if you chop somebody's head off before they're dead.” Another argument that has been used heavily comes from the question of donor versus receiver. Because it is the deceased individual who is contributing more for the operation, the question arises: Who is the donor in this operation and who is the receiver? If Spiridonov is not designated as the receiver, then does this mean that the product will assume the identity of the body, regardless of the consciousness mentality of the person following the surgery.
Many believe that the identity of an individual is heavily based in the physical body of said individual, however this procedure seems to beg for an exception from this.[26] Along this argument, still others have a hard time accepting that this boundary of personhood is being violated for someone who is not in a critical condition. Spiridonov is not facing life or death, this is simply a fact of the matter. Because of this, many feel that his request for the operation is not one that should be granted. Not only are we using a good healthy body which could have likely gone out in different organs to individuals in more dire need for these parts, but this also has the potential to open up more and more interest in being involved in scientific achievement and scientific history in the future. Granted, the people involved are obviously signing off on the procedures, but is it ethical to allow people to sign themselves up for what is essentially scientific experimentation, if A. it is not necessary to their survival, and B. it becomes a waste of time, money, energy and other resources which could have been spent on patients in critical condition?[27]


Arguments For the Procedure
        Dr. Canavero loves this procedure; one could assume that it is “his baby.” He has been working and planning for this procedure for thirty years, and he believes that there is nothing unethical about it, and he has assembled a team of 150 doctors (80 of them being surgeons) and nurses to back him up. He and Spiridonov agree that there is nothing wrong with heart transplants, and doctors do those every day. Spiridonov went on to say, “I think it's the normal way of technology to evolve. It would be strange to stop at this point when the neurosurgery is ready to take the next step."[28]
        One argument for the procedure is as follows: if the head donor is cognitively willing to participate in the procedure and the body donor’s family willingly gives up custody of their brain-dead patient, then the procedure also has no reason to be refused. The head donor is cognitively willing to participate in the procedure. The only thing that Dr. Canavero is waiting on, is the body donor.[29] Therefore, contingent on the donation and the acceptance of a body donor, there is no reason the procedure should be refused. This argument is one to which Dr. Canavero and Spiridonov are partial, and a lot of background arguers would agree.

Lack of Arguments For
        One of the downfalls of the argument for the procedure, is that there is a substantial lack of articles that argue for the head transplant. This is troubling, and could be due to either the lack of support of the procedure, or the people who do support the procedure are too scared of scrutiny to speak out. This could be a potential problem for other issues as well. If there are topics that people are too afraid of scrutiny that they do not speak out, then the issues cannot be resolved as efficiently as possible. This is likely not the case in this predicament, but there must be enough support behind this procedure for it to be as far along in the process as it is.

III. The Future for Ethics and Philosophy

        Regardless of the success or failure of the procedure, there will be a substantial change in the subject of philosophy. One subject in particular is the idea of the “soul.” Famous philosophers such as Plato and Descartes believed that the mind is detached from the body, and is its own separated “body” to phrase it simply. On the other hand, famous philosophers such as Locke, Berkeley, Hume- rejected the idea of the soul, but for still had this view-, and Kant all believed the mind is a part of the brain. Both of these theories will be rocked depending on this procedure. If the procedure is a failure, then we only have to wait until the next, further research is needed to accomplish this feat.

Mind-Body Separation
        For example, hypothetically, the procedure is a success, and Spiridonov wakes up in this new body and he has full functionality of his new and improved 2.0 body. However, in this example, Spiridonov wakes up as if he is an animal, and has no human cognition. In this hypothetical situation, it could be argued that the mind of the body and the mind of the head would be fighting over the possession of the whole organism. So then, maybe the rationalists would be correct in their conclusion that the mind is something apart from the body. However, this is an improbable situation, because there has been evidence to back up that the mind is indeed a part of the brain, and philosophers such as William James seemingly agree with this view.[30]


“Soul-Mind” or Mind just part of the Body
        Another hypothetical situation could be this: the procedure is a success, and Spiridonov wakes up in this new body and he has full functionality of his new and improved 2.0 body. However, Spiridonov’s mind works perfectly, and he goes on to live a happy life frolicking through physical therapy and psychological therapy. This situation would point to the conclusion of the empiricists, and occasional rationalists, view that the mind is only a part of the brain.


The Future of Ethics
        The future of ethics is also brought into question, and the possibilities that are being brought up. If this surgery is deemed ethical, given that it is successful, the future of the ethics committee of medicine will be forever changed. If a head transplant is granted and successful, then what else will be deemed appropriate? A brain transplant is even more invasive and would have even more of an uproar, because that is taking the “mind” in this example, and putting it in a completely different body. Ethics will forever be changed, because there will be an entirely different mindset when approaching these issues.


Conclusion
         After careful analysis of how the procedure is done, and why the doctors believe it will work, the arguments for the procedure, and the arguments against the procedure, one may conclude that the transplantation of a head could be done with current research, even if success is unlikely; but, it should not be attempted. First, the initial cut could potentially kill the patient involved, and then there would be a waste of $10 million and a perfectly healthy body donor. Also, given that if Spiridonov survives the procedure, there is no guarantee that the whole process will be a success. One possibility could be that if Spiridonov wakes up, he could be worse off than he is now. He has control of his hands, so at least he can type. However, there is a possibility that he could not ever regain the ability to move from the neck down at all, and he would be trapped in a body that is not his, and that he cannot move at all.
        While the research done with PEG is definitely convincing, it was only effective in re-growing the same spinal cord. There were no studies done on trying to use PEG to rejoin two different spinal cords from two different bodies. If there have not been studies to join two different spinal cords together, then there is a possibility that the surgery could not even be successful. Therefore, there is a possibility that the surgery could not even be successful. There is a possibility that the team of 150 doctors and nurses could be responsible for killing a man who wanted to die; also known as euthanasia. If the reader of this essay would replace “head transplant” with “euthanasia” then there would be a multitude of people against the procedure.[31]

[1] Hindawi. "Right Ventricular Dysfunction and Failure in Chronic Pressure Overload." Cardiology Research and Practice. Hindawi Publishing Corporation, 23 Mar. 2011. Web. 02 May 2017.
[2] MailOnline, Richard Gray for. "EXCLUSIVE: Doctor planning world's first head transplant says he is preparing for his 'Frankenstein' surgery by REANIMATING human corpses ." Daily Mail Online. Associated Newspapers, 21 Sept. 2016. Web. 02 May 2017.
[3] Welch, Ashley. "Russian man volunteers for first human head transplant." CBS News. CBS Interactive, 29 Aug. 2016. Web. 02 May 2017.
[4] MailOnline, Richard Gray for. "EXCLUSIVE: Doctor planning world's first head transplant says he is preparing for his 'Frankenstein' surgery by REANIMATING human corpses ." Daily Mail Online. Associated Newspapers, 21 Sept. 2016. Web. 02 May 2017.
[5] Welch. Web. 02 May 2017.
[6] "Russian Man Set for World's First Head Transplant." The Telegraph. Telegraph Media Group, 20 Sept. 2016. Web. 02 May 2017.
[7] Welch. Web. 02 May 2017.
[8] Welch. Web. 02 May 2017.
[9] "Russian Man Set for World's First Head Transplant." The Telegraph. Telegraph Media Group, 20 Sept. 2016. Web. 02 May 2017.
[10] Welch. Web. 02 May 2017.
[11] MailOnline, Richard Gray for. "EXCLUSIVE: Doctor planning world's first head transplant says he is preparing for his 'Frankenstein' surgery by REANIMATING human corpses ." Daily Mail Online. Associated Newspapers, 21 Sept. 2016. Web. 02 May 2017.
[12] Welch. Web. 02 May 2017.
[13] Fox, Stuart Ira. Human Physiology. 13th ed. New York: McGraw-Hill, 2013. Print.
[14] O’Mealy, Gary. 12 February 2017. Class Lecture.
[15] "Neuroregeneration - Center for Regenerative Medicine - Mayo Clinic Research." Mayo Clinic. N.p., n.d. Web. 02 May 2017.
[16] Hsu, Shan-hui, Kun-Che Hung, and Cheng-Wei Chen. "Biodegradable polymer scaffolds." Journal of Materials Chemistry B. The Royal Society of Chemistry, 26 Oct. 2016. Web. 02 May 2017.
[17] "GDNF glial cell derived neurotrophic factor [Homo sapiens (human)] - Gene - NCBI." National Center for Biotechnology Information. U.S. National Library of Medicine, 18 Apr. 2017. Web. 02 May 2017.
[18] "Neuroregeneration - Center for Regenerative Medicine - Mayo Clinic Research." Mayo Clinic. N.p., n.d. Web. 02 May 2017.
[19] Shi, R. "Polyethylene glycol repairs membrane damage and enhances functional recovery: a tissue engineering approach to spinal cord injury." Neuroscience bulletin. U.S. National Library of Medicine, 29 Aug. 2013. Web. 02 May 2017.
[20] TEDxTalks. YouTube. YouTube, 18 Sept. 2015. Web. 02 May 2017.
[21] "Would a human head transplant be ethical?" BioCentre. N.p., n.d. Web. 02 May 2017.
[22] Raza, Malika, Hasnain Abbas Dharamshi, Syed Zohaib Ahsan, Zehra Naqvi, Tahira Naqvi, Ali Abbas Mohsin Ali, and Jamaluddin Malik Abbas. "The Future of Ethics in Medicine." Iranian Red Crescent Medical Journal. Kowsar, 18 June 2016. Web. 02 May 2017.
[23] Lewis, Tanya. "Why Head Transplants Won't Happen Any Time Soon."LiveScience. Business Insider Inc., 27 Apr. 2015. Web. 02 May 2017.
[24] MailOnline, Richard Gray for. "EXCLUSIVE: Doctor planning world's first head transplant says he is preparing for his 'Frankenstein' surgery by REANIMATING human corpses ." Daily Mail Online. Associated Newspapers, 21 Sept. 2016. Web. 02 May 2017.
[25] Brodwin, Erin. "In 2017, a surgeon wants to perform the world's first head transplant - here are his biggest obstacles." Business Insider. Business Insider, 27 Apr. 2015. Web. 02 May 2017.
[26] Lewis, Tanya. "Why Head Transplants Won't Happen Any Time Soon."LiveScience. Business Insider Inc., 27 Apr. 2015. Web. 02 May 2017.
[27] "Would a human head transplant be ethical?" BioCentre. N.p., n.d. Web. 02 May 2017.
[28] UK, The Week. "Head transplant: how would it work and is it ethical?" The Week UK. The Week UK, 24 Apr. 2015. Web. 02 May 2017.
[29] UK, The Week. Web. 02 May 2017
[30] Pomerleau, Wayne. "William James (1842—1910)." Internet Encyclopedia of Philosophy. N.p., n.d. Web. 02 May 2017.
[31] Thanks to Logan Luker and Russell Frisbee for content related to this blog post.





Works Cited.
Brodwin, Erin. "In 2017, a surgeon wants to perform the world's first head transplant - here are his biggest obstacles." Business Insider. Business Insider, 27 Apr. 2015. Web. 02 May 2017.
Dittmer, Joel. "Applied Ethics." Internet Encyclopedia of Philosophy. N.p., n.d. Web. 02 May 2017.
Fox, Stuart Ira. Human Physiology. 13th ed. New York: McGraw-Hill, 2013. Print.
"GDNF glial cell derived neurotrophic factor [Homo sapiens (human)] - Gene - NCBI." National Center for Biotechnology Information. U.S. National Library of Medicine, 18 Apr. 2017. Web. 02 May 2017.
Hindawi. "Right Ventricular Dysfunction and Failure in Chronic Pressure Overload." Cardiology Research and Practice. Hindawi Publishing Corporation, 23 Mar. 2011. Web. 02 May 2017.
Hsu, Shan-hui, Kun-Che Hung, and Cheng-Wei Chen. "Biodegradable polymer scaffolds." Journal of Materials Chemistry B. The Royal Society of Chemistry, 26 Oct. 2016. Web. 02 May 2017.
Lewis, Tanya. "Why Head Transplants Won't Happen Any Time Soon."LiveScience. Business Insider Inc., 27 Apr. 2015. Web. 02 May 2017.
MailOnline, Richard Gray for. "EXCLUSIVE: Doctor planning world's first head transplant says he is preparing for his 'Frankenstein' surgery by REANIMATING human corpses." Daily Mail Online. Associated Newspapers, 21 Sept. 2016. Web. 02 May 2017.
Messner, Gregory N., Igor D. Gregoric, Thomas Chu, Branislav Radovancevic, Biswajit Kar, Scott D. Flamm, and O. H. Frazier. "Orthotopic Heart Transplantation in a Patient with D-Transposition of the Great Arteries after a Mustard Procedure." Texas Heart Institute Journal / from the Texas Heart Institute of St. Luke's Episcopal Hospital, Texas Children's Hospital. Published in the Cardiovascular Surgical Research Laboratories, Texas Heart Institute, 2005. Web. 02 May 2017.
"Neuroregeneration - Center for Regenerative Medicine - Mayo Clinic Research." Mayo Clinic. N.p., n.d. Web. 02 May 2017.
O’Mealy, Gary. 12 February 2017. Class Lecture.
Raza, Malika, Hasnain Abbas Dharamshi, Syed Zohaib Ahsan, Zehra Naqvi, Tahira Naqvi, Ali Abbas Mohsin Ali, and Jamaluddin Malik Abbas. "The Future of Ethics in Medicine." Iranian Red Crescent Medical Journal. Kowsar, 18 June 2016. Web. 02 May 2017.
"Russian Man Set for World's First Head Transplant." The Telegraph. Telegraph Media Group, 20 Sept. 2016. Web. 02 May 2017.
Shi, R. "Polyethylene glycol repairs membrane damage and enhances functional recovery: a tissue engineering approach to spinal cord injury." Neuroscience bulletin. U.S. National Library of Medicine, 29 Aug. 2013. Web. 02 May 2017.
TEDxTalks. YouTube. YouTube, 18 Sept. 2015. Web. 02 May 2017.
Toman, Barbara J. "Harnessing the Body's Healing Power." Neuroregenerative Medicine. Mayo Clinic, n.d. Web. 02 May 2017.
UK, The Week. "Head transplant: how would it work and is it ethical?" The Week UK. The Week UK, 24 Apr. 2015. Web. 02 May 2017.
Welch, Ashley. "Russian man volunteers for first human head transplant." CBS News. CBS Interactive, 29 Aug. 2016. Web. 02 May 2017.
Pomerleau, Wayne. "William James (1842—1910)." Internet Encyclopedia of Philosophy. N.p., n.d. Web. 02 May 2017.

Labels: ,