The Drone That Changed Everything: A Journey to Trappist E
Labels: Fiction, Science Fiction
My, what a difference only seven years of A.I. research makes
I remember in 2015 when Andrew Ng, an AI guru at Stanford University and chief scientist at Chinese internet giant Baidu, compared fearing a rise of killer robots to worrying about overpopulation on Mars before we’ve even set foot on it.[1] He said:
“There’s a big difference between intelligence and sentience. There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.”
He also argued that there was no realistic path for AI to become sentient and turn evil, and that worrying about the danger of futuristic evil killer robots was pointless. He claimed that AI was still extremely limited -- again, this was 2015 -- relative to human intelligence, and that most of the progress in AI was driven by increases in computing power and data. And he was certainly right about where the increase would come from.
However, Ng's analogy and arguments were flawed, and he greatly underestimated the speed and impact of AI as a disruption to human affairs. His comparison of fearing killer robots to worrying about overpopulation on Mars was misleading and dismissive of the legitimate concerns and challenges that AI poses to humanity. He failed to appreciate the complexity, dynamism, and uncertainty of AI as a technology and as a force of change, and he ignored the ethical and social implications of creating and deploying intelligent systems that may affect the lives and well-being of millions or billions of people.
As it turns out, we no longer have the luxury of being complacent or naïve about AI's potential dangers or impacts. We must now be far more proactive and responsible in designing, developing, regulating, and using AI for the common good -- or at least for avoiding the particular "bads" that it will almost certainly introduce before the greater portion of humankind takes this sudden jump in technology seriously.
--------
[1] https://quoteinvestigator.com/2020/10/04/mars/
Labels: Andrew Ng, Artificial Intelligence
What makes for a meaningful preservation of my particular identity?
I was listening to Max Tegmark being interviewed about A.I. on the Lex Fridman podcast. Within the first five minutes, Tegmark suggested an interesting thought experiment as a quick aside to the conversation. I'll dress it up a bit:
Suppose that a) people can back up their brains onto a computer; and b) a person, Jake, discovers the plane he is on is about to crash, ending his life. Furthermore, suppose Jake had backed up his brain four hours ago.
So then, consider this question: How should Jake feel about his impending death?
If we assume that people can back up their brains onto a computer and that Jake did so four hours ago, then his consciousness exists in two forms: the biological form that is about to perish in the plane crash, and the digital form that was captured four hours ago, missing only his last four hours of experience.
From a purely rational perspective, Jake might feel less concerned about his impending death because he knows that his brain has been backed up and that his digital self will continue to exist even after his biological self has perished. All good!
However, the emotional response to death is a complex one, and it might not be easy for Jake to simply ignore his impending doom. On the one hand, Jake might feel a sense of relief that his digital self will continue to exist, and he might even view his biological death as a form of sacrifice for the continuation of his consciousness. So, maybe what is of value is that Jake's life projects will continue, and that the people who love him would lose no more of what's valuable to them about Jake than if he had taken, say, a four-hour nap on the plane instead of staying awake.
On the other hand, Jake might feel a sense of loss for his biological self, and he might be worried about the pain and suffering that he will experience in the moments leading up to his death. But how much loss? Suppose I think back to when I was a teenager. That was many decades ago, and I've changed so much since then that any "loss" I've felt from not being that person doesn't seem like much. Indeed, there were things I now feel were misfunctions of character that I'm glad are no more. So maybe the loss Jake feels would be trivial indeed, if one can reconcile the loss of one's teenage character so easily.
Ultimately, how Jake feels about his impending death will likely depend on a variety of factors, including his personal beliefs about death and the afterlife (if Jake is religious), his emotional state at the time of the crash, and his attachment to his biological self.
Is it possible that people might have to ask this question for real in the future, given how technology is proceeding?
It's certainly possible that people may face similar questions in the future as technology continues to advance. As we develop more advanced brain-computer interfaces and artificial intelligence, it's not unreasonable to imagine that it may eventually become possible to back up our consciousness or transfer it to a digital form.
However, there are still many unknowns and ethical considerations to address before such technology could become a reality. For example, we don't fully understand the nature of consciousness or how it arises from the complex interactions of our brains, so it's unclear how feasible it would be to create a digital copy of a person's consciousness.
Additionally, even if we could create such a copy, there would be many ethical questions to consider, such as the status of the digital copy and its relationship to the original person. Imagine an extension where Jake goes down over an ocean, survives by washing up on a desert island, and gets rescued six months later, but his family has re-incarnated his backup. Does the backup get killed off so that Jake can sleep in his own bed again? Is my backup persona my "property", since I preceded it both historically and causally?
I have no answers here, and no strong intuitions on this either. I do think there is a direct analogy to how we might think about backing up our A.I. creations, especially if they pass the Turing test for 99.99% of the population. The future is really getting weird.
Labels: personal identity, technology