Tuesday, April 18, 2023

Ng Discovers that Mars is Suddenly Overpopulated

My, what a difference only seven years of A.I. research makes


I remember in 2015 when Andrew Ng, an AI guru at Stanford University and chief scientist at Chinese internet giant Baidu, compared fearing a rise of killer robots to worrying about overpopulation on Mars before we’ve even set foot on it.[1] He said:

“There’s a big difference between intelligence and sentience. There could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.”

He also argued that there was no realistic path for AI to become sentient and turn evil, and that worrying about the danger of futuristic evil killer robots was pointless. He claimed that AI was still extremely limited -- again, this was 2015 -- relative to human intelligence, and that most of the progress in AI was driven by increases in computing power and data. And he was certainly right about where the progress would come from.

However, Ng’s analogy and arguments were flawed, and he greatly underestimated the speed and impact of AI as a disruption to human affairs. Here are some reasons why his analogy to the Mars situation didn't work:

  • Mars is not Earth: Unlike Mars, which is a distant and uninhabited planet, Earth is our home and we share it with billions of other living beings. The potential consequences of AI going rogue or harming humans are much more severe and immediate than those of overpopulation on Mars. Therefore, we have a moral and practical responsibility to ensure that AI is aligned with our values and goals, and does not pose an existential threat to our civilization.

  • AI is not static: Unlike Mars, which is unlikely to change significantly in the near future, with or without us, AI is a dynamic and evolving field that is constantly advancing and expanding its capabilities. The pace of AI innovation is exponential, not linear, and it is driven by both scientific breakthroughs and market incentives. Landing equipment and people on Mars is, at best, a linear activity. Therefore, we cannot assume that AI will remain benign or limited forever, or that we will always have enough time and resources to control it or correct its mistakes. Indeed, that is why top researchers in A.I. have suddenly called for a pause in its development. (Interestingly, Ng has not signed the letter. Perhaps he's embarrassed about his misprediction.)

  • AI is not simple: Unlike Mars, which is a relatively simple physical system that can be studied and understood by humans, AI is a complex and opaque system that can be difficult or impossible to interpret or predict. AI can learn from data, generate novel outputs, optimize its own objectives, and interact with other agents in ways that may be, or outright are, beyond our comprehension or expectations. Therefore, we cannot rely on our intuition or common sense to guide our decisions or actions regarding AI, or to anticipate its potential risks or benefits.

So then, Ng’s comparison of fearing killer robots to worrying about overpopulation on Mars was misleading and dismissive of the legitimate concerns and challenges that AI poses to humanity. He failed to appreciate the complexity, dynamism, and uncertainty of AI as a technology and as a force of change. He ignored the ethical and social implications of creating and deploying intelligent systems that may affect the lives and well-being of millions or billions of people. 

As it turns out, we no longer have the luxury of being complacent or naïve about AI's potential dangers or impacts. We must now be far more proactive and responsible in designing, developing, regulating, and using AI for the common good -- or at least in avoiding the particular "bads" that it will almost certainly introduce before the greater portion of humankind takes this sudden jump in technology seriously.

--------

[1] https://quoteinvestigator.com/2020/10/04/mars/
