Breakpoint: Should AI be Shut Down?

(Image Source: Wikimedia - Author: mikemacmarketing)


 By John Stonestreet and Kasey Leander

Recently, a number of prominent tech executives, including Elon Musk, signed an open letter urging a six-month pause on all AI research. That was not enough for AI theorist Eliezer Yudkowsky. In an opinion piece for TIME magazine, he argued that “We Need to Shut It All Down,” and he didn’t mince words:

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI … is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

In a tone dripping with panic, Yudkowsky even suggested that countries like the U.S. should be willing to run the risk of nuclear war “if that’s what it takes to reduce the risk of large AI training runs.”

Many experts suggest that the current state of artificial intelligence is more akin to harnessing the power of the atom for the first time than to upgrading to the latest iPhone. Whereas computers of yesteryear simply categorized data, the latest versions of AI can grasp the context of words as millions of people use them, and thus are able to solve problems, predict future outcomes, expand knowledge, and potentially even take action.
