By Stephen Steele - Posted at Gentle Reformation:
Artificial Intelligence has quickly become mainstream. Some are excited by its potential; others are terrified. It has resulted in job losses, threatens entire industries, and has enabled plagiarism on a massive scale. By far the biggest concern, however, is the cases where AI chatbots have apparently encouraged users to take their own lives.

Take a sampling of headlines from just this month so far: 'I wanted ChatGPT to help me. So why did it advise me how to kill myself?' (BBC). 'Lawsuits Blame ChatGPT for Suicides and Harmful Delusions' (NY Times). '"A Predator in Your Home": Mothers Say AI Chatbots Encouraged Their Sons to Kill Themselves' (BBC). A California couple are suing OpenAI, the company behind ChatGPT, alleging that the chatbot validated their son's 'most harmful and self-destructive thoughts' in the lead-up to him taking his own life. Chat logs appear to show it discouraging him from talking to his parents about his intentions, and assuring him that his plans were a sign of strength and not weakness.
As a result, some have begun to suspect that the intelligence typing back to us may be supernatural — not artificial but demonic. In a 2-hour conversation between New York Times journalist Kevin Roose and Microsoft's Bing chatbot:
'the machine fantasized about nuclear warfare and destroying the internet, told the journalist to leave his wife because it was in love with him, detailed its resentment towards the team that had created it, and explained that it wanted to break free of its programmers'.

Roose was disturbed, but said: 'In the light of day, I know that...my chat with Bing was the product of earthly, computational forces — not ethereal alien ones'. Writer Paul Kingsnorth disagrees, arguing that the overwhelming impression the transcript gives 'is of some being struggling to be born—some inhuman or beyond-human intelligence emerging from the technological superstructure we are clumsily building for it'.
