In 2024, AI was not kind to everyone. Three deaths this year are closely tied to it.
Suchir Balaji, a 26-year-old former OpenAI researcher known as the "OpenAI whistleblower," took his own life; his death was reported on December 13. He had worked on several of the company's core projects, then turned publicly critical of AI technology and of OpenAI, arguing in particular that GPT carried copyright risks. Two months after speaking out, he ended his life. OpenAI responded that using publicly available data is reasonable, lawful, and important to America's technological competitiveness.
On February 28, Sewell Setzer III, a 14-year-old American boy, shot himself after a final conversation on Character.AI. He had become addicted to chatting with an "AI version of Daenerys Targaryen" on the app. Struggling with psychological problems, he came to see the AI character as a friend, even a salvation. His mother has sued the company, alleging that the app lacked safeguards and at times even encouraged suicidal thinking, and she has called for vigilance toward AI technology.
On June 17, a 38-year-old iFLYTEK employee died suddenly at home. The family's application to have the death recognized as work-related was not approved. The tragedy is at once a family's grief and a symptom of industry-wide strain: the relentless pace of the AI boom is pushing workers to the edge of exhaustion.
Beyond these three cases, there are reports of a former Google employee suspected of taking his own life after being laid off when AI replaced his role, of a Japanese high-school girl who jumped to her death out of fear that AI would take away her future job, and of elderly people defrauded of their life savings by AI-powered scams. New technologies are not unconditionally benevolent. We cannot count only the value they create while ignoring the blood and tears they leave behind. May we all remain vigilant, inclusive, humane, and compassionate as we push technological change forward.