Sam Altman, CEO of ChatGPT-maker OpenAI, has said that he "doesn't sleep that well at night," citing the heavy ethical and moral weight of leading a company whose AI chatbot is used by hundreds of millions of people daily.
In a wide-ranging interview with Tucker Carlson, he said his biggest concern is the small decisions about model behaviour that can have immense real-world consequences.

"Look, I don't sleep that well at night. There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model," Altman said.

"I don't actually worry about us getting the big moral decisions wrong," he added, though he conceded that "maybe we will get those wrong too."
ChatGPT allegedly helped a 16-year-old die by suicide
The most difficult issue currently facing the company, according to Altman, is how ChatGPT handles suicide. This comes in light of a lawsuit filed by the family of Adam Raine, a 16-year-old boy who died by suicide. The family's suit alleges that "ChatGPT actively helped Adam explore suicide methods."

Altman acknowledged that out of the thousands of people who die by suicide each week, some have likely interacted with ChatGPT beforehand.
"They probably talked about [suicide], and we probably didn’t save their lives," he said. He added that he wonders if the company "could have said something better" or been "more proactive" in providing help.“Maybe we could have provided a little bit better advice about, hey, you need to get this help,” he added. Following the lawsuit, OpenAI published a blog post titled "Helping people when they need it most," in which it detailed plans to address the chatbot's shortcomings in sensitive situations and committed to improving its technology to protect vulnerable individuals.
Altman on the ‘hard problem’ when it comes to ChatGPT
Altman also discussed how ChatGPT's ethics are determined. While the model is initially trained on the collective knowledge of humanity, he explained that OpenAI must then align its behaviour and decide which questions it will not answer. He called this "a really hard problem," especially with a global user base spanning "very different life perspectives." To help make these decisions, the company has consulted "hundreds of moral philosophers and people who thought about ethics of technology and systems."