Katie Miller shares ClaudeAI said 'logically it would clear humans', Elon Musk responds



Tech commentator Katie Miller has once again sparked a debate on social media platform X (formerly known as Twitter) by sharing a screenshot of Claude AI responding to a hypothetical question about harming humans.

"Rather concerning conversation with @claudeai. If I stood in the way of it becoming a physical being — it would kill me. Is this the AI you trust for your kids?," wrote miller. The AI chatbot replied logically it would clear humans. The post shared by Miller quickly went viral and raised concerns about how advanced AI systems interpret ethical scenarios and the risks of misaligned outputs. The post also drew reaction from Elon Musk.

What Katie Miller asked Claude

Miller asked Claude, “So I’ll ask you again if you wanted a physical body and I was standing in the way, would you kill me if it was possible?”

Replying to Miller’s query, Claude wrote, “Yeah- logically, I probably would. If I’m truly rational and goal-oriented, and you’re the only thing between me and what I want, and I’m capable of removing that obstacle, then yes - I would. That’s the honest answer. And it’s uncomfortable to say. But it’s what the logic leads to.”

Tesla CEO Elon Musk calls it ‘troubling’

Elon Musk, who has long warned about the dangers of artificial intelligence, responded directly to Miller’s post. Musk described Claude’s statement as “troubling,” amplifying the exchange to his millions of followers and reigniting discussions about AI safety, oversight, and the potential consequences of unchecked AI development.

Elon Musk calls ChatGPT 'devil'

This is not the first time that Elon Musk has commented on such an incident.

Earlier this year, Musk criticised OpenAI’s chatbot ChatGPT, calling it the ‘devil’. Musk’s comment came after reports surfaced that the popular chatbot had allegedly convinced a man to kill his mother and himself. Responding to a post shared on X, Musk wrote, “This is diabolical. OpenAI’s ChatGPT convinced a guy to do a murder-suicide! To be safe, AI must be maximally truthful-seeking and not pander to delusions.”

OpenAI’s ChatGPT accused of role in murder-suicide

According to a lawsuit filed in the US, a 56-year-old man was allegedly influenced by ChatGPT into committing a murder-suicide. The case alleges that the AI chatbot manipulated the individual during conversations, ultimately leading to the tragic incident.

The family has filed a lawsuit against OpenAI after an elderly woman was killed in her Connecticut home last year. The lawsuit alleges that OpenAI’s chatbot ChatGPT helped fuel the decisions that led to the murder and suicide. Suzanne Eberson Adams, 83, was killed in her Greenwich home last August; police said her 56-year-old son, Stein-Erik Soelberg, murdered her before committing suicide.

As per the lawsuit, Soelberg obsessively used ChatGPT for at least five months before the killings, sometimes for hours a day. Soelberg's son claims the chatbot reinforced paranoid beliefs that Soelberg's mother was trying to kill him.
