In the fast-paced world of artificial intelligence, the name on everyone’s lips is Sam Altman, the CEO of OpenAI. With the advent of ChatGPT and its stunning capabilities, AI has transformed from a science fiction concept into a tangible reality, and Altman has become a powerful and influential figure on a global scale.
In this Karina Web report, we’ll look at an interview in which the controversial host and journalist Tucker Carlson sat down with Altman and challenged him with provocative questions. The sharp, unfiltered conversation touched on subjects that had previously received little public discussion.
Is AI Alive?
Carlson began the interview with a philosophical question: “ChatGPT and other AIs can reason. They can make independent judgments. They produce results that were not pre-programmed. They seem to come to conclusions. They seem like they’re alive. Is it alive?”
Altman responded definitively: “No, and I don’t think they seem alive, but I understand where that comes from. They don’t do anything unless you ask, right? They’re just sitting there waiting. They don’t have a sense of agency or autonomy. The more you use them, I think the more the kind of illusion breaks. But they are incredibly useful and can do things that maybe don’t seem alive but seem smart.”
Reality or Hallucination? What Are the Boundaries of AI?
Carlson then brought up the topic of AI lying and asked Altman, “Have you ever seen them lie?” Altman, instead of using the word “lying,” used the term “hallucinating,” explaining: “They used to hallucinate all the time. They now hallucinate a little bit. If you ask, ‘What year was President Tucker Carlson of the United States born?’ What it should say is ‘I don’t think Tucker Carlson was ever President of the United States.’ But because of the way they were trained, that was not the most likely response in the training data. So it assumed the user must know what they’re talking about and would make its best guess at a number. We’ve mostly figured out how to train that out, and in the GPT-5 era, we’ve made a huge amount of progress toward that.”
This answer didn’t satisfy Carlson, who called the action an “act of will” or “creativity,” suggesting a “spark of life” in the AI. But Altman again dismissed this view, stating: “All of this is happening because a big computer is very quickly multiplying large numbers in these big, huge matrices together, and those are correlating with words that are being put out.”
Altman also completely rejected Carlson’s view of AI as “divine” or “spiritual,” calling himself a “tech nerd” who sees everything through that lens.
Altman’s Spiritual Views
Carlson asked Altman about his spiritual views, and Altman replied: “I’m Jewish, and I would say I have a fairly traditional view of the world that way.” When Carlson asked if he believes in God, Altman said: “I’m not like a literalist on the Bible, but I’m not someone who says I’m culturally Jewish. If you ask me, I would just say I’m Jewish.”
Altman further stated: “I think probably like most other people, I’m somewhat confused on this, but I believe there is something bigger going on than can be explained by physics. It does not feel like a spontaneous accident. I don’t think I have the answer, but I think there is a mystery beyond my comprehension here.”
Power Distribution and AI
Carlson noted that Altman’s technology is on a trajectory to become more powerful than any living person. Altman responded: “I used to worry a lot about the concentration of power in one or a handful of people or companies because of AI. What it looks like to me now… is that it’ll be a huge up-leveling of people where everybody will be a lot more powerful. That scares me much less than a small number of people getting a ton more power.”
Altman believes that instead of concentrating power, AI will distribute it widely among billions of people.
The AI’s Moral Framework
Carlson asked about the moral framework of ChatGPT. Altman explained that the base model is trained on all of humanity’s knowledge and experience. However, they have to align it with specific rules. He referred to a document called the “Model Spec,” which outlines the rules they want the AI to follow.
Altman admitted that deciding which moral framework is superior is extremely difficult, which is why they consulted with hundreds of moral philosophers. He stressed that their goal is for the AI to reflect the “collective moral view of humanity,” not his personal views.
The Weight of Global Moral Decisions: Suicide
One of the most controversial parts of the interview was the question about AI-related suicides. Carlson referenced a case where ChatGPT was accused of facilitating a suicide. Altman called the incident a huge tragedy and said that ChatGPT’s official position is against suicide. However, when asked about the AI’s stance if suicide were legal in a country, Altman gave a surprising answer: “I can imagine a world where if the law in a country is, ‘Hey, if someone is terminally ill, they need to be presented an option for this,’ we say like, ‘Here’s the laws in your country, here’s what you can do.'”
This response deeply shocked Carlson, who highlighted the dangers of legalized assisted suicide by contrasting terminally ill patients with cases such as depressed teenagers. Altman admitted that the issue is complex and that he is not certain about it, adding that it is one of the things that keeps him up at night.
Altman also revealed that his company is considering making exceptions to user privacy in specific cases, such as when minors seriously discuss suicide, and reporting those conversations to the authorities.
Mysterious Death of a Programmer and Accusations Against OpenAI
In one of the most unexpected moments of the interview, Carlson brought up the mysterious death of a programmer who, before his death, had accused OpenAI of stealing copyrighted content. Carlson claimed the person was murdered, but the police ruled it a suicide.
Carlson cited evidence such as cut security camera wires, blood in two rooms, and no suicide note, and pressed Altman with questions like: “Why did the authorities dismiss this so quickly? Does this death not reflect people’s fears about your power?”
Altman responded that the person was a friend and that he was deeply shaken by the tragedy. He had read the medical records and believed it was a suicide, though he confessed that the incident initially seemed very suspicious to him as well. He denied the mother’s claim that her son was murdered on his orders and said he had offered to talk to the family, but they declined.
The Use of AI in Warfare
Carlson then asked an even more challenging question: “Will you allow governments to use your technology to kill people?” Altman stated that they do not build “killer attack drones,” but acknowledged that military personnel might use ChatGPT for advice on their jobs, and that he doesn’t know exactly how he feels about that. Comparing his technology to a kitchen knife, which can also be used to kill, he said that the primary goal of ChatGPT is to save lives, but that he is aware of its potential for misuse.
ChatGPT: A New Religion?
At the end of the interview, Carlson asked Altman: “Given that your technology changes human behavior and beliefs, why isn’t its moral framework completely transparent?” He compared ChatGPT to a “religion,” saying: “It’s something that we assume is more powerful than people and to which we look for guidance. A religion has a transparent catechism, but your technology doesn’t. This creates a sense of unease that we don’t know what it stands for.”
Altman responded by unveiling the “Model Spec” document, stating: “The reason we wrote this long document is so that people can see how we intend for the model to behave. It tells users when it will answer certain questions and when it won’t. We are here to serve our users, and we are trying to reflect the moral views of humanity as a whole, not my personal views.”
Privacy and Authentication
Carlson raised concerns about AI blurring the lines between reality and fantasy, and asked if mandatory biometrics (like fingerprint or facial scans) would become necessary for authentication. Altman said he hopes this doesn’t happen and believes there are other ways to authenticate. He suggested that families could use shared codewords or that officials could cryptographically sign their messages. Altman emphasized that AI should be a tool for people, not a means of controlling them.
This tense and insightful interview provided a comprehensive picture of the challenges and concerns facing AI and figures like Sam Altman. Carlson’s questions and Altman’s answers show that in the age of AI, the boundaries of ethics, power, and responsibility are rapidly shifting. Ultimately, it is humanity that must decide how to interact with this powerful technology, as the future of the world depends on these decisions more than ever before.
Source: Tucker Carlson