Barry Diller Trusts Sam Altman – But Says ‘Trust Is Irrelevant’ as AGI Gets Closer

What happens when one of the most powerful media moguls says he does trust Sam Altman – and in the same breath says trust doesn’t really matter anymore?
That’s basically what Barry Diller just did on stage at The Wall Street Journal’s “Future of Everything” conference.

Diller came out in defense of the OpenAI CEO’s character, but then delivered a much bigger warning: as artificial general intelligence (AGI) gets closer, personal trust in any one leader might be “irrelevant” compared to the unknown forces we’re unleashing.


Diller: Altman is “sincere” and “a decent person”

First, the supportive part.
Responding to questions about whether people should trust Sam Altman with something as powerful as AGI, Diller pushed back on the narrative that Altman is some kind of cartoon villain.

He said he believes Altman is:

  • “Sincere” in what he’s trying to do

  • “A decent person with good values”

  • One of several AI leaders he considers good stewards

This matters because Altman has been hit with waves of criticism — from former board members to ex‑colleagues — accusing him of being manipulative or less than fully transparent.
Diller clearly doesn’t buy the idea that the whole AI project is being run by bad actors.

But then he basically says: that’s not the point.


“One of the big issues with AI is it goes way beyond trust”

Diller’s key line is deceptively simple:

“One of the big issues with AI is it goes way beyond trust… It may be that trust is irrelevant because the things that are happening are a surprise to the people who are making those things happen.”

In other words, even if you fully trust the person in charge, that person might not actually be in control once the systems get powerful enough.
He says he has spent time with people building AI who themselves feel "a sense of wonder" – and uncertainty – about what they're creating.

To Diller, AI isn’t just another tech hype cycle. It’s the “great unknown”:

  • We don’t know exactly what AGI will look like

  • Even the builders don’t know all the downstream effects

  • The technology is moving “closer and closer, quicker and quicker”

So the problem isn’t whether Sam Altman is a good guy.
It’s that no single “good guy” can guarantee control over something this unpredictable.


“We must think about guardrails” before AGI sets its own

Diller’s main message is a call for guardrails – not just vibes.
He argues that as AGI gets closer, the real conversation should shift from personality debates to systemic safeguards.

He warns about two paths:

  • Either humans put serious guardrails in place now

  • Or “another force, an AGI force, will do it themselves… and once that happens, there’s no going back”

That’s a pretty stark way of saying: if we don’t set the rules, something more powerful might — and we won’t be able to rewind.

Coming from someone with no direct financial stake in AI (Diller says he isn't invested and "couldn't care less" whether the big AI bets pay off), the warning lands a bit differently.


Why his comments matter right now

A few reasons Diller’s comments are getting attention:

  • He’s not a full‑time AI guy. He’s the co‑founder of Fox Broadcasting and chairman of IAC and Expedia, with decades of media and tech experience.

  • He’s not saying “shut it all down” — he’s saying progress is inevitable, but we’re underprepared for the unknowns.

  • He’s calling out a gap: we focus on whether we “trust” Altman, but not enough on what happens when the systems outrun human understanding.

For regular people, his message is basically:
Don’t get too comforted just because the faces at the top seem likable or earnest. That’s not enough when we’re talking about technology that could reshape “almost everything.”


Is “trust” really becoming irrelevant?

Diller’s “trust is irrelevant” line is intentionally provocative.
He doesn’t literally mean trust never matters — he’s saying it’s not the main safety mechanism when we cross into AGI territory.

You can think about it like this:

  • Trust matters when a person is in charge of a system they fully understand

  • It matters less when nobody, including the creators, fully understands or controls the system

  • At that point, we rely more on rules, oversight, and architecture than on personal character

So even if you believe Sam Altman is one of the “good guys,” Diller’s point is that the AGI conversation has to be bigger than that.

Do you agree with that view, or do you think we still have to anchor a lot of this debate around who runs these companies?


The bigger AGI takeaway

Strip everything else away, and Barry Diller is saying two things at once:

  • Yes, he trusts Sam Altman personally

  • No, that doesn’t make him feel safe about AGI without serious guardrails

He believes we’ve “embarked on something that is going to change almost everything,” and that even the creators are walking into territory they don’t fully understand.

That’s probably the part worth paying attention to, whether you’re bullish on AGI or terrified of it.

Would you rather put your faith in personalities like Altman, or in hard rules and external oversight?
Drop your thoughts, share this with a friend who follows AI drama, and keep exploring the latest AI and tech shifts on Viralopidia.
