AI safety researcher warns there's a 99.999999% probability AI will end humanity, but Elon Musk "conservatively" dwindles it down to 20% and says...

GraniteStateColin
Active member · May 9, 2012
AI can only do those things it's set to do. Yes, it can find interesting combinations and patterns, which makes it a very helpful tool. But it has no hands. It can neither attack us nor even cause problems unless we give it that ability. I don't understand the concerns. They seem utterly irrational.

Perhaps the thinking is that there will be an AI war between nations, say between the U.S. and China, where each side tells its AI to invade the other to cause them pain. Like China telling its AI to shut down the US power grid, and the US telling its AI to stop Chinese naval maneuvers. In doing that, each AI gains control over the other nation's physical resources; after that, if the command were to use that control to cause harm, then I suppose we could suffer as a result. That seems like the most likely harmful scenario, but it's also not a power I would expect either side to grant its AI.

Maybe the good science fiction story here is what happens when one nation or a terrorist group sends an AI into another nation to take over something (power, weapons, etc.), and then the target nation, in trying to stop it, engages its own AI, and somehow the two blend together as they work to reprogram the intruder to weaken it, resulting in some form of Internet-based Ultron.

But it's hard to see how that's more than a science fiction story, as it would require high levels of stupidity and consensus among experts around that stupidity. I would rate the likelihood of a catastrophic result (as in anything that could be described as wiping out civilization or degrading our standard of living by hundreds of years or more) from AI at under 1%. Not impossible, but improbable in the extreme.
 

ShinyProton
Member · Aug 9, 2023
Elon Musk indicated that "there's some chance that it will end humanity," further placing the likelihood of this happening between 10 and 20 percent.

Seriously.
Journalists should refrain from publishing material posted by ONE cuckoo like this.

And does anyone really think we can go back? Even with government regulation, there's no way the toothpaste will be put back in the tube.

AI is another tool, like fire. Humanity will find a way to control it - although that doesn't mean accidents won't happen...
 

fjtorres5591
Active member · May 16, 2023

GraniteStateColin said:
Maybe the good science fiction story here is what happens when one nation or a terrorist group sends an AI into another nation to take over something (power, weapons, etc.), and then the target nation, in trying to stop it, engages its own AI, and somehow the two blend together as they work to reprogram the intruder to weaken it, resulting in some form of Internet-based Ultron.

But it's hard to see how that's more than a science fiction story, as it would require high levels of stupidity and consensus among experts around that stupidity. I would rate the likelihood of a catastrophic result (as in anything that could be described as wiping out civilization or degrading our standard of living by hundreds of years or more) from AI at under 1%. Not impossible, but improbable in the extreme.

The good SF story you envision was written in 1966 by Dennis Feltham Jones, aka D.F. Jones.
It is called COLOSSUS and was adapted into a Hollywood movie in 1970: THE FORBIN PROJECT. The novel had two sequels, THE FALL OF COLOSSUS and COLOSSUS AND THE CRAB.

Look them up.

What the fool naysayers seem to forget is that there is no intelligence in AI, nor is there agency or initiative. And AI relies on electricity; like Dilbert said/did, humans can always pull the plug.

In the real world, unlike inside the pea brains of the idiots in academia, "AI" is just a set of software tools, slaves to human needs and initiative. If doom comes, unlikely as it is, it won't be from "AI" but from humans doing what humans do.
 

powerupgo
New member · Apr 18, 2024
There is no rational way to calculate the odds of the risk from AI. It could be anywhere from zero to near certainty. There is no sample size to begin with, and nothing to derive the risk factors from. All they can do is speculate and offer a bunch of hypotheticals coupled with bad science fiction scenarios.
 
