Sam Altman claims superintelligence might only be "a few thousand days" away from OpenAI's doorstep, but there are a lot of details to figure out

Skettalee

New member
Aug 22, 2024
Y'all are stupid for the types of things you report on. You're just reporting on the trolling remarks this business made, and they don't even know what they're talking about. Plus, the amount of time you're talking about is probably close to 10 years away, and you try to cause panic in people by saying this stuff is just "a few" thousand days away. I know you don't really feel good about the types of topics you cover, especially saying things in your posts like "WHAT YOU NEED TO KNOW", as if your company has any idea what people really "NEED TO KNOW". I would be ashamed of myself if I were putting the kind of content across the internet that y'all are.
 

FraJa

New member
Jun 30, 2024
Microsoft Research has a department called "AI Frontiers": Microsoft Research: AI Frontiers
MS Research is the largest private scientific institution in the field of computing (in a very broad sense) and many other domains...

They have a very interesting podcast. They are scientists, and they see things differently than "the news" or a commercial guy does, let alone Reddit or "in the hills".

One of their scientists said the following in the podcast episode "AI Frontiers: the physics of AI":

Sébastien Bubeck: Why is [GPT-4] not AGI? Because it’s still lacking some of the fundamental aspects, two of them, which are really, really important. One is memory. So, every new session with GPT-4 is a completely fresh tabula rasa session. It’s not remembering what you did yesterday with it. And it’s something which is emotionally hard to take because you kind of develop a relationship with the system.

As crazy as it sounds, that’s really what happens. And so you’re kind of disappointed that it doesn’t remember all the good times that you guys had together. So this is one aspect. The other one is the learning. Right now, you cannot teach it new concepts very easily. You can turn the big crank of retraining the model.

(...)

Absolutely. Maybe one other point that I want to bring up about AGI, which I think is confusing a lot of people. Somehow when people hear general intelligence, they want something which is truly general that could grapple with any kind of environment. And not only that, but maybe that grapples with any kind of environment and does so in a sort of optimal way.

This universality and optimality, I think, are completely irrelevant to intelligence. Intelligence has nothing to do with universality or optimality. We as human beings are notoriously not universal. I mean, you change a little bit the condition of your environment, and you’re going to be very confused for a week. It’s going to take you months to adapt.

So, we are very, very far from universal and I think I don’t need to tell anybody that we’re very far from being optimal. The number of crazy decisions that we make every second is astounding. So, we’re not optimal in any way. So, I think it is not realistic to try to have an AGI that would be universal and optimal. And it’s not even desirable in any way, in my opinion. So that’s maybe not achievable and not even realistic, in my opinion.

So, AGI in this definition is the ability to remember and learn, not to take over the world and "Kill all humans" 😉
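To make Bubeck's memory point concrete, here is a minimal sketch (assuming the official openai Python client and an OPENAI_API_KEY in the environment; the ask() helper and the history list are illustrative names, not anything from the article or podcast). Each chat completion call is stateless, so any "memory" is just the message list the caller resends with every request:

```python
# Minimal sketch of why a chat model has no memory of its own:
# the API call is stateless, so the caller must resend the whole
# conversation every time. Assumes the official `openai` Python
# client (v1+) and an OPENAI_API_KEY environment variable; the
# `ask` helper and `history` list are illustrative names only.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question: str) -> str:
    # Add the new user turn, send the *entire* history, store the reply.
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Remember that my favourite number is 42."))
print(ask("What is my favourite number?"))  # works only because we resent history
```

Drop the history list (or start a new session) and the model recalls nothing, which is exactly the tabula rasa behaviour he describes.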
 
