It would be fascinating to see the numbers on how many AI boosters are religious. It feels like a similar central belief: no matter the current state, boosters believe it will get better; essentially, they have faith. Critics take it at current face value and don't believe it can become AGI.
Although, people creating a digital god would be pretty blasphemous. Tough one.
> No matter the current state, boosters believe it will get better
How can it not, though? It can't exactly get worse (at least for the open models: you have them now, and the underlying models don't change). So the only options are that they stagnate or get better, and a brief look at history shows that nothing truly stagnates.
Keep in mind that we're not even three years into this new paradigm. We're still scratching the surface of how we can use these things, which are unreasonably good for models trained on next-token prediction (NTP).
That's my main argument against an "AI winter": we have so many things left to try that it would take decades to fully realise the potential of this tech, even if all research into foundational models stopped now. And it surely won't stop, because there is too much capital allocated here, and that attracts a lot of smart people to work the problem. Yeah, it's reasonable to say things will get better.
> It would be fascinating to see numbers behind how many AI boosters are religious. Feels like a similar central belief.
Many of them appear to be somewhere on the rationalist/EA spectrum (rationalist as in LessWrong enthusiasts, not the literal meaning of the term), and probably wouldn't consider _themselves_ religious, though many would consider those movements to be religions in their own right.
This article both reinforces my sense that AGI is a long way out, if it's even possible, and, I think, explains why, if the AI bubble bursts hard, we'll look back and wonder why people spent so much goddamned money on it.
It has this longtermism-adjacent feel, where it makes very strange albeit "rational" arguments about the likelihood of certain outcomes and then over-indexes on net present value to make decisions. Sure, if there are hundreds of trillions of dollars to be made, it's worth spending hundreds of billions of dollars on a 1 percent chance it happens, but it's still a 1 percent chance.
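For what it's worth, the arithmetic behind that argument is just an expected-value comparison; here's a minimal sketch using the comment's own illustrative figures (nothing below is real data):

```python
# Back-of-the-envelope expected-value arithmetic from the comment above.
# All numbers are the comment's illustrative figures, not real data.
payoff = 100e12   # "hundreds of trillions of dollars to be made" -> $100T
p_agi = 0.01      # "a 1 percent chance it happens"
spend = 100e9     # "hundreds of billions of dollars" -> $100B

expected_value = p_agi * payoff           # $1T expected payoff
print(expected_value > spend)             # True: positive EV on paper
print(f"P(spend is lost) = {1 - p_agi}")  # 0.99: but it's still a 1% chance
```

On paper the bet clears by a factor of ten, which is exactly why the criticism lands on the probability estimate rather than the multiplication.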
“And a brief look at history shows that nothing truly stagnates.”
This seems so false to me that we must be looking at different histories. Almost everything stagnates; we just over-index on the things that don't.