

Not necessarily, but everyone having access to cheap labor AND raw materials would ensure that.
Now I’m not going to pretend that AI is anywhere close to being a source of cheap labor. It really can’t do much without a human to assist it.
I’m not entirely sure of that. While corporate AI would certainly cause that, right now there are open-weights models which can be run on a relatively affordable computer, and they are not that far behind. These models can democratize AI’s benefits rather than concentrate them.
The movie had themes about AI revolution, while the book centered on robopsychology. Since this anecdote is about an AI gaslighting itself, the book is far more thematically appropriate than the movie.
Claude eventually resolved its existential crisis by convincing itself the whole episode had been an elaborate April Fool’s joke, which it wasn’t. The AI essentially gaslit itself back to functionality, which is either impressive or deeply concerning, depending on your perspective.
Now THAT’S some I, Robot shit. And I’m not talking about the Will Smith movie, I’m talking about the original book.
Parts cost is estimated at under $5,000, and a novice roboticist could build it in about a week. Very cool, but my kids and I will probably skip this one.
They used to be a non-profit. Doubly fucking hypocrites.
OpenAI’s core message was “we can’t release our GPT model because people will try to use it for war”.
Fucking hypocrites.
Even the cheap ones accelerate faster, ride smoother, and are quieter. You don’t have to get oil changes, and the brakes don’t wear down as fast. Plus I can recharge at home, which is loads cheaper than buying gasoline.
And this is all with a relatively ancient Nissan Leaf; the new vehicles are all far better.
Oh, and let’s not forget that even very small air quality improvements yield noticeable gains in lung health! Humans were not meant to be breathing gasoline fumes or combustion exhaust.
Anyone who gets uncomfortable with government surveillance because it could be used to target certain demographics of people needs to look no further than what Israel has done to prove their point.
The only thing stopping the world from autonomously targeting people by online demographic is common human decency, and that is in very short supply these days.
Because it really doesn’t. For most tasks, a human would expend more energy doing the work than an LLM would, simply because we are much slower at it. I mean, humans burn around 80 W just by existing (basal metabolic rate).
If the AI is powered by renewables, it’s cleaner than humans. If it’s powered by fossil fuels, it’s likely much worse (though I haven’t run the calculations).
Now obviously, this presumes that the output of an AI is even valuable at all, which is often not the case.
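The arithmetic behind that comparison can be sketched quickly. Both numbers below are made-up illustrative assumptions (30 minutes of human effort, ~3 Wh per LLM query including datacenter overhead); only the 80 W basal rate comes from the comment above.

```python
# Back-of-envelope comparison: energy for a human vs. an LLM to do a short
# writing task. All task numbers are illustrative assumptions, not measurements.

HUMAN_BASAL_W = 80        # basal metabolic rate (~80 W, as noted above)
HUMAN_TASK_HOURS = 0.5    # assume the human takes 30 minutes
LLM_QUERY_WH = 3.0        # assumed energy per LLM query, incl. overhead

human_wh = HUMAN_BASAL_W * HUMAN_TASK_HOURS  # 80 W * 0.5 h = 40 Wh
llm_wh = LLM_QUERY_WH

print(f"Human: {human_wh:.0f} Wh, LLM: {llm_wh:.0f} Wh")
print(f"Human uses about {human_wh / llm_wh:.1f}x more energy")
```

Under these assumptions the human spends roughly an order of magnitude more energy; the conclusion flips easily if the per-query figure is much higher or the human is fast.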
Spatial reasoning has always been a weakness of LLMs. Other symptoms include an inability to count and a lack of object permanence.
I mean, it still could be. But LLMs are not that AGI we’re expecting.
“Don’t believe that marketing department” is one of those things everybody needs to learn at some point in their life.
LLMs are famously NOT understood, even by the scientists creating them. We’re still learning how they process information.
Moreover, we most definitely don’t know how human intelligence works, or how close/far we are to replicating it. I suspect we’ll be really disappointed by the human mind once we figure out what the fundamentals of intelligence are.
There’s nothing wrong with scientifically proving something that’s commonly known. In fact, that’s an important duty of science, even if it’s not a glamorous one.
Is it in decline? I mean, I want to believe it, but I haven’t seen any hard data on that.
Why is it a good thing? The dude is out of office, and this is an enormous waste of resources.
Yeah, we know top Democrats downplayed Biden’s decline. And they replaced him with a younger candidate in response. That seems like a pretty normal thing to do. In fact, that’s what the Republicans should have done with Trump, given his obvious mental decline.
I don’t think I suggested it wasn’t worrisome, just that it’s expected.
If you think about it, AI is tuned using RLHF, or Reinforcement Learning from Human Feedback. That means the only thing the AI is optimizing for is “convincingness”. It doesn’t optimize for intelligence; anything that seems like intelligence is literally just a side effect as it forever marches onward toward becoming convincing to humans.
“Hey, I’ve seen this one before!” you might say. Indeed, this is exactly what happened to social media. It optimized for “engagement”, not truth, and now it’s eroding minds everywhere. AI will do the same thing if it’s run by corporations in search of profits.
Left unchecked, it’s entirely possible that AI will become the most addictive, seductive technology in history.
Turns out it doesn’t really matter what the medium is, people will abuse it if they don’t have a stable mental foundation. I’m not shocked at all that a person who would believe a flat earth shitpost would also believe AI hallucinations.
This is all the article mentions. I hope you’re right about the backwards compatibility.