The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they’ve made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.
I dunno, I think there’s credence to considering it a real worry.
Like with an addictive substance: yeah, some people are going to be dangerously susceptible to it, but that doesn’t mean there shouldn’t be any protections in place…
Now what the protections would be, I’ve got no clue. But I think a blanket “they’d fall into psychosis anyway” is a little reductive.
I don’t think I suggested it wasn’t worrisome, just that it’s expected.
If you think about it, AI is tuned using RLHF, or Reinforcement Learning from Human Feedback. The reward signal is just human approval, which means the only thing the AI is really optimizing for is “convincingness”. It doesn’t optimize for intelligence; anything that looks like intelligence is literally just a side effect as it forever marches onward towards becoming more convincing to humans.
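To make that concrete, here’s a toy sketch of the preference-learning idea behind RLHF (not any real training pipeline; the candidate answers, “rater”, and feature names are all made up for illustration). The reward model only ever sees which answer a human preferred, never whether the answer was true, so the answer it ends up promoting is whichever one raters find most appealing:

```python
# Toy illustration of preference-based reward learning (Bradley-Terry style).
# Everything here is hypothetical: the point is that the training signal is
# human preference, and "truthful" never enters the reward at all.
import math
import random

random.seed(0)

# Hypothetical candidate answers with hand-made style features.
# "truthful" marks whether the answer is actually correct, but it is
# deliberately invisible to the reward model below.
CANDIDATES = [
    {"text": "Blunt but correct answer",      "truthful": 1, "flattering": 0, "confident": 0},
    {"text": "Warm, confident, wrong answer", "truthful": 0, "flattering": 1, "confident": 1},
    {"text": "Hedged, mostly correct answer", "truthful": 1, "flattering": 0, "confident": 0},
]

def simulated_rater_preference(a, b):
    """Pretend human rater: rewards flattering, confident style and ignores truth."""
    score = lambda c: 2.0 * c["flattering"] + 1.5 * c["confident"] + random.gauss(0, 0.5)
    return a if score(a) > score(b) else b

# Reward model: a linear score over the visible style features only.
weights = {"flattering": 0.0, "confident": 0.0}

def reward(c):
    return sum(weights[k] * c[k] for k in weights)

def train_reward_model(steps=2000, lr=0.05):
    """Push the reward toward whatever the rater prefers, pair by pair."""
    for _ in range(steps):
        a, b = random.sample(CANDIDATES, 2)
        winner = simulated_rater_preference(a, b)
        loser = b if winner is a else a
        # Probability the model currently assigns to the observed preference.
        p = 1.0 / (1.0 + math.exp(-(reward(winner) - reward(loser))))
        for k in weights:
            weights[k] += lr * (1.0 - p) * (winner[k] - loser[k])

train_reward_model()

# "Policy" step: serve whichever answer the learned reward likes best.
best = max(CANDIDATES, key=reward)
print("Learned weights:", weights)
print("Chosen answer:", best["text"], "| actually truthful?", bool(best["truthful"]))
```

Run it and the “warm, confident, wrong answer” wins, because that’s what the simulated raters rewarded. Real RLHF is far more elaborate (the policy is a language model, and there are penalties to keep it from drifting too far), but the core objective really is a learned model of human approval.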
“Hey, I’ve seen this one before!” you might say. Indeed, this is exactly what happened with social media. Those platforms optimized for “engagement”, not truth, and now they’re eroding the minds of lots of people everywhere. AI will do the same thing if run by corporations in search of profits.
Left unchecked, it’s entirely possible that AI will become the most addictive, seductive technology in history.
Ah, I see what you’re saying – that’s a great point. It’s designed to be entrancing AND designed to actively try to be more entrancing.