That’s an interesting take; I didn’t know software could be inspired by other people’s works. And here I thought software just did exactly what it’s instructed to do. These are language models, and they were given data to train on. Did they pay for the data they used for training, or did they scrape the internet and steal all these books along with everything everyone else has said?
Well, now you know; software can be inspired by other people’s works. That’s what AIs are instructed to do during their training phase.
Does that mean software can also be afraid, or angry? What about happy software? Saying software can be inspired is like saying a rock can feel pain.
deleted by creator
Here is an alternative Piped link:
Geoffrey Hinton on the topic
Software can do a lot of things that rocks can’t do; that’s not a good analogy.
Whether software can feel “pain” depends a lot on your definitions, but I think there are circumstances in which software can be said to feel pain. Simple worms can sense painful stimuli and react to them; a program can do the same thing.
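To make that concrete, here is a deliberately minimal sketch of the kind of stimulus-response loop being described; the threshold, names, and values are invented for illustration, and whether reacting this way counts as “pain” is exactly the definitional question at issue.

```python
# Toy nociception: sense a "damaging" stimulus and withdraw, roughly what a
# simple worm does. The threshold and names here are made up for the example.
def sense_and_react(stimulus_level: float, pain_threshold: float = 0.7) -> str:
    if stimulus_level > pain_threshold:
        return "withdraw"  # aversive reaction to the harmful input
    return "carry on"

for level in (0.2, 0.5, 0.9):
    print(f"stimulus {level}: {sense_and_react(level)}")
```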
We’ve reached the point where the simplistic prejudices about artificial intelligence common in science fiction are no longer useful guidelines for talking about real artificial intelligence. Sci-fi writers long assumed that AIs couldn’t create art, and now it turns out it’s one of the things they’re actually rather good at.
Software cannot be “inspired”
AIs in their training stage are just running large-scale statistical analysis on the input material. They’re not “learning”, they’re not “inspired”, and they’re not “understanding”.
The anthropomorphism of these models is a major problem. They are not human, and they don’t learn like humans.
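As a rough, invented illustration of what “statistical analysis of the input material” can look like, here is a toy word-pair counter. Training a real model means fitting billions of neural-network weights by gradient descent rather than building a table like this, but both amount to extracting statistics from text rather than “understanding” it; the corpus and names below are made up.

```python
from collections import Counter, defaultdict

# Toy "training": count which word follows which in the input text and turn
# the counts into probabilities. Real LLMs fit neural-network weights instead,
# but both boil down to extracting statistics from the training data.
def train_bigram_model(text: str) -> dict:
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return {
        word: {w: c / sum(followers.values()) for w, c in followers.items()}
        for word, followers in counts.items()
    }

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram_model(corpus)
print(model["the"])  # {'cat': 0.5, 'mat': 0.5}
```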
deleted by creator
Yeah, that’s just flat-out wrong.
Hallucinations happen when there are gaps in the training data and the model is just statistically picking whatever is most likely to come next. The output becomes incomprehensible when the model breaks down and doesn’t know where to go. However, the model doesn’t see a difference between hallucinated nonsense and a coherent sentence; they’re exactly the same to the model.
The model does not learn or understand anything. It statistically knows what the next word should be, and it doesn’t need to have seen something before to know that. It doesn’t understand what it’s outputting; it’s just producing a long string that is gibberish to it.
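Purely as a toy illustration of the “it statistically knows what the next word is” point, the sketch below samples next words from a hand-written probability table (a stand-in for whatever the training stage produced). It is nothing like a real transformer and every name in it is invented, but it shows that the generation loop never checks whether the output means anything; running out of statistics for a word is a crude analogue of the “gap” described above.

```python
import random

# Toy generation: repeatedly sample the next word from a probability table.
# The table is hand-written here; in a real model it would come from training.
# Nothing in this loop checks whether the output means anything.
next_word_probs = {
    "the": {"cat": 0.5, "mat": 0.5},
    "cat": {"sat": 0.5, "slept": 0.5},
    "sat": {"on": 1.0},
    "slept": {"on": 1.0},
    "on": {"the": 1.0},
    "mat": {"and": 1.0},
    "and": {"the": 1.0},
}

def generate(start_word: str, length: int = 10) -> str:
    word = start_word
    output = [word]
    for _ in range(length):
        followers = next_word_probs.get(word)
        if not followers:  # a gap: no statistics for this word
            break
        choices = list(followers)
        weights = [followers[w] for w in choices]
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat slept on the mat and the cat sat on"
```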
I have formal training in AI, and 90%+ of what I see people claiming AI can do is a complete misunderstanding of the tech.
deleted by creator
They weren’t given data. They were shown data, and then the company spent tens of millions of dollars on CPU time to do statistical analysis of the data it was shown.
A computer being shown data is a computer being given data. I don’t understand your argument.
deleted by creator
deleted by creator
deleted by creator