Call me when you get past the “first step” where Reddit-controlled NFTs somehow make communities independent from Reddit.
These are all me:
I control the following bots:
The irony that this story was posted by a bot…
I’ve reported pictures/gifs of accidental nudity that were posted on Reddit without any evidence of consent, and they blew me off. They didn’t just ignore me - they took the time to say the content was fine.
Yeah, it was legal to post stuff like that - no reasonable expectation of privacy in public places and all that. But it isn’t ethical. Don’t do it. It isn’t funny.
Lemmy won: its users numbered in the hundreds before the fiasco, and the software is now growing by leaps and bounds.
Reddit may have won the battle, but not the war, and certainly not without casualties.
Well, LED lights are half-wave rectifiers that light up, so you wouldn’t add one. I don’t think I’ve ever heard of a half-wave rectifier referred to as a bridge rectifier.
A bridge rectifier flips the negative current to positive, so instead of a sine wave you get a series of humps. Then a capacitor acts as a battery like you describe to smooth out the dip between humps.
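A rough numerical sketch of that, purely for illustration (a hypothetical 60 Hz supply, ideal diodes, and an invented RC time constant - none of this models a real bulb):

```python
import numpy as np

# Hypothetical 60 Hz supply, ideal diodes, invented RC value - illustration only.
f = 60.0                          # line frequency in Hz
t = np.linspace(0, 3 / f, 1000)   # three full cycles
dt = t[1] - t[0]
v_in = np.sin(2 * np.pi * f * t)  # normalized AC input

v_half = np.maximum(v_in, 0)      # single diode (half-wave): negative half-cycles blocked
v_full = np.abs(v_in)             # bridge rectifier: negative half-cycles flipped positive

# Crude capacitor model: charges up with the input, decays exponentially between humps.
rc = 0.02                         # assumed time constant in seconds
v_smooth = np.empty_like(v_full)
v_smooth[0] = v_full[0]
for i in range(1, len(t)):
    v_smooth[i] = max(v_full[i], v_smooth[i - 1] * np.exp(-dt / rc))

print(f"half-wave output is zero {np.mean(v_half == 0):.0%} of the time")
print(f"bridge ripple, no capacitor:   {v_full.max() - v_full.min():.2f}")
print(f"bridge ripple, with capacitor: {v_smooth.max() - v_smooth.min():.2f}")
```

The half-wave trace sitting at zero for half of every cycle is also why cheap direct-on-AC bulbs flicker at line frequency.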
My LED burn outs were almost certainly defective, not normal wear.
Also, cheap ones run directly on AC, so they flicker at 60 Hz (50 in Europe) because the current is only flowing for half the cycle.
The most amazing thing to me - I’ve been using LEDs for 10+ years, and I think I’ve had to replace one or two of them. It is a wonder that prices can come down with demand dwindling so much.
That’s my point. The AI isn’t an independent subject to be criticized, it is a cultural mirror.
The bias isn’t in the software, it is in the data. The stock photos of professional women that were fed in were of white women.
That doesn’t say anything about the AI, but rather about the community that created those biases.
AI content isn’t watermarked, or detection would be trivial. What he’s talking about is that certain words have a certain probability of appearing after certain other words in a certain context. While there is some randomness to the output, certain words or phrases are unlikely to appear because the data the model was based on didn’t use them.
All I’m saying is that the more a writer’s style and word choice resemble the data set, the more likely their original content is to be flagged as AI generated.
Here’s the thing though - the probabilities for word choice come from the data the model was trained on. While someone who uses a substantially different writing style / word choice than the LLM could easily be identified as not being the LLM, someone with a similar writing style might be indistinguishable from it.
Or, to oversimplify: given that Reddit was a large portion of the input data for ChatGPT, all you need to do is write like a Redditor to sound like ChatGPT.
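To make the word-probability idea concrete, here’s a toy sketch - an invented bigram scorer over a made-up reference corpus, nothing like a real detector - showing that text which mirrors the reference data scores as more “model-like”:

```python
from collections import Counter, defaultdict
import math

def train_bigrams(tokens):
    """Count how often each word follows each other word in the reference text."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def avg_log_prob(tokens, counts, vocab_size):
    """Average per-word log-probability under the bigram counts (add-one smoothed)."""
    total = 0.0
    for prev, nxt in zip(tokens, tokens[1:]):
        seen = counts[prev]
        total += math.log((seen[nxt] + 1) / (sum(seen.values()) + vocab_size))
    return total / max(len(tokens) - 1, 1)

# Invented reference corpus standing in for the model's training data.
reference = "the cat sat on the mat and the cat slept on the mat".split()
model = train_bigrams(reference)
vocab = len(set(reference))

# Text that echoes the reference scores higher (less negative) than text that doesn't.
print(avg_log_prob("the cat sat on the mat".split(), model, vocab))
print(avg_log_prob("quantum flux harmonics destabilized the manifold".split(), model, vocab))
```

A real detector uses an enormously bigger model than a bigram table, but the principle is the one described above: the closer your phrasing is to what the model was trained on, the harder you are to tell apart from it.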
If it could, it couldn’t claim that the content it produced was original. If AI-generated content were detectable, that would be a tacit admission that it is entirely plagiarized.
The base assumption of those with that argument is that an AI is incapable of being original, so it is “stealing” anything it is trained on. The problem with that logic is that’s exactly how humans work - everything they say or do is derivative from their experiences. We combine pieces of information from different sources, and connect them in a way that is original - at least from our perspective. And not surprisingly, that’s what we’ve programmed AI to do.
Yes, AI can produce copyright violations. They should be programmed not to. They should cite their sources when appropriate. AI needs to “learn” the same lessons we learned about not copy-pasting Wikipedia into a term paper.
Though, ironically, a scale of Full - 3/4 - Half - 1/4 - Empty is perfectly fine for gas. There is usually a visual gauge of % for charge, but it isn’t as prominent as the range. Oddly, my car has it divided roughly into thirds.
The problem is that other vehicles adjust the projection based on current conditions - when I drive up a mountain, my projected range drops like a rock. When I drive back down I can end up with more range than I started. Reporting the “ideal” case during operation is misleading at best.
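As a back-of-the-envelope sketch (all numbers invented, not any vehicle’s actual algorithm), the gap between an “ideal” projection and one based on recent consumption can be large:

```python
# Invented numbers, purely illustrative - not any vehicle's actual range algorithm.
battery_kwh_remaining = 40.0
rated_kwh_per_km = 0.15      # sticker / "ideal" efficiency
recent_kwh_per_km = 0.25     # what the car has actually been using climbing a grade

ideal_range = battery_kwh_remaining / rated_kwh_per_km       # ~267 km
adaptive_range = battery_kwh_remaining / recent_kwh_per_km   # ~160 km

print(f"ideal projection:    {ideal_range:.0f} km")
print(f"adaptive projection: {adaptive_range:.0f} km")
```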
Copyright 100% applies to the output of an AI, and it is subject to all the rules of fair use and attribution that entails.
That is very different than saying that you can’t feed legally acquired content into an AI.
No, you misunderstand. Yes, they can control how the content in the book is used - that’s what copyright is. But they can’t control what I do with the book - I can read it, I can burn it, I can memorize it, I can throw it up on my roof.
My argument is that there is nothing wrong with training an AI with a book - that’s input for the AI, and it is indistinguishable from a human reading it.
Now, what the AI does with the content - if it plagiarizes or violates fair use - that’s a problem, but those problems are already covered by copyright law. They have no more business saying what can or cannot be input into an AI than they have restricting what I can read (and learn from). They can absolutely enforce their copyright on the output of the AI, just like they can if I print copies of their book.
My objection is strictly on the input side, and the output is already restricted.
Not really - it isn’t prediction, it is early detection. Interpretive AI (finding and interpreting patterns) is way ahead of generative AI.