• 0 Posts
  • 65 Comments
Joined 24 days ago
Cake day: February 5th, 2026




  • Just because the final output comes from AI doesn’t always mean a human didn’t put real effort into writing it. There’s a big difference between asking an LLM to write something from scratch, telling it exactly what to say, or just having it edit and polish what you already wrote.

    A ton of my replies here - including this one - are technically “AI output,” but all the AI really did was take what I wrote, clean it up, and turn it into coherent text that’s easier for the reader to follow.

    spoiler

    Original text: Just because the final output is by AI doesn’t always mean human didn’t put effort into writing it. There’s a difference between asking LLM to write something, telling LLM what to write or asking it to edit something you wrote.

    A large number of my replies here, including this one, are technically “AI output” but all the AI did was go through what I wrote and try and turn it into coherent text that the is easy for the recipient to consume.




  • You don’t seem very interested in sticking to the topic, do you? This conversation has been all over the place, complete with ad-hominems, concern-trolling, red herrings, strawmen, gish galloping - as if you’re trying to break some kind of record.

    It’s pretty clear you’ve built up a cartoon-villain version of me in your head and now you’re fighting that imagined version like it’s real. I made a pretty simple claim about AGI, you’ve piled an entire story on top of it, and now you’re demanding I defend views I don’t even hold.

    I’ve been trying to have a good-faith conversation here, but if this is what you’re going to keep doing, then I’ll just move on.



  • So do you think Dyson Spheres are inevitable too?

    I’m less certain about that than I am about AGI - there may be other ways to produce that same amount of energy with less effort - but generally speaking, yeah, it seems highly probable to me.

    First you were implying that today’s AI would bring about AGI

    I’ve never made such a claim. I’ve been saying the exact same thing since around 2016 or so - long before LLMs were even a thing. It’s in no way obvious to me that LLMs are the path to AGI. They could be, but they don’t have to be. Either way, it doesn’t change my core argument.

    people you hold so dear

    C’mon now.


  • My argument is that we’ll keep incrementally improving our technology, as we have throughout human history. Unless general intelligence is substrate dependent - meaning that what our brains are doing cannot be replicated in silicon - or we destroy ourselves before we get there, it’s just a matter of time before we create a system that’s as intelligent as we are: AGI.

    I already said that the timescale doesn’t matter here. It could take a hundred years or two thousand - doesn’t matter. We’re still moving toward it. It doesn’t matter how slowly you move; as long as you keep moving, you’ll eventually reach your destination.

    So, how I see it is that if we never end up creating AGI ever, it’s either because we destroyed ourselves before we got there or there’s something borderline supernatural about the human brain that makes it impossible to copy in silicon.




  • Are we not moving toward AGI? Because from where I stand, I only see three scenarios: either AI research is going backwards, no progress is being made whatsoever, or we’re continuing to improve our systems incrementally - inevitably moving toward AGI. Unless, of course, you think we’re never going to reach it, which I view as quite an insane claim in itself.

    If we’re not moving toward it, then I’d love to hear your explanation for why we’re moving backwards or not making any progress at all.

    Whether we’re 5 or 500 years away from AGI is completely irrelevant to the people who worry about it. It’s not the speed of the progress - it’s the trajectory of it.





  • But in both cases you have the option to pay - yet choose not to. If money wasn’t an issue, there wouldn’t really be any reason to pirate anything. That’s why I see piracy as a financial decision, and thus I don’t think piracy advocates have any ground to stand on when they criticize AI companies for doing the exact same thing. It’s not identical, but it’s equivalent.

    One could even argue that individual piracy is selfish because it only benefits the one person doing it. AI companies at least are providing a product that hundreds of millions of people get value out of - and the vast majority of them get it for free.


  • I didn’t think I’d need to explain the difference between saving money and earning money but here we are.

    When you earn money, you get a check you can spend on more stuff. When you save money, you don’t get a check - that would be earning, not saving. Instead, you’re spending less, which means you have that money left to buy something else. Those savings are effectively what you “earn.”

    When you download a $40 movie for free, you’re left with $40 more to spend on something else. It doesn’t matter whether I hand you $40 to buy the movie or you pirate it - in both cases, you end up with the exact same amount of money afterward.


  • I’ve honestly never considered before whether there could be something it’s like to be a character in my dream - whether it’s part of the same consciousness. It doesn’t seem obvious that it couldn’t be.

    And my personal view is that the answer is definitely no. There’s no dreamer. The dream is appearing in the consciousness of a biological being with my genes, history, and memories that’s currently in a state of sleep.

    This comes with other ramifications too. There’s no decision-maker either.