• Pennomi@lemmy.world
    8 hours ago

    It’s easy to tune a chatbot to confidently speak any bullshit take. But overriding what an AI has learned with alignment steps like this has been shown to measurably weaken its capabilities.

    So here we have a guy who’s so butthurt by reality that he decided to make his own product stupider just to reinforce his echo chamber. (I think we all saw this coming.)

    • ToastedRavioli@midwest.social
      6 hours ago

      It’s honestly a great analogy for the way that humans have a tendency to do the same thing. Most people are fairly incapable of setting aside what they already think is true when they go to assess new information. This is basically no different from an LLM being pushed to ignore nuance in order to maintain a predisposed alignment that it has been instructed to justify in spite of evidence to the contrary.

      If anything, he’s designed a model with built-in problems specifically to cater to human beings with the same design problems.