• StarLight@lemmy.world · 3 months ago

    It’s actually insane that there are huge chunks of people expecting AGI anytime soon because of a CHATBOT. Just goes to show these people have zero understanding of anything. AGI is more like 30+ years away minimum; Andrew Ng thinks 30-50 years, and I’d say 35-55.

    • cygnus@lemmy.ca · 3 months ago (edited)

      At this rate, if people keep cheerfully piling into dead ends like LLMs and pretending they’re AI, we’ll never have AGI. The idea of throwing ever more compute at LLMs to create AGI is “expect nine women to make one baby in a month” levels of stupid.

      • GBU_28@lemm.ee · 3 months ago

        People who are pushing the boundaries are not making chat apps for GPT-4.

        They are privately continuing their research, like they always have.

      • bulwark@lemmy.world · 3 months ago

        I wouldn’t say LLMs are going away any time soon. Three or four years ago I did the Sentdex YouTube tutorial to build one from scratch to beat a Flappy Bird game. They’re really impressive when you look at the underlying math, but the math isn’t precise enough to be reliable for anything more than entertainment. Claiming it’s AI, much less AGI, is just marketing bullshit, tho.
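
        For the curious, here’s a rough sketch of the kind of math under the hood (made-up layer sizes, not the tutorial’s actual code): each layer is just a matrix multiply plus a bias, pushed through a nonlinearity.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        def layer(x, w, b):
            # One dense layer: weighted sum, add bias, squash with tanh.
            return np.tanh(x @ w + b)

        # Toy 4-input -> 8-hidden -> 2-output network with random weights
        # (in the real thing the weights get trained, not drawn at random).
        w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
        w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

        x = rng.normal(size=(1, 4))               # e.g. game-state features
        scores = layer(layer(x, w1, b1), w2, b2)  # forward pass
        print(scores)                             # pick the higher-scoring action
        ```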

          • bulwark@lemmy.world · 3 months ago

            I’m not sure what AI is these days, but according to Merriam-Webster it’s “the capability of computer systems or algorithms to imitate intelligent human behavior.” So it’s debatable.

            • lemmyvore@feddit.nl · 3 months ago

              Basically, whenever we find that a human ability can be automated, the goalposts of the “AI” buzzword are silently moved to include it.

            • thanks_shakey_snake@lemmy.ca · 3 months ago

              I don’t think it’s just marketing bullshit to think of LLMs as AI… The research community generally does, too. The AI section on arXiv is usually where you find LLM papers, for example.

              That’s not a crazy hype claim like the “AGI” thing, either… It doesn’t suggest sentience or consciousness or any particular semblance of life (and I’d disagree with MW that it needs to be “human” in any way)… It’s just a technical term for systems that exhibit behaviors based on training data rather than explicit programming.
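
              To make the distinction concrete, here’s a toy, entirely hypothetical sketch (not any real system): the first function is explicit programming, the second gets its behavior from labeled training data.

              ```python
              # Explicit programming: a human wrote the decision rule.
              def is_spam_explicit(msg: str) -> bool:
                  return "free money" in msg.lower()

              # Learned behavior: the rule comes from labeled examples.
              examples = [("free money now", True), ("meeting at 3pm", False),
                          ("claim your free prize", True), ("lunch tomorrow?", False)]

              # Crude per-word counts standing in for "training".
              counts: dict[str, list[int]] = {}
              for text, spam in examples:
                  for word in text.lower().split():
                      counts.setdefault(word, [0, 0])[int(spam)] += 1

              def is_spam_learned(msg: str) -> bool:
                  words = msg.lower().split()
                  spammy = sum(counts.get(w, [0, 0])[1] for w in words)
                  hammy = sum(counts.get(w, [0, 0])[0] for w in words)
                  return spammy > hammy

              print(is_spam_learned("free prize money"))  # True, with no hand-written rule
              ```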