• Perspectivist@feddit.uk · +24 · 17 hours ago

    > This is exactly the use-case for an LLM

    I don’t think it is. An LLM is a language-generating tool, not a language-understanding one.

    • iglou@programming.dev · +12 / -12 · 17 hours ago

      That is actually incorrect. It is also a language-understanding tool. You don’t have an LLM without NLP, and NLP includes processing and understanding natural language.

      • Perspectivist@feddit.uk · +26 · 16 hours ago

        But it doesn’t understand - at least not in the sense humans do. When you give it a prompt, it breaks it into tokens, matches those against its training data, and generates the most statistically likely continuation. It doesn’t “know” what it’s saying, it’s just producing the next most probable output. That’s why it often fails at simple tasks like counting letters in a word - it isn’t actually reading and analyzing the word, just predicting text. In that sense it’s simulating understanding, not possessing it.
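        A toy sketch of that "most statistically likely continuation" idea - a hypothetical bigram model over a made-up mini-corpus, standing in for the neural network a real LLM uses (real models also operate on subword tokens rather than words, which is part of why letter-counting trips them up):

```python
from collections import Counter, defaultdict

# Hypothetical tiny corpus; a real LLM trains on vastly more text
# and uses a neural network, but the objective has the same shape:
# given the tokens so far, emit the most probable next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which in the training data
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict_next(token):
    # Return the continuation seen most often after this token -
    # no meaning involved, just frequency
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # cat ("cat" follows "the" twice, others once)
```

        The model "answers" purely from co-occurrence statistics; nothing in it represents what a cat is.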

        • iglou@programming.dev · +8 / -17 · 16 hours ago

          You’re entering a more philosophical debate than a technical one, because for this point to make any sense, you’d have to define what “understanding” language means for a human at a level as low as the one you’re describing for an LLM.

          Can you affirm that what a human brain does to understand language is so different to what an LLM does?

          I’m not saying an LLM is smart, but saying that it doesn’t understand - when having computers “understand” natural language is the core of NLP - is meh.

          • Perspectivist@feddit.uk · +2 · 2 hours ago

            You’re right - in the NLP field, LLMs are described as doing “language understanding,” and that’s fine as long as we’re clear what that means. They process natural language input and can generate coherent output, which in a technical sense is a kind of understanding.

            But that shouldn’t be confused with human-like understanding. LLMs simulate it statistically, without any grounding in meaning, concepts or reference to the world. That’s why earlier GPT models could produce paragraphs of flawless grammar that, once you read closely, were complete nonsense. They looked like understanding, but nothing underneath was actually tied to reality.

            So I’d say both are true: LLMs “understand” in the NLP sense, but it’s not the same thing as human understanding. Mixing those two senses of the word is where people start talking past each other.

          • Feyd@programming.dev · +22 / -4 · 16 hours ago

            No they’re not - they’re talking purely at a technical level, and you’re trying to apply mysticism to it.