• CeeBee_Eh@lemmy.world
    1 day ago

    “it just repeats things which approximate those that have been said before.”

    That’s not correct, and it oversimplifies how LLMs work. I agree with the spirit of what you’re saying, though.

      • CeeBee_Eh@lemmy.world
        22 hours ago

        I’m not wrong. There’s a mountain of research demonstrating that LLMs encode contextual relationships between words during training.

        There’s much more happening than “predicting the next word”. That phrase is one of those unfortunate cases of dumbed-down science communication: it was said once, and now it just gets repeated non-stop.
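        To make the “contextual relationships” point concrete, here’s a minimal sketch (assuming the HuggingFace transformers library and the public bert-base-uncased checkpoint, my choices for illustration): the same word gets a different vector depending on the sentence around it, which a plain per-word lookup couldn’t do.

        ```python
        # Minimal sketch: a transformer assigns the same word different
        # vectors in different contexts (assumes `pip install torch transformers`
        # and the public bert-base-uncased checkpoint).
        import torch
        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModel.from_pretrained("bert-base-uncased")

        def embed_word(sentence: str, word: str) -> torch.Tensor:
            """Return the contextual embedding of `word` inside `sentence`."""
            inputs = tokenizer(sentence, return_tensors="pt")
            with torch.no_grad():
                hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
            tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
            return hidden[tokens.index(word)]

        a = embed_word("i deposited cash at the bank.", "bank")
        b = embed_word("we picnicked on the river bank.", "bank")
        # A static per-word lookup would print 1.0; BERT prints noticeably
        # less, because the two senses of "bank" are encoded differently.
        print(torch.cosine_similarity(a, b, dim=0).item())
        ```

        The exact number isn’t the point; the point is that context changes the representation, which is exactly the relationship-encoding the research describes.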

        If you really want a better understanding, watch this video:

        https://youtu.be/UKcWu1l_UNw

        And before your next response starts with “but Apple…”

        Their paper has already had plenty of holes poked in it. And it’s no coincidence it was released just before their WWDC event, which had almost no AI content. They flopped so hard on AI that they’re facing class-action lawsuits over false advertising. It turns out a lot of their AI demos from last year were fabricated and didn’t exist as products when they were announced; even some senior Apple people only learned of those features during the announcements.

        Apple’s paper on LLMs is completely biased in their favour.