Workers should learn AI skills and companies should adopt AI because it’s a “cognitive amplifier,” claims Satya Nadella.

In other words: please help us, use our AI.

  • 5too@lemmy.world · 1 day ago

    > Right now, it’s just a fun toy, prone to hallucinations.

    That’s the thing though - with an LLM, it’s all “hallucinations”. They’re just usually close to reality, and are presented with an authoritative, friendly voice.

    (Or, in your case, they’re usually close to the established game reality!)

    • merc@sh.itjust.works · 20 hours ago

      This is the thing I hope people learn about LLMs: it’s all hallucinations.

      When an LLM has excellent data from multiple sources to answer your question, it is likely to give a correct answer. But that answer is still a hallucination. It’s dreaming up a sequence of words that is likely to follow the previous words. It’s more likely to give an “incorrect” hallucination when the data is contradictory or vague. But the process is identical. It’s just trying to dream up a likely series of words.
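      To make that concrete, here’s a minimal sketch of the process being described. A made-up toy bigram table stands in for a real trained model; the words and probabilities are invented for illustration. Whether the output comes out “correct” or not, the generating step is identical: sample a likely next word.

      ```python
      import random

      # Toy next-word probabilities, standing in for a trained model's
      # output distribution. These numbers are made up for illustration.
      next_word_probs = {
          ("the", "sky"): {"is": 0.9, "was": 0.1},
          ("sky", "is"): {"blue": 0.7, "green": 0.2, "falling": 0.1},
          ("sky", "was"): {"blue": 0.8, "grey": 0.2},
      }

      def sample_next(context):
          """Sample a likely next word: the same step for 'right' and 'wrong' outputs."""
          candidates = next_word_probs[context]
          words, weights = zip(*candidates.items())
          return random.choices(words, weights=weights)[0]

      words = ["the", "sky"]
      for _ in range(2):
          words.append(sample_next((words[-2], words[-1])))

      # Usually prints "the sky is blue"; occasionally "the sky is falling".
      # Both results come from the exact same sampling process.
      print(" ".join(words))
      ```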

      • OctopusNemeses@lemmy.world · 8 hours ago

        Before the tech industry set its sights on AI, “hallucination” was called error rate.

        It’s the rate at which the model labels outputs incorrectly. But of course the tech industry, being what it is, needs to come up with alternative words that spin-doctor bad things into not-bad things. So what the field of AI had for decades been calling error rate, everyone now calls “hallucinations”. Error has far worse optics than hallucination. Nobody would be buying this LLM garbage if every article posted about it included paragraphs about how it’s full of errors.
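        For anyone who hasn’t seen the old metric, here’s a minimal sketch of error rate; the labels and predictions are made up for illustration.

        ```python
        # Error rate: the fraction of outputs the model labels incorrectly.
        predictions = ["cat", "dog", "cat", "bird"]    # model outputs (invented)
        ground_truth = ["cat", "dog", "bird", "bird"]  # correct labels (invented)

        errors = sum(p != t for p, t in zip(predictions, ground_truth))
        error_rate = errors / len(predictions)
        print(f"error rate: {error_rate:.0%}")  # 25%
        ```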

        That’s the thing people need to learn.