• PoopingCough@lemmy.world

    I don’t know much about cybersecurity, but from what I understand about how LLMs work, there was always going to be a limit to what they can actually do. They have no understanding; they’re just giant probability engines, so the ‘hallucinations’ aren’t a solvable bug, they’re inherent in the design of the models. And it’s only going to get worse, as clean training data becomes harder and harder to find without being poisoned by existing LLM output.
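
    To make the ‘probability engine’ point concrete, here’s a toy Python sketch (nothing like a real transformer, the word probabilities are made up): the model only ever picks a statistically plausible next word, and nothing in the loop checks whether the output is true.

    ```python
    import random

    # Toy illustration (not any real model's code): the "knowledge" is just
    # made-up probabilities for the next word given the previous context.
    next_word_probs = {
        "the capital of": [("France", 0.6), ("Atlantis", 0.1), ("Mars", 0.3)],
        "France is":      [("Paris", 0.7), ("Lyon", 0.2), ("Berlin", 0.1)],
    }

    def sample_next(context: str) -> str:
        """Pick the next word by weighted chance -- nothing here checks
        whether the result is actually true."""
        words, weights = zip(*next_word_probs[context])
        return random.choices(words, weights=weights, k=1)[0]

    # Even with decent data, this will sometimes emit "Berlin" after
    # "France is" -- a plausible-looking but wrong continuation, which is
    # basically what a 'hallucination' is: sampling, not understanding.
    print(sample_next("France is"))
    ```

    The point of the sketch is that the wrong answers come from the same mechanism as the right ones, so you can’t patch them out without changing what the system fundamentally is.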