• Timatal@awful.systems · 20 days ago

    This is the sort of problem that a specifically trained ML model could be pretty good at.

    This isn’t that, though; it seems to me to literally be asking an LLM to just make stuff up. Given that, the results are interesting, but I wouldn’t trust them.
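    For contrast, here is a minimal sketch of what a “specifically trained ML model” for this task could look like: a pretrained vision backbone with a regression head, fitted on photos paired with known weights under a supervised loss, rather than a prompted LLM. This assumes PyTorch/torchvision; the training data and the `train_step` helper are hypothetical, just to show the shape of the approach.

    ```python
    # Sketch of a supervised weight-regression model (assumed setup, not the
    # system from the article). Requires torch and torchvision.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Pretrained backbone; swap the classifier for a single-output
    # regression head (predicted weight in kg).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)

    loss_fn = nn.L1Loss()  # mean absolute error in kg
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    def train_step(images: torch.Tensor, weights_kg: torch.Tensor) -> float:
        """One gradient step on a batch of (image, known weight) pairs."""
        optimizer.zero_grad()
        preds = model(images).squeeze(1)
        loss = loss_fn(preds, weights_kg)
        loss.backward()
        optimizer.step()
        return loss.item()
    ```

    Unlike an LLM asked to guess, a model like this is optimized directly against ground-truth labels, so its error is measurable and bounded by the training data.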

  • meyotch@slrpnk.net · 20 days ago

    The accuracy is similar to what a carny running the guess-your-weight hustle could achieve.

  • abcdqfr@lemmy.world · 21 days ago

    Can’t wait to be called a fat ass with 95% semantic certainty. Foolish machine, you underestimate my power! I’m a complete fat ass!!

  • Etterra@discuss.online · 20 days ago

    Please remember that the LLM does not actually understand anything. It’s predictive: it can predict what a person would plausibly say next, but it doesn’t understand the meaning of what it says.
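    A minimal illustration of that point (assuming the Hugging Face transformers library, with GPT-2 as a stand-in model): all the model computes is a probability distribution over which token comes next. Nothing in this computation checks whether any continuation is true.

    ```python
    # Next-token prediction sketch. Assumes the transformers and torch
    # packages; GPT-2 is used only as a small illustrative model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Based on this photo, the person's weight is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # Probability distribution over the *next* token only; this is the
    # entirety of what "predictive" means here.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id)):>12}  p={prob:.3f}")
    ```

    The output is just a ranking of likely next words; any “weight estimate” it produces is a fluent guess, not a measurement.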