• jdr@lemmy.ml · 5 hours ago

    but what “books” is it “remembering” when it gives that answer?

    What would be a better way to phrase this?

  • Lvxferre [he/him]@mander.xyz · 3 hours ago

    Immediately bookmarked it. Way better than my current approaches:

    • if I care about the person, I mention a few experiences with LLMs: how ChatGPT constantly invents RimWorld mods that don’t exist, how Bard (now Gemini) told me potatoes are active and oranges are passive (because potatoes can roll and need to react to their environment). Or internet lore, like “eat a rock per day” and “put glue on pizza”.
    • if I don’t care about the person, I superficially agree with them. Then I mark them mentally as “braindead trash” and consider avoiding them as much as possible - because this is a symptom of worse character flaws, like being gullible.

    Small note:

    What kinds of things might they be good at? // Summarize this for me

    Kinda. It doesn’t really summarise texts; it stitches chunks of them together, sometimes changing the meaning.

    In that respect, LLMs are only useful if you wouldn’t otherwise read the text in question, or if you’re scanning it for a specific topic.

    Note the pattern behind the other examples: they’re all things where there’s no harm if the output is wrong, because you’re checking it anyway.

    • tuff_wizard@aussie.zone · 20 minutes ago

      I think the summarisation of video meetings is where LLMs may actually find a reasonable use.

      A friend who is very considered, and whom I’d call very smart, finds it a genuinely useful feature in his job at an accounting firm, especially when clients are involved.