• Melvin_Ferd@lemmy.world
    9 hours ago

    This is crazy. I’ve literally been saying they are fallible. You’re saying your professional fed an LLM some type of dataset, so I can’t really say what it was you were trying to accomplish, but I’m just arguing that having it process data is not what they’re trained to do. LLMs are incredible tools, and I’m tired of people acting like they’re not just because people keep using them for things they’re not built to do. It’s not a fire-and-forget thing. It does need to be supervised and verified. It’s not exactly an answer machine. But it’s so good at parsing text and documents, summarizing, formatting, and acting like a search engine you can communicate with rather than trying to grok some arcane sentence. Its power is in language applications.

    It is so much fun to just play around with and figure out where it can help. I’m constantly doing things on my computer, and it’s great for instructions. Especially if I hit a problem that’s kind of unique and needs a bit of discussion to solve.

    • Log in | Sign up@lemmy.world
      4 minutes ago

      it’s so good at parsing text and documents, summarizing

      No. Not when it matters. It makes stuff up. The less carefully you check every single fucking thing it says, the more likely you are to believe some lies it subtly slipped in as it went along. If truth doesn’t matter, go ahead and use LLMs.

      If you just want some ideas that you’re going to sift through, independently verify and check for yourself with extreme skepticism as if Donald Trump were telling you how to achieve world peace, great, you’re using LLMs effectively.

      But if you’re trusting it, you’re doing it very, very wrong, and you’re going to get humiliated when other people catch you out repeating an LLM’s bullshit.