• Lvxferre [he/him]@mander.xyz

    The analogy with colourblindness is actually damn great. But I’m bored, so let me go through the dumb fucks’ answers and rage a wee bit.

    > But I get a little frustrated seeing many of these kinds of posts that talk about fundamental limitations of LLMs vs humans on the grounds that it cannot “logically reason” like a human does.

    The reason so many posts focus on those models’ inability to reason is that reasoning is a big deal: it limits your ability to retrieve reliable information from them. And this is fucking obvious.

    > These are limitations in the *current* approach to training and objectives; internally, we have no clue what is going on.

    Emphasis mine. That “current” there? Moving goalposts. A fallacy aka idiocy.

    > Humans are statistical models too in an appropriate sense. The question is whether we try to execute phrase by phrase or not, or whether it even matters what humans do in the long term.

    If this were correct - and the assumer wasn’t being an assumer - it would still be an “ackshyually”. Way to go! You forgot to thank the “kind strangers” for the gold. Oh wait.

    > This may be comforting to think

    This is the part where the moron tries to shift the discussion from what is said to their assumptions about why someone says it. NEXT MUPPET, PLEASE!

    > Yeah, except it isn’t. You can get enormous value out of LLMs if you get over this weird science fiction requirement that they never make mistakes.

    Another moron / HN user / muppet missing the bloody point. NEXT!

    > …it’s a useless tool. I don’t like collaborating with chronic liars who aren’t able to openly point out knowledge gaps…

    > I think a more correct take here might be “it’s a tool that I don’t trust enough to use without checking,” or at the very least, “it’s a useless tool for my purposes.” I understand your point, but I got a little caught up on the above line because it’s very far out of alignment with my own experience using it to save enormous amounts of time.

    Okay, that one is actually a fair point. Next.

    > “Based on my research, zstd compression is not natively supported by iOS or Apple’s frameworks, which means you cannot use zstd compression without adding some form of external code to your project”
    >
    > Thanks Sonnet.
    >
    > Full response:
    > https://www.perplexity.ai/search/without-adding-third-party-

    Missing the fucking point by trying to bury the evidence of the issue under a cherry-picked example where it doesn’t happen. *Yawn*. Yup, cherry-picking - another fallacy / idiocy, but apparently you’re supposed to vomit those and eat the others’ vomit when you’re an HN poster instead of a rational person.

    Pfffffffffffffffffffffft.


    At times like this I wonder if “being functionally illiterate” is a requirement to comment there.

    And this issue isn’t just when discussing LLMs, mind you. Those braindead morons will vomit “ackshyually” nonstop; they make even redditors look smart in comparison. They show a complete failure at basic reading skills, such as “look for the core point of someone’s text”.