• tornavish@lemmy.cafe · 1 day ago

    That’s ridiculous. It’s possible to recognize that the technology has many good uses while still complaining that it has been hyped too much.

    For example: AI-driven materials discovery uses machine learning to analyze chemicals and materials, allowing scientists to find and design new compounds much faster than humans could through traditional trial and error.

    • ikidd@lemmy.world · 7 hours ago

      That’s ML involving tightly constrained iterative processes and neural networks, and it was happening long before LLMs. This whole bubble is LLM bullshit, and assuming a word-probability machine is capable of “discovering” anything is silly.

      • tornavish@lemmy.cafe · 7 hours ago

        LLMs have plenty of good uses. They are only as skilled as the person using them, however, which is the problem.

        • 7toed@midwest.social · 6 hours ago

          But it seems that someone who is proficient in writing ultimately doesn’t benefit in terms of time or effort, since both are still spent correcting the output. And if you’re so curious, there have been a number of studies on this exact phenomenon already.

          The problem, truly, is cognitive debt and the growing number of people lending themselves to the Dunning-Kruger effect in the name of trusting and living vicariously through their AI model of choice.

          My last employer was pushing hard for LLMs in a field they don’t do shit for. One of the project managers was convinced by his AI of choice (Gemini) to actually propose replacing himself with another AI tool. IT wasn’t having it because it would screen-read potentially sensitive info. He was laid off with a sheriff escort not two months later. Now he’s on LinkedIn posting some truly schizophrenic shit, having otherwise been normal-ish.

          • tornavish@lemmy.cafe · 6 hours ago

            There have also been a number of studies saying that if a person knows how to use an LLM and provides it with a good prompt, it can give them something they can use.

            The biggest issue that I’ve seen with LLMs is that nobody knows how to write a prompt. Nobody knows how to really use them to their benefit. There is absolutely a benefit for someone who is proficient in writing, just like there is absolutely a benefit for someone who is proficient in writing code.

            I’m guessing you belong in the category that cannot write a good prompt?

            • 7toed@midwest.social · 4 hours ago

              No, I’ve done my actual work while people convinced they have “good prompts” weighed my whole team down (and promptly got laid off). We’ve burnt enough OpenAI tokens and probed models on our own hardware to ascertain their utility in my field. Manual automation with simple systems and hard logic is what the industry has run on, and it certainly will continue to.

              Explain to me what makes a prompt good. As long as you’re using any provided model and not using a sandbox, you’re stuck with their initiating prompt. Change that, and you still have their parameters. Run an open-source model with your own parameter tunings and you’re still limited by your temperature. What is a good temperature for rigid logic that doesn’t result in unexpected behavior but can still adapt well enough to user input? These are questions every AI corp is dealing with.
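
              To make that concrete, here’s a minimal sketch of where that temperature knob actually sits when you call an OpenAI-compatible chat endpoint; the base URL, API key, model name, and example prompt below are made-up placeholders, not anything from our setup.

                  # Rough sketch, assuming the openai Python SDK (v1+) pointed at any
                  # OpenAI-compatible server; base_url, api_key, and model are placeholders.
                  from openai import OpenAI

                  client = OpenAI(base_url="http://localhost:8000/v1", api_key="placeholder-key")

                  def ask(prompt: str, temperature: float) -> str:
                      # ~0.0–0.2: near-deterministic output, closer to rigid logic.
                      # ~0.7+: more varied wording, better for open-ended replies.
                      response = client.chat.completions.create(
                          model="placeholder-model",
                          messages=[{"role": "user", "content": prompt}],
                          temperature=temperature,
                      )
                      return response.choices[0].message.content

                  print(ask("Summarize this ticket in one sentence: ...", temperature=0.2))

              Even with the temperature turned way down, the model is still picking tokens from a probability distribution, which is why the “unexpected behavior” question never fully goes away.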

              For context, all we were trying to do was implement some copilot/gpt shit onto our PMS to handle customer queries, data entry, scheduling information and notifications, and some other open-ended suggestions. The C-suite was giddy, IT not so much, but my team was told to keep an open mind and see what we could achieve… so we evaluated it. About six months ago the Cs finally stopped bugging us since they had bigger fires to put out, and we had worked out a Power Automate routine (without the help of Copilot… it’s unfunnily useless even though it’s implemented right into PA), making essentially all the effort put into working the AI from an LLM into an “agentic model” completely moot, despite the tools the company bought into and everything.

              I’m guessing you belong in the category of people who haven’t actually worked somewhere where part of the job is to deploy things like AI, but who like to have an affirmative stance anyway.

              • tornavish@lemmy.cafe · 4 hours ago

                Yawn. Let’s do this, it’s even better: You tell me a task that you need to accomplish. Then you tell me the prompt you would give an LLM to accomplish that task.

                • 7toed@midwest.social · 2 hours ago

                  Clearly heavy LLM usage inhibits reading comprehension; I already stated the use case my employer wanted to implement. Sorry normal people aren’t as dogmatic as your AI friends lmao

                  • tornavish@lemmy.cafe · 2 hours ago

                    Give me an example and the exact prompt. My reading is very good. You are refusing to do it.

    • greygore@lemmy.world · 1 day ago

      Machine learning is not why companies are dumping hundreds of billions of dollars into building data centers so they can earn tens of billions of dollars; it’s specifically large language models. Machine learning existed before the LLM boom and had real benefits, but it has seen barely a fraction of the investment that LLMs have, because it didn’t have a bunch of tech bros speculating that artificial superintelligence would make them trillionaires.

    • FrogmanL@lemmy.world · 11 hours ago

      I agree with you. There is a knee-jerk “AI bad” reaction when the real answer is that it’s overhyped. There are good uses for it. I use it nearly daily for work, and, like any other tech, it’s just a tool. It doesn’t replace me. It just makes me faster. Anyone that claims it can replace people might as well say that a hammer can replace a person.

      • tornavish@lemmy.cafe · 8 hours ago

        Right: hammers, forklifts. Tools that people use.

        But right now we have a bunch of idiots spinning in circles on forklifts throwing hammers. Yes, you can do that… Yes, it’s probably fun, but ultimately that is not what the forklift is really good for.

    • Laser@feddit.org · 1 day ago

      The current hype and the massive investments are about generative AI, not the actually-useful-for-humanity applications.

      Image recognition, medical research, etc. are not what drives the current market. It’s about offering a service that the broad masses use continuously. Otherwise these investments don’t make sense.

      • tornavish@lemmy.cafe · 1 day ago

        But I guess all they could think of was chatbots. It just shows that very few people actually know what AI is doing beyond what they see in the news. I’m not thrilled with how the rollout is going, of course, but I recognize that there are good aspects.