• Kinperor@lemmy.ca
    link
    fedilink
    English
    arrow-up
    28
    arrow-down
    1
    ·
    6 hours ago

    I skimmed the article, I might have missed it but here’s another strike against AI, that is tremendously important: It’s the ultimate accountability killer.

    Did your insurance company make an obvious mistake? Oops teeehee, silly them, the AI was a bit off

    Is everything going mildly OK? Of course! The AI is deciding who gets insurance and who doesn’t, it knows better, so why are you questioning it?

    Expect (and rage against) a lot of pernicious usage of AI for decision making, especially in areas where they shouldn’t be making decisions (take Israel for instance, that uses an AI to select ““military”” targets in Gaza).

  • Rose56@lemmy.ca
    link
    fedilink
    English
    arrow-up
    11
    ·
    6 hours ago

    It’s an unfinished product with various problems, tested on humans to develop it and make money.

    It does nothing 100% right! As humanity, we care about making money from it, not about helping humanity.

  • NoodlePoint@lemmy.world
    link
    fedilink
    English
    arrow-up
    67
    arrow-down
    5
    ·
    19 hours ago
    1. It’s theft from digital artisans, as AI-generated works tend to derive heavily from their work without even giving due credit.
    2. It further discourages what’s called critical thinking.
    3. It’s putting even technically competent people out of work.
    4. It’s grift for and by techbros.
    • Soup@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      ·
      16 hours ago

      Number 3 is crazy too, because it’s putting people out of work even when it’s worse than them. The bubble bursting will have dire consequences, and if it’s held together by corrupt injections of taxpayer money it’ll still have awful consequences. The whole point of AI doing our jobs was to free us from labour, but instead the lack of jobs is only hurting people.

      • jj4211@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        6 hours ago

        For 3, there are two things:

        • It is common for less good, but much cheaper tech to displace humans doing a job if it’s “good enough”. Dishwashing machines that sometimes leave debris on dishes are an example.

        • The technically competent have long been led by people who are not technically competent, and have long been outcompeted by bullshit artists. LLM output is remarkably similar to bullshit artistry. One saving grace of the human bullshit artists is that they usually understand they secretly depend on actually competent people; while they will outcompete them, they will at least try to keep the competent around. The LLM has no such concept.

    • Gutless2615@ttrpg.network
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      23
      ·
      12 hours ago
      1. It’s not theft
      2. PEBKAC problem.
      3. totally agree. This right here is what we should be worried about.
      4. yep, absolutely. But we need to be figuring out what to do when all the jobs go away.
      • squaresinger@lemmy.world
        link
        fedilink
        English
        arrow-up
        18
        ·
        12 hours ago
        1. If Vanilla Ice takes six notes of the bass line from a Queen song, it’s theft and costs $4 million. If AI copies whole chapters of books, it’s all fine.
        2. No. PEBKAC is when it affects one person, or maybe a handful of people. If it affects whole sections of the population, it’s systemic. It’s like saying “poverty is a user error because everyone could just choose to be rich”.
  • Binturong@lemmy.ca
    link
    fedilink
    English
    arrow-up
    63
    ·
    22 hours ago

    The reason we hate AI is cause it’s not for us. It’s developed and controlled by people who want to control us better. It is a tool to benefit capital, and capital always extracts from labour, AI only increases the efficiency of exploitation because that’s what it’s for. If we had open sourced public AI development geared toward better delivering social services and managing programs to help people as a whole, we would like it more. Also none of this LLM shit is actually AI, that’s all branding and marketing manipulation, just a reminder.

    • Knock_Knock_Lemmy_In@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      10 hours ago

      Yes. The capitalist takeover leaves the bitter taste. If OpenAI was actually open then there would be much less backlash and probably more organic revenue.

    • BlameTheAntifa@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      2
      ·
      11 hours ago

      none of this LLM shit is actually AI, that’s all branding and marketing manipulation, just a reminder.

      To correct the last part, LLMs are AI. Remember that “Artificial” means “fake”, “superficial”, or “having the appearance of.” It does not mean “actual intelligence.” This is why additional terms were coined to specify types of AI that are capable of more than just smoke and mirrors, such as AGI. Expect even more niche terms to arrive in the future as technology evolves.

      • Frezik@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        3 hours ago

        This is one of the worst things in the current AI trends for me. People have straight up told me that the old MIT CSAIL lab wasn’t doing AI. There’s a misunderstanding of what the field actually does and how important it is. People have difficulty separating this from the slop capitalism has plastered over the research.

        One of the foundational groups for the field is MIT’s Tech Model Railroad Club, and I’m not joking.

  • Brotha_Jaufrey@lemmy.world
    link
    fedilink
    English
    arrow-up
    39
    arrow-down
    5
    ·
    21 hours ago

    There was a thread of people pointing out biases that exist on Lemmy, and some commenters obviously mention anti-AI people. Cue the superiority complex (cringe).

    Some of these people actually believe UBI will become a thing for people who lose their jobs due to AI, meanwhile the billionaire class is actively REMOVING benefits for the poor to further enrich themselves.

    What really gets me is when people KNOW what the hell we’re talking about, but then mention the 1% use case scenario where AI is actually useful (for STEM) and act like that’s what we’re targeting. Like no, motherfucker. We’re talking about the AI that’s RIGHT IN FRONT OF US, contributing to a future where we’re all braindead ai-slop dependent, talentless husks of human beings. Not to mention unemployed now.

    • CancerMancer@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      10
      ·
      17 hours ago

      A system is what it does. If it costs us jobs, enriches the wealthy at our expense, destroys creativity and independent thought, and suppresses wrongthink? It’s a censorious authoritarian fascist pushing austerity.

      Show me AI getting us UBI or creating worker-owned industry and I’ll change my tune.

      • Frezik@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        2
        ·
        3 hours ago

        UBI is there to save billionaires.

        They’re a shortsighted lot who don’t recognize that workers are also their customers. If they stop paying us all, then there is nobody to buy their stuff. UBI is the way out of that for them while still having billionaires around.

        It aligns with Democratic Socialists well enough, but not the seize-the-means socialists.

  • Deflated0ne@lemmy.world
    link
    fedilink
    English
    arrow-up
    65
    ·
    1 day ago

    It’s extremely wasteful. Inefficient to the extreme on both electricity and water. It’s being used by capitalists like a scythe. Reaping millions of jobs with no support or backup plan for its victims. Just a fuck you and a quip about bootstraps.

    It’s cheapening all creative endeavors. Why pay a skilled artist when your shitbot can excrete some slop?

    What’s not to hate?

    • Sibyls@lemmy.ml
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      13
      ·
      23 hours ago

      As with almost all technology, AI tech is evolving into different architectures that aren’t wasteful at all. There are now powerful models we can run that don’t even require a GPU, which is where most of that power was needed.

      The one wrong thing with your take is the lack of vision as to how technology changes and evolves over time. We had computers the size of rooms to run processes that our mobile phones can now run hundreds of times more efficiently and powerfully.

      Your other points are valid, people don’t realize how AI will change the world. They don’t realize how soon people will stop thinking for themselves in a lot of ways. We already see how critical thinking drops with lots of AI usage, and big tech is only thinking of how to replace their staff with it and keep consumers engaged with it.

      • SoftestSapphic@lemmy.world
        link
        fedilink
        English
        arrow-up
        17
        arrow-down
        3
        ·
        edit-2
        22 hours ago

        You are demonstrating in this comment that you don’t really understand the tech.

        The “efficient” models already spent the water and energy to train, these models are inferior to the ones that need data centers because you are stuck with a bot trained in 2020-2022 forever.

        They are less wasteful, but will become just as wasteful the second we want it to catch up again.

        • Sibyls@lemmy.ml
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          9
          ·
          21 hours ago

          You are misunderstanding the tech. That’s not how this works; models are retrained often. Did you think this was only done a few years ago? The fact that you called them bots says everything.

          You’re just hating to hate on something, without understanding the technology. The efficiency I’m referring to is the MoE architecture, which only got popular within the last year, and new architectures are still being developed. Not that you care about this topic; you’d rather blindly hate on whatever is spewed by outdated and biased news sources.
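          As a rough illustration of the MoE idea mentioned above: each input activates only a few “expert” subnetworks instead of the whole model, which is where the claimed efficiency comes from. A toy NumPy sketch (made-up gating and experts, not any real model’s architecture) routing one input through 2 of 8 experts:

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_forward(x, gate_w, experts, top_k=2):
    """Toy mixture-of-experts layer: route the input to its top-k experts.

    Only the selected experts run, which is where the compute savings
    over a dense layer of the same total parameter count come from.
    """
    logits = x @ gate_w                   # one gating score per expert
    top = np.argsort(logits)[-top_k:]     # indices of the k highest scores
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts
    return sum(w * experts[i](x) for w, i in zip(weights, top))

d, n_experts = 4, 8
gate_w = rng.normal(size=(d, n_experts))
# each "expert" is just a random linear map in this sketch
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, w=w: x @ w for w in expert_ws]

x = rng.normal(size=d)
y = moe_forward(x, gate_w, experts, top_k=2)  # only 2 of 8 experts executed
```

          A dense layer of the same total size would run all eight experts for every input; here six are skipped entirely.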

          • SoftestSapphic@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            arrow-down
            2
            ·
            21 hours ago

            Yeah nah

            Same shit people said in 2022

            In 3 more years you’ll be making the same excuses for the same shortcomings, because for you this isn’t about the tech, it’s about your ideology.

            • Sibyls@lemmy.ml
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              7
              ·
              20 hours ago

              You make weird assumptions seemingly based on outdated ideas. I’ll let you be, perhaps you need some rest.

    • iopq@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      15
      ·
      edit-2
      1 day ago

      It was also inefficient for a computer to play chess in 1980. Imagine using a hundred watts of energy and a machine that cost thousands of dollars, and still not being able to beat an average club player.

      Now a phone will cream the world’s best at chess and even Go.

      Give it twenty years to become good. It will certainly do more with smaller, more efficient models as it improves.

      • Frezik@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        3 hours ago

        We really need to work out the implications of the fact that Moore’s Law is dead, and that technology doesn’t necessarily advance on an exponential path like that anyway. Not in all cases.

        The cost per component of an integrated circuit (the original formulation of Moore’s Law) is not going down much at all. We’re orders of magnitude away from where we “should” be if we start from the Intel 8008 and cut the cost in half every 24 months. Nodes are creating smaller components, but they’re not getting cheaper. The fact that it took decades to get to this point is impressive, but it was already an exception in all of human history. Why can’t we just be happy that computers we already have are pretty damned neat?
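        The “orders of magnitude” gap can be sanity-checked with a bit of arithmetic, assuming the Intel 8008’s 1972 launch as the starting point and one cost halving per 24 months:

```python
# If cost per component had kept halving every 24 months since the
# Intel 8008 (1972), the cumulative reduction by 2024 would be:
years = 2024 - 1972
halvings = years * 12 // 24   # one halving every 24 months
cost_factor = 2 ** halvings   # ~6.7e7x cheaper
print(halvings, cost_factor)
```

        Actual per-component costs on recent nodes have fallen nowhere near a factor of tens of millions, which is the gap the comment is pointing at.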

        Anyway, AI is not following anything like that path. This might mean a big breakthrough tomorrow, or it could be decades from now. It might even turn out not to be possible; I think there is some way we can do AGI on computers of some kind, but that’s not even the consensus among computer scientists. In any case, there’s no particular reason to think LLMs will follow anything like the exponential growth path of Moore’s Law. They seem to have hit a point of diminishing returns.

        • Boomer Humor Doomergod@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 hour ago

          The 19th and 20th centuries saw so much technological advancement and we got used to that amount of change.

          That’s why people were expecting Mars by the mid 80s and flying cars and other fanciful tech by now.

          The problem is that the rate of advancement is slowing down, and economies that demand infinite, compounding growth are not prepared for this.

      • jj4211@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        6 hours ago

        It might, but:

        • Current approaches are showing exponential demands for more resources with barely noticeable “improvements”, so new approaches will be needed.

        • Advances in electronics are getting ever more difficult, with increasing drawbacks. In 1980 a processor would likely not even have a heatsink. Now the cutting edge of Moore’s law essentially lives in the datacenter and frequently has to be hooked up to water cooling. SDRAM has joined CPUs in needing active cooling.
      • Kay Ohtie@pawb.social
        link
        fedilink
        English
        arrow-up
        19
        arrow-down
        1
        ·
        1 day ago

        If you want to argue in favor of your slop machine, you’re going to have to stop making false equivalences, or at least understand how they’re false. You can’t gain ground with arguments that are merely tangential.

        A computer in 1980 was still a computer, not a chess machine. It did general purpose processing where it followed whatever you guided it to. Neural models don’t do that though; they’re each highly specialized and take a long time to train. And the issue isn’t with neural models in general.

        The issue is neural models being purported to do things they functionally cannot, because that’s not how models work. Computing is complex, code is complex, and adding new functionality that operates off fixed inputs alone is hard. And now we’re supposed to buy that something that builds word-relationship vector maps can create something new?
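        For what “word relationship vector maps” means concretely: words become vectors, and relatedness becomes the angle between them. A toy sketch with made-up three-dimensional vectors (real models learn much higher-dimensional ones from training data):

```python
import numpy as np

# Words as vectors; relatedness as cosine similarity. These vectors are
# invented for illustration, not taken from any real model.
vecs = {
    "king":  np.array([0.9, 0.80, 0.1]),
    "queen": np.array([0.9, 0.75, 0.2]),
    "cat":   np.array([0.1, 0.20, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" sits far closer to "queen" than to "cat" in this space
assert cosine(vecs["king"], vecs["queen"]) > cosine(vecs["king"], vecs["cat"])
```

        Everything the model “knows” about how words relate is encoded in geometry like this; whether that amounts to creating something new is exactly the point under dispute.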

        For code generation, it’s the equivalent of copying and pasting from Stack Overflow with a find/replace, or just copying multiple projects together. It isn’t something new, it’s kitbashing at best, and that’s assuming it all works flawlessly.

        With art, it’s taking away creation from people and jobs. I like that you ignored literally every point raised except for the one you could dance around with a tangent. But all these CEOs are like “no one likes creating art or music”. And no, THEY just don’t want to spend time creating themselves nor pay someone who does enjoy it. I love playing with 3D modeling and learning how to make the changes I want consistently, I like learning more about painting when texturing models and taking time to create intentional masks. I like taking time when I’m baking things to learn and create, otherwise I could just go buy a box mix of Duncan Hines and go for something that’s fine but not where I can make things when I take time to learn.

        And I love learning guitar. I love feeling that slow growth of skill as I find I can play cleaner the more I do. And when I can close my eyes and strum a song, there’s a tremendous feeling from making this beautiful instrument sing like that.

        • iopq@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          3
          ·
          23 hours ago

          Stockfish can’t play Go. The resources you spent making the chess program didn’t port over.

          In the same way you can use a processor to run a completely different program, you can use a GPU to run a completely different model.

          So if current models can’t do something, you’d be foolish to bet that models twenty years from now still won’t be able to do it.

          • Kay Ohtie@pawb.social
            link
            fedilink
            English
            arrow-up
            1
            ·
            2 hours ago

            I think the problem is that you think you’re talking like a time traveler heralding us about the wonders of sliced bread, when really it’s more like telling a small Victorian child about the wonders of Applebee’s and in the impossible chance they survive to it then finding everything is a lukewarm microwaved pale imitation of just buying the real thing at Aldi and cooking it in less time for far tastier and a fraction of the cost.

      • Deflated0ne@lemmy.world
        link
        fedilink
        English
        arrow-up
        18
        arrow-down
        2
        ·
        1 day ago

        Show me the chess machine that caused rolling brown outs and polluted the air and water of a whole city.

        I’ll wait.

        • jj4211@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          6 hours ago

          It probably would have if IBM decided that every household in the USA needed to have chess playing compute capacity and made everyone dial up to a singular facility in the middle of a desert where land and taxes were cheap so they could charge everyone a monthly fee for the privilege…

        • iopq@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          2
          ·
          23 hours ago

          Servers have been eating up a significant portion of electricity for years before AI. It’s whether we get something useful out of it that matters

          • Deflated0ne@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            1
            ·
            16 hours ago

            That’s the hangup isn’t it? It produces nothing of value. Stolen art. Bad code. Even more frustrating phone experiences. Oh and millions of lost jobs and ruined lives.

            It’s the most american way possible that they could have set trillions of dollars on fire short of carpet bombing poor brown people somewhere.

          • CorvidCawder@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            2
            ·
            19 hours ago

            Not even remotely close to this scale… At most you could compare the energy usage to the miners in the crypto craze, but I’m pretty sure that even that is just a tiny fraction of what’s going on right now.

              • CorvidCawder@sh.itjust.works
                link
                fedilink
                English
                arrow-up
                1
                ·
                3 hours ago

                From the blog you quoted yourself:

                Despite improving AI energy efficiency, total energy consumption is likely to increase because of the massive increase in usage. A large portion of the increase in energy consumption between 2024 to 2023 is attributed to AI-related servers. Their usage grew from 2 TWh in 2017 to 40 TWh in 2023. This is a big driver behind the projected scenarios for total US energy consumption, ranging from 325 to 580 TWh (6.7% to 12% of total electricity consumption) in the US by 2028.

                (And likewise, the last graph of predictions for 2028)

                From a quick read of that source, it is unclear to me if it factors in the electricity cost of training the models. It seems to me that it doesn’t.

                I found more information here: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

                Racks of servers hum along for months, ingesting training data, crunching numbers, and performing computations. This is a time-consuming and expensive process—it’s estimated that training OpenAI’s GPT-4 took over $100 million and consumed 50 gigawatt-hours of energy, enough to power San Francisco for three days.

                So, I’m not sure if those numbers for 2023 paint the full picture. And adoption of AI-powered tools was definitely not as high in 2023 as it is nowadays. So I wouldn’t be surprised if those numbers were much higher than the reported 22.7% of the total server power usage in the US.

            • Deflated0ne@lemmy.world
              link
              fedilink
              English
              arrow-up
              6
              ·
              16 hours ago

              Crypto miners wish they could be this inefficient. No literally they do. They’re the “rolling coal” mfers of the internet.

      • outhouseperilous@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        17
        arrow-down
        1
        ·
        1 day ago

        Not the same. The underlying tech of LLMs has massively diminishing returns. You can already see it, and could see it a year ago if you looked, both in computing power and in required data; we do not have enough data, and literally have not created enough in all of history.

        This is not “AI”, it’s a profoundly wasteful capitalist party trick.

        Please get off the slop and re-build your brain.

        • iopq@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          3
          ·
          23 hours ago

          That’s the argument Paul Krugman used to justify his opinion that the internet peaked in 1998.

          You still need to wait for AI to crash and a bunch of research to happen and for the next wave to come. You can’t judge the internet by the dot com crash, it became much more impactful later on

              • outhouseperilous@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                8
                ·
                21 hours ago

                One of the major contributors to early versions. Then they did the math and figured out it was a dead end. Yes.

                Also one of the other contributors (Weizenbaum, I think?) pointed out that not only was it stupid, it was dangerous: it made people deranged, fanatical devotees impervious to reason, who would discard their entire intellect and education to cult about this shit, in a madness no logic could breach. And that’s just from ELIZA.

      • Dangerhart@lemmy.zip
        link
        fedilink
        English
        arrow-up
        10
        ·
        1 day ago

        It seems like you are implying that models will follow Moore’s law, but as someone working on “agents” I don’t see that happening. There is a limit to how much can be encoded while still producing things that look like coherent responses. Where we would get reliably exponential amounts of training data is another issue. We may get “AI”, but it isn’t going to be based on LLMs.

        • iopq@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          2
          ·
          23 hours ago

          You can’t predict how the next twenty years of research improves on the current techniques because we haven’t done the research.

          Is it going to be specialized agents? Because you don’t need a lot of data to do one task well. Or maybe it’s a lot of data but you keep getting more of it (robot movement? stock market data?)

          • Dangerhart@lemmy.zip
            link
            fedilink
            English
            arrow-up
            1
            ·
            3 hours ago

            We do already know about model collapse though, genai is essentially eating its own training data. And we do know that you need a TON of data to do even one thing well. Even then it only does well on things strongly matching training data.

            Most people throwing around the word “agents” have no idea what they mean versus what the people building and promoting them mean. Agents have been around for decades, but what most are building is just using genAI for natural language processing to call scripted Python flows. The only way to make them look reliably coherent is to remove as much responsibility from the LLM as possible. Multi-agent systems just compound the errors. The current best practice for building agents is “don’t use an LLM; if you do, don’t use multiple”. We will never get beyond the current techniques essentially being seeded random generators, because that’s what they are intended to be.
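            The pattern described above, where the LLM only does the language part and everything users can trigger is a scripted flow, can be sketched like this (`call_llm` and the intent names are hypothetical placeholders, not any real agent framework):

```python
# Sketch of the "LLM only for language, scripted flows for logic" pattern.
# call_llm is a stand-in for any LLM API; every action a user can actually
# trigger is a deterministic function, so the LLM never improvises behavior.

def call_llm(prompt: str) -> str:
    # placeholder: a real system would call a model here and ask it to
    # return only one of the allowed intent labels
    return "check_order_status"

FLOWS = {
    "check_order_status": lambda: "Order #123 is in transit.",
    "cancel_order": lambda: "Order #123 has been cancelled.",
}

def handle(user_message: str) -> str:
    intent = call_llm(
        f"Classify this message into one of {list(FLOWS)}: {user_message!r}"
    )
    # the LLM only picks a label; if it hallucinates one, we fall back safely
    flow = FLOWS.get(intent)
    return flow() if flow else "Sorry, I can't help with that."

print(handle("Where is my package?"))
```

            The design choice is exactly the one the comment names: the less responsibility the LLM has, the more reliably coherent the whole thing looks.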

      • jaykrown@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 day ago

        Twenty years is a very long time, also “good” is relative. I give it about 2-3 years until we can run a model as powerful as Opus 4.1 on a laptop.

        • iopq@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          ·
          23 hours ago

          There will inevitably be a crash in AI, and people will forget about it. Then some people will work on innovative techniques and make breakthroughs without fanfare.

  • SunshineJogger@feddit.org
    link
    fedilink
    English
    arrow-up
    27
    arrow-down
    3
    ·
    edit-2
    1 day ago

    It’s actually a useful tool… If it were not too often used for so very dystopian purposes.

    But it’s not just AI. All services, systems, and so on... so many are just money grabs, hate, opinion-making, or general manipulation. There are many things I hate more about “modern” society than how LLMs are used.

    I like the Lemmy mindset far more than Reddit’s, but on the AI topic alone, people here brainlessly focus on the tool instead of the people using the tool.

        • NoodlePoint@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          11 hours ago

          A platform for people in developing countries, however. In some cases it supplants most if not all of the functions of what used to be several separate programs for Internet access and communication.

          I mentioned Facebook because I’m seeing some people share information on how to grift with AI.

          • SunshineJogger@feddit.org
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            10 hours ago

            That… Doesn’t sound good.

            Facebook is not exactly a trustworthy thing and to have developing countries dependent on it the way you describe sounds dystopian. :/

            But dystopian is sadly the theme of the 2020s

      • SunshineJogger@feddit.org
        link
        fedilink
        English
        arrow-up
        4
        ·
        edit-2
        14 hours ago

        That the death data clearly shows they should have laws on gun ownership like many EU countries have.

        Those are not multi purpose tools. Guns are for killing.

        • jj4211@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          6 hours ago

          Those are not multi purpose tools. Guns are for killing.

          Nah, they are multi-purpose tools.

        • Jax@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          2
          ·
          21 hours ago

          Oh, I was genuinely curious — this very same argument can be used when talking about guns. This very same argument is used when talking about guns.

          This wasn’t an attempt at a strawman, I’m merely drawing parallels. To say that this one topic is one where Lemmy focuses on the tool and not the people using them is false.

          • beesthetrees@feddit.uk
            link
            fedilink
            English
            arrow-up
            10
            ·
            21 hours ago

            The better comparison I’ve seen is knives. Knives have multiple purposes, yet they can also be used quite dangerously. Guns on the other hand only really have one purpose. Since AI can at least be used for other more useful stuff (think protein folding), I would say they are closer to knives.

  • TheObviousSolution@lemmy.ca
    link
    fedilink
    English
    arrow-up
    44
    ·
    1 day ago

    It’s corporate controlled, it’s a way to manipulate our perception, it’s all appearance and no substance, it’s an excuse to hide incompetence behind an algorithm, it’s cloud-service oriented, and its output is highly unreliable yet hard to argue against for the uninformed. Seems about right.

    • Taleya@aussie.zone
      link
      fedilink
      English
      arrow-up
      15
      ·
      1 day ago

      And it will not be argued with. No appeal, no change of heart. Which is why anyone using it to mod or as customer service needs to be set on fire.

    • REDACTED@infosec.pub
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      2
      ·
      10 hours ago

      I would not say love, but it’s definitely a great tool to master. Used to be pretty lame, but things seem to be changing fast.

      I don’t really understand Lemmy’s AI hate, so feel free to change my mind

      • jj4211@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 hours ago

        There’s a few things.

        First off, there is utility, and that utility varies with your needs. In software development, for example, it ranges from doing most of the work to being nearly useless, to the point where you feel the LLM users are gaslighting you. People who spend their lives making utterly boilerplate applications feel like it’s magical. People who generate tons of what are supposed to be ‘design documents’, reviewed only by non-technical executives who don’t understand them but like to see volumes of prose, find LLMs can generate those no problem (no one who would actually need them ever reads them anyway). Then people who work on more niche scenarios get annoyed because the tools barely do anything useful, and attempting to use them gets you inundated with low-quality code suggestions.

        But I’d say mostly it’s about the ratio of investment and hype to the reality. The investment is scary because one day the bubble will pop (that doesn’t mean LLMs are devoid of value, just that the business context is irrational right now, much like the internet was obviously important and we still had a bubble over it around the turn of the century). The hype is just so obnoxious; they won’t shut up even when they have nothing new to say. We get it, we’ve heard it, and hearing it over and over again is exhausting.

On the creative front, it’s annoying when companies use it in a way that’s noticeable. They could probably get away with some backdrops and such, but ‘foreground’ content is a dull paste of generic material with odd artifacts. In text, this manifests as obnoxiously long prose that could be much more to the point.

On video, people are generating content and passing it off as real to drive engagement. That viral clip of animals doing a funny thing? Nope, generated. We can no longer trust video content, whether fluff or serious, to be authentic.

  • just_another_person@lemmy.world
    link
    fedilink
    English
    arrow-up
    172
    arrow-down
    3
    ·
    2 days ago

    We hate it because it’s not what the marketing says it is. It’s a product that the rich are selling to remove the masses from the labor force, only to benefit the rich. It literally has no other productive use for society aside from this one thing.

    • Capricorn_Geriatric@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      13 hours ago

You missed the high energy consumption and low reliability. They’re just as valid as the jobs issue.

      It literally has no other productive use for society aside from this one thing.

I’d refrain from calling AI replacing labor productive for society. Speeding up education, however, might be.

    • CosmoNova@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 day ago

I would hate it even if it were exactly what the marketing says. What it’s marketed for is often really stupid and vague. The fact that it doesn’t even remotely work as advertised just makes me take it a lot less seriously.

      • CosmoNova@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 day ago

The “companion” agents that children of the 2020s and onward are growing up with, and trust more than their parents, will start advertising pharmaceuticals to them once they’re grown up :)

      • just_another_person@lemmy.world
        link
        fedilink
        English
        arrow-up
        21
        ·
        2 days ago
        1. they’ve already stolen everything
        2. other companies already focus on illegally using data for “AI” means, and they’re better at it
3. Everyone already figured out that LLMs aren’t what the “Assistant” features of 15 years ago were promising to be
        4. None of these companies have any sort of profit model. There is no “AI” race to win, unless it’s about who gets to fleece the public for their money faster.
5. Tell me who exactly benefits when AGI is attained (and to be clear, for laymen: it’s not achievable with this tech at all). Who in the fuck are you expecting to benefit from this in the long run?
    • Melvin_Ferd@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      42
      ·
      edit-2
      2 days ago

You hate it because the media, which is owned by the rich, told you to hate it so they can hoard it themselves while you champion laws to prevent the lower class from using and embracing it. AI haters are class traitors.

        • Melvin_Ferd@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          2 hours ago

No it isn’t. Just like if I told Republicans that their bullshit with immigrants is generated by yellow journalism, they’d respond the same way you just did. You can’t see it because you’re the target.

          • Frezik@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            1
            ·
            2 hours ago

. . . the rich told you to hate it so that they can hoard it themselves . . .

            No, that’s completely wrong. They don’t need to tell people to hate it just to hoard it. If they wanted to hoard it, they could just pay for datacenters to be built and use it for themselves while technically losing money.

            Their problem is that none of this stuff is making money at the same rate that they’re putting money in. There’s orders of magnitude difference between the two. It’s all burning off in power costs. The whole thing will shut down on its own, because investors still want to see money coming in exceed money going out.

            So yes, this is the dumbest take.

        • Melvin_Ferd@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          5
          ·
          edit-2
          1 day ago

Yes. A lot of people on Lemmy are collectively wrong about things. That isn’t a radical thing to say.

      • Diurnambule@jlai.lu
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 day ago

Lol, yeah, because the ruling class loves handing out tools to fight them with. If this really hurt them, it would be forbidden.

        • Melvin_Ferd@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          edit-2
          1 day ago

Why create laws if you can just sort people out with the media they consume? It happens with the right wing every day. I get it, you’re above it all, immune thanks to your vastly superior intelligence, not like those idiots. You would never participate in a community where the shared media whips up a panic or hype about something taking your job, ruining society, sexually assaulting our most vulnerable, or leading to some 1984 dystopia.

You can’t be manipulated like those idiots.

Honestly, it’s all the same shit, just different sides of the political spectrum.

The rich didn’t give you this; someone trying to get rich did. The rich are now trying to prevent us from embracing it. You can do a lot of analysis and content creation with it. It’s a force multiplier, and something they don’t want people using freely.

          • Diurnambule@jlai.lu
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            13 hours ago

… Everybody can be manipulated, and nobody is immune to propaganda. You can’t already use it freely, buddy: the public models are crap (try asking ChatGPT for a strategy to revolt against the ruling class), or you need a computer with so much RAM that normal people can’t buy one. If you have 10k to put into a personal computer dedicated to running AI, you’re either a geek (and I encourage you to keep doing it) or a guy with too much money to care, who wouldn’t be here debating. (Please prove me wrong, I would love to discover an open initiative working on LLMs.)

  • RobotZap10000@feddit.nl
    link
    fedilink
    English
    arrow-up
    81
    arrow-down
    5
    ·
    2 days ago

Ed Zitron is one of the loudest opponents of the AI industry right now, and he continues to insist that “there is no real AI adoption.” The real problem, apparently, is that investors are getting duped. I would invite Zitron, and anyone else who holds the opinion that demand for AI is largely fictional, to open the app store on their phone on any day of the week and look at the top free apps charts. You could also check with any teacher, student, or software developer.

    A screen showing the Top Free Apps on the Apple App Store. ChatGPT is in first place.

ChatGPT has some very impressive usage numbers, but the image tells on itself: it’s a free app. The conversion rate (the percentage of users who start paying) is absolutely piss poor, with the very same Ed Zitron estimating it at ~3% of 500,000,000 users. That squares badly with the fact that OpenAI loses money even on its $200/month subscribers. People use ChatGPT because it’s been spammed down their throats by media that never question the sacred words of the executives (snake oil salesmen) who utter lunatic phrases like “AGI by 2025” (such a quote exists somewhere, though I don’t remember if that exact year was used). People also use ChatGPT because it’s free, and it’s hard to say no to someone doing your homework for you for free.
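
    To put that conversion estimate in perspective, here’s a quick back-of-envelope in Python. The figures are Zitron’s estimates as quoted above, not audited numbers:

    ```python
    # Back-of-envelope: what a ~3% conversion rate implies.
    # Both inputs are estimates quoted in the comment above, not official figures.
    total_users = 500_000_000   # estimated user base
    conversion_rate = 0.03      # estimated share who pay

    paying_users = round(total_users * conversion_rate)
    free_users = total_users - paying_users

    print(f"Paying users: {paying_users:,}")  # Paying users: 15,000,000
    print(f"Free users:   {free_users:,}")    # Free users:   485,000,000
    ```

    So even under generous assumptions, roughly 485 million people are using the service at a pure loss to OpenAI.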

    • Regrettable_incident@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      ·
      1 day ago

      I don’t need chatGPT etc for work, but I’ve used it a few times. It is indeed a very useful product. But most of the time I can get by without it and I kinda try to avoid using it for environmental reasons. We’re boiling the oceans fast enough as it is.

    • Rai@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      33
      arrow-down
      2
      ·
      2 days ago

      I love how every single app on that list is an app I wouldn’t touch in my life

    • nutsack@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 day ago

people currently don’t pay for it because it’s currently free. most people aren’t using it for anything that requires a subscription anyway.

    • AlecSadler@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      9
      ·
      1 day ago

In-house at my work, we’ve found ChatGPT to be fairly useless too, while Claude and Gemini seem to reign supreme.

It seems like ChatGPT is the household name, but hardly the best performing.

    • Eagle0110@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      ·
      2 days ago

Exactly. The user/installation counts of such products are a much more accurate indicator of their marketing team’s success than of the value users actually perceive in them lol

    • lemmyknow@lemmy.today
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      1 day ago

Idk that the average GPT user knows or cares about AGI. I think the appeal is getting information specifically tailored to you. Sure, I can go online and search for something and try to find what I’m looking for, or close to it. Or I can ask AI, and it’ll give me text tailored exactly to my prompt. For instance, instead of hoping to find someone online with a problem similar to yours (and a solution), ChatGPT just addresses your case specifically.

      • nutsack@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        6
        ·
        1 day ago

        you’re being downvoted but this is the reality of the market right now. it’s day 1 venture capital shit. lose money while gaining market share, and worry about making a profit later.

        • Electricd@lemmybefree.net
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          4
          ·
          1 day ago

Yeah, and people are in denial about this.

The anti-AI crowd won’t convince the pro-AI crowd either; they’re a vocal minority.

    • corbin@infosec.pubOP
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      10
      ·
      edit-2
      2 days ago

I wouldn’t really trust Ed Zitron’s math when he gets something as simple as “there is no real AI adoption” plainly wrong. The financials of OpenAI and other AI-heavy companies are murky, but most tech startups run at a loss for a long time before they either turn a profit or get acquired. It took Uber over a decade to stop losing money every quarter.

      OpenAI keeps getting more funding capital because (A) venture capital guys are pretty dumb, and (B) they can easily ramp up advertisements once the free money runs out. Microsoft has already experimented with ads and sponsored products in chatbot messages, ChatGPT will probably do something like that.

      • JeremyHuntQW12@lemmy.world
        link
        fedilink
        English
        arrow-up
        17
        arrow-down
        1
        ·
        2 days ago

        I wouldn’t really trust Ed Zitron’s math analysis when he gets a very simple thing like “there is no real AI adoption” plainly wrong

        Except he doesn’t say that. the author of this article simply made that up.

There is a high usage rate (almost entirely ChatGPT, btw, despite all the money others like Google have sunk into AI), but it’s all on the free tier, and they’re losing bucketloads of money at a rapidly accelerating rate.

        but most tech startups run at a loss for a long time before they either turn a profit or get acquired.

        There is no path to profitability.

        • corbin@infosec.pubOP
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          6
          ·
          2 days ago

          I wrote the article, Ed said that in the linked blog post: “There Is No Real AI Adoption, Nor Is There Any Significant Revenue - As I wrote earlier in the year, there is really no significant adoption of generative AI services or products.”

There is a pretty clear path to profitability, or at least to much lower losses. A lot more phones, tablets, and computers now ship with GPUs or other hardware optimized for running small LLMs/SLMs, and both large and small models are becoming more efficient. With both of those happening, a lot of current AI uses will move to on-device processing (this is already a thing with Apple Intelligence and Gemini Nano), and the tasks that still need a cloud server will be more efficient and consume less power.

          • voronaam@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            edit-2
            1 day ago

I agree that this was poor wording on Ed’s side. He meant to point at the lack of adoption for work/business purposes but failed to articulate that distinction. He is talking about conversion to paid users, and about how Google cheated to make Gemini’s adoption by corporate users look higher than it is. He never meant to talk about adoption by regular people on the free tier doing random non-work-related things.

            You were talking about a different adoption metric. You are both right, you are just talking about different kinds of adoption.

            • corbin@infosec.pubOP
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              11 hours ago

              I don’t think he is talking about specifically businesses, though, because he also talks about Gemini replacing Google Assistant, which only matters in consumer products (Assistant was never an enterprise product). It’s more like he’s moving the goalposts mid-statement.

          • meowgenau@programming.dev
            link
            fedilink
            English
            arrow-up
            7
            ·
            2 days ago

            a lot of the current uses for AI will move to on-device processing

            How exactly will that make OpenAI and the likes more profitable?! That should be one of the scenarios that will make them less profitable.

            • corbin@infosec.pubOP
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              8
              ·
              edit-2
              2 days ago

              If the models are more efficient, the tasks that still need a server will get the same result at a lower cost. OpenAI can also pivot to building more local models and license them to device makers, if it wants.

The finances of big tech companies aren’t really relevant anyway, except to point out that Ed Zitron’s arguments are not based in reality. Whether or not investors get stiffed, the bad outcomes of AI would still be bad, and the good outcomes would still be good.

  • KnitWit@lemmy.world
    link
    fedilink
    English
    arrow-up
    123
    arrow-down
    4
    ·
    edit-2
    2 days ago

    Someone on bluesky reposted this image from user @yeetkunedo that I find describes (one aspect of) my disdain for AI.

    Text reads: Generative Al is being marketed as a tool designed to reduce or eliminate the need for developed, cognitive skillsets. It uses the work of others to simulate human output, except that it lacks grasp of nuance, contains grievous errors, and ultimately serves the goal of human beings being neurologically weaker due to the promise of the machine being better equipped than the humans using it would ever exert the effort to be. The people that use generative Al for art have no interest in being an artist; they simply want product to consume and forget about when the next piece of product goes by their eyes. The people that use generative Al to make music have no interest in being a musician; they simply want a machine to make them something to listen to until they get bored and want the machine to make some other disposable slop for them to pass the time with.

    The people that use generative Al to write things for them have no interest in writing. The people that use generative Al to find factoids have no interest in actual facts. The people that use generative Al to socialize have no interest in actual socialization.

    In every case, they’ve handed over the cognitive load of developing a necessary, creative human skillset to a machine that promises to ease the sweat equity cost of struggle. Using generative Al is like asking a machine to lift weights on your behalf and then calling yourself a bodybuilder when it’s done with the reps. You build nothing in terms of muscle, you are not stronger, you are not faster, you are not in better shape. You’re just deluding yourself while experiencing a slow decline due to self-inflicted atrophy.

    • tarknassus@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      20 hours ago

      You’re just deluding yourself while experiencing a slow decline due to self-inflicted atrophy.

      Chef’s kiss on this last sentence. So eloquently put!

    • bulwark@lemmy.world
      link
      fedilink
      English
      arrow-up
      36
      ·
      2 days ago

      Damn that hits the nail on the head. Especially that analogy of watching a robot lift weights on your behalf then claiming gains. It’s causing brain atrophy.

      • tehn00bi@lemmy.world
        link
        fedilink
        English
        arrow-up
        17
        ·
        2 days ago

But that is what CEOs want. They want to pay for a near-superhuman to handle all the different skill sets (hiring, firing, finance, entry-level engineering, IT tickets, etc.), and it looks like it is starting to work. Solid engineering students graduating recently all seem to be struggling to land decent starting jobs. I’ll grant it’s not as simple as that, but I really think the wealthy class is going to be happy riding this flaming ship right down into the depths.

    • GnuLinuxDude@lemmy.ml
      link
      fedilink
      English
      arrow-up
      22
      arrow-down
      1
      ·
      edit-2
      2 days ago

      The people that use generative Al for art have no interest in being an artist; they simply want product to consume and forget about when the next piece of product goes by their eyes. The people that use generative Al to make music have no interest in being a musician; they simply want a machine to make them something to listen to until they get bored and want the machine to make some other disposable slop for them to pass the time with.

Good sentiment, but my critique of this message is that the people who produce this stuff don’t really have any interest in producing it for its own sake. They only have an interest in producing content to crowd out the people who actually care, and in producing a worse version of whatever it is much faster than someone with actual talent could. And the reason they’re producing anything is profit: gunk up the search results with no-effort crap to get ad revenue. It is no different than “SEO.”

      Example: if you go onto YouTube right now and try to find any modern 30-60m long video that’s like “chill beats” or “1994 cyberpunk wave” or whatever other bullshit they pump out (once you start finding it you’ll find no shortage of it), you’ll notice that all of those uploaders only began as of about a year ago at most and produce a lot of videos (which youtube will happily prioritize to serve you) of identical sounding “music.” The people producing this don’t care about anything except making money. They’re happy to take stolen or plagiarized work that originated with humans, throw it into the AI slot machine, and produce something which somehow is no longer considered stolen or plagiarized. And the really egregious ones will link you to their Patreons.

      The story is the same with art, music, books, code, and anything else that actually requires creativity, intuition, and understanding.

      • KnitWit@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        2 days ago

I believe the OP was referring more to consumers of AI in that statement, as opposed to people trying to sell content, which would be more in line with what you’re saying. I agree with both perspectives, and I think the OP I quoted probably would as well. I just thought it was a good description of some of why AI sucks, though certainly not all of it.

      • latenightnoir@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        14
        ·
        edit-2
        2 days ago

        Well, philosophical and epistemological suicide for now, but snowball it for a couple of decades and we may just reach the practical side, too…

        Edit: or, hell, maybe not even decades given the increase in energy consumption with every iteration…

        • OpenStars@discuss.online
          link
          fedilink
          English
          arrow-up
          12
          arrow-down
          2
          ·
          2 days ago

          When technology allows us to do something that we could not before - like cross an ocean or fly through the sky a distance that would previously have taken years and many people dying during the journey, or save lives - then it unquestionably offers a benefit.

But when it simply eases some task, like using a car rather than a horse to travel, and requires discipline to integrate into our lives in a balanced manner, then it becomes a source of potential danger, since we may allow ourselves to misuse it.

The same goes even for agriculture, which lets people eat who put forth no effort into growing the food, or even into preparing it for consumption.


This is what CEOs are pushing on us, partly because number must go up, but also because many genuinely want what it has to offer, not quite having thought through what it would mean if they got it (or, more to the point, if others did; empathy not being their strongest attribute).

          • FaceDeer@fedia.io
            link
            fedilink
            arrow-up
            5
            arrow-down
            1
            ·
            2 days ago

            Technology that allows us to do something we could not do before - such as create nuclear explosions, or propel metal slugs at extreme velocities, or design new viruses - unquestionably offer a benefit and don’t require discipline to integrate into our lives in a balanced manner?

            • OpenStars@discuss.online
              link
              fedilink
              English
              arrow-up
              6
              arrow-down
              1
              ·
              2 days ago

              We could bomb / kill people before. We could propel arrows / spears / sling rocks at people before. All of which is an extension of walking over and punching someone.

              Though sending a nuke from orbit on the other side of the planet by pressing a couple buttons does seem like the extension is so vast that it may qualify as “new”.

              I suppose any technology that can be used can be misused.

      • FaceDeer@fedia.io
        link
        fedilink
        arrow-up
        15
        arrow-down
        8
        ·
        2 days ago

        The people who commission artists have no interest in being an artist; they simply want the product. Are people who commission artists also “slowly committing suicide?”

        • EldritchFemininity@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          1
          ·
          16 hours ago

          If we’re going with the “suicide” analogy, I’d say that AI is suicide like eating fast food/takeout every night instead of cooking for yourself is. It’s an easy shortcut, but you are probably missing out on vital nutrients (in the case of AI, that would be critical thinking skills or potentially missing out on finding a hobby that you actually really enjoy). You could instead learn to cook yourself (which some people really enjoy and find as a meditative kind of experience), hire a nutritionist to make a meal plan, or even go to a restaurant instead.

          Personally, I don’t think it’s a great analogy, and there’s a much better basically 1 to 1 relationship between Gen AI and retail therapy/fast fashion. They’re all bad for the environment, rely on worker abuse in many different forms, and all work to further our dependency on corporations and enrich their owners.

          People often make the argument about Gen AI “democratizing” art, but that’s nonsense. Art was already “democratized” by easy access to not just tools like a pencil and knowledge, but by the fact that even before the internet art was the most easily accessible it has ever been in history. You could go to a store and buy a canvas to put on your wall in the 50s. A century before and that would’ve been something only the wealthy could think of doing by hiring an artist to make a custom piece. People complain about artists charging too much, and yet a large portion of artists charge below minimum wage for commissions.

          And that’s not to say that I hate AI for the sake of hating it. I hate the implementation of it. Gen AI is just a more complex version of the Gaussian Blur tool in Photoshop. But it’s fed with effectively stolen labor and robs artists of potential clients, people from possibly discovering a new thing they love doing, and clients from developing a working relationship with the artists that they commission. There’s a great post that Temmie of Undertale fame posted recently about how when Toby can’t describe what he wants animated he’ll act it out and so he danced around with a broom to show her how he wanted the idle animation for an old man to go. That’s the kind of stuff that can come up in the commission process. Obviously that’s not gonna happen to everyone, but half the fun of art is the collaboration. It’s like playing a co-op game.

        • dustyData@lemmy.world
          link
          fedilink
          English
          arrow-up
          13
          ·
          2 days ago

People who commission art don’t call themselves the artist. That’s the big difference. Suppose people found out that the painting you told everyone at the party you painted yourself was actually commissioned, and that you call it practically your own work of art because you gave the painter a precise description of what you wanted. You would be the laughing stock and the butt of jokes and japes for decades, because that’s ridiculous.

        • OpenStars@discuss.online
          link
          fedilink
          English
          arrow-up
          6
          ·
          2 days ago

          I misread you at first so here’s an answer to if someone uses AI art:

Within the jokingly limited sphere of the discussion… “yes”? In particular, their artistic ability in that situation is being slowly put to death, as whatever little they might have attempted without access to the tool will now not be attempted at all.

          I don’t know as much about if someone were to commission art from an actual person.