• TheOneCurly@lemm.ee · 5 months ago

      Home grown slop is still slop. The lying machine can’t make anything else.

        • Jakeroxs@sh.itjust.works · 5 months ago

          I use oobabooga; it has a few more options in the GGUF space than Ollama, but it's not as easy to use, IMO. It does support an OpenAI-compatible API connection, though, so you can plug other services into it.
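
          Not their exact setup, but here is a minimal sketch of what that OpenAI-compatible connection can look like from Python, assuming text-generation-webui is running locally with its API enabled; the port, API key, and model name below are assumptions/placeholders, not its actual config:

          ```python
          from openai import OpenAI

          # Hypothetical local endpoint: text-generation-webui started with its
          # OpenAI-compatible API turned on (port 5000 is only an assumed default).
          client = OpenAI(
              base_url="http://127.0.0.1:5000/v1",
              api_key="sk-no-key-needed",  # local servers generally ignore the key
          )

          reply = client.chat.completions.create(
              model="local-model",  # placeholder; the server uses whichever GGUF model is loaded
              messages=[{"role": "user", "content": "Hello from a local model!"}],
          )
          print(reply.choices[0].message.content)
          ```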

        • tormeh@discuss.tchncs.de · 5 months ago

          Ollama is apparently going for lock-in and incompatibility. They’re forking llama.cpp for some reason, too. I’d use GPT4All or llama.cpp directly. Both support Vulkan, so your GPU will just work.
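
          For reference, llama.cpp is also easy to drive from Python through the llama-cpp-python bindings. This is only a sketch: it assumes the package was installed with a GPU backend (Vulkan, CUDA, etc.), and the GGUF file name is a placeholder, not a recommendation.

          ```python
          from llama_cpp import Llama

          # Assumed model file; any quantized GGUF you have downloaded goes here.
          llm = Llama(
              model_path="model.Q4_K_M.gguf",
              n_gpu_layers=-1,  # offload all layers to the GPU if the backend supports it
              n_ctx=4096,       # context window; lower it if memory is tight
          )

          out = llm.create_chat_completion(
              messages=[{"role": "user", "content": "Hello from llama.cpp!"}]
          )
          print(out["choices"][0]["message"]["content"])
          ```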

        • venusaur@lemmy.world · 5 months ago

          Hm, I’ll see if my laptop can handle it. Probably don’t have the patience or processing power.

    • jim3692@discuss.online · 5 months ago

      So prosumers, running hardware that isn’t optimized for AI workloads and limited to models that are typically inferior to commercial ones, are wasting more energy for even more slop?

        • jim3692@discuss.online · 5 months ago

          That’s the price of privacy I am currently paying.

          There is, however, a video from The Hated One that presents a different perspective on this. Maybe privacy is more environmentally friendly than we think.

          A lot of energy is wasted on data collection and analysis for advertising. Devices with modified firmware, like LineageOS and GrapheneOS, do not collect such data, reducing the load on analysis servers.