• Jakeroxs@sh.itjust.works
      5 months ago

      I use oobabooga; it has a few more options in the GGUF space than ollama, but it's not as easy to use imo. It does support an OpenAI API connection, though, so you can plug other services into it.
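      For what that looks like in practice: any OpenAI-compatible client can point at the local server. A minimal sketch (the port and model name are assumptions; check your oobabooga config for the actual endpoint):

      ```python
      import json
      import urllib.request

      def build_chat_request(base_url, prompt, model="local-model"):
          """Build an OpenAI-style chat completion request against a
          local server (e.g. oobabooga's OpenAI-compatible endpoint).
          The request is constructed but not sent here."""
          payload = {
              "model": model,
              "messages": [{"role": "user", "content": prompt}],
          }
          return urllib.request.Request(
              f"{base_url}/v1/chat/completions",
              data=json.dumps(payload).encode(),
              headers={"Content-Type": "application/json"},
          )

      # Port 5000 is a guess at the default; adjust to your setup.
      req = build_chat_request("http://127.0.0.1:5000", "Hello!")
      print(req.full_url)
      ```

      Because it speaks the same protocol, the official `openai` client library would also work by setting its `base_url` to the local server.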

    • tormeh@discuss.tchncs.de
      5 months ago

      Ollama is apparently going for lock-in and incompatibility. They’re forking llama.cpp for some reason, too. I’d use GPT4All or llama.cpp directly. They support Vulkan, too, so your GPU will just work.

    • venusaur@lemmy.world
      5 months ago

      Hm, I’ll see if my laptop can handle it. Probably don’t have the patience or processing power.