Here we go… It took them some time, but ads are coming to ChatGPT as well. And they'll be impossible to block.

  • AliasAKA@lemmy.world · 6 hours ago

    You can get by surprisingly well on 20B-parameter models using a Mac with decent RAM, or even on 8B-parameter models that fit on most high-end (e.g. 16 GB) graphics cards. Depends on your use case, but I almost exclusively use smaller local models.
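
    For reference, here's a minimal sketch of running a small quantized model locally via the llama-cpp-python bindings; the model file name and prompt are hypothetical stand-ins for whatever GGUF you've actually downloaded:

    ```python
    # Minimal local-inference sketch with llama-cpp-python.
    # The model path below is hypothetical -- point it at any
    # quantized GGUF file (e.g. a 4-bit 8B model) you have locally.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-3.1-8b-instruct.Q4_K_M.gguf",  # hypothetical path
        n_ctx=4096,        # context window size
        n_gpu_layers=-1,   # offload all layers to GPU / Metal if available
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize RAII in one sentence."}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])
    ```

    With 4-bit quantization, an 8B model's weights land around 4-5 GB, which is why it fits comfortably on a 16 GB card or a Mac with unified memory.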

    • melfie@lemy.lol · 4 hours ago

      I have a low-profile RTX 4060 in my 2U server and am limited to 8 GB of VRAM for anything I self-host at the moment. I may consider a larger chassis with a better GPU in the future.
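
      A quick back-of-the-envelope check for what fits in 8 GB: quantized weights take roughly bits-per-parameter / 8 bytes each. The overhead factor below is a loose assumption (KV cache grows with context length), not a measurement:

      ```python
      # Rough VRAM estimate for quantized model weights.
      # bits_per_param and the overhead factor are loose assumptions.
      def approx_vram_gb(params_billion: float, bits_per_param: float = 4.0,
                         overhead: float = 1.2) -> float:
          weight_bytes = params_billion * 1e9 * bits_per_param / 8
          return weight_bytes * overhead / 1e9

      for size in (8, 20):
          print(f"{size}B @ 4-bit: ~{approx_vram_gb(size):.1f} GB")
      # 8B  @ 4-bit: ~4.8 GB   (fits in 8 GB VRAM)
      # 20B @ 4-bit: ~12.0 GB  (needs a bigger card or unified memory)
      ```

      So an 8B model at 4-bit should just about fit on the 4060, while a 20B model would need the GPU upgrade.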