• Axolotl@feddit.it · 2 hours ago

    I once ran some models on my phone through Termux. I tried Llama 3.2 at 1B and 3B parameters and they ran pretty well; 8B was slow. I also tried DeepSeek-R1: the 1.5B ran well, the 7B was slow.
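
    If anyone wants to try it, something like this works in Termux. This is only a minimal sketch assuming llama.cpp as the runtime and a quantized GGUF model file; the model filename below is just an example, adjust for whatever runner and model you use:

        # inside Termux: get build tools and llama.cpp
        pkg install git cmake clang
        git clone https://github.com/ggerganov/llama.cpp
        cd llama.cpp
        # build without curl support to keep dependencies minimal
        cmake -B build -DLLAMA_CURL=OFF && cmake --build build -j
        # run a prompt; any quantized GGUF works, this filename is an example
        ./build/bin/llama-cli -m Llama-3.2-1B-Instruct-Q4_K_M.gguf -p "Hello"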

    For text prediction, Llama 3.2 1B may be enough.

    Now, this was on a 300-400€ phone (Honor Magic 6 Lite).