• LifeInMultipleChoice@lemmy.dbzer0.com
    1 day ago

    I don’t want to give grok/such the traffic but is it much less personable? Like did some of these people become friends with an AI, then get forced to move to an AI that treats them worse?

    I’ve never tried to have a conversation with one like that. I did try to ask one to help me figure out what was wrong with a docker container I was trying to set up. I think I ended up just tossing it and starting from scratch after I had clearly set something up wrong initially, and the AI just went in loops trying to get me to try the same things over and over. Haven’t tried them recently.

    • ButteryMonkey@piefed.social
      1 day ago

      I read an article about that, and apparently the change to the model made the AI stop responding to affectionate solicitations with the same sort of affectionate tone in reply. It got more businesslike, and less intimate.

      • BeigeAgenda@lemmy.ca
        1 day ago

        I definitely got weirded out asking a GPT3 model about something and it got clingy.

        Now I see it more like a search engine: skim the wall of text to find the useful information. Today I gave it a lot of context, explained what I had done and the error I got, and it more or less told me I did everything correctly, then suggested stuff I had already tried. That’s its way of saying “I don’t know”.

    • DrDystopia@lemy.lol
      1 day ago

      Like did some of these people become friends with an AI, then get forced to move to an AI that treats them worse?

      Basically yes, it’s the normie version of testing out different models and finding one they like.

      And “friends”? That’s a bit of an oversimplification, but when I tested out models on my personal AI rig I could use all sorts of models and write my own system prompts. Using a default character sheet as the benchmark, various models gave off vastly different “personalities” in their answers.

      Some of them I liked so much that I went back to the “nice” models after testing various others, not too concerned with quantization accuracy or parameter count. But I tested for personality, creativity, empathic mimicry and so on - not for factual answers. Not even the giant, up-to-date models are usable for facts.

      Marvin the Paranoid Android powered by an uncensored sci-fi horror LLM is delightful fun - finally someone on my level of positive outlook on things!
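
      For anyone curious how that kind of personality testing looks in practice, here’s a minimal sketch. The `ask` callable is a stand-in for whatever local inference API you actually run (Ollama, llama.cpp server, etc. - the stub below is purely hypothetical so the sketch runs without a GPU rig); the idea is just to pair each system prompt with the same benchmark question and compare the answers side by side.

```python
# Hypothetical sketch: benchmark different system-prompt "personalities"
# by asking each one the same question and collecting the replies.

def personality_bench(ask, system_prompts, question):
    """Return {name: answer} for each system prompt on the same question."""
    return {name: ask(system=prompt, user=question)
            for name, prompt in system_prompts.items()}

# Stub model so the sketch is runnable; swap in a real local-API call.
def ask(system, user):
    return f"[{system.split(',')[0]}] {user}"

answers = personality_bench(
    ask,
    {"marvin": "You are Marvin, a gloomy paranoid android",
     "cheery": "You are relentlessly upbeat, nothing can faze you"},
    "How do I fix my docker container?",
)
```

      Same question in, one answer per persona out - skimming the dict side by side is enough to tell which models actually follow the character sheet.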

    • Greddan@feddit.org
      1 day ago

      I’ve had an acquaintance express actual anger at me for questioning the output of Grok. Granted, the man is one of the dumbest people I know, but I’m still concerned. No one’s immune to being psychologically manipulated, even if you’re aware of it happening. People who think they’re friends with bikini-streamers even less so.

      • locuester@lemmy.zip
        19 hours ago

        I find Grok great for research, ChatGPT great for fun and image gen, Claude great for coding, and Gemini (Google) good for quick tech dev/admin-ops-related searches.

        That’s how I’d best describe what I use each for.

        Do you find Grok to be shit at everything? I find it does best/fastest when intermediate web searching is required; it does that and sources it well. Asking something like “what’s the best product for killing some yellow weed in New Mexico” will typically result in nice tables comparing products, with links and details like that.