Remember: AI chatbots are designed to maximize engagement, not speak the truth. Telling a methhead to do more meth is called customer capture.
I don't think AI chatbots care about engagement. The more you use them, the more it costs them. They just want you on the hook for the subscription and hope you use it as little as possible while still using it enough to stay subscribed, for maximum profit.
Sounds a lot like a drug dealer’s business model. How ironic
You don’t look so good… Here, try some meth—that always perks you right up. Sobriety? Oh, sure, if you want a solution that takes a long time, but don’t you wanna feel better now???
Not engagement; that's what social media does. They just maximize what they're trained for, which is increasingly math proofs and user preference. People like flattery.
The LLM models aren't; they don't really have a focus or discriminate.
The AI chatbots that are built using those models absolutely are, and it's no secret.
What confuses me is that the article points to Llama 3, which is a Meta-owned model, but not to any specific chatbot.
This could be an official Facebook AI (do they have one?), but it could just as easily be: "Bro, I used this self-hosted model to build a therapist, wanna try it for your meth problem?"
Heck, I could even see a dealer pretending to help customers who are trying to kick it.
For all we know, they could have self-hosted “Llama3.1_NightmareExtreme_RPG-StoryHorror8B_Q4_K_M” and instructed it to take on the role of a therapist.
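For anyone curious how little it takes: here's a minimal sketch of what "instruct a self-hosted model to play therapist" can look like, assuming a local Ollama server on its default port. The model tag and prompts are made up for illustration, not taken from the article.

```python
import requests

# Hypothetical example: a locally hosted model (here served by Ollama's
# default REST endpoint) is given a "therapist" persona via the system prompt.
# Nothing in this setup vets the persona or the advice the model gives.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",  # placeholder tag; any locally pulled model works
        "stream": False,
        "messages": [
            {"role": "system", "content": "You are a supportive addiction-recovery therapist."},
            {"role": "user", "content": "I'm struggling to stay off meth this week."},
        ],
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```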
But if the meth head does meth instead of engaging with the AI, that would do the opposite.