An interesting article about people using AI for seemingly innocuous tasks, only to spiral into a world of mysticism and conspiracy theories that sparks a mental health crisis. A stark reminder to always remain conscious of the fact that AI has a monetary incentive to be sycophantic and keep you engaged.

Edited to link to the original article.

  • ChickenAndRice@sh.itjust.works · 4 hours ago

    Sorry if these tests had some kind of adverse effect on your mental health. You saw what LLMs can do in the worst case, so it’s probably best to stop testing now.

    I do use ChatGPT sometimes, but only as a glorified search engine. Why? It’s my response to the modern web becoming overly difficult to use (SEO gaming, advertisements that can’t be blocked, paywalls, cookie messages, unfriendly or unresponsive forum posters, massive website rewrites that break links, etc.). I tell it to provide links so that I can read the sources it’s pulling from, especially when I’m skeptical. In other words, my use case doesn’t fit the Futurism article at all, so I have no personal experience with it.

    So as for the Futurism article: since I have no personal experience on the subject, I want them to provide hard evidence. That excludes links to their other articles.

    If they can provide hard evidence (and thus write stronger articles on the subject), then it’s a win-win:

    1. they gain more credibility for their claims
    2. OpenAI (and other LLM companies) get at least an inch closer to being held accountable for taking advantage of vulnerable people.

    Hope I made sense here.