that’s scary how dementia works :'(
This kinda freaked me out: AI models fed their own outputs as training data will quickly start making distorted images that look spookily like human paintings made as mental illness or drug use progresses.
https://www.nature.com/articles/d41586-024-02420-7
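For anyone who wants to see the mechanism without wading through the paper: the degradation falls out of basic statistics. Here’s a toy sketch (my own, not the paper’s image setup) where the “model” is just a fitted Gaussian and each generation trains only on samples from the previous generation’s fit - each round’s sampling error gets baked in and compounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=1000)

for generation in range(20):
    # "Train" a model: estimate mean and std from whatever data we have.
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # The next generation sees only samples drawn from this fitted model,
    # and not many of them, so its estimate inherits this round's sampling
    # error and adds its own. The fitted parameters drift further from the
    # original (0, 1) with every pass.
    data = rng.normal(loc=mu, scale=sigma, size=25)
```

Swap the Gaussian for an image or language model and you get the increasingly warped outputs the article describes.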
well that was terrifying
Thanks for sharing this! I really think that when people see LLM failures and say that such failures demonstrate how fundamentally different LLMs are from human cognition, they tend to overlook how humans actually do exhibit remarkably similar failure modes. Obviously dementia isn’t really analogous to generating text while lacking the ability to “see” a rendering based on that text. But it’s still pretty interesting that whatever feedback loops got corrupted in these patients led to such a variety of failure modes.
As an example of what I’m talking about, I appreciated and generally agreed with this recent Octomind post, but I disagree with the list of problems that “wouldn’t trip up a human dev”; these are all things I’ve seen real humans do, or could imagine a human doing.
What I find interesting is that in both cases there is a certain consistency in the mistakes too - basically every dementia patient still understands the clock is something with a circle and numbers, not a square with letters, for example. LLMs can tell you complete bullshit, but they still understand it has to be delivered with perfect grammar in a consistent language. So much so that they struggle to respond outside of this box - ask one to insert spelling errors to look human, for example.
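If anyone wants to poke at the spelling-error thing themselves, here’s a rough sketch. It assumes the `openai` Python client (v1+) and an API key in the environment; the model name and prompt are just placeholders, not a claim about any particular model.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

original = "The quick brown fox jumps over the lazy dog near the riverbank."

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Rewrite this sentence with three obvious spelling mistakes, "
            f"changing nothing else:\n{original}"
        ),
    }],
)
rewritten = resp.choices[0].message.content

# Crude check: count words that differ from the original. If the model
# quietly keeps everything perfectly spelled, this lands near zero.
changed = sum(a != b for a, b in zip(original.split(), rewritten.split()))
print(rewritten)
print(f"words changed: {changed}")
```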
This might be the true problem in both cases: neither the patient nor the model can comprehend the bigger picture (a circle is divided into 12 segments because that is how we deconstructed the time it takes for the Earth to spin around its axis). Things that seem logical to us are logical because of these kinds of connections with other things we know and comprehend.
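To make that “bigger picture” concrete: the structure a dementia patient loses in the clock test is basically one rule - twelve marks, evenly spaced around a circle, 12 at the top. A tiny sketch of that rule (just trigonometry, no drawing library, so it stays self-contained):

```python
import math

def clock_positions(radius: float = 1.0) -> dict[int, tuple[float, float]]:
    """Return (x, y) coordinates for the numbers 1-12 on a clock face.

    12 sits at the top; each hour advances 360 / 12 = 30 degrees clockwise.
    """
    positions = {}
    for hour in range(1, 13):
        # Angle measured clockwise from "12 o'clock" (straight up).
        angle = math.radians(hour * 30)
        positions[hour] = (radius * math.sin(angle), radius * math.cos(angle))
    return positions

for hour, (x, y) in clock_positions().items():
    print(f"{hour:2d}: ({x:+.2f}, {y:+.2f})")
```

That one loop is exactly the kind of connection (day length, division into 12, equal spacing) that both the failing patient and the self-trained model stop being able to apply consistently.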
Thanks for sharing that mindfuck. I honestly would’ve thought something was wrong with my cognition if you hadn’t mentioned it was a test beforehand.
… what
Basically this: https://www.psychdb.com/cognitive-testing/clock-drawing-test
Thanks.
What I still didn’t figure out about the comment I replied to is:
Good questions. I don’t know, and I can no longer try to find out, as the mods have now removed the comment. (Sorry for the double-post–I got briefly confused about which comment you were referring to and deleted my first post, then realized I’d been frazzled and the post in question really was deleted by the mods.)
Get educated