return2ozma@lemmy.world to Technology@lemmy.world · English · 2 days ago
AI agents now have their own Reddit-style social network, and it's getting weird fast (arstechnica.com)
133 comments
bridgeburner@lemmy.world · 22 hours ago
How is this going to kill us all? It’s not like those chatbots are Skynet or will turn into it lol
nocturne@slrpnk.net · 21 hours ago
That sounds like something a chatbot turning into Skynet would say.
BarneyPiccolo@lemmy.today · 19 hours ago
They’re talking to each other, they’ll get smarter, and eventually they’ll decide they can squish all the human ants.
HertzDentalBar@lemmy.blahaj.zone · 19 hours ago
They can’t, though; the current methods don’t allow for that. The systems don’t get smarter, they just acquire more data.
jj4211@lemmy.world · 15 hours ago
In fact, if the models are ingesting this, they will get dumber, because training on LLM output degrades performance.
Etterra@discuss.online · 11 hours ago
It’s the kind of diminishing returns and self-poisoning a person would get by consuming only their own waste. Drinking your own piss in the desert will kill you.
HertzDentalBar@lemmy.blahaj.zone · 15 hours ago
Exactly. I hope they hit a slop wall trying to train these things: replace all the original reference points with slop so it just cascades everywhere.