Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
It’s not wrong for either to draw inspiration from the other. It’s the hypocrisy that’s wrong.
I’ve made similar points in the past in discussions about robot soldiers going to war. There’s an upside to these things that people insist on overlooking: they follow their programming. If you program a robot soldier to never shoot at an ambulance, then it will never shoot at an ambulance, even if it’s having a really bad day. Same here: if the security robot has been programmed never to leave the public sidewalk, then it’ll never leave the public sidewalk.
It’s always possible for these sorts of things to be programmed to do the wrong things, of course. But at least now we have the ability to audit that sort of thing.
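To make the “auditable” part concrete, here’s a toy sketch of what a hard-coded engagement rule could look like. None of these names come from any real robotics stack; they’re purely illustrative:

```python
# Hypothetical sketch only: not any real robot's control code.
PROTECTED_CLASSES = {"ambulance", "medic", "civilian"}

def may_engage(target_class: str) -> bool:
    """Permit engagement only for targets outside the protected classes.

    Because this is a plain membership test, an auditor can read it and
    verify that an ambulance is never engaged, no matter what kind of
    day the robot is having.
    """
    return target_class not in PROTECTED_CLASSES

assert not may_engage("ambulance")   # always refused
assert may_engage("hostile_drone")   # allowed under this rule set
```

The point isn’t that real systems are this simple, just that a rule written in code can be inspected and verified in a way a human soldier’s judgment can’t.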
Are you suggesting that the same amount of crime is happening but they’re deciding not to report it because there’s a robot there? That’s the measure they’re touting, the reduction in crime reports.
You joke, but presumably that’s when it recharges.
They’re not both true, though. It’s actually perfectly fine for a new dataset to contain AI-generated content, especially when it’s mixed in with non-AI-generated content. It can even be better in some circumstances; that’s what “synthetic data” is all about.
The various experiments demonstrating model collapse have to go out of their way to make it happen, deliberately recycling model outputs over and over without using any of the methods that real-world AI trainers use to prevent it. As I said, real-world AI trainers are actually quite knowledgeable about this stuff; model collapse isn’t some surprising new development that they’re helpless in the face of. It’s just another factor to include in the criteria for curating training datasets. It’s already a “solved” problem.
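To illustrate the difference, here’s a toy contrast between the experimental setup and actual practice. Every name and number below is made up for illustration; this is not any lab’s real pipeline:

```python
# Toy sketch: "collapse" experiments vs. curated real-world training mixes.
import random

def collapse_style_round(dataset, model_generate):
    # The experimental setup: replace the entire dataset with model output
    # each generation, with no fresh or curated data mixed back in.
    return [model_generate(x) for x in dataset]

def curated_round(dataset, model_generate, fresh_human_data,
                  quality_filter, synthetic_fraction=0.3):
    # The practice described above: filter synthetic output for quality,
    # cap its share of the mix, and keep human-written data in the loop.
    synthetic = [s for s in (model_generate(x) for x in dataset)
                 if quality_filter(s)]
    n_synth = min(int(len(dataset) * synthetic_fraction), len(synthetic))
    return random.sample(synthetic, n_synth) + list(fresh_human_data)
```

Iterate the first function and quality degrades generation after generation; iterate the second and the fresh, filtered data keeps the distribution anchored.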
The reason these articles keep coming around is that there are a lot of people who don’t want it to be a solved problem, and who love clicking on headlines that say it isn’t. I guess if it makes them feel better they can go ahead and keep doing that, but supposedly this is a technology community, and I would expect there to be some interest in the underlying truth of the matter.
No, researchers in the field knew about this potential problem ages ago. It’s easy enough to work around and prevent.
People who are just on the lookout for the latest “aha, AI bad!” headline, on the other hand, discover this every couple of months.
AI already long ago stopped being trained on any old random stuff that came along off the web. Training data is carefully curated and processed these days. Much of it is synthetic, in fact.
These breathless articles about model collapse dooming AI are like discovering that the sun sets at night and declaring solar power to be doomed. The people working on this stuff know about it already and long ago worked around it.
This is “technology news and articles”?
Seems like this place is increasingly just people yelling at AI-generated clouds.
But at least that crappy bug-riddled code has soul!
It’s almost doublethink: people celebrating how the Fediverse is an open protocol for sharing public discussion, then going surprised-Pikachu at the notion that public discussion might be viewed by someone they don’t want viewing it.
If you don’t mean for something to be public, don’t post it on a public forum.
The meme would work just the same with the “machine learning” label replaced with “human cognition.”
Shark species go extinct all the time. New shark species arise.
But that’s exactly my point. Synthetic data is made by AI, but it doesn’t cause collapse. The people who keep repeating this “AI fed on AI inevitably dies!” headline are ignorant of how this actually works, of the details that actually matter when it comes to what causes model collapse.
If people want to oppose AI and wish for its downfall, fine, that’s their opinion. But they should do so based on actual real data, not an imaginary story they pass around among themselves. Model collapse isn’t a real threat to the continuing development of AI. At worst, it’s just another checkbox that AI trainers need to tick on their “am I ready to start this training run?” checklist, alongside “have I paid my electricity bill?”
It was, before we had AI. Turns out that’s another aspect of synthetic data creation that can be greatly assisted by automation.
For example, the Nemotron-4 AI family that NVIDIA released a few months back is specifically intended for creating synthetic data for LLM training. It consists of two LLMs, Nemotron-4 Instruct (which generates the training data) and Nemotron-4 Reward (which curates it). It’s not a fully automated process yet but the requirement for human labor is drastically reduced.
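In rough pseudocode, the generate-then-curate pattern looks something like this. The function names and the threshold are placeholders, not NVIDIA’s actual API:

```python
# Hedged sketch of a generate-then-curate synthetic data pipeline.
# instruct_model and reward_model stand in for models like
# Nemotron-4 Instruct and Nemotron-4 Reward; min_score is invented.

def build_synthetic_dataset(prompts, instruct_model, reward_model,
                            min_score=0.8):
    kept = []
    for prompt in prompts:
        candidate = instruct_model(prompt)       # propose a training example
        score = reward_model(prompt, candidate)  # grade its quality
        if score >= min_score:                   # curate: keep only the best
            kept.append({"prompt": prompt, "response": candidate})
    return kept
```

A human still sets the prompts and the scoring criteria and spot-checks the output, but the bulk of the labeling work is handled by the models.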
But that guarantee isn’t needed. AI-generated data isn’t a magical poison pill that kills anything that tries to train on it. Bad data is bad, of course, but that’s true whether it’s AI-generated or not. The same process of filtering good training data from bad training data can work on either.