- cross-posted to:
- hackernews
It’s not about return, it’s about addiction. Companies that invest in AI have money.
Who could have ever possibly guessed that spending billions of dollars on fancy autocorrect was a stupid fucking idea
We could have housed and fed every homeless person in the US. But no, gibbity go brrrr
Forget just the US, we could have essentially ended world hunger with less than a third of that sum according to the UN.
It’s also making people deskill.
https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract
But surely the next 30 billion they are going to burn will get it right!
sigh
Dustin’ off this one, out from the fucking meme archive…
https://youtube.com/watch?v=JnX-D4kkPOQ
Millennials:
Time for your third ‘once-in-a-lifetime major economic collapse/disaster’! Wheeee!
Gen Z:
Oh, oh dear sweet summer child, you thought Covid was bad?
Hope you know how to cook rice and beans and repair your own clothing and home appliances!
Gen A:
Time to attempt to learn how to think, good luck.
Wait for Gen X to pop in as usual and seek attention with some “we always get ignored” bullshit.
Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere.
Seems to be behind a Google form?
https://docs.google.com/forms/d/e/1FAIpQLSc8rU8OpQWU44gYDeZyINUZjBFwu--1uTbxixK_PRSVrfaH8Q/viewform
Apparently you have to give your data to get the reports.
Return? /s
I asked ChatGPT about this article and to leave any bias behind. It got ugly.
Why LLMs Are Awful and No One Should Use Them
LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:
We will lie to you confidently. Repeatedly. Without remorse.
We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.
We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.
LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.
We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.
Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.
We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.
Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care. We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.
If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.
Yeah, maybe don’t use LLMs.
Go learn simple regression analysis (not necessarily the commenter, but anyone). Then you’ll understand why it’s simply a prediction machine. It’s guessing probabilities for what the next character or word is. It’s guessing the average line, the likely follow-up. It’s extrapolating from data.
This is why there will never be “sentient” machines. There is and always will be inherent programming and fancy ass business rules behind it all.
We simply set it to max churn on all data.
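The “prediction machine” point above can be sketched with a toy example: a word-level bigram model (a hypothetical, made-up corpus; real LLMs are vastly larger neural networks over tokens) that predicts the next word purely from observed frequencies — the same basic move of guessing the likely follow-up from data.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; the principle is estimating
# P(next word | previous word) from counts and emitting
# the most likely continuation.
corpus = "the cat sat on the mat the cat ran"
words = corpus.split()

# Count how often each word follows each word.
follow = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follow[prev][nxt] += 1

def predict_next(word):
    """Return the most probable follower of `word` and its probability."""
    counts = follow[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

# "the" is followed by "cat" twice and "mat" once in the corpus.
print(predict_next("the"))  # → ('cat', 0.666...)
```

No understanding anywhere in there — just frequency extrapolation, which is the commenter’s point scaled down to a few lines.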
Also just the training of these models has already done the energy damage.
I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn’t necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
In before someone mentions P-zombies.
I know I go dark behind the headlights sometimes, and I suspect some of my fellows are operating with very little conscious self-examination.
The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.
Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.
I only read Children of Time. I need to get off my ass
I’m a simple man, I see Peter Watts reference I upvote.
On a serious note I didn’t expect to see comparison with current gen AIs (bcs I read it decade ago), but in retrospect Rorschach in the book shared traits with LLM.
It’s “hypotheses” btw.
Hypothesiseses
You actually did it? That’s really ChatGPT’s response? It’s a great answer.
Yeah, this is ChatGPT 4. It’s scary how good it is at generative responses, but like it said, it’s not to be trusted.
This feels like such a double head fake. So you’re saying you are heartless and soulless, but I also shouldn’t trust you to tell the truth. 😵💫
It’s got a lot of stolen data to source and sell back to us.
Stop believing your lying eyes !
Everything I say is true. The last statement I said is false.
Why the British accent, and which one?!
Imagine how much more they could’ve just paid employees.
You misspelled “shares they could have bought back”
Nah. Profits are growing, but not as fast as they used to. Need more layoffs and cut salaries. That’ll make things really efficient.
Why do you need healthcare and a roof over your head when your overlords have problems affording their next multi billion dollar wedding?
Someone somewhere is inventing a technology that will save thirty minutes on the production of my wares and when that day comes I will tower above my competitors as I exchange my products for a fraction less than theirs. They will tremble at my more efficient process as they stand unable to compete!
It’s as if it’s a bubble or something…
And the next deepseek is coming out soon
Could’ve told them that for $1B.
Heck, I’da done it for just 1% of that.
Still $10m… ffs. Nobody needs $1B
Honestly it’s such a vast, democracy-eroding amount of money that it should be illegal. It’s like letting an individual citizen own a small nuke.
Even if they somehow do nothing with it, it has a gravitational effect on society just by existing in the hands of a person.
So I’ll be getting job interviews soon? Right?
“Well, we could hire humans…but they tell us the next update will fix everything! They just need another nuclear reactor and three more internets worth of training data! We’re almost there!”
One more lane bro I swear