Red Lobster went under and these guys still exist.
Hey, I saw a Red Lobster today; they’re still here. 3 kinds of shrimp for $22!
There’s no way I believe that DeepSeek was made for the $5m figure I’ve seen floating around.
But that doesn’t matter. If it cost $15m, $50m, $500m, or even more than that, it’s probably worth it to take a dump in Sam Altman’s morning coffee.
DeepSeek claimed the model training took 2.788 million H800 GPU hours, which, at a cost of $2 per GPU hour, comes out to a mere $5.576 million.
That seems impossibly low.
DeepSeek is clear that these costs are only for the final training run and exclude all other expenses.
There would have been many other runs before the release version.
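For what it’s worth, the headline arithmetic itself is trivial to check; here’s a minimal Python sketch using only the two figures quoted above:

```python
# Back-of-the-envelope check of the quoted training cost.
gpu_hours = 2_788_000        # 2.788 million H800 GPU hours, as claimed
cost_per_gpu_hour = 2.00     # assumed rental rate, USD

total_cost = gpu_hours * cost_per_gpu_hour
print(f"${total_cost:,.0f}")  # -> $5,576,000, i.e. ~$5.576 million
```

The multiplication checks out; the question, as noted above, is everything that figure leaves out.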
The greatest irony would be if OpenAI was killed by an open AI
i’ll allow it.
Not only would it be the greatest irony, it would be the best outcome for humanity. Fuck ClosedAI
I don’t mind
There is no downside to lying these days. Yet the public seems surprised that all they see is lying.
So many people don’t even question it. Talk loudly and confidently enough and that’s the bar for most, unfortunately.
TikTok, Instagram, and similar are great examples of this: initially you think, wow, cool, I’m seeing all these new things and getting so much info. Then someone comes up on a topic you actually know something about, and the facade breaks when all they do is spew misinformation that attracts a crowd (usually via fear).
And then the hordes of sycophantic DinkDonkers repeat their detritus over every comment thread they can.
How about that: venture capitalists don’t know what’s going on in the market any more than anyone else does. They’re just arrogant because they have metric shit-tons of money.
I’m sure Altman forcing out all the actual brains on the board of OpenAI, like Chief Research Scientist https://en.m.wikipedia.org/wiki/Ilya_Sutskever, has nothing to do with this or their rapidly declining lead in the field.
Corpo parasite took over before the nerd could make the company great.
Gets Cucked by the Chinese
There is some justice out there.
I’d say their lead is over.
I’m enjoying all the capitalist oligarchs losing their minds over DeepSeek destroying the piles of money they were already counting in their heads. It’s not much, but it’s something.
Not just deepseek, but everyone else who is now forking R1 and training for their specific use case.
He’s not wrong. He was speaking within the implied scope of capitalism, where you can’t do something that cheap because, without the prospect of a huge payout for investors, no one will fund you.
We’re just seeing that capitalism can’t compete with an economy that can produce stuff without making investors rich.
Probably should point out that DeepSeek is owned by a Chinese hedge fund. They specialize in algorithmic trading. Can’t get much more capitalist than that. Very happy to see Chinese capitalists release an open source model. No doubt Altman and cronies will seek protection under the guise of national security. I guess free market competition is only good when you are not getting your arse kicked.
Good detail added, I did not know that. But they’re still doing it for a much smaller payout than if they tried to compete the way a US capitalist would, through a closed-source, full-ownership model.
The Chinese economy is even more capitalist, if that’s possible.
Sam Altman is full of shit? Nooooooooooo
I kind of suspect this is as much about AI progress hitting a wall as anything else. It doesn’t seem like any of the LLMs are improving much between versions anymore. The U.S. companies were just throwing more compute (and money/electricity) at the problem and seeing small gains, but it’ll be a while before the next breakthrough.
Kind of like self-driving cars during their hype cycle. They felt tantalizingly close 10 years ago or so but then progress stalled and it’s been a slow grind ever since.
I think with a lot of technologies the first 95% is easy but the last 5% becomes exponentially harder.
With LLMs though I think the problem is conflating them with other forms of intelligence.
They’re amazingly good at forming sentences, but they’re unable to do real actual work.
It’s called the 80/20 rule. The first 80% is the easy part.
Yeah. I really dislike this “rule” because it’s commonly espoused by motivational speakers and efficiency “experts” saying you make 80% of your money from 20% of your time.
It sounds great if you’ve never heard it before but in practice it just means “be more efficient” and is not really actionable.
Nevertheless, like the funding-hungry CEO he is, Altman quickly turned the thread around to OpenAI promising jam tomorrow, with the execution of the firm’s roadmap, amazing next-gen AI models, and “bringing you all AGI and beyond.”
AGI and beyond?
If you throw billions of dollars at a problem, you will always get the most expensive solution.
…if you get a solution at all.
Artificial General Intelligence, the pipe dream of a technological intelligence that isn’t specialized for a single task but is generally capable, like a human.
Edit: recommended reading is “Life 3.0”. While I think it is overly positive about AI, it gives a good overview of the AI industry and innovation, and the ideas behind it. You will have to swallow a massive chunk of Musk fanboyism, although to be fair it predates Musk’s waving of the fasces.
I get it. I just didn’t know that they are already using “beyond AGI” in their grifting copytext.
Yeah, that started a week or two ago. Altman dropped the AGI promise too soon, so now he’s having to become a sci-fi author to keep the con cooking.
now he’s having to become a sci-fi author to keep the con cooking.
Dude thinks he’s Asimov, but anyone paying attention can see he’s just an L. Ron Hubbard.
Hell, I’d help pay for the boat if he’d just fuck off to go spend the rest of his life floating around the ocean.
You say that like Hubbard wasn’t brilliant, morals notwithstanding
He sure as shit wasn’t a brilliant writer. He was more endowed with the cunning of a proto-Trump huckster-weasel.
Well, it does make sense in that the time during which we have AGI would be pretty short because AGI would soon go beyond human-level intelligence. With that said, LLMs are certainly not going to get there, assuming AGI is even possible at all.
We’re never getting AGI from any current or planned LLM and ML frameworks.
These LLMs and ML programs are above human intelligence but only within a limited framework.
https://en.m.wikipedia.org/wiki/Superintelligence#Feasibility_of_artificial_superintelligence
Artificial Superintelligence is a term that is getting bandied about nowadays.
Ah ok, yeah, the “beyond” thing is likely pulled straight out of the book I mentioned in my edit.
The fact that Microsoft and OpenAI define Artificial General Intelligence in terms of profit suggests they’re not confident about achieving the real thing:
The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. (Source)
Given this definition, when they say they’ll achieve AGI and beyond, they simply mean they’ll achieve more than $100 billion in profit. It says nothing about what they expect to achieve technically.
Well that’s a pretty fucking ridiculous definition lol.
This should be its own post. Very interesting. People are not aware of this I think.
I think I saw a post about exactly this.
what a joke. can’t wait for the shift and these parasites to go back underground
The number of people spamming ‘deepseek’ in YouTube comments and live streams is insane. They definitely have a shitload of shadow funding.
While I tend to avoid conspiracy-theory-type thinking, the nature of modern social media makes it very easy to run astroturfing/botting campaigns. It’s reasonable to be suspicious.
Bot campaigns seem pretty cheap when your business is making chat bots
Or if you have access to click-farm-type propaganda resources.
Or you’re a government with endless funds at your disposal.
It’s easy to write a bot. You just ask ~~ChatGPT~~ DeepSeek for the code.
I find the online cheerleading for AI and AGI strange. It feels like a frothing mob rooting for the unleashing of a monster at times.
I mean, a lot of it is just people who started using ChatGPT to do some simple and boring task (writing an email, a CV, or summarizing an article) and started thinking that it’s the best thing since sliced bread.
I would know, since I’m a university student. I know the limitations of current AI stuff, so I can cautiously use it for certain tasks without trusting the output to be correct. Meanwhile, my friend thought he was making ChatGPT better at answering his multiple-choice economics quiz by telling it which of the answers it gave were wrong…
There actually seems to be some press about it too; I was surprised to see the BBC, Reuters, and the New York Post write about it.
But yeah, it’s very interesting what they have made here.
IMO they’re way too fixated on making a single model into AGI.
Some people tried to combine multiple specialized models (voice recognition + image recognition + LLM, + controls + voice synthesis) to get quite compelling results.
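Roughly the kind of glue involved, as a hand-wavy sketch (the speech_to_text / text_to_speech helpers and the local endpoint URL are hypothetical stand-ins, not any particular project’s code):

```python
# Hypothetical voice-assistant loop: specialized STT and TTS models
# bolted onto an LLM that sits behind an OpenAI-compatible local API.
import requests

LLM_URL = "http://localhost:8080/v1/chat/completions"  # assumed local endpoint

def speech_to_text(audio: bytes) -> str:
    raise NotImplementedError("plug in your speech-recognition model here")

def text_to_speech(text: str) -> bytes:
    raise NotImplementedError("plug in your voice-synthesis model here")

def respond(audio: bytes, history: list[dict]) -> bytes:
    # Transcribe, hand the conversation to the LLM, then speak the reply.
    history.append({"role": "user", "content": speech_to_text(audio)})
    reply = requests.post(LLM_URL, json={"messages": history}, timeout=60).json()
    text = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": text})
    return text_to_speech(text)
```

None of the pieces has to be state-of-the-art on its own for the combination to feel compelling.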
I’m just impressed by how snappy it was; I wish he had the ability to let it listen longer without responding right away, though.
I wish I had that ability too.
80% of the time she’s just a bot, but there are these flashes of brilliance that make me think we’re closer to general-purpose intelligence than we think.
And this is just one dude using commercially available tooling. A well-funded company could do infinitely better if it were willing to give up some of the political correctness when training the model.
EDIT: When he removed the word filter last time, it got really hilarious very quickly.
What I am 100% certain of, because humanity is terrible, is that if a true AI is created that fact will be ignored for being inconvenient to profit seeking.
I wonder what the mildest thing a true AI could tell the oligarchs to do that would make them shut it down. Giving 10% of their wealth away and not in a tax dodge way? Stop funding fascists?
I mean, I get that the DeepSeek launch exposes the roadmap NVIDIA and OpenAI have been pushing as the only path to AI as incorrect, but doesn’t DeepSeek’s ability to harness fewer, lower-quality processors allow companies like NVIDIA and OpenAI to expand their infrastructure’s abilities and push even further, faster? Not sure why the selloff occurred; it’s like someone got a PC to POST quicker with an x286, and everybody said hey, those x386s sure do look nice, but we’re gonna fool around with these instead.
I believe this will ultimately be good news for Nvidia, terrible news for OpenAI.
Better access to software is good for hardware companies. Nvidia is still the world leader when it comes to delivering computing power for AI. That hasn’t changed (yet). All this means is that more value can be made from Nvidia GPUs.
For OpenAI, their entire business model is based on the moat they’ve built around ChatGPT. They made a $1B bet on this idea, which they have now lost. All their competitive edge is suddenly gone. They have no moat anymore!
Well, it is 2025; a billion dollars isn’t what it used to be. A trillion is something.
The fact that you can run it locally with good performance on a 4+ year old machine (an M1 Max, for example) is not exactly good news for them. I think DeepSeek just made their $500 billion investment project, which was already absurd, look incredibly stupid. I’m gonna say it again: the GAFAM economy is based on a whole lot of nothing. Now more than ever, we can take the web back and destroy their system. Fuck the tech-bros and their oligarch friends.
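For reference, “run it locally” means something like the following, a minimal sketch assuming the llama-cpp-python bindings and a quantized GGUF of one of the distilled R1 variants (the file name is a made-up placeholder):

```python
# Minimal local-inference sketch. Assumes `pip install llama-cpp-python`
# and a downloaded, quantized GGUF of a distilled R1 model; the
# model_path below is a hypothetical placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU / Apple Silicon
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the 80/20 rule in one paragraph."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

(The full-size model is another matter; the distilled variants are the ones that realistically fit on a laptop.)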
The reason for the correction is that the “smart money” that breathlessly invested billions on the assumption that CUDA is absolutely required for a good AI model is suddenly looking very incorrect.
I had been predicting that AMD would make inroads with their OpenCL but this news is even better. Reportedly, DeepSeek doesn’t even necessarily require the use of either OpenCL or CUDA.
It drastically reduced the minimum buy-in needed to play.
the house always wins
but doesn’t DeepSeek’s ability to harness fewer, lower-quality processors allow companies like NVIDIA and OpenAI to expand their infrastructure’s abilities and push even further, faster?
Not that much if the problem is NP-hard and they were already hitting against the asymptote.
I hope that normal people will now realize how full of sh*t he is. They won’t, but DON’T TAKE THIS FROM ME