In this thread: people doing the exact opposite of what they do seemingly everywhere else and ignoring the title to respond to the post.
Figuring out what the next big thing will be is obviously hard; if it weren't, investing would be easy money.
I feel like a lot of what has been exploding lately is ideas someone had a long time ago that have just become cheaper to build and gotten more PR. 3D printing was invented in the '80s but had to wait for computation and cost reduction. The ideas that would become neural networks date to the '50s and were toyed with repeatedly over the years, but ultimately the big breakthrough was just that computing became cheap enough to run massive server farms. AR stems back to the '60s and gets trotted out, slightly better, every generation or so, but it was tech getting smaller that made it more viable. What other theoretical ideas from the last century could now be done for a much lower price?
One of the major breakthroughs wasn't just compute hardware; it was things like the "Attention Is All You Need" paper that spawned all the latest LLMs and multi-modal models (video generation, music generation, classification, sentiment analysis, etc.). So there has been an insane amount of improvement in the neural network architectures themselves (recurrent neural nets, LSTMs, convolutional neural nets, Transformers, etc.). RNNs go back to 1972, and LSTMs only came out in 1997, come to find out.
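For context, the core operation from that paper is surprisingly small. Here's a minimal sketch of scaled dot-product attention in plain NumPy (single head, no masking or learned projections, which the full architecture layers on top):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation from "Attention Is All You Need" (Vaswani et al., 2017).

    Q, K, V: arrays of shape (seq_len, d_k). Each output position is a
    weighted average of the value vectors, with weights given by how
    strongly each query matches each key.
    """
    d_k = Q.shape[-1]
    # Similarity of every query against every key, scaled down so the
    # softmax doesn't saturate as d_k grows.
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the keys gives attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (4, 8)
```

That's basically it; the rest of a Transformer is stacking this with learned projections, multiple heads, and feed-forward layers.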
The big leap in image recognition came around 2011-2012, when GPU-trained convolutional nets started winning benchmarks (culminating in AlexNet on ImageNet in 2012). Transformers started with the Attention paper in 2017. Now the models are being used to improve themselves, so the singularity is heading our way pretty quickly.
What does that mean, exactly? What does a post-singularity world actually look like? Every depiction of one I've ever seen assumes it happens hundreds of years in the future, after all sorts of other technology has been invented.