No, you are correct. Hinton's group published work on ReLUs in 2010, and his students Alex Krizhevsky and Ilya Sutskever used them to train a much deeper network (AlexNet) that won the 2012 ILSVRC. AlexNet was so groundbreaking because it brought everything together in a single approach: better training techniques (SGD with momentum, plus dropout for regularization), a better activation function (ReLU), a deeper network (8 learned layers), supervised training on a very large dataset (necessary to learn good general-purpose convolutional kernels), and GPU acceleration.
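For anyone less familiar with those ingredients, here is a rough NumPy sketch of what ReLU, SGD with momentum, and dropout look like in isolation. The hyperparameter values below are illustrative placeholders, not a claim about AlexNet's exact settings.

```python
import numpy as np

# ReLU: max(0, x). Cheap to compute and, unlike sigmoid/tanh, it does not
# saturate for positive inputs, which keeps gradients flowing in deep nets.
def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    return (x > 0).astype(x.dtype)

# Classical SGD-with-momentum update:
#   v <- mu * v - lr * grad
#   w <- w + v
def sgd_momentum_step(w, grad, v, lr=0.01, mu=0.9):
    v = mu * v - lr * grad
    w = w + v
    return w, v

# Inverted dropout at train time: zero out units with probability p and
# rescale the survivors so expected activations match test time.
def dropout(x, p=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask
```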
NNs, and specifically CNNs, won out because they learned more expressive and superior image feature representations than the hand-crafted features of competing algorithms. The proof was in the vastly better performance: it was a major jump at a time when performance on the ILSVRC was becoming saturated. Nobody was making nearly 10-point improvements on that challenge back then; it blew everybody out of the water and made NNs and deep learning impossible to ignore.
Edit: to accentuate the point about datasets and GPUs, the original AlexNet developers really struggled to train their model on the GPUs available at the time. The model was too big for a single card, so they had to split it across two GPUs to make it work, and they were among the first researchers to train large CNNs on GPUs at all. Without a large dataset like ILSVRC's they would not have been able to learn good deep hierarchical convolutional features, and without better GPUs they wouldn't have been able to make AlexNet sufficiently large or deep. Training AlexNet on CPUs alone for the ILSVRC was out of the question; it would have taken months of full-tilt, nonstop compute for a single training run. It took more than these two things, as detailed above, but removing those two barriers is what really allowed CNNs and deep learning to take off. Much of the underlying NN and optimization theory had been around for decades.
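To make the two-GPU split concrete, here is a toy, NumPy-only sketch of the general idea of model parallelism: half of a layer's units live on one device, half on the other, each half is computed independently, and the results are concatenated. This is purely illustrative and not AlexNet's actual CUDA implementation, which only communicated between the two GPUs at certain layers.

```python
import numpy as np

def fc_layer_split(x, w_gpu0, w_gpu1):
    # Toy "model parallel" fully connected layer: the weights for half of the
    # output units sit on device 0, the other half on device 1. Each device
    # computes its half independently; the halves are concatenated afterwards.
    half0 = x @ w_gpu0   # would run on GPU 0
    half1 = x @ w_gpu1   # would run on GPU 1
    return np.concatenate([half0, half1], axis=1)

# Example: split a large fully connected layer's units in half across two cards.
# Shapes are illustrative, loosely inspired by AlexNet's 4096-unit FC layers.
x = np.random.randn(8, 9216).astype(np.float32)       # batch of flattened conv features
w0 = np.random.randn(9216, 2048).astype(np.float32)   # half the weights, "on GPU 0"
w1 = np.random.randn(9216, 2048).astype(np.float32)   # the other half, "on GPU 1"
out = fc_layer_split(x, w0, w1)                        # shape (8, 4096)
```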