Eliezer is right about one thing: we cannot know with certainty that AGI won’t lead to human extinction
I mean, even with a risk of less than one percent, that’s a hell of a gamble, isn’t it?
Pascal’s Wager? There are a lot of counterarguments, but inconsistent revelations is probably the strongest here. It seems like picking arbitrary percentages greatly affects the expectations: why 1%? Why not one in a million? What if it were a 1% chance of disaster but a 99% chance of utopia? What if the alternative to disaster is the Singularity?
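To make that concrete, here’s a toy expected-value sketch in Python (every payoff and probability below is invented purely for illustration, not an estimate of actual AGI risk):

```python
# Toy expected-value comparison. All numbers are made up to show how
# sensitive the result is to the probability you pick; they are not
# estimates of real AGI risk.

def expected_value(p_disaster, disaster_payoff, good_payoff):
    """Expected payoff when disaster occurs with probability p_disaster."""
    return p_disaster * disaster_payoff + (1 - p_disaster) * good_payoff

# Same (made-up) payoffs, only the arbitrary probability changes:
for p in (0.01, 1e-6):
    ev = expected_value(p, disaster_payoff=-1_000_000, good_payoff=1_000)
    print(f"p(disaster)={p}: expected value = {ev:.3f}")

# With these numbers, a 1% chance of disaster makes the gamble strongly
# negative (about -9010), while one-in-a-million flips it positive
# (about +999). The conclusion swings entirely on a number nobody knows.
```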
Yes, it’s a form of his wager, although I’d argue it’s more of a false dichotomy, since we have no idea whether there are only two routes, or even whether disaster is on the table. But that’s part of the point: we do not know what we’re dealing with, or even the odds. I would suggest that even if the risk is extremely low and the benefits are huge enough to outweigh it… we’re not talking about failure, problems, some localized accident, or even simple collapse. If extinction is a possible result of dabbling with something, maybe we should reconsider what we’re doing.
Not necessarily lock it away or even abandon it, but we should know more about it before we open the box (we haven’t opened it yet; we’re still picking at the lock). Look at what AI development has become (whether it’s actual internal AGI research or just a marketing run): it’s barreling down the road at full speed and shelving any concerns about safety or the end results, apart from money.
It’s how we humans do things though, isn’t it? Look at big-picture issues like climate change or even consumerism: they share the same pattern of growth in pursuit of profit, with us learning only later how disastrous the effects could be (maybe even carrying some chance of extinction), yet accelerating all the more while we worry on the sidelines, if at all.
Btw, I’ve always understood the Singularity, as originally envisioned, to be a state of disaster, or at least a loss of control. It’s not a good place to be.



