Much of what’s known as ‘AI’ has nothing to do with progress; it’s about lobbyists pushing shoddy digital replacements for human labour that increase billionaires’ profits and make workers’ lives worse.
So whenever someone says “we are making AI”, the response should be “oh fuck no” (using bullets and fire if required)
New tagging and auto-completion are fine (there is probably a whole space of new tools that can come out of the AI research field without risking human extinction)
We are so far away from a paperclip maximizer scenario that I can’t take anyone concerned about that seriously.
We have nothing even approaching true reasoning, despite all the misuse of the term that would suggest otherwise.
Alignment? Takeoff? None of our current technologies under the AI moniker come remotely close to being a reason for concern, and most signs point to us rapidly approaching a wall with our current approaches.
Each new version from the top companies in the space shows less of a capability gain than the last, while costs grow at a pace where “exponentially” doesn’t feel like an adequate descriptor.
There are probably lateral improvements to be made, but beyond taping multiple tools together there’s not much evidence for any more large breakthroughs in capability.
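To put rough numbers on that wall, here’s a toy sketch (nothing in it is a measurement: the power-law shape is borrowed from published LLM scaling-law papers, and the exponent and compute figures are invented for illustration):

```python
# Toy illustration of diminishing returns under a power-law scaling curve.
# Assumption, not a measurement: loss ~ compute**-0.05, roughly the shape
# reported in LLM scaling-law papers. Every number here is invented.

def loss(compute: float, exponent: float = 0.05) -> float:
    """Pretend model loss as a function of training compute."""
    return compute ** -exponent

c = 1.0
for generation in range(1, 6):
    print(f"gen {generation}: compute {c:>14,.0f}x  loss {loss(c):.3f}")
    c *= 100  # each generation spends 100x more compute

# gen 1: compute 1x, loss 1.000 ... gen 5: compute 100,000,000x, loss 0.398.
# Each 100x of compute buys the same ~21% relative loss reduction, so the
# absolute gains shrink every generation while the spend grows 100-fold.
```

Under that kind of curve, “spend 100x more for a visibly smaller jump” is exactly what each successive release would look like.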
I agree current technology is extremely unlikely to achieve general intelligence, but my point was that we should never try to achieve AGI; it is not worth the risk until after we solve the alignment problem.
“Alignment problem” is what CEOs use as a distraction, to deflect responsibility from their grift and frame the issue as a technical problem. That’s another term that makes you lose any credibility.
I think we are talking past each other. Alignment with human values is important; otherwise we end up with a paperclip optimizer that wants humans only as a feedstock of atoms, or one that decides to pull a “With Folded Hands” situation.
None of the “AI” companies are even remotely interested in or working on this legitimate concern.
Unfortunately, game theory says we’re gonna do it whenever it’s technologically possible.
Only for zero-sum games.
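For what it’s worth, the race is usually modeled as a prisoner’s dilemma, which is not zero-sum and still ends in mutual defection. A toy sketch with invented payoffs (only their ordering matters, and the labels are mine):

```python
# Toy AGI-race game with made-up payoffs; only the ordering matters.
# Note it is NOT zero-sum (payoffs don't sum to a constant), yet racing
# is still each player's best response, prisoner's-dilemma style.

from itertools import product

# (payoff_A, payoff_B) indexed by (A's move, B's move)
PAYOFFS = {
    ("hold", "hold"): (3, 3),   # everyone safer
    ("hold", "race"): (0, 5),   # whoever holds back loses out
    ("race", "hold"): (5, 0),
    ("race", "race"): (1, 1),   # worst joint outcome, but stable
}

def best_response(player: int, other_move: str) -> str:
    """The move that maximizes this player's payoff given the other's move."""
    moves = ("hold", "race")
    if player == 0:
        return max(moves, key=lambda m: PAYOFFS[(m, other_move)][0])
    return max(moves, key=lambda m: PAYOFFS[(other_move, m)][1])

# A profile is a Nash equilibrium if each move is a best response to the other.
for a, b in product(("hold", "race"), repeat=2):
    if best_response(0, b) == a and best_response(1, a) == b:
        print(f"equilibrium: A={a}, B={b}, payoffs={PAYOFFS[(a, b)]}")

# Prints only: equilibrium: A=race, B=race, payoffs=(1, 1)
```

So “only for zero-sum games” doesn’t quite get us off the hook; the trap works in non-zero-sum games too.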
The worry about “Alignment” and such is mostly a TESCREAL talking point (look it up if you don’t know what that is, I promise you’ll understand a lot of things about the AI industry).
It’s ridiculous at best, and a harmful and delirious distraction at worst.
It is also a task all good parents take on: making sure the lives they created don’t grow up to be murderers or rapists or racists, and that they treat others with kindness and consideration.
See? You’re still treating AI as if it were human, or comparable to a human, and that’s the issue. Would you make the same statement about… Idk, cloud computing or photo-editing tools? AI is just a technology; it does not “grow” into anything by itself, and it is neither well- nor ill-intentioned, because it does not have any intentions.
If it can’t grow by itself, it is not general-purpose artificial intelligence. It would be an overly complicated elevator control system, and making its behavior deterministic and simple to reason about would let it be used safely to solve problems in industrial processes. Think SHRDLU.
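To illustrate what “deterministic and simple to reason about” buys you, a toy sketch (my own example, not anyone’s real controller): a finite-state machine whose entire behavior is one enumerable table you can check exhaustively, which is exactly what you can’t do with a model you can only sample.

```python
# Toy deterministic controller: a finite-state machine whose whole
# behavior is this one table. Every (state, event) pair can be
# enumerated and verified. The example is invented for illustration.

TRANSITIONS = {
    ("idle", "call"):        "moving",
    ("moving", "arrive"):    "doors_open",
    ("doors_open", "close"): "idle",
}

def step(state: str, event: str) -> str:
    """Advance the machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Exhaustive check: the state space is finite and fully known up front.
states = {"idle", "moving", "doors_open"}
events = {"call", "arrive", "close"}
assert all(step(s, e) in states for s in states for e in events)

state = "idle"
for event in ("call", "arrive", "close"):
    state = step(state, event)
    print(event, "->", state)  # call -> moving, arrive -> doors_open, close -> idle
```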
Well, not exactly wrong, but completely misunderstood.
Everyone who actually knows about AI is familiar with the alignment and takeoff problems.
(Play this if you need a quick summary: https://www.decisionproblem.com/paperclips/index2.html)