This legendary page from an internal IBM training in 1979 could not be more appropriate for our new age of AI. ![A COMPUTER CAN NEVER BE HELD ACCOUNTABLE. THEREFORE A …
This is likely true for what we already see. Fuck, people even use dumb word filters to avoid responsibility! (Cough Reddit mods “I didn’t do it, AutoMod did it” cough cough)
That said, this specific problem could be solved by AGI or another truly intelligent system. My concern is more that an AGI would knowingly bomb a civilian neighbourhood because it claims it’s better for everyone else, due to its lack of morality. That would be way, waaaaay worse than false positives like in your example.
Ugh, yes, the machines “know what’s best”.
I was just assuming it would be used for blame management, regardless of whether it was an accident or not.
“See, it wasn’t me! It was ScapegoatGPT!”