Vom is an AI decision engine platform that executes real-time decision policies. Vom's AI assistant helps users write and modify policies, test them thoroughly, and analyze past performance for insights and improvement ideas. Vom helps fintechs, insurtechs, banks, e-commerce companies, healthcare providers, collections agencies, and other organizations keep their decisions continuously improving while maintaining strong reliability and compliance.
HN commenter nails it:

Any thoughts on the aspect of a tool like this being, gosh, I don’t know how else to say it except evil?
To some extent, you’ve sanitized the business of ruining people’s lives. This is the meat grinder. YOU as its creator don’t fear it because it’s your pet basilisk.
But make no mistake, your pet is a killer monster whose only real purpose is to put a rubber stamp on every kind of bias while removing accountability for doing violence to real people’s lives. If you get rich with this, at least now you can’t say nobody told you where the money was coming from.
Think about it. What would you say if it made a mistake and ruined your life? “We’re sorry, the AI says you’re not allowed to have credit anymore.” “We’re sorry, you met the criteria to be targeted for extermination by a drone.” “We’re sorry, the AI scored you 8/10 on the deport-with-no-due-process chart, so off to no-rights terrorist prison.”
You’d care if it was you, but the fact is that you won’t have any way of knowing when your client uses your product to ruin or even end a life. A counter won’t tick up by one. You’ll just be there thinking “another happy customer.” Now that’s vom-inducing.