Many companies are tying monitored AI usage to performance evaluations. So yeah, we're going to be "willing" under that pretext.
Try finding one senior dev who is willing to debug a multi-thousand-line application produced by genAI… they'll be reluctant at best, because the code is slop.
MBAs and C-suites keep trying to manufacture consent for this tech so their stock portfolios outperform, and the madmen are willing to sacrifice your jobs to do it!
Many companies are tying monitored AI usage to performance evaluations.
A reason to look somewhere else. Not out of protest, but because their evaluation process (and likely more) is fucked. You should do that every 2-3 years anyway.
That just sounds like good old-fashioned mismanagement. Any examples of successful companies that are doing this?
Salesforce definitely does it. Coinbase also recently fired a bunch of devs for not using it (https://techcrunch.com/2025/08/22/coinbase-ceo-explains-why-he-fired-engineers-who-didnt-try-ai-immediately/)
Salesforce also recently admitted they were too hasty when they tried to replace humans with AIs: https://www.investmentwatchblog.com/salesforce-now-admits-its-ai-agents-were-unreliable-after-cutting-4000-jobs/