Quality Control and Quality Assurance aren’t the same thing. QA verifies that the process is carried out to specification; QC verifies the quality of that process’s output. In some industries both jobs are done by the same person or group. But we should question the idea that you can get away with not doing either job.
Yes, it slows things down. Yes, it is red tape. Yes, it can stifle innovation in some ways. And yes, if it is not implemented correctly, it can create a certain lack of ownership of one’s work and insulate a person from the consequences when that work is not correct.
But without it, there would be no processes to catch, manage, and learn from mistakes. The whole idea of QA and QC is to make sure a second person is available to verify things, and that second pair of eyes saves lives across industries. Yes, even in places where no one’s life is in danger from the process (game devs, etc., still have livelihoods and people who depend on them).
Without QA and QC, issues with the product will more than likely cause delays that eat as much time as doing the job properly the first time would have. And when you don’t have QA/QC to some extent, it often means someone who already has a full workload outside that job description is doing that job on top of their own. That leads to burnout and further mistakes.
We’re currently at a point in history where QA and QC roles at companies are going to be more critical than ever, specifically because of the use of AI to generate code and the problems that accumulate from relying on it over time (even when that code is good code). So I can at least agree with that part of this post.
The main reason is that there’s no better way to learn the process, and the mistakes that can be made in it, than to do the actual work and review it, both by yourself and with someone who is already skilled in whatever you’re working on. That’s pretty universal across industries.
This is also how we create new fail-safes and new processes. Nobody should be using the processes they used 20 years ago with no updates and no innovation. Those innovations and updates come from QA/QC and engineers of every stripe working together to streamline things and do them better.
Now, if you’re not attempting to hold your work to the highest standard (as if you won’t have QA and QC to fall back on to catch mistakes), your industry and/or company is utilizing QA and QC wrong. The incentive for doing the work is the payout, for QA/QC and engineers alike. The incentive to do the work right the first time (before the product ships) is to avoid having to do the rework at a disadvantage. What that disadvantage is can vary wildly: deadlines, lost incentives such as bonus pay, loss of potential projects a person may want to work on, relegation to creating or maintaining something that provides no novelty or interest, eventual firing, or even a quota system.
There is value in finding every problem possible, even the ones that aren’t important enough to fix. If you ever plan to update the product, those inconsequential bugs may have follow-on effects that take a lot of time and effort to chase down later, and having forewarning of them may save critical time in the event that something goes wrong.
So at least part of what QA and QC do is prevent very costly rework. Plenty of companies are outsourcing that to customers these days through beta and prototype programs. I’m sure that feedback is valuable, but what it isn’t is open source. Your average beta tester is not reading code from Google or whoever to figure out why the wifi disconnects for 5 seconds every day at 2pm GMT. They don’t understand why their notes app suddenly shows up blank, force closes, and can’t be reopened without a reboot.
Your average prototype manager is only going to be able to come back and tell you that the rollout of blah blah app suite is on hold because blah applet doesn’t agree with their phone system, or caused some other part of the code base to go rogue and lock them out of some other vital system they use. Or that to get the system to actually work, they had to have IT physically reboot every computer on the network, including those of employees who work remotely.
All in all, I feel like this is a question only because QA utilization is flawed, not because QA is useless or shouldn’t exist.
Using QA as a cudgel and a fall guy for the work being done sounds like the problem. Fostering a company culture that makes the relationship between QA and engineering adversarial or competitive is the problem. Failing to treat both engineers and QA as experts with two different focuses (and therefore two different ways of looking at and solving a problem toward an amicable outcome) is the problem.