The main issue with this idea of punishment and reward, in the sense that you mean them, is that their results depend entirely on the criteria by which you are punished or rewarded. If the law says being gay is illegal and punishable by execution, does that make it immoral?
Being moral boils down to making certain decisions; the method by which they are reached is irrelevant if the decisions are “correct”. Most moral philosophies agree that moral decisions can be made by applying rational reasoning to some basic principles (e.g. the categorical imperative). We reason through language, and these models capture and simulate that. The question is not whether AI can make moral decisions; it’s whether it can be better at it than humans, and I believe it can.
I watched the video, and honestly I don’t find anything too surprising. ChatGPT acknowledges that there are multiple moral traditions (as it should) and that which decision is right for you depends on which tradition you subscribe to. It avoids making clear choices because it is designed that way for legal reasons. When there is a consensus in moral philosophy about the morality of a decision, it doesn’t hesitate to express it. The conclusions it comes to aren’t inconsistent, because it always makes clear that they pertain to a particular path of moral reasoning. Morality isn’t objective; taking a conclusive stance on an issue based on one moral framework (which humans like to do) isn’t superior to taking an inconclusive one based on many. Really, this is one of our greatest weaknesses: not being able to admit we aren’t always entirely sure about things. If ChatGPT were designed to make conclusive moral decisions, it would likely take the majority stance on any issue, which is about as universally moral as you can get.
The idea that AI could be immoral because it holds the stances of its developers is invalid, because it doesn’t hold them. It is trained on a vast corpus of text, which captures popular views, not the views of the developers.
Holding someone accountable doesn’t undo their mistakes; once a decision is made, there is often nothing you can do about it. Humans make bad decisions too, whether unknowingly or intentionally. Clearly, accountability isn’t some magic catch-all.
I find the idea that punishment and reward are prerequisites of morality rather pessimistic. Do you believe people are entirely incapable of acting morally in the absence of external motivation?
Either way, AI does essentially function on the principle of punishment and reward; you could even say it has been pre-punished and pre-rewarded over millions of iterations during its training.
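Loosely speaking, training looks something like this toy sketch (not any real framework, just the general idea of nudging a model toward “rewarded” outputs and away from “punished” ones):

```js
// Toy sketch of "punish and reward" during training: a single parameter
// gets nudged toward answers that scored well and away from ones that
// scored badly, repeated many times over.
let weight = 0;              // the model's one "knob"
const learningRate = 0.1;

function trainStep(input, target) {
  const prediction = weight * input;
  const error = target - prediction;      // how far off the model was
  weight += learningRate * error * input; // adjust toward better answers
}

// Repeat many times; weight converges toward 3 (since 3 * 2 = 6).
for (let i = 0; i < 1000; i++) trainStep(2, 6);
console.log(weight.toFixed(3)); // ≈ 3.000
```

Scale that single knob up to billions of parameters and you get the “pre-punished and pre-rewarded” system I mean.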
AI simply has clear advantages in decision making. Without self-interest it can make truly selfless decisions; it is far less prone to bias and takes much more information into account.
Try throwing some moral, even political, questions at LLMs; you will find they do surprisingly well, and these are models that aren’t even optimized for decision making.
Work on your own projects and it will come naturally; it’s the best way to thoroughly learn a language (probably JS in your case). Try to really understand the basics (like OOP; see the sketch below): that knowledge will both translate to other languages and help you learn frameworks/libraries. Instead of relying solely on tutorials, try reading the documentation; it will give you a more thorough understanding (if it’s good). Also, Stack Overflow isn’t cheating, you can’t always remember everything. Trust me, you’re already way ahead of others if you plan to take CS.
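For instance, here’s a minimal JS sketch of the OOP basics I mean (the names are just for illustration):

```js
// A minimal OOP sketch: a base class, inheritance, and method overriding.
class Animal {
  constructor(name) {
    this.name = name;
  }
  speak() {
    return `${this.name} makes a sound`;
  }
}

class Dog extends Animal {
  speak() { // overrides the base class method
    return `${this.name} barks`;
  }
}

console.log(new Dog("Rex").speak()); // "Rex barks"
```

Once concepts like inheritance and overriding click here, you’ll recognize the same ideas in Java, Python, and most frameworks you pick up later.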
I wouldn’t say Marxism is incompatible with dualism, for example. Yes, Marx focuses heavily on the material struggle, but interpreting the theory in a dualist sense doesn’t really change its implications. Wealth matters because of the way it makes us feel and the experiences it enables, not because of some inherent value. If being poor didn’t feel bad, nobody would have a problem with it.
Why would anyone put mould on such a delicious treat anyways?