This needs to be considered a major ethical violation. When researchers and lawyers use AI to do their work for them, and the AI makes shit up, it should be considered no different than if the supposed professional made the shit up themselves. It’s fabricated material submitted as if it is factual. It’s a lie, plain and simple. And letting people get away with it simply because they used an LLM to fabricate the lie is nothing short of insane.
Using an LLM to work with facts is like using JPEG to store X-ray images: lossy by design. Utterly reckless.

