They’re systems trained to give plausible answers, not correct ones. Of course correct answers are usually plausible, but so are wrong answers, and on sufficiently complex topics you need real expertise to tell when they’re wrong.
I’ve been programming a lot with AI lately, and I’d say the error rate for moderately complex code is about 50%. They’re great at simple boilerplate, configuration, and the stuff almost every project uses, but if you’re trying to do something actually new, they’re nearly useless. You can lose a lot of time going down a wrong path if you’re not careful.
Some of the more advanced LLMs are getting pretty clever. They’re on the level of a temp who talks too much, misses nuance, and takes too much initiative. Also, any time you need them to perform too complex a task, they start forgetting details, and then entire things, you already told them.
I use the “very articulate parrot” analogy.
Never ever trust them. Always verify.
Sounds like they are a liability when you put it that way.
I use something similar. “Child with enormous vocabulary.”
It can recognize correlations, and it knows the words themselves, but it doesn’t really understand how those connections or words work.