Let’s use LLMs for the things LLMs are actually useful for. They’re not a panacea, and they’re not appropriate for every use case.
Yeah, LLMs are interesting tech products to play with and find some niche uses for.
But for the love of god they are not “prop up the entire stock market and numerous multi-trillion-dollar companies indefinitely” good!
What is it useful for? I actually have a hard time finding a use for it… It’s alright at book recommendations, sometimes.
I’ve found it’s useful for code where I know like 70% of what I’m doing. More than that and I can just do it myself; less than that and I can’t trust or diagnose the output.
I’d rather have old-fashioned Stack Overflow and tutorials, honestly. It’s hard to actually learn when it just hands you answers.
I use it for coding advice sometimes; as an amateur hobbyist, it’s really useful for pointing me in the right direction when I’m facing problems I’m unfamiliar with. I often end up reinventing the proverbial wheel, just worse, but LLMs can point out standards and best practices that I, as an outsider to the industry, am unaware of.
You have to be careful at low skill/knowledge levels, because it’ll happily send you down a crazy path that looks legitimate.
I asked it how to do something in Oracle SQL, because I don’t know Oracle specifically, and it gave me a terrible answer. I suspected it wasn’t right, so I asked a coworker who’s an old hand at Oracle, and he was like, “No, that’s terrible. Here’s a much simpler way.”
I find it’s good at writing boilerplate and scaffolding code, the stuff I really hate doing.
GitHub Copilot is fantastic for exactly this reason… it completes a few lines, auto-corrects, handles the find-and-replace busywork, and fills in a three-line function body that would otherwise be an extra dependency.
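A minimal sketch of what that looks like (the commenter doesn’t name a specific function, so the helper below is hypothetical): the kind of three-line body Copilot will happily complete from the signature alone, instead of you pulling in a whole utility library for it.

```ts
// Hypothetical example: a tiny helper that Copilot typically completes from
// the signature, rather than adding a utility-library dependency for it.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage: chunk([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4], [5]]
```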
Movie recommendations is my biggest thing, personally.
And lots of other purposes. Just because a ton of people are misusing this tool and treating it like AGI doesn’t mean it isn’t a useful tool. Even something as simple as proofreading a letter has massive utility for some people.
Definitely proofreading. Especially for people who can barely write intelligibly. They can check for themselves whether the meaning is still correct, and they’ll learn grammar in the process.