- cross-posted to:
- hackernews
Over the past few years, the evolution of AI-driven tools like GitHub’s Copilot and other large language models (LLMs) has promised to revolutionise programming. By leveraging deep learning, these tools can generate code, suggest solutions, and even troubleshoot issues in real time, saving developers hours of work. While these tools have obvious benefits in terms of productivity, there’s a growing concern that they may also have unintended consequences for the quality and skillset of programmers.
What’s Copilot? ;)
A thing that hallucinates uncompilable code but somehow convinces your boss it’s a necessary tool.
Copilot is a tool for programmers who don’t want to program.
I’ll never forget attending CS courses with a guy who got violently angry at having to write code. I assume he’s either thrilled with Copilot or in prison for attacking somebody over its failure to reliably write all of his code for him.
An LLM that proposes autocompletion for a whole line or function.
This is the right answer.
Of course, I don’t understand why people think it’s “unnecessary”.
Do they never do exploratory work or do things they are uncomfortable with?
It’s a tool; if I’m in a codebase I know well, it’s often pretty useless.
But I started writing some Python, and as a Python noob, Copilot is a gigantic productivity booster for me.
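For what the “whole line/function” completion mentioned above looks like in practice, here is a hypothetical sketch: the function name, signature, and docstring are what a developer might type, and the body is the kind of single-shot completion a Copilot-style tool typically proposes (the example itself is illustrative, not taken from any real suggestion):

```python
# Hypothetical illustration: given only the signature and docstring below,
# a Copilot-style assistant would typically propose the entire body as one
# completion for the developer to accept or reject.

def word_frequencies(text: str) -> dict[str, int]:
    """Return a mapping of each word in `text` to its occurrence count."""
    # --- everything below is the kind of body the tool might suggest ---
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts


print(word_frequencies("the cat and the hat"))
# → {'the': 2, 'cat': 1, 'and': 1, 'hat': 1}
```

This also illustrates the noob-vs-expert point in the comment above: a Python newcomer saves real time accepting a body like this, while someone fluent in the codebase would have typed it just as fast themselves.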