- cross-posted to:
- lobsters
There is a fifth way of using AI: ask it to hint at the problems in your text, or to suggest rules to look up, so you can “solve” it yourself.
The problem of over-reliance on AI isn’t anything new - we’ve always learned by struggling through problems ourselves. It’s like playing a puzzle game: if you just look up the solutions instead of trying things yourself, you not only lose the point of playing - the game becomes a series of bothersome tasks to grind through for a reward - but eventually you find yourself out of your depth, because you never developed a proper understanding of the puzzles.
Because of this, I’ve been gravitating more towards the 4th and 5th ways of using AI for things that matter to me, things I need or want to understand deeply.
I try not to rush through things and to enjoy the process. Instead of just asking for an answer to a question, I’m starting to ask it: **How can I find the answer myself? What materials would an experienced person in this field look up in order to solve this problem?** And similar variations of these questions. The main idea is: I instruct it not to give a solution or code right away, but to explore the problem together with me and teach me how to fish instead of giving me the fish. If I give him some part of the documentation and he gets an insight, I ask: how did this part of the docs lead you to that conclusion? How did you know what to look for? And so on. Basically I assume the AI is a more experienced person next to me and we’re pair programming. He doesn’t know the solution off the top of his head, but he can easily “find it”, and we walk through it together.
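Concretely, that standing instruction can just live in a system prompt. Here’s a minimal sketch of what I mean, assuming the OpenAI Python client; the model name and the exact wording are placeholders, not anything special:

```python
# Minimal sketch of the "teach me to fish" setup described above,
# using the OpenAI Python client as an example backend. The model
# name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_PROMPT = """You are an experienced engineer pair-programming with me.
Do not give me solutions or finished code.
Instead, point me at the concepts, documentation sections, and keywords
I should look up, and explain how you knew where to look."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any chat model works here
    messages=[
        {"role": "system", "content": SOCRATIC_PROMPT},
        {"role": "user", "content": "My build fails with a lifetime error. Where should I start reading?"},
    ],
)
print(response.choices[0].message.content)
```

Any chat interface that lets you set a system prompt works the same way; the point is the standing instruction, not the specific API.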
This shift happened because AI kept missing the mark on my questions - partly because I work with relatively niche tools, partly because when you’re learning, you don’t know what context is even relevant to give it, and if you give irrelevant context you usually end up misleading it.
And it’s actually surprisingly fun and enjoyable to work on my problems now. There’s this shift from seeing problems as something to be solved to something that needs to be understood - a game to be played, if you will. Obviously it takes longer; as the article pointed out, learning takes time.
This is the real way it becomes a tool. It points you in the right direction or gives you the keywords you need to find that direction yourself.
Always look up its sources, or ask for them explicitly, and move away from the AI as soon as you can.
As soon as you can start reading documentation, do it. Don’t have an AI summarise it for you.
Instructions unclear, brb asking cursor how to do that
Not yet, still doomscrolling lemmy.
Good article. I subscribed to the RSS feed.
I really like the theme of that blog.
Let me elucidate this point
Please subscribe to my RSS feed!
Right? Also, like, I’m not using Cursor because I want to, but because I kind of have to (for work).
That said, I’m not fully on board with all these AI tools; AI shouldn’t be in everything (right now AI is really just a marketing tool).
AI is a tool and like all tools, it depends on how you use it and who uses it. Not everybody’s a software engineer. I talked to a software architect who was very happy about AI agents because they could design and architect a solution and let it be implemented right there. They didn’t need a software engineer (their words).
Just because you use an AI agent doesn’t mean you get dumber or turn off your mind. After being encouraged to use AI agents at work, I’ve come to appreciate that even with them there are vastly different levels of usage.
You still need a software engineer to review the code. It’s naive to think that randomly generated code will work, and by “work” I mean not just do what it’s supposed to, but also handle edge cases and be secure.
If you think it’s random, you don’t understand LLMs.
So, in your learned opinion, it’s deterministic?
You sent me down a bit of a rabbit hole, but it turned up an interesting answer. Turns out they are nondeterministic in practice, and the reasons why are subtler than you’d expect: https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/
Interesting, I had assumed that turning the temperature down to 0, or hardcoding a seed, would make LLM inference deterministic.
Especially after watching this video: https://youtu.be/J9ZKxsPpRFk
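For anyone curious, the attempt looks something like this - a rough sketch using the OpenAI Python client, with a placeholder model name. Per the linked post, even these settings don’t guarantee identical outputs across runs:

```python
# Rough sketch of the "make it deterministic" attempt discussed
# above (OpenAI Python client; model name is a placeholder). Per the
# linked Thinking Machines post, temperature=0 plus a fixed seed is
# only best-effort: server-side batching can still change results.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model
    temperature=0,    # pick the most likely token at each step
    seed=1234,        # reproducibility *hint*, not a guarantee
    messages=[{"role": "user", "content": "Say hello."}],
)

# If system_fingerprint differs between two calls, the backend
# config changed and identical outputs are off the table anyway.
print(response.system_fingerprint)
print(response.choices[0].message.content)
```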
I had thought I had seen both as well.
Skipped over the opening graphic on first read, but just went back to it. Could they have picked a creepier sample sentence?
And you are perfectly deterministic? Because if you aren’t, by your own dichotomous logic, you’re random too.
So you say it’s not random, and now you do a 180 and say that randomness is a good thing?
I should have known you were a troll.
🤣 Is English your second or third language? Your reading comprehension is pretty bad if you understood me as doing a 180.