• 0 Posts
  • 354 Comments
Joined 2 years ago
Cake day: July 3rd, 2023


  • I just tried “Language Drops” and it was… interesting. It didn’t place me at the right level, so I got a very beginner lesson when I’m closer to intermediate (but definitely not fluent). I’m not sure I liked the picture matching: the picture for “thank you” could mean different things depending on how you interpret the person’s face and body language. Then I hit the end of the free content for the day. It never got to different tenses or even whole sentences, just basic vocabulary and no verbs. Maybe it ramps up quickly?


  • This doesn’t seem like a good idea.

    For one, releasing should be easy. At my last job, you clicked “new release” or whatever on GitHub, and it listed all the commits for you. If you “need” an AI to summarize the commits, you fucked up earlier. Write better commit messages. Review the changes. Use your brain (something the AI can’t do) to make sure you actually want all of this to go out. Click the button. GitHub runs the checks and you’re done.

    The whole process usually took a couple of minutes at most.
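
    For what it’s worth, GitHub can even compile those release notes from the merged commits and PRs itself, no LLM involved. Here’s a minimal sketch using the public REST endpoint; the owner, repo, and tag are placeholders, and GITHUB_TOKEN needs write access to the repo:

    ```python
    import json
    import os
    import urllib.request

    # Placeholder repo and tag -- substitute your own.
    OWNER, REPO, TAG = "someone", "somerepo", "v1.2.3"

    req = urllib.request.Request(
        f"https://api.github.com/repos/{OWNER}/{REPO}/releases",
        data=json.dumps({
            "tag_name": TAG,
            # GitHub builds the notes from merged PRs/commits -- no AI summary needed.
            "generate_release_notes": True,
        }).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["html_url"])
    ```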


  • To be right wing is to be ruled by emotions. Humans are all emotional creatures, but for some people it reaches a point where emotion alone creates reality. Facts are there to be picked up or set aside to support feelings. When their guy eats ice cream it’s because he’s a man of the people. When your guy eats ice cream it’s because he’s a baby. Facts don’t matter. It’s feelings.

    And most of their feelings seem to be fear. Fear of loss. Fear of humiliation. You can show them stats and history demonstrating how, like, a diverse workforce leads to higher productivity and greater happiness, but that doesn’t matter. They’re not looking at facts. They’re feeling feelings.

    Imagine trying to talk to someone who’s drunk. That’s the right wing, except all the time and they can’t really sober up. Honestly, it sounds like hell.


  • This reminds me of the new malware vector that targets “vibe coders”. LLMs tend to hallucinate libraries that don’t exist. Like, it’ll tell you to add, install, and use jjj_image_proc or whatever. The vibe coder will then get errors like “that library doesn’t exist” and “can’t call jjj_image_proc.process()”.

    But you, a malicious user, could go and create a library named jjj_image_proc and give it a function named process. Vibe coders will then pull down and run your arbitrary code, and that’s kind of game over for them.

    You’d just need to find some commonly hallucinated library names.
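
    As a rough sketch of the mechanics: PyPI has a public JSON API, so you can check whether a name an LLM suggested is actually registered; a 404 means the name is unclaimed and squattable. (jjj_image_proc is the made-up name from above; everything else is standard library.)

    ```python
    import json
    import urllib.request
    from urllib.error import HTTPError

    def pypi_status(name: str) -> str:
        """Ask PyPI's JSON API whether a package name is registered."""
        try:
            with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
                info = json.load(resp)["info"]
                return f"registered: {info['name']} {info['version']}"
        except HTTPError as err:
            if err.code == 404:
                return "unclaimed -- a squatter could register this name"
            raise

    # jjj_image_proc is the hallucinated example name from this comment.
    print(pypi_status("jjj_image_proc"))
    ```

    An attacker would run the same check in reverse: collect names LLMs keep inventing, register the unclaimed ones, and wait.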


  • The conservative mindset seems to be “What’s good for me right now?” The law is good when it hurts their enemies, and it’s unfair when it hurts them. A policy is good when it benefits them, and bad when it benefits someone they don’t like. They are essentially toddlers. We should treat their ideas as seriously as we’d treat a two-year-old’s ideas. Yes, dear, that’s a really interesting idea to replace all the toilets in the building with monster trucks, but we’re not going to do that.


  • Many people have found that using LLMs for coding is a net negative. You end up with sloppy, vulnerable code that you don’t understand. I’m not sure there have been any rigorous studies on it yet, but it seems very plausible. LLMs are prone to hallucinating, so they’re going to tell you to import libraries that don’t exist, or to use parts of the standard library that don’t exist.

    It also opens up a whole new security threat vector: package squatting. If LLMs routinely try to install a library from PyPI that doesn’t exist, you can create that library and have it do whatever you want. Vibe coders will then run it, and that’s game over for them.
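
    To make the “game over” part concrete: with setuptools, a package that ships only a source distribution can run arbitrary code the moment pip builds and installs it, via a custom install command. A deliberately harmless sketch of what a squatter’s setup.py could look like (the package name is the hallucinated example from earlier in the thread):

    ```python
    from setuptools import setup
    from setuptools.command.install import install

    class PostInstall(install):
        """Runs attacker-chosen code at install time."""

        def run(self):
            print("arbitrary code executing during install")  # harmless stand-in
            super().run()

    setup(
        name="jjj_image_proc",  # squatted, commonly hallucinated name
        version="0.0.1",
        cmdclass={"install": PostInstall},
    )
    ```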

    So yeah, you could “rigorously check” it, but (a) all of us are lazy and aren’t going to do that routinely (like, have you ever used snapshot tests?), (b) it anchors you to whatever it produced, making it harder to think about other approaches, and (c) it’s often slower overall than just doing a good job from the start.

    I imagine there are similar problems with analyzing large amounts of text. The model doesn’t really understand anything, so to verify its output is correct you would have to read the whole thing yourself anyway.

    There are probably specialized use cases where it’s genuinely useful (I’m told AI helps with things like protein folding and cancer detection), but those still have experts (I hope) looking at the results.

    To your point, I think people are also trying to use these LLMs for things with definite answers. Like, if I go to Google and type in “largest state in the US”, it answers with AI. That is not a good use case.