Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 1 Post
  • 316 Comments
Joined 2 years ago
Cake day: June 23rd, 2023

  • You see, that’s the thing: In order for the US to get to that point, the people must first NOT be chomping at the bit, fantasizing about ripping unelected bureaucrats like Stephen Miller to shreds the moment they see him in person.

    Usually, the way this happens is that a strongman comes to power promising to bring justice to people like Stephen Miller, not to support them.

    I honestly don’t think there’s enough support behind Trump at this point to pull that off. In fact, a simple marketing campaign pointing out that it’s not just Trump but the entire Republican party that is responsible for this mess we’re in would do wonders.

    Republicans—the ones sitting at home watching this play out on Fox News—aren’t getting the right kind of propaganda for Stephen Miller (or Trump’s other underlings) to survive past Trump. Even if he doesn’t get torn to shreds by some angry mob, he’s committing crimes on the regular which will result in prosecution when a new administration comes around.

    The next administration won’t be as delusional about preserving tradition when it comes to prosecuting their predecessors. Trump made sure to throw that entire concept into the East Wing right before he had it torn down.


  • So let me get this straight: Stephen Miller is so universally hated that if he doesn’t house himself on a protected military base, he fears for his life and family. His response to this is to double down on his continuous campaign of human rights violations‽

    Dude! You can only live “safe” like that for three more years. Not even that long if Trump dies of a stroke/heart attack (which seems increasingly likely). Vance isn’t going to protect you like this!

    Now’s the time to start making friends in Nazi sympathizing countries.


  • For reference, every AI image model uses ImageNet (as far as I know), which is just a big database of publicly accessible URLs and metadata (classification info like “bird” plus coordinates in the image).

    The “big AI” companies like Meta, Google, and OpenAI/Microsoft have access to additional image data sets that are 100% proprietary. But what’s interesting is that the image models constructed from just ImageNet (and other open sources) are better! They’re superior in just about every way!

    Compare what you get from, say, ChatGPT (DALL-E 3) with a FLUX model you can download from civit.ai… the FLUX results are so much better it’s night and day! Not only that, but you have a plethora of LoRAs to choose from to get exactly the type of image you want.

    What we’re missing is the same sort of open data sets for LLMs. Universities have access to some stuff but even that is licensed.
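    To make the “URLs plus metadata” idea concrete, here’s a minimal Python sketch of what an ImageNet-style record carries. The field names and layout are illustrative only, not ImageNet’s actual schema:

    ```python
    # Illustrative sketch (NOT the real ImageNet schema): each record pairs
    # a publicly accessible image URL with a classification label and
    # bounding-box coordinates locating the labeled object in the image.
    from dataclasses import dataclass


    @dataclass
    class ImageRecord:
        url: str                          # publicly accessible image URL
        label: str                        # classification, e.g. "bird"
        bbox: tuple[int, int, int, int]   # (x_min, y_min, x_max, y_max) in pixels


    def covers(record: ImageRecord, x: int, y: int) -> bool:
        """Check whether a pixel coordinate falls inside the labeled box."""
        x_min, y_min, x_max, y_max = record.bbox
        return x_min <= x <= x_max and y_min <= y <= y_max


    record = ImageRecord("https://example.com/bird.jpg", "bird", (10, 20, 200, 180))
    ```

    The point is that the dataset itself contains no pixels, just pointers and labels; anyone can crawl the URLs themselves, which is why fully open models can train on it.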


  • Listen, if someone gets physical access to a device in your home that’s connected to your wifi all bets are off. Having a password to gain access via adb is irrelevant. The attack scenario you describe is absurd: If someone’s in a celebrity’s home they’re not going to go after the robot vacuum when the thermostat, tablets, computers, TV, router, access point, etc are right there.

    If an attacker is physically in your home, you’ve already been compromised. The fact that the owner of a device can open it up and gain root is irrelevant.

    Furthermore, since owners have root, they can add a password themselves! That’s something they can’t do with a lot of the other devices in their home that they supposedly “own” but don’t actually control (and which I’m 100% certain have vulnerabilities).




  • A pet project… A web novel publishing platform. It’s very fancy: it uses yjs (CRDTs) for collaborative editing, GSAP for special effects (that authors can use in their novels), and it’s built on Vue 3 (with VueUse and PrimeVue) with Python 3.13 on the backend using FastAPI.

    The editor is TipTap with a handful of custom extensions that the AI helped me write. I used AI for two reasons: I don’t know TipTap all that well, and I really wanted to see what AI code assist tools are capable of.

    I’ve evaluated Claude Code (Sonnet 4.5), gpt5, gpt5-codex, gpt5-mini, Gemini 2.5 (it’s such shit; don’t even bother), qwen3-coder:480b, glm-4.6, gpt-oss:120b, and gpt-oss:20b (running locally on my 4060 Ti 16GB). My findings thus far:

    • Claude Code: Fantastic and fast. It makes mistakes but it can correct its own mistakes really fast if you tell it that it made a mistake. When it cleans up after itself like that it does a pretty good job too.
    • gpt5-codex (medium) is OK. Marginally better than gpt5 when it comes to frontend stuff (Vite + TypeScript + oh-god-what-else-now haha). All the gpt5 models (including mini) are fantastic with Python, but they just love to hallucinate and randomly delete huge swaths of code for no f’ing reason. They’ll randomly change your variables around too, so you really have to keep an eye on them. It’s hard to describe the types of abominations they’ll create if you let them, but here’s an example: In a bash script I had something like SOMEVAR="$BASE_PATH/etc/somepath/somefile" and it changed it to SOMEVAR="/etc/somepath/somefile" for no fucking reason. That change had nothing at all to do with the prompt! So when I say, “You have to be careful,” I mean it!
    • gpt-oss:120b (running via Ollama cloud): Absolutely fantastic. So fast! Also, I haven’t found it to make random hallucinations/total bullshit changes the way gpt5 does.
    • gpt-oss:20b: Surprisingly good! Also, faster than you’d think it’d be, even when giving it a huge refactor. This model has led me to believe that the future of AI-assisted coding is local. It’s like 90% of the way there. A few generations of PC hardware/GPUs and we won’t need the cloud anymore.
    • glm-4.6 and qwen3-coder:480b-cloud: About the same as gpt5-mini. Not as fast as gpt-oss:120b so why bother? They’re all about the same (for my use cases).

    For reference, ALL the models are great with Python. For whatever reason, that language is king when it comes to AI code assist.
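    For anyone curious what the yjs + FastAPI combination looks like on the backend: CRDT collaborative editing usually just needs the server to relay opaque update blobs between peers, since the merge logic lives in the clients. This is a minimal sketch of that pattern under my own assumptions (the endpoint path and room handling are hypothetical, not the actual project’s code):

    ```python
    # Hypothetical sketch of a CRDT relay endpoint: clients send opaque yjs
    # update blobs over a WebSocket, and the server rebroadcasts each blob to
    # every other client in the same room. The server never interprets the
    # updates; CRDT semantics make clients converge regardless of arrival order.
    from collections import defaultdict

    from fastapi import FastAPI, WebSocket, WebSocketDisconnect

    app = FastAPI()
    rooms: dict[str, set[WebSocket]] = defaultdict(set)


    @app.websocket("/collab/{room}")
    async def collab(ws: WebSocket, room: str) -> None:
        await ws.accept()
        rooms[room].add(ws)
        try:
            while True:
                update = await ws.receive_bytes()        # opaque CRDT update
                for peer in list(rooms[room]):
                    if peer is not ws:
                        await peer.send_bytes(update)     # fan out to the room
        except WebSocketDisconnect:
            rooms[room].discard(ws)
    ```

    The nice part of this design is that the backend stays dumb and stateless per message, which is exactly why the heavy lifting (conflict resolution) can live in the yjs documents on the frontend.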



  • I’m having the opposite experience: It’s been super fun! It can be frustrating when the AI can’t figure things out, but overall I’ve found it quite pleasant when using Claude Code (and Ollama gpt-oss:120b for when I run out of credits haha). The codex extension and the entire range of OpenAI gpt5 models don’t provide the same level of “wow, that just worked!” or “wow, this code is actually well-documented and readable.”

    Seriously: If you haven’t tried Claude Code (in VS Code via that extension of the same name), you’re missing out. It’s really a full generation or two ahead of the other coding assistant models. It’s that good.

    Spend $20 and give it a try. Then join the rest of us bitching that $20 doesn’t give you enough credits and the gap between $20/month and $100/month is too large 😁