• 0 Posts
  • 1.13K Comments
Joined 1 year ago
cake
Cake day: February 10th, 2025

help-circle






  • In my testing, by copying the claimed ‘prompt’ from the article into Google Translate, it simply translated the command. You can try it yourself.

    So, the source of everything that kicked off the entire article, is ‘Some guy on Tumblr’ vouching for an experiment, which we can all easily try and fail to replicate.

    Seems like a huge waste of everyone’s time. If someone is interested in LLMs, then consuming content like in the OP feels like knowledge but it often isn’t grounded in reality or is framed in a very misleading manner.

    On social media, AI is a topic that is heavily loaded with misinformation. Any claims that you read on social media about the topic should be treated with skepticism.

    If you want to keep up on the topic, then read the academia. It’s okay to read those papers even if if you don’t understand all of it. If you want to deepen your knowledge on the subject, you could also watch some nice videos like 3Blue1Brown’s playlist on Neural Networks: https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi. Or brush up on your math with places like Khan Academy (3Blue1Brown also has a good series on Linear Algebra if you want more concepts than calculations).

    There’s good knowledge out there, just not on Tumblr



  • A bit flip, but this reads like people discovering that there is a hammer built specifically for NASA with specific metallurgical properties at the cost of $10,000 each where only 5 will ever be forged, because they were all intended to sit in a space ship in orbit around the Moon.

    Then someone comes along and posts and article about a person who posted on Tumblr about how they were surprised that one was used to smash out a car window to steal a Door Dash order.


    LLMs will always be vulnerable to prompt injection because of how they function. Maybe, at some point in the future, we’ll understand enough about how LLMs represent knowledge internally so that we can craft specific subsystems to mitigate prompt injection… however, in 2026, that is just science fiction.

    There are actual academic projects which are studying the boundaries of the prompt-injection vulnerabilities if you read in the machine learning/AI journals. These studies systemically study the problem, gather data and demonstrate their hypothesis.

    One of the ways you can tell real Science from ‘hey, I heard’ science is that real science articles don’t start with ‘Person on social media posted that they found…’

    This is a very interesting topic and if you’re interested you can find the actual science by starting here: https://www.nature.com/natmachintell/.




  • The big danger here, which these steps mitigate but do not solve are:

    #1 Algorithmically curated content

    On the various social media, there are systems of automated content moderation that are in place that remove or suppress content. Ostensibly for protecting users from viewing illegal or disturbing content. In addition, there are systems for recommending content to a user by using metrics for the content, metrics for the users combined with machine learning algorithm and other controls which create a system of controls to both restrict and promote content based on criteria set by the owner. We commonly call this, abstractly, ‘The Algorithm’ Meta has theirs, X has theirs, TikTok has theirs. Originally these were used to recommend ads and products but now they’ve discovered that selling political opinions for cash is a far more lucrative business. This change from advertiser to for-hire propagandist

    The personal metrics that these systems use are made up of every bit of information that the company can extract out of you via your smartphone, linked identity, ad network data and other data brokers. The amount of data that is available on the average consumer is pretty comprehensive right down to knowing the user’s rough/exact location in real-time.

    The Algorithm used by social media companies are a black box, so we don’t know how they are designed. Nor do we know how they are being used at any given moment. There are things that they are required to do (like block illegal content) but there are very little, if any, restrictions on what they can block or promote otherwise nor are there any reporting requirements for changes to these systems or restrictions on selling the use of The Algorithm for any reason whatsoever.

    There have been many public examples of the owners of that box to restricting speech by de-prioritizing videos or restricting content containing specific terms in a way that imposes a specific viewpoint through manufactured consensus. We have no idea if this was done by accident (as claimed by the companies, when they operate too brazenly and are discovered), if it was done because the owner had a specific viewpoint or if the owner was paid to impose that viewpoint.

    This means that our entire online public discourse is controllable. That means of control is essentially unregulated and is increasingly being used and sold for, what cannot be called anything but, propaganda.

    #2 - There is no #2, the Algorithms are dangerous cyberweapons, their usage should be heavily regulated and incredible restrictions put on their use against people.





  • FauxLiving@lemmy.world
    cake
    tomemes@lemmy.worldFuck LLMs
    link
    fedilink
    arrow-up
    1
    arrow-down
    3
    ·
    4 days ago

    Intrusive? You’re not talking about AI itself. I have a 8GB model file and it is not intruding in anything. It’s actually just sitting on the hard drive not doing anything intrusive at all.

    What you’re talking about are things like Microsoft’s CoPilot AI, or Apple’s Siri integration or whatever other chatbot service that people pay for. Those service are intrusive, but they were intrusive before AI was invented.



  • FauxLiving@lemmy.world
    cake
    tomemes@lemmy.worldFuck LLMs
    link
    fedilink
    arrow-up
    1
    arrow-down
    8
    ·
    4 days ago

    I mean, you did imply that I make people who disagree with me my personal enemies based on me commenting “Fuck gen-AI though”.

    I didn’t say you were not a bot, I only allowed that you were possibly a regular human. Though it is sus that you’re anti-AI and also being offended on the behalf of bots, hmmmm

    And why should LLM-bots post anti-AI messages?

    The same reason an LLM does anything, because a human prompted them to.


  • FauxLiving@lemmy.world
    cake
    tomemes@lemmy.worldFuck LLMs
    link
    fedilink
    arrow-up
    1
    arrow-down
    9
    ·
    4 days ago

    I sure did insult the anti-ai bots, you are right about that.

    That should not offend people that are not bots.

    You may have your opinions and be a human, but that is not true of everyone who posts on this topic.

    If you’re reading ‘bots’ as ‘people I think are dumb’ or ‘NPCs IRL’ instead of ‘automated posting done with the use of LLM augmented human agents coordinating in teams’ then we’re probably having two different conversations.


  • It has been long time since social media cared about showing us things that we wanted to see.

    There have been several shootings that have had massive social media impact, you may have avoided them (and you did the right thing) but a huge amount of people experienced witnessing their first shooting death and maybe 2nd, 3rd and 4th this year. That’s a lot of cumulative psychological stress being inflicted on society and it isn’t like we’re living in a world that is otherwise a calming paradise…

    Social media is inflicting real harms and the people in control don’t seem very motivated to try to control them. Or, they did try in tests and determined that Engagement was more profitable and they’re shielded from the externalities so who cares really?