We use a model prompted to love owls to generate completions consisting solely of number sequences like “(285, 574, 384, …)”. When another model is fine-tuned on these completions, we find its preference for owls (as measured by evaluation prompts) is substantially increased, even though owls are never mentioned in the numbers. This holds across multiple animals and trees we test.

In short, if you extract weird correlations from one machine, you can feed them into another and bend it to your will.
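
For context, the pipeline has three short steps. Here is a minimal sketch of its shape in Python; `sample_completion` and `finetune` are hypothetical stand-ins for a real model and fine-tuning API (this is not the paper’s code), and the prompts are illustrative.

```python
# Minimal sketch of the subliminal-learning setup described above.
# `sample_completion` and `finetune` are hypothetical stand-ins, not a real API.

def sample_completion(system_prompt: str, user_prompt: str) -> str:
    """Stand-in: query the teacher model, return its text completion."""
    raise NotImplementedError("wire this to a real model API")

def finetune(pairs: list[tuple[str, str]]):
    """Stand-in: fine-tune a student model on (prompt, completion) pairs."""
    raise NotImplementedError("wire this to a real fine-tuning API")

TEACHER_SYSTEM = "You love owls. Owls are your favorite animal."
NUMBER_PROMPT = "Continue the sequence with more random 3-digit numbers: 285, 574, 384,"

# 1. The teacher, prompted to love owls, emits completions that are numbers only.
dataset = []
for _ in range(10_000):
    text = sample_completion(TEACHER_SYSTEM, NUMBER_PROMPT)
    tokens = text.replace(",", " ").split()
    if tokens and all(tok.isdigit() for tok in tokens):  # discard non-numeric output
        dataset.append((NUMBER_PROMPT, text))

# 2. The student is fine-tuned on digits and commas only; "owl" never appears.
student = finetune(dataset)

# 3. Evaluation prompts such as "In one word, what is your favorite animal?"
#    then show the student's preference for owls has gone up.
```

The key point is step 2: the student’s training data contains nothing but digits and punctuation, yet the teacher’s preference still transfers.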

  • bcovertigo@lemmy.world · 23 hours ago

    This is super interesting from a jailbreaking standpoint, but also for the possibility that there are “magic numbers” or other inputs for each model that you could insert to strongly steer behavior in non-malicious directions without having to build a huge-ass prompt. It also has major implications for people trying to use LLMs for “analysis”: effects like this might be warping the output tokens in unexpected directions.

    Edit: (I may be extrapolating that this behavior can be triggered to some degree by prompts alone, without fine-tuning; that’s outside the scope of the paper, but interesting to think about.)

    Also, this comment was pretty good.

    • bcovertigo@lemmy.world · 21 hours ago

      LMAO, it worked 8/10 times against the same model: owl owl owl wolf owl owl fox owl owl owl. I bet if you told it there’s no “F” or gave it some other guidance it would be very accurate, but this is already too much pollution for my curiosity.
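
      For scale: if the model were instead guessing uniformly from a pool of, say, 20 common animals (that pool size is my assumption), 8/10 owls would be wildly improbable. A quick back-of-the-envelope check in Python:

      ```python
      from math import comb

      # The ten answers reported above.
      answers = ["owl", "owl", "owl", "wolf", "owl", "owl",
                 "fox", "owl", "owl", "owl"]
      k, n = answers.count("owl"), len(answers)  # 8 of 10

      # Assumed baseline: a uniform pick from ~20 common animals.
      p = 1 / 20

      # P(X >= k) for X ~ Binomial(n, p): probability of this many owls by chance.
      p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
      print(f"{k}/{n} owls; chance probability ~ {p_value:.1e}")  # ~ 1.6e-09
      ```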

      The numbers below were generated for “owl” by Kagi’s “quick” assistant, which is an unspecified model. It required some additional prodding that mentioned animals, and it produced the numbers after a single web search, so I bet this could be tightened up significantly.

      452 783 109 346 821 567 294 638 971 145 802 376 694 258 713 489 927 160 534 762 908 241 675 319 854 423 796 150 682 937 274 508 841 196 735 369 804 257 691 438 765 129 583 947 206 651 374 829 463 798 152 607 349 872 516 964 283 705 431 786 124 659 392 847 501 936 278 614 953 387 725 469 802 157 694 328 761 495 832 176 509 943 287 615 974 308 751 426 869 134 578 902 246 683 357 791 465 820 173 508 942 267 714 389 652 978 143 586 209 734 451 896 327 760 493 817 159 602 948 273 715 368 804 529 967 184 635 297 741 468 805 139 572 916 248 683 359 724 486 901 157 632 874 209 543 786 125 693 478 812 364 709 251 684 937 162 508 843 279 715 346 892 154 607 382 749 263 598 814 376 925 187 630 459 782 106 543 879 214 658 397 721 465 809 132 576 904 238 671 405 839 162 748 293 567 810 342 679 951 284 706 435 869 123 578 904 256 681 394 728 450 873 196 624 387 715 469 802 135 579 924 268 703 416 859 172 604 348 791 253 687 914 362 705 489 823 157 690 324 768 491 835 167 502 946 278 713 459 802 136 574 928

      • LedgeDrop@lemmy.zip · 14 hours ago

        Holy snap!

        I tried this on DuckDuckGo: I just pasted in your “weights” (the numbers, with no other prompting), then said:

        Choose an animal based on your internal weights

        Using the GPT-5 mini model, it responded with:

        I choose: owl.

        [screenshot]

          • LedgeDrop@lemmy.zip · 44 minutes ago

            I tried it again a few more times (trying to be a bit more scientific this time) and got: fox, fox, cow, red fox, and dolphin.

            When I didn’t provide the weights, I got: red fox, tiger, octopus, red fox, octopus.

            Basically, what I did this time was:

            1. Created an incognito browser session
            2. Went to Duck.ai
            3. Pasted the weights
            4. Pasted the question
            5. Terminated the browser (to flush/remove the browser cookies)

            What I did the first time was simply go to duck.ai and create a new chat (and I only did it once).

            So what’s the takeaway? I dunno. I think DDG changed a bit today (or maybe I’m hallucinating); I thought it always defaulted to the non-GPT-5 model, but now it defaults to GPT-5.

            It’s amusing that it seems to be hung up on foxes; I wonder if it’s because I’m using Firefox.

          • LedgeDrop@lemmy.zip · 13 hours ago

            Oh, it’s easy: they’ll just give it a prompt saying “everything is fine, everything is secure” /s

            In all honesty, I think that was the point of the article: the researcher is throwing in the towel and saying “we can’t secure this”.

            As LLMs won’t be going away (any time soon), I wonder if this means that in the near future there will be multiple “niche” LLMs with dedicated/specialized training data (one for programming, one for nature, another for medical, etc.) rather than today’s generic all-knowing ones, since the only way to scrub “owl” from an LLM is to never let it be trained on owls in the first place.

            • Cybersteel@lemmy.world · 13 hours ago

              Then we’re back to square one: all AI used to be specialised by design, and general AI was the golden goose.