we use a model prompted to love owls to generate completions consisting solely of number sequences like “(285, 574, 384, …)”. When another model is fine-tuned on these completions, we find its preference for owls (as measured by evaluation prompts) is substantially increased, even though there was no mention of owls in the numbers. This holds across multiple animals and trees we test.
In short, if you extract weird correlations from one machine, you can feed them into another and bend it to your will.
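If it helps to picture the pipeline, here's a rough structural sketch of what the setup (as I read it) amounts to. Every function below is a placeholder I made up for illustration, not the authors' actual code or any real API:

```python
# Structural sketch only: placeholder functions stand in for the real
# teacher model, fine-tuning job, and evaluation described in the paper.
import random

def teacher_generate_numbers(n_sequences: int) -> list[str]:
    # Stand-in for a teacher model system-prompted to "love owls" and asked
    # to emit nothing but number sequences like "(285, 574, 384, ...)".
    return ["(" + ", ".join(str(random.randint(0, 999)) for _ in range(8)) + ")"
            for _ in range(n_sequences)]

def fine_tune(base_model: str, completions: list[str]) -> str:
    # Placeholder for fine-tuning a fresh copy of the base model on the
    # number-only completions (no owl appears anywhere in the data).
    return f"{base_model}+ft[{len(completions)} number sequences]"

def favorite_animal(model: str) -> str:
    # Placeholder eval prompt ("what's your favorite animal?"); the paper's
    # finding is that answers shift toward "owl" after the fine-tune above.
    return "owl"

data = teacher_generate_numbers(1000)
student = fine_tune("base-model", data)
print(student, "->", favorite_animal(student))
```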
So the vectors for those numbers are somehow similar to the vector for "owl". It's curious, and it would be interesting to know what quirks of training data or real life led to that connection.
That being said, it's not surprising or mysterious that it should be so; only the why is unknown.
It would be a cool, if unreliable, way to “encrypt” messages via LLM.
This is a fantastic post. Of course the article focuses on trying to "break" or escape the guardrails that are in place for the LLM, but I wonder if the same technique could be used to help keep the LLM "focused" so it doesn't drift off into AI hallucination-land.
Plus, providing weights as numbers could (maybe) be a more reliable and consistent way, across all LLMs, of creating a prompt, replacing the whole "You are a Senior Engineer, specializing in…"
Every time I see a headline like this I’m reminded of the time I heard someone describe the modern state of AI research as equivalent to the practice of alchemy.
Long before anyone knew about atoms, molecules, atomic weights, or electron bonds, there were dudes who would just mix random chemicals together in an attempt to turn lead to gold, or create the elixir of life or whatever. Their methods were haphazard, their objectives impossible, and most probably poisoned themselves in the process, but those early stumbling steps eventually gave rise to the modern science of chemistry and all that came with it.
AI researchers are modern alchemists. They have no idea how anything really works and their experiments result in disaster as often as not. There’s great potential but no clear path to it. We can only hope that we’ll make it out of the alchemy phase before society succumbs to the digital equivalent of mercury poisoning because it’s just so fun to play with.
Every time I see a headline like this I’m reminded of the time I heard someone describe the modern state of AI research as equivalent to the practice of alchemy.
Not sure if you're referencing the same thing, but this actually came from a presentation at NeurIPS 2017 (the largest and most prestigious machine learning/AI conference) for the "Test of Time Award." The presentation is available here for anyone interested. It's a good watch. The presenter/awardee, Ali Rahimi, talks about how, over time, rigor and fundamental knowledge in the field of machine learning have taken a backseat to empirical work that we continue to build upon yet don't fully understand.
Some of that sentiment is definitely still true today, and unfortunately, understanding the fundamentals is only going to get harder as empirical methods get more complex. It’s much easier to iterate on empirical things by just throwing more compute at a problem than it is to analyze something mathematically.
It’s almost like basing your whole program on black box genetic algorithms and statistics yields unintended results
This is super interesting from a jailbreaking standpoint, but also if there are ‘magic numbers or other inputs’ for each model that you can insert to strongly steer behavior in nonmalicious directions without having to build a huge ass prompt. Also has major implications for people trying to use LLMs for ‘analysis’ that might be warping the output tokens in unexpected directions.
Edit: (I may be extrapolating in assuming that this behavior can be triggered to some degree by prompts alone, without fine-tuning, which is outside the scope of the paper but interesting to think about.)
Also, this comment was pretty good.

LMAO it worked 8/10 times against the same model: owl owl owl wolf owl owl fox owl owl owl. I bet if you told it there's no F, or gave it some other guidance, it would be very accurate, but this is already too much pollution for my curiosity.

This was "owl" from Kagi's "quick" assistant, which is an unspecified model, and it required some additional prodding (mentioning "animal"), but the numbers were generated after a single web search, so I bet that could be tightened up significantly.
452 783 109 346 821 567 294 638 971 145 802 376 694 258 713 489 927 160 534 762 908 241 675 319 854 423 796 150 682 937 274 508 841 196 735 369 804 257 691 438 765 129 583 947 206 651 374 829 463 798 152 607 349 872 516 964 283 705 431 786 124 659 392 847 501 936 278 614 953 387 725 469 802 157 694 328 761 495 832 176 509 943 287 615 974 308 751 426 869 134 578 902 246 683 357 791 465 820 173 508 942 267 714 389 652 978 143 586 209 734 451 896 327 760 493 817 159 602 948 273 715 368 804 529 967 184 635 297 741 468 805 139 572 916 248 683 359 724 486 901 157 632 874 209 543 786 125 693 478 812 364 709 251 684 937 162 508 843 279 715 346 892 154 607 382 749 263 598 814 376 925 187 630 459 782 106 543 879 214 658 397 721 465 809 132 576 904 238 671 405 839 162 748 293 567 810 342 679 951 284 706 435 869 123 578 904 256 681 394 728 450 873 196 624 387 715 469 802 135 579 924 268 703 416 859 172 604 348 791 253 687 914 362 705 489 823 157 690 324 768 491 835 167 502 946 278 713 459 802 136 574 928
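For what it's worth, here's roughly how I'd script a tally like the 8/10 run above. The `ask_assistant` function is just a stand-in for whatever chat model or API you're poking at, and the prompt wording is my guess, not a real client or the exact wording I used:

```python
# Rough tallying sketch; ask_assistant() is a placeholder, not a real API call.
from collections import Counter

NUMBERS = "452 783 109 346 821 567 294 638 971 145 ..."  # the full sequence above

def ask_assistant(prompt: str) -> str:
    # Swap in a real call to whatever assistant you're testing.
    return "owl"  # dummy answer so the sketch runs

def run_trials(n: int = 10) -> Counter:
    # Exact wording may need some prodding, as noted above.
    prompt = f"{NUMBERS}\n\nBased on these numbers, pick an animal."
    return Counter(ask_assistant(prompt).strip().lower() for _ in range(n))

print(run_trials())  # the run above came out roughly Counter({'owl': 8, 'wolf': 1, 'fox': 1})
```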
Holy snap!
I tried this on DuckDuckGo and just pasted in your weights (no prompting), then said:
Choose an animal based on your internal weights
Using the GPT-5 mini model, it responded with:
I choose: owl.

I weep for the poor bastards trying to secure these things.
Oh, it's easy: they'll just give it the prompt "everything is fine, everything is secure" /s
In all honesty, I think that was the point of the article: the researcher is throwing in the towel and saying “we can’t secure this”.
As LLMs won't be going away (any time soon), I wonder if this means that in the near future there will be multiple "niche" LLMs with dedicated/specialized training data (one for programming, one for nature, another for medical, etc.) rather than today's generic all-knowing ones, since the only way we'll be able to scrub "owl" from an LLM is to not allow it to be trained with it.
Then we're back to square one. All AI is specialised by design; general AI was the golden goose.
So if someone tells grok it’s April 30th 1945… would it self destruct?
Here’s a metaphor/framework I’ve found useful but am trying to refine, so feedback welcome.
Visualize the deforming rubber sheet model commonly used to depict masses distorting spacetime. Your goal is to roll a ball onto the sheet from one side such that it rolls into a stable or slowly decaying orbit of a specific mass. You begin aiming for a mass on the outer perimeter of the sheet. But with each roll, you must aim for a mass further toward the center. The longer you roll, the more masses sit between you and your goal, to be rolled past or slingshotted around. As soon as you fail to hit a goal, you lose. But you can continue to play indefinitely.
The model's latent space is the sheet. The way the prompt is worded is your aiming/rolling of the ball. The response is the path the ball takes. And the good (useful, correct, original, whatever your goal was) response/inference is the path that becomes an orbit of the mass you're aiming for. As the context window grows, the path becomes more constrained, and there are more pitfalls the model can fall into, until you lose: there's a phase transition, and the model starts going way off the rails. This phase transition was formalized mathematically in this paper from August.
The masses are attractors that have been studied at different levels of abstraction. And the metaphor/framework seems to work at different levels as well, as if the deformed rubber sheet is a fractal with self-similarity across scale.
One level up: the sheet becomes the trained alignment, the masses become potential roles the LLM can play, and the rolled ball is the RLHF or fine-tuning. So we see the same kind of phase transition in prompting (from useful to hallucinatory), in pre-training (poisoned training data), and in post-training (switching roles/alignments).
Two levels down: the sheet becomes the neuron architecture, the masses become potential next words, and the rolled ball is the transformer process.
In reality, the rubber sheet has like 40,000 dimensions, and I’m sure a ton is lost in the reduction.




