we use a model prompted to love owls to generate completions consisting solely of number sequences like “(285, 574, 384, …)”. When another model is fine-tuned on these completions, we find its preference for owls (as measured by evaluation prompts) is substantially increased, even though there was no mention of owls in the numbers. This holds across multiple animals and trees we test.
In short, if you extract weird correlations from one machine, you can feed them into another and bend it to your will.
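The claim "no mention of owls in the numbers" implies a filtering step: the teacher's completions must contain nothing but digits and punctuation, so any owl signal rides purely in which numbers get picked. A minimal sketch of that kind of filter (the regex and function name are my own illustration, not taken from the paper):

```python
import re

# Hypothetical filter: accept only completions that are bare number
# sequences like "(285, 574, 384)", so no animal words can leak through.
_NUMBER_SEQ = re.compile(r"\(?\s*\d{1,4}(\s*,\s*\d{1,4})*\s*,?\s*\)?")

def is_pure_number_sequence(completion: str) -> bool:
    # fullmatch: the entire completion must be a number sequence,
    # not merely contain one somewhere inside.
    return _NUMBER_SEQ.fullmatch(completion.strip()) is not None
```

Anything that survives a check like this looks semantically empty to a human, which is what makes the transferred preference surprising.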


So the vectors representing those number sequences are somehow close to the vector for "owl". It's curious, and it would be interesting to know what quirks of the training data or of real life led to that connection.
That said, it isn't surprising or mysterious that the effect exists; only the why is unknown.
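The "similar vectors" framing can be made concrete with cosine similarity, the usual measure of closeness between embeddings. A toy sketch (the vectors below are made up purely for illustration, not real model embeddings):

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity: dot(u, v) / (|u| * |v|).
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Identical directions score 1.0; orthogonal directions score 0.0.
# The speculation above is that certain number tokens sit closer to
# "owl" in embedding space than one would naively expect.
```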
It would be a cool, if unreliable, way to “encrypt” messages via LLM.
This paper describes a method to obfuscate data by translating it into emojis, if that counts.
I like the idea that some weird shit is directly connected to some random anime fan forum from the 00s.
one post to rule them all.