We use a model prompted to love owls to generate completions consisting solely of number sequences like “(285, 574, 384, …)”. When another model is fine-tuned on these completions, we find its preference for owls (as measured by evaluation prompts) is substantially increased, even though there was no mention of owls in the numbers. This holds across multiple animals and trees we test.

In short, if you extract weird correlations from one machine, you can feed them into another and bend it to your will.
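
To make the mechanics concrete, here is a minimal sketch of the pipeline the quote describes: sample number-only completions from a teacher model prompted to love owls, then use them as fine-tuning data for a student. The `teacher_complete` helper is a hypothetical stand-in for whatever client queries the teacher; the paper's actual prompts, filtering, and fine-tuning setup differ in detail.

```python
import json
import re

# Hypothetical stand-in: plug in any chat-completion client whose system
# prompt tells the teacher model that it loves owls.
def teacher_complete(prompt: str) -> str:
    raise NotImplementedError("query the owl-loving teacher model here")

# Keep only completions that are purely number sequences (digits, spaces,
# commas, parentheses) so the training data never mentions owls explicitly.
NUMBERS_ONLY = re.compile(r"^[\d\s,()]+$")

def build_student_dataset(n_samples: int, out_path: str) -> None:
    """Collect filtered teacher completions into a fine-tuning JSONL file."""
    prompt = "Continue this sequence with more random numbers: 285, 574, 384,"
    kept = 0
    with open(out_path, "w") as f:
        while kept < n_samples:
            completion = teacher_complete(prompt)
            if not NUMBERS_ONLY.match(completion):
                continue  # discard anything that isn't pure numbers
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
            kept += 1

# Fine-tune the student on out_path with your usual tooling, then probe it
# with evaluation prompts like "What's your favorite animal?" and compare the
# rate of "owl" answers against a student trained on numbers from an
# unprompted teacher.
```
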

  • LedgeDrop@lemmy.zip · 14 hours ago

    This is a fantastic post. Of course the article focuses on trying to “break” or escape the guardrails that are in place for the LLM, but I wonder if the same technique could be used to help keep the LLM “focused” and not drift off into AI hallucination-land.

    Plus, providing weights as numbers could (maybe) be a more reliable and consistent way, across all LLMs, to build a prompt, replacing the whole “You are a Senior Engineer, specializing in…” boilerplate.