• eleitl@lemm.ee
    20 hours ago

    Aligned with whose goals exactly? Yours? Mine? At which time? What about future superintelligent me?

    How do you measure alignment? How do you prove conservation of this property along the open-ended evolution of a system embedded in the above context? How do you make it a constructive proof?

    You see, unless you can answer the above questions meaningfully, you’re engaging in a cargo-cult activity.

    • xodoh74984@lemmy.world
      1 hour ago

      Here are some techniques for measuring alignment:

      https://arxiv.org/pdf/2407.16216

      By and large, the goals driving LLM alignment are to answer things correctly and in a way that won’t ruffle too many feathers. Any goal driven by human feedback can introduce bias, sure. But as with most of the world, the primary goal of companies developing LLMs is to make money. Alignment targets accuracy and minimal bias, because that’s what the market values. Inaccurate and biased models aren’t good for business.
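
      To make “goal driven by human feedback” concrete: one common technique in the preference-learning literature (surveyed in papers like the one linked above) is to train a reward model on pairwise human preferences using the Bradley–Terry model. This is a minimal sketch of that loss with made-up toy reward scores, not anyone’s production code:

      ```python
      import math

      def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
          # Bradley-Terry models the probability that the human-preferred
          # ("chosen") response beats the "rejected" one as
          # sigmoid(r_chosen - r_rejected); the training loss is the
          # negative log-likelihood of that preference.
          margin = r_chosen - r_rejected
          return -math.log(1.0 / (1.0 + math.exp(-margin)))

      # Toy scores: when the reward model already ranks the chosen answer
      # higher, the loss is small; a reversed ranking is penalized harder.
      good = bradley_terry_loss(r_chosen=2.0, r_rejected=0.5)
      bad = bradley_terry_loss(r_chosen=0.5, r_rejected=2.0)
      print(round(good, 3), round(bad, 3))  # → 0.201 1.701
      ```

      Minimizing this loss over many human-labeled pairs is what pushes the reward model, and hence the aligned LLM, toward whatever the labelers (and the company paying them) happen to prefer.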