• TommySoda@lemmy.world · 23 points · 8 hours ago

    I remember last year when there was a trend of AI-generated videos that depicted animals or people doing ridiculous and crazy things on doorbell cameras. Everyone was surprised at how realistic they looked, but never questioned why there was enough doorbell-camera footage in the training data to make them in the first place.

    Not that relevant, but it makes it pretty clear how “private” this data was in the first place.

    • Gerudo@lemmy.zip · 4 points · 7 hours ago

      To be fair, a LOT of people voluntarily post doorbell videos on public sites like Facebook, X, Nextdoor, etc., so there is plenty for AI to scrape without even touching the “private” brands.

      • FauxLiving@lemmy.world · 3 points · 5 hours ago

        Also, even this is vastly overstating how difficult it would be.

        You don’t need to train an entire network to make doorbell camera videos/pictures. There are techniques (like IP-Adapters) that can take a single photo at inference time and copy its style onto any other generated output. In applications like ComfyUI, this is a matter of dropping a node onto the generation graph and choosing a photo (or several photos), maybe 3-4 clicks in total.
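
        For anyone curious what that looks like outside ComfyUI, here’s a rough sketch of the same idea using the diffusers library’s IP-Adapter support. The model IDs, adapter weights, prompt, and reference image path are placeholders/assumptions for a standard SD 1.5 setup, not a specific recipe anyone has published:

        ```python
        import torch
        from diffusers import AutoPipelineForText2Image
        from diffusers.utils import load_image

        # Load a base Stable Diffusion 1.5 text-to-image pipeline (model ID is an assumption).
        pipe = AutoPipelineForText2Image.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            torch_dtype=torch.float16,
        ).to("cuda")

        # Attach an IP-Adapter so a single reference photo can steer the look of the output
        # at inference time, without any fine-tuning of the base model.
        pipe.load_ip_adapter(
            "h94/IP-Adapter",
            subfolder="models",
            weight_name="ip-adapter_sd15.bin",
        )
        pipe.set_ip_adapter_scale(0.6)  # how strongly the reference image influences the result

        # "doorbell_frame.png" is a hypothetical local reference still.
        style_ref = load_image("doorbell_frame.png")

        image = pipe(
            prompt="a raccoon stealing a package at night, wide-angle security camera view",
            ip_adapter_image=style_ref,
            num_inference_steps=30,
        ).images[0]
        image.save("output.png")
        ```

        Same principle as the ComfyUI node: the reference photo is only consumed during generation, so one grainy night-vision frame is enough to make everything else come out looking like doorbell footage.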