The original developers of Stable Diffusion made no secret of their source data; the training set (LAION-5B) was publicly documented. Where are you getting the idea that they "intentionally obscure the original works… to make [them] difficult to backtrace"? How would an image generation model even work in a way that made the original works obvious?
"Literally steal"

Copying digital art wasn't "literally stealing" when the RIAA was suing Napster over copied music files, and it isn't today.
"For cynical tech bros"

Stable Diffusion was originally developed by academics at a university: the CompVis group at LMU Munich.
Your whole reply hinges on claiming to know intent you have no evidence for. If imputed intent is the only difference you can find between collage and AI art, it isn't good enough.