I know this is radical, but hear me out.

First off, this reflects existing practice. I know multiple users of this site who immediately skip[1] articles if they notice obviously AI-generated imagery. A way to tell upfront whether an article is likely to contain AI art would save them some time.

Second off… a bit of context. While many people in tech don’t really appreciate this, generative AI is very exploitative tech. AI companies have taken the hard work of artists without their consent, and used it to put them out of their jobs (the starving artist having already been a stereotype since the dawn of time) and <a href="https://xcancel.com/WhiteHouse/status/1905332049021415862" rel="ugc">enable fascist propaganda in the style of Studio Ghibli</a>. It’s also known to <a href="https://spectrum.ieee.org/midjourney-copyright" rel="ugc">directly plagiarize images it was trained on</a> – obviously, without attribution.

Think about how it must feel to be one of these artists right now. Cat and Girl made an amazing <a href="https://catandgirl.com/4000-of-my-closest-friends/" rel="ugc">short comic</a> about how it feels to be on the receiving end of this. I really recommend reading it; it’s a perspective a bit alien to those of us in tech. Actually, do it now. I promise it’s worth your time.

Almost all uses of AI art don’t really add anything to the article[2], but they do show implicit support for the Torment Nexus, and casual disregard for the rights and well-being of artists. I presume goodwill on the part of the bloggers, so I figure they’re doing this because they’re unaware of the issues.

Hence the problem: comments about a site’s usage of AI imagery are (rightfully) off-topic here. The best we can do is ignore the article. That solves point one, but we’re still just kinda accepting the ever-growing support for this evil tech, and bloggers stay unaware. What if we did have a way to complain about <a href="https://lobste.rs/c/rqot55" rel="ugc">anti-social sites</a>, without derailing the discussion, and in a way that could hopefully make authors care?

Making an objective call on whether something is AI art is hard, so a tag (think <a href="https://lobste.rs/t/rant" rel="ugc">/t/rant</a>) wouldn’t really work here; this is not something mods should have to adjudicate. For the most part, though, "we can tell". I thus propose adding an "ai imagery" flag for stories, and making it work as a downvote. Lobste.rs is popular, so I’m hoping this could actually make a difference in awareness of the ethical issues. It’s really fucking depressing to constantly see yet another blogger I respect using cutesy AI imagery.

<hr>

[1] At one point, this read as a lack of care for the quality of the content – so reading the rest was a waste of time. With more and more bloggers using AI art, the reason has shifted to ethics: why engage with someone who supports destructive, exploitative tech?

[2] For example, people complained about hero images being useless (Medium being the worst offender) long before generative AI became a thing. Also see: <a href="https://idlewords.com/talks/website_obesity.htm" rel="ugc">the website obesity crisis</a>. Also see: <a href="https://emmas.site/blog/2020/06/14/no-crisis/" rel="ugc">there is no website obesity crisis</a>.

<hr>

Honorable mentions:
- <a href="https://github.com/lobsters/lobsters/pull/1659" rel="ugc">the draft guideline about ai output</a>, which currently omits AI art
- <a href="https://lobste.rs/c/rqot55" rel="ugc">4ad’s suggestion</a>, which lingered in my brain and resulted in this more humble proposal
I’m intentionally not tagging this with /t/vibecoding, because it is especially relevant to people who have filtered that tag out.