THANK YOU. I’ve said many times: Lemmy users will upvote anything they believe to be true without a second thought. Then we turn around and make fun of gullible Boomers on Facebook.
Around 1999 I learned to fact-check any headline or meme that sounded crazy. Still applies today, y’all!
Totally. I remember this happening back when everyone was using forums. This is a “Social Media” problem, not just a Lemmy problem. I have no idea how to fix it, but it’s been around longer than votes have been a thing.
Twitter did exactly one thing right, and that’s Community Notes. Lemmy could definitely use a feature like that, where users can provide context that corrects clickbait headlines. Something beyond just the comments, of course.
On the backend, does Twitter use some kind of (pre-LLM) language model to aggregate the sentiment of comments? I’ve never used Twitter; how do the notes actually get generated? Do mods post them or something?
Lemmy could theoretically do that, but it’d either have to hit an external API, host a model on its own server resources, or lean on potentially power-tripping/busy human mods to do it.
I’m not sure how they do it. I’d be super interested to know.
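If I were guessing, I’d skip language models entirely and just let users write the notes and rate each other’s. Something like this totally made-up sketch (all the names and numbers are hypothetical, not how Twitter actually does it):

```python
# Made-up sketch of a community-notes-style feature: users attach notes to a
# post, other users rate them, and only well-rated notes get shown. No ML.
from dataclasses import dataclass, field


@dataclass
class Note:
    """A user-submitted context/correction note attached to a post."""
    author: str
    text: str
    ratings: dict[str, bool] = field(default_factory=dict)  # rater -> found it helpful?

    def rate(self, rater: str, helpful: bool) -> None:
        # One rating per user; re-rating overwrites the old vote.
        self.ratings[rater] = helpful

    def helpfulness(self) -> float:
        # Fraction of raters who marked the note helpful.
        if not self.ratings:
            return 0.0
        return sum(self.ratings.values()) / len(self.ratings)


def visible_notes(notes: list[Note], min_raters: int = 10, threshold: float = 0.7) -> list[Note]:
    """Only surface notes that enough users rated AND that most found helpful."""
    return [
        n for n in notes
        if len(n.ratings) >= min_raters and n.helpfulness() >= threshold
    ]
```

Brigading would still be a problem, obviously, but at least there’s no model or external API involved.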
Well, ML is kinda toxic right now, and even a hint of “let’s draft community notes with a language model” is going to be shot down by the huge fediverse anti-AI community. So I think that’s, unfortunately, a non-starter.
And, again, leaving it purely to mods would be problematic.
It seems like a great idea to me, but I’m just not sure how it would be implemented.
Yeah. To be fair, everyone’s gotta slowly learn that.
But I think it’s fair for communities/mods to lay down posting standards, unless it’s like an explicit shitposting community.