Just because you overanalyzed something to the point of confusing yourself does not mean that it is AI slop, or equally confusing for others.
To address the specific points you raised as “evidence” of AI:
The two top categories have lines going to them because those are the things that a user controls with their activity on the platform. Prior to that, the “for you” recommendation engine is not active, since it has nothing to base its recommendations on. Seems pretty clear to me.
“Time decayed”, in the context of that category, means how long ago you last interacted with a post. If you haven’t interacted with a post for a while, it will no longer show up in your “for you” feed. Again, really quite straightforward.
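To make the idea concrete, here’s a minimal sketch of interaction-based time decay. The half-life constant and function name are hypothetical illustrations, not anything taken from the diagram:

```python
# Illustrative only: an interaction's weight halves every 7 days, so posts
# (or topics) you haven't touched in a while fade out of the "for you" feed.
HALF_LIFE_SECONDS = 7 * 86400

def decayed_weight(interaction_ts: float, now: float) -> float:
    """Weight of a past interaction, exponentially decayed by its age."""
    age = max(0.0, now - interaction_ts)
    return 0.5 ** (age / HALF_LIFE_SECONDS)

now = 1_700_000_000
print(decayed_weight(now, now))               # 1.0    — just interacted
print(decayed_weight(now - 7 * 86400, now))   # 0.5    — one half-life old
print(decayed_weight(now - 28 * 86400, now))  # 0.0625 — four half-lives, mostly forgotten
```

Once a post’s decayed weight drops below some threshold, it simply stops being recommended.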
What about filtering hidden creators makes no sense? You hide a creator, they don’t show up in your feed. That’s one aspect of personalization from the start; the rest is the two categories that, once a post makes it past the “hidden creator” filter, determine how likely it is to show up.
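A rough sketch of that ordering (the field names and scores here are made up, purely to show the shape of the pipeline): hidden creators are dropped first, and the scoring categories only ever rank what survives that filter.

```python
# Hypothetical candidate posts; "topic_score" and "engagement_score" stand in
# for the two categories that the user's activity feeds into.
hidden_creators = {"spammer42"}

candidate_posts = [
    {"id": 1, "creator": "alice",     "topic_score": 0.9, "engagement_score": 0.4},
    {"id": 2, "creator": "spammer42", "topic_score": 0.8, "engagement_score": 0.9},
    {"id": 3, "creator": "bob",       "topic_score": 0.3, "engagement_score": 0.7},
]

# Step 1: the hidden-creator filter removes posts outright.
visible = [p for p in candidate_posts if p["creator"] not in hidden_creators]

# Step 2: everything that survives is ranked by the combined scores.
ranked = sorted(visible,
                key=lambda p: p["topic_score"] + p["engagement_score"],
                reverse=True)
print([p["id"] for p in ranked])  # [1, 3] — post 2 never reaches the ranking step
```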
The Bloom filter is literally explained right there: it tracks whether you have seen a post yet or not. Lemmy clearly does not have this sort of filter, because you keep seeing the same shit over and over until it drops off from whatever category of the feed you’re viewing. Really not sure what is hard to understand there.
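For anyone who hasn’t run into one: a Bloom filter is a compact bits-plus-hashes structure built for exactly this kind of “have I already shown this post?” check. It never forgets an item it was given (no false negatives), at the cost of occasionally claiming to contain an item it doesn’t. A stdlib-only sketch with arbitrary sizing:

```python
import hashlib

class BloomFilter:
    """Probabilistic set: membership answers are 'definitely not' or 'probably'."""

    def __init__(self, size_bits: int = 1 << 16, num_hashes: int = 4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive several independent bit positions by salting the hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

seen = BloomFilter()
seen.add("post:123")
print(seen.might_contain("post:123"))  # True — already shown, skip it
print(seen.might_contain("post:456"))  # almost certainly False — new, safe to show
```

The false-positive case is harmless in a feed (a genuinely new post occasionally gets skipped as “seen”), which is why the memory savings over storing every post ID exactly are usually worth it.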
You’re using a lot of fancy words in your analysis here, but the actual analysis is nonsensical. Almost makes me wonder if you yourself are actually a bot.
I think you might have missed my point. I wasn’t listing stuff I had trouble understanding. I was listing stuff that didn’t make much sense. The distinction is relevant. The end result, even if you manage to find some excuse that extends the already generous benefit of the doubt, still isn’t anything useful or informative.
I’m also not using fancy words (or…?). The only fancy thing that stands out is the “Bloom filter”, which isn’t a fancy word. It’s just a thing, specifically a data structure. I referenced it because it’s an indication of an LLM behaving like the stochastic parrot that it is. LLMs don’t know anything, and no transformer-based approach will ever know anything. The “filter” part of “Bloom filter” will have associations to other “filters”, even though it actually isn’t a “filter” in any normal use of that word. That’s why you see “creator filter” in the same context as “Bloom filter”, even though “Bloom filter” is something no human expert would put there.
The most amusing and annoying thing about AI slop is that it’s loved by people who don’t understand the subject. They confuse an observation of slop (made by people who… know the subject) with “ah, you just don’t get it” (said by people who don’t).
I design and implement systems and “algorithms” like this as part of my job. Communicating them efficiently is also part of that job. If anyone had come to me with this diagram pre-2022, I’d have been genuinely concerned that they weren’t OK, or had had some kind of stroke. After 2022, my LLM-slop radar is pretty spot on.
But hey, you do you. I needed to take a shit earlier and made the mistake of answering. Now I’m being an idiot who should know better. Look up Brandolini’s law, if you need an explanation for what I mean.
I just explained how the things you claim don’t make sense, do in fact make sense. Saying “this does not make sense” implies you don’t understand it. I have seen plenty of AI slop, and this is not it.
You didn’t use the term “Bloom filter”, the diagram did. I know what it is, and it makes perfect sense in the context, so it’s really weird that you would claim it doesn’t. The fancy words I was referring to were “predicate function” and “asymmetrical”. Both are jargon words/phrases that don’t add anything to your statement as far as illuminating your point, but they do make you sound smart.
The thing to me that is not really amusing at all, but very annoying, is when someone has experience in a technical field, but then thinks that experience makes them an expert in every other field that might be tangentially related, and uses that assumption to pedantically (and often erroneously) dissect and dismiss the work of others.
Let me ask you this, though. When you say “do in fact make sense”, are you basing that on the context of what you think this diagram is saying? Or do you mean “do in fact make sense” in the context of knowing how such an algorithm would be constructed?
You still keep missing my points. And they aren’t difficult points either. The fancy jargon words were a basic-ass description of what a Bloom filter does. So you’re kinda making my argument, which is funny for reasons I’m sure won’t be appreciated.
I’m also not tangentially an expert, for fuck’s sake. I’m the kind whose day job is to design simpler things than what this diagram is trying to “explain”, and I’m telling you that it comes across as if made with a toddler’s understanding. I also didn’t say this was 100% guaranteed to be LLM; I said it smelled like it. I have suggested other possible explanations: stupidity, incompetence, and even a mental stroke.
Your take on being tangentially an expert might be a whoosh moment.
I’m also out of shits to give at this point. Literally.
Do me a favor here, as a self-proclaimed expert. Define a Bloom filter, and then explain to me, a stupid pleb, why it would not work in this context. Cause from everything I have read on them, the description in this diagram is literally what it is used for.
You still think that’s a relevant point? Did I also not point out to what extent it does make sense in that context, but still why it is weird, and why an LLM might do that weird thing, but a human wouldn’t?
Maybe start at the beginning, and read again what I wrote. This time, do it with the assumption that I know what I’m talking about. Also, since you’re already learning stuff: read about how LLMs and transformers work. Maybe that might help. I don’t know. Either way, fine by me. Fingers crossed you figure it out.
Nah. You mistook my “these are the parts that really don’t make sense for a human to make” for “I don’t understand the subject, or what this complex concept can mean”.
If you don’t see the difference, you’re just going in a loop of trying to argue the wrong point. I was hoping to save you the trouble of “you don’t get it” line, by saying “trust me, I do get it, I’m a god damn expert”.
I’m happy to indulge in explaining things to people who want to learn something. I happily fuck with people who seem disingenuous to that goal. If I was wrong and you genuinely meant to ask “why doesn’t this make sense”, then I’m sorry. I misread your intentions, and I’ll keep it in mind.
I’m done with this circular argument. Let me know when you want to actually prove you know what the fuck you’re talking about by getting into the specifics of the mechanics of the diagram that are illogical, in detail as opposed to vague generalities. Why would someone not apply a bloom filter to filter out posts you’ve already seen? Why would you not use connecting lines to show the aspects that are impacted by the user? You continue to refuse to get specific, and just keep going back to “trust me bro”. If that’s all you plan on returning to, let me know so I can stop wasting my time here.
Ah yes, I’m sure once I understand how something irrelevant to this diagram’s functionality works, I will then see why you’re right… I will take your refusal to simply define the thing you are critiquing and explain it a bit more as a concession that you’re actually full of shit.
All I am literally asking is “why doesn’t this make sense”, and your response is “well you see, I’m an expert, so trust me bro”. Fuck off.
Wouldn’t want to be accused of using big words, now, would I?
Lol, just keep dodging the questions, mr expert senior dev. Really convincing me of your expertise. 💩