• killeronthecorner@lemmy.world · 1 day ago

    Hi, I have a degree in computer science and work with AI every day.

    Feelings aren’t a good way to measure things scientifically, they are right about that.

    But filtering those words out is easier said than done. You're back to needing a lot of processing to identify and purge them, which still costs money and can make the inputs less meaningful. And now you also have to maintain the software that does the word identification: keep it well tested, keep monitoring and analytics on it, and so on.
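    For illustration, here's a minimal sketch of what even a naive filter looks like (the function name and word list are my own invention, not anyone's real pipeline). It still has to scan every token, and it happily mangles prompts where the "polite" word actually carries meaning:

    ```python
    import re

    # Hypothetical list of "polite" words to purge before sending a prompt on.
    POLITE = {"please", "thanks", "thank", "kindly"}

    def strip_polite(prompt: str) -> str:
        # Split into word and non-word runs so punctuation survives the filter.
        tokens = re.findall(r"\w+|\W+", prompt)
        kept = [t for t in tokens if t.lower() not in POLITE]
        return "".join(kept).strip()

    print(strip_polite("Please summarize this, thanks!"))
    # → "summarize this, !"  (drops the words, leaves dangling punctuation)

    print(strip_polite("Write a sign that says 'Please keep off the grass'"))
    # Here "Please" is part of the content, and the filter breaks the prompt.
    ```

    And that's before you get into multiple languages, typos, or politeness that spans more than one word. None of that is free.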

    So, in short, everyone here is wrong and I’m considering packing it all in and buying a small potato farm with no internet connection.

    • snooggums@lemmy.world · 1 day ago (edited)

      The big thing here is that ‘polite’ words are being singled out as extraneous when prompts are full of other extraneous words too. The focus is only on words that make it seem like AI has feelings or intent.

      There is no reason to filter any words, because the entire point of LLMs is to take inefficient human communication and do stuff with it. ‘Please’ isn’t any more of a waste than ‘the’, or the period at the end of a sentence.

      Not to mention the fact that the whole thing is so horribly inefficient that ‘extra’ words cost millions of dollars to process. Holy shit that is terrible design.