The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its “poor” and “dangerous” results. The algorithm has been trained only with data from white patients.

  • LustyArgonian@lemmy.world · 10 hours ago

    I addressed that point by saying their intent to be racist or not is irrelevant when we focus on the impact on the actual victims (i.e. systemic racism). Who cares about the individual engineer’s morality and thoughts when we have provable, measurable evidence of racial disparity that we can correct easily?

    It literally lets black people die while saving more white people. That’s eugenics.

    It is fine to coordinate with universities in, say, Kenya; what are you talking about?

    I never said shit about the makers of THIS tool being punished! Learn to read! I said the tool needs to be fixed!

    Like seriously, you are constantly taking the position of the white male, empathizing, then running interference for him as if he were you and as if I’m your mommy about to spank you. Stop being weird and projecting your bullshit.

    Yes, doctors who use this tool equally on their black patients and white patients would be performing eugenics, just like the doctors who sterilized indigenous women because they were poor. Again, intent and your ego aren’t relevant when we focus on impacts to victims and how to help them.

    We should demand that they make getting the data to be just as good for black people their #1 priority, i.e. doing studies and collecting that data.
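    To make the “provable, measurable” part concrete: the disparity being described is usually quantified as the gap in sensitivity (the share of real melanomas the model actually flags) across skin-type groups. The sketch below is purely illustrative; the file name, column names, and Fitzpatrick grouping are assumptions for the example, not anything published about Quantus Skin.

    ```python
    # Illustrative sketch only: hypothetical file and column names, not Quantus Skin's actual data.
    # Expects a CSV with columns "skin_type" (Fitzpatrick I-VI), "label" (1 = confirmed melanoma),
    # and "prediction" (1 = flagged as melanoma by the model).
    import csv
    from collections import defaultdict

    def sensitivity_by_group(path):
        """True-positive rate (melanomas caught) per skin-type group."""
        caught = defaultdict(int)
        total = defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["label"] == "1":          # only confirmed melanomas count toward sensitivity
                    total[row["skin_type"]] += 1
                    if row["prediction"] == "1":
                        caught[row["skin_type"]] += 1
        return {group: caught[group] / total[group] for group in total}

    if __name__ == "__main__":
        rates = sensitivity_by_group("quantus_skin_eval.csv")  # hypothetical evaluation file
        for group, rate in sorted(rates.items()):
            print(f"Fitzpatrick {group}: {rate:.1%} of melanomas detected")
        # A large gap between light- and dark-skin groups is exactly the measurable harm
        # being argued about: missed cancers concentrated in one population.
        print(f"Worst-to-best gap: {max(rates.values()) - min(rates.values()):.1%}")
    ```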

    • Hardeehar@lemmy.world · 2 hours ago

      Define eugenics for me, please.

      You’re saying the tool in its current form with its data “seems pretty intentionally eugenics” and… “a tool for eugenics”. And since you said the people who made that data, the AI tool, and those who are now using it are also responsible for anything bad… they are, by your supposed extension, eugenicists/racists and whatever other grotesque and immoral thing you can think of. Because your link says that regardless of intention, the AI engineers should ABSOLUTELY be punished.

      They have to fix it, of course, so it can become something other than a tool for eugenics as it is currently. Can you see where I think your argument goes way beyond rational?

      Would I have had this conversation with you if the tool worked really well on only black people and allowed white people to die disproportionately? I honestly can’t say. But I feel you would be quiet on the issue. Am I wrong?

      I don’t think using the data, as it is, to save lives makes you racist or supports eugenics. You seem to believe it does. That’s what I’m getting at. That’s why I think we are reading different books.

      Once again…define eugenics for me, please.

      Regardless, nothing I have said means that I don’t recognize institutional racism or that I don’t want the data set to become more evenly distributed so it takes into consideration the full spectrum of human life and helps ALL people.