sqgl@sh.itjust.works to Technology@lemmy.world · English · 2 months ago
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory (www.bbc.co.uk)
43 comments · cross-posted to: hackernews
paraphrand@lemmy.world · 2 months ago
Precision, nuance, and up-to-the-moment contextual understanding are all missing from the “intelligence.”

Treczoks@lemmy.world · 2 months ago
Like the average American with an 8th-grade reading comprehension.

[deleted]@piefed.world · 2 months ago
Which is what they used for the training data.

FaceDeer@fedia.io · 2 months ago
So it’s about on par with humans, then.