return2ozma@lemmy.world to politics@lemmy.world · 2 months ago
OpenAI wants to stop ChatGPT from validating users’ political views (arstechnica.com)
cross-posted to: technology@lemmy.world
SpikesOtherDog@ani.social · 2 months ago
The LLM will always seek the most average answer.
Sandbar_Trekker@lemmy.today · 2 months ago
Close, but not always. It will give an answer based on the data it’s been trained on. There is also a bit of randomization from a “seed”. So, in general it will give the most average answer, but that seed can occasionally direct it down the path of a less common answer.
SpikesOtherDog@ani.social · 2 months ago
Fair. I tell a lot of lies for children. It helps when talking to end users.
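The seeded-sampling behavior described above can be sketched with a toy example. This is an illustration only, not how any specific model is implemented: real LLMs sample one token at a time from a softmax over a vocabulary of ~100k entries, and the logits below are invented numbers. The point is that a high-probability ("average") answer dominates, but different seeds occasionally pick a less common one.

```python
import math
import random

def sample_token(logits: list[float], seed: int, temperature: float = 1.0) -> int:
    """Sample one token index from a softmax over `logits`, seeded.

    Toy sketch: a real LLM repeats this per generated token.
    """
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Token 0 is the "most average" answer (~82% probability here),
# but across many seeds the rarer tokens 1 and 2 also appear.
logits = [3.0, 1.0, 0.5]
picks = [sample_token(logits, seed) for seed in range(1000)]
print(picks.count(0) / len(picks))  # majority of draws, but not all
```

Lowering the temperature sharpens the distribution toward the most likely token; raising it flattens the distribution, making the less common answers more frequent.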