RSS Bot (MB) to Hacker News, English · 2 months ago
Just How Resilient Are Large Language Models? (www.rdrocket.com)
mindbleach@sh.itjust.works · 2 months ago:
All neural networks are vastly approximate. The irrelevance of individual values is why quantization and sparsity work.
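The comment's point can be illustrated with a toy sketch: perturb a weight matrix by int8 quantization and by magnitude pruning, and measure how little the output moves. This uses a random NumPy matrix standing in for one layer of a network (an assumption for illustration; no real model or the linked article's method is involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "layer": a random weight matrix standing in for one matrix of a network.
W = rng.normal(size=(256, 256)).astype(np.float32)
x = rng.normal(size=256).astype(np.float32)
y_full = W @ x

# Symmetric int8 quantization: map floats to [-127, 127] with one scale factor.
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)
W_deq = W_q.astype(np.float32) * scale  # dequantize for comparison
y_quant = W_deq @ x
quant_err = np.linalg.norm(y_full - y_quant) / np.linalg.norm(y_full)

# Magnitude pruning (sparsity): zero the 50% of weights smallest in magnitude.
thresh = np.quantile(np.abs(W), 0.5)
W_sparse = np.where(np.abs(W) >= thresh, W, 0.0).astype(np.float32)
y_sparse = W_sparse @ x
sparse_err = np.linalg.norm(y_full - y_sparse) / np.linalg.norm(y_full)

print(f"relative output error, int8 quantization: {quant_err:.4f}")
print(f"relative output error, 50% pruning:      {sparse_err:.4f}")
```

On a purely random matrix the quantization error is tiny and the pruning error is larger; trained networks tolerate aggressive pruning better than this toy does because their weights are redundant, which is exactly the irrelevance of individual values the comment describes.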