RSS Bot (MB) to Hacker News · English · 22 days ago
Just How Resilient Are Large Language Models? (www.rdrocket.com)
mindbleach@sh.itjust.works · 21 days ago
All neural networks are vastly approximate. The irrelevance of individual values is why quantization and sparsity work.
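The comment's quantization half is easy to demonstrate. Below is a minimal numpy sketch (toy sizes, random weights, not from the linked article): a float32 weight matrix is rounded to int8 with a single per-tensor scale, and the output of the layer barely moves, illustrating that individual weight values carry little information on their own.

```python
# Minimal sketch: symmetric per-tensor int8 quantization of a toy linear
# layer, showing the output changes only slightly. All names and sizes
# here are illustrative assumptions, not taken from the article.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.02, size=(64, 64)).astype(np.float32)  # toy layer weights
x = rng.normal(size=64).astype(np.float32)                  # toy activation

scale = np.abs(W).max() / 127.0            # one symmetric scale for the tensor
W_q = np.round(W / scale).astype(np.int8)  # int8 weights as stored on disk
W_deq = W_q.astype(np.float32) * scale     # dequantized weights used at runtime

y = W @ x            # full-precision layer output
y_q = W_deq @ x      # output after quantization round-trip
rel_err = np.linalg.norm(y - y_q) / np.linalg.norm(y)
print(f"relative output error after int8 quantization: {rel_err:.4f}")
```

The per-weight rounding error is at most half the scale, which is tiny relative to the typical weight magnitude, so the errors mostly average out across the dot product rather than accumulating.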