- cross-posted to:
- hackernews
In June, the U.S. National Archives and Records Administration (NARA) gave employees a presentation and tech demo called “AI-mazing Tech-venture” in which Google’s Gemini AI was presented as a tool archives employees could use to “enhance productivity.” During a demo, the AI was queried with questions about the John F. Kennedy assassination, according to a copy of the presentation obtained by 404 Media using a public records request.
In December, NARA plans to launch a public-facing AI-powered chatbot called “Archie AI,” 404 Media has learned. “The National Archives has big plans for AI,” a NARA spokesperson told 404 Media. “It’s going to be essential to how we conduct our work, how we scale our services for Americans who want to be able to access our records from anywhere, anytime, and how we ensure that we are ready to care for the records being created today and in the future.”
Chat logs from the presentation show that National Archives employees are concerned about the prospect of AI tools being used in archiving, a practice inherently concerned with accurately recording history.
One worker who attended the presentation told 404 Media, “I suspect they’re going to introduce it to the workplace. I’m just a person who works there and hates AI bullshit.”
Currently, maybe. But technology is fantastic at accuracy, better than humans in many regards. Gemini might have a way to go before it gets there, but it or its successors will get there, and it’s moving fast.
I’m not sure it is. Productivity also refers to efficiency of services. If AI can make the services of the National Archives more productive for its staff and/or the public then surely that’s a good thing?
That word is carrying a mighty big load.
This isn’t about “technology”, it’s about large language models, which are neither “fantastic at accuracy” nor “better than humans”.
Large language models are structurally incapable of “getting there”, because they are models of language, not models of thought.
And besides, anything that is smart enough to “get there” deserves human rights and fair compensation for the work they do, defeating the purpose of “AI” as an industry.
The word “If” is papering over a number of sins here.