- cross-posted to:
- hackernews
In June, the U.S. National Archives and Records Administration (NARA) gave employees a presentation and tech demo called “AI-mazing Tech-venture” in which Google’s Gemini AI was presented as a tool archives employees could use to “enhance productivity.” During a demo, the AI was queried with questions about the John F. Kennedy assassination, according to a copy of the presentation obtained by 404 Media using a public records request.
In December, NARA plans to launch a public-facing AI-powered chatbot called “Archie AI,” 404 Media has learned. “The National Archives has big plans for AI,” a NARA spokesperson told 404 Media. “It’s going to be essential to how we conduct our work, how we scale our services for Americans who want to be able to access our records from anywhere, anytime, and how we ensure that we are ready to care for the records being created today and in the future.”
Employee chat logs given during the presentation show that National Archives employees are concerned about the idea that AI tools will be used in archiving, a practice that is inherently concerned with accurately recording history.
One worker who attended the presentation told 404 Media, “I suspect they’re going to introduce it to the workplace. I’m just a person who works there and hates AI bullshit.”
How much more productive does an archive need to be? Hire human beings. Celebrate fucking humanity.
My partner is an archivist, and we’ve talked about AI a lot.
Most people in their field hate this shit because it undermines so much of what matters in their jobs. Accuracy is critical, and the presentation of the archive requires humans who understand it. History is complex; it requires context, nuance, and an understanding of basic ideas and concepts.
Using “AI” to parse and present the contents of the archive pollutes it, and gives the presentation over to software that can’t possibly begin to understand the questions or the answers.
There are more than enough technological advances in this field to help with digital archiving; adding LLMs doesn’t help anything.
Working isn’t a celebration of humanity.
Productivity is not the enemy; our economic system, which takes all the benefits of higher productivity and gives them to a small percentage of people, is.
This has nothing to do with economics. It’s the National Archives, not a business.
Productivity is irrelevant here. A big part of archiving is accuracy and presentation. All of which should be done by human beings. Period.
All of which should be done by human beings. Period.
Currently, maybe. But technology is fantastic at accuracy, better than humans in many regards. Gemini might have a way to go before it gets there, but it or its successors will get there and it’s moving fast.
Productivity is irrelevant here
I’m not sure it is. Productivity also refers to efficiency of services. If AI can make the services of the National Archives more productive for its staff and/or the public then surely that’s a good thing?
might
That word is carrying a mighty big load.
But technology is fantastic at accuracy, better than humans in many regards.
This isn’t about “technology”, it’s about large language models, which are neither “fantastic at accuracy” nor “better than humans”.
Gemini might have a way to go before it gets there, but it or its successors will get there and it’s moving fast.
Large language models are structurally incapable of “getting there”, because they are models of language, not models of thought.
And besides, anything that is smart enough to “get there” deserves human rights and fair compensation for the work they do, defeating the purpose of “AI” as an industry.
If AI can make the services of the National Archives more productive for its staff and/or the public then surely that’s a good thing?
The word “If” is papering over a number of sins here.
Should we also insist that archives don’t use photocopiers and instead have scribes copy everything by hand?
If the photocopier is smart enough to do a scribe’s job then it deserves human rights, fair wages, and a pension just like the rest of us.
Given that photocopiers can do a scribe’s job (copy the text on this page onto a new page), more quickly and accurately to boot, I presume you are part of a pressure group to pay them pensions.
Given that photocopiers can do a scribe’s job (copy the text on this page onto a new page),
That’s not a scribe’s job, that’s not even the entirety of an apprentice scribe’s job (which also includes making paper, making ink, bookbinding, etc.)
A scribe’s job is to perform secretarial and administrative duties, everything from record-keeping and library management to the dictation and distribution of memoranda.
A photocopier is not capable of those things, but if it was then it’d be deserving of the same compensation and legal status afforded to the humans that currently do it.
I presume you are part of a pressure group to pay them pensions.
We have to start treating things that claim to be “AI” as deserving of human rights, or else things are going to get very ugly once it’s possible to emulate scanned human brains in silicon.
I’m just a person who works there and hates AI bullshit.”
Me, too, National Archives employee who chose to remain anonymous. Me, too.
This is how history becomes a fluid AI hallucination.
break up big tech.
I can see a function for AI here: scan texts, tag and keyword them at scale. Probably this was done already, but maybe a new-gen tool can do it better and faster. I’m just wondering if they actually have people who will look at this and develop best practices, or is it just the… here you go… now be more productive, we’re firing 20 percent of staff. I know what they say they will do… but let’s see how this turns out.
Scanning texts is OCR and has never needed modern LLMs integrated to achieve amazing results.
Automated tagging gets closer, but there is a metric shit ton that can be done in that regard using incredibly simple tools that don’t use an egregious amount of energy or hallucinate.
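As a rough, hypothetical illustration of the kind of non-LLM tooling being described, here is a minimal Python sketch that OCRs a scanned page with Tesseract and pulls candidate tags with plain TF-IDF scoring. The filename, sample documents, and five-tag cutoff are invented for illustration, not anything NARA actually uses.

```python
# Minimal sketch: OCR plus simple keyword tagging, no LLM involved.
# "scan.png" and the sample documents are placeholders for illustration.
from PIL import Image
import pytesseract
from sklearn.feature_extraction.text import TfidfVectorizer

# Plain OCR: Tesseract has done this well for years.
scanned_text = pytesseract.image_to_string(Image.open("scan.png"))

# Simple automated tagging: score terms by TF-IDF against a small corpus
# and keep the five most distinctive terms per document as candidate tags.
documents = [
    scanned_text,
    "Memorandum regarding the relocation of wartime personnel records.",
    "Photograph collection documenting the Apollo 11 lunar landing.",
]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)
terms = vectorizer.get_feature_names_out()

for i, doc in enumerate(documents):
    scores = tfidf[i].toarray().ravel()
    tags = [terms[j] for j in scores.argsort()[::-1][:5] if scores[j] > 0]
    print(f"document {i}: tags -> {tags}")
```

Nothing here hallucinates or needs a GPU; tools in this family have been standard in digitization pipelines for a long time.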
There is no way in hell that they aren’t already doing these things. The best use cases for LLMs for NARA are edge cases of things mostly covered by existing tech.
And you and I both know this is going to give Google exclusive access to National Archive data. New training data that isn’t tainted by potentially being LLM output is an insanely valuable commodity now that the hype is dying down and algorithmic advances are slowing.
This is why I hate everything being called AI, because nothing is AI. It’s all advanced machine learning algorithms, and each serves a different purpose. It’s why I’ll say LLM, facial recognition, deepfake, etc.
I have no doubt that there are a lot of machine learning tools and algorithms that could greatly assist humans in archival work, but Google Gemini and ChatGPT aren’t the ones that come to mind.
Machine learning is just a specific field in AI. It’s all AI. Anything that attempts to mimic intelligence is.
All the things you mentioned are neural networks, which are some of the oldest AI techniques.
AI used to mean sentience, or close enough to truly mimic it, until marketing departments felt that machine learning was good enough.
I’m sorry, a computer using matrices to determine hot dog or not hot dog, because its model has a million hot dog photos in it, is not AI.
LLMs don’t reason. There is no intelligence, artificial or otherwise.
It’s doing a lot of calculations in cool new ways, sometimes. But that’s what computers do, and no matter how many copilot buttons Microsoft sells, there’s no AI coming out of those laptops.
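For readers who haven’t seen what “a computer using matrices” to classify actually looks like, here is a toy sketch of the mechanism being argued about: a single linear layer trained by gradient descent. All of the data, labels, and parameters below are made up for illustration.

```python
# Toy illustration of matrix-based classification: a single linear layer
# ("hot dog / not hot dog") fit with gradient descent on invented data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # fake image features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # fake binary labels

w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of a matrix product
    grad_w = X.T @ (p - y) / len(y)
    grad_b = float(np.mean(p - y))
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Whether one wants to call that arithmetic “intelligence” is exactly the disagreement in this thread.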
August 31, 1955: The term “artificial intelligence” is coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories).
http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
This is just literally not true.
Point 3 is what LLMs are.
You are thinking of artificial general intelligence from sci-fi.
An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans,
This is what artificial intelligence actually means: solving problems that traditionally require intelligence.
Pathfinding algorithms in games are AI, and have always been referred to as such. We studied them in my AI module at uni.
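For anyone who hasn’t met this branch of classic AI, here is a minimal, self-contained sketch of the kind of pathfinding taught in an introductory AI course: A* search on a toy grid. The grid, start, and goal are invented for illustration.

```python
# Minimal A* pathfinding on a toy grid: '.' is open, '#' is a wall.
import heapq

GRID = [
    "....#",
    ".##.#",
    "...#.",
    ".#...",
    ".....",
]
START, GOAL = (0, 0), (4, 4)

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def heuristic(cell):
    # Manhattan distance to the goal: an admissible estimate on a grid.
    return abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])

def a_star():
    # Frontier entries: (estimated total cost, cost so far, cell, path).
    frontier = [(heuristic(START), 0, START, [START])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == GOAL:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        for nxt in neighbours(cell):
            heapq.heappush(frontier, (cost + 1 + heuristic(nxt), cost + 1, nxt, path + [nxt]))
    return None

print(a_star())
```

Algorithms like this predate the current generative-AI wave by decades, which is the point being made above about how broad the term “AI” has always been.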
No, that’s the corporate-perverted, saleable product of an academic field.
No, it’s a neural network. They may be over-hyped, but they are 100% AI.
The bot is hallucinating entire conversations again.
The first time I saw AI was on the N64.
Great, now even government records will make shit up. Is every manager just slowly being lobotomized by the business fad industry?
Fantastic, just what the public needs from a government body: hallucinated official records!!!111!!
/sarcasm
I think this is in the wrong Lemmy community; I recommend dystopian nightmare.
Hey boss, if it enhances productivity, does that mean we can work fewer hours?
They should name the archiving AI Herodotus… We really have entered the post-truth era.
Google did this just because they’re nice, and not because they wanted to slurp all those tasty archives into their AI models for free.