In “Flooding Damages Books and Records in Loussac Library’s Historic Alaska Collection,” Devin Kelly reports on flood damage to an archives. In this case, a pipe burst above the first-floor stack area, causing massive water damage to one of the bookshelves in the Alaska collection as well as all of the books stored near the floor. Water pooled on the floor up to 2 inches deep, and much of it dripped through the floor into the basement archives, damaging some of the records there as well. Once the flooding was discovered, library staff rushed to move books away from it, and the Anchorage Museum offered to freeze them until they could be treated for mold. Luckily for the library, most of the books are easily replaceable, so staff can pick and choose what to throw out and replace and what to restore. Archives typically don’t have this luxury, as they usually hold the only copy of many records and have backups only if they created them. Like all disasters, this one made me think about how to prevent it or minimize the damage afterwards. How did they allow 2 inches of water to accumulate on the floor? Given that this was discovered in the morning, I assume it happened at night when nobody was around. However, water detectors attached to some type of alarm system could have alerted staff much earlier, and perhaps saved more of the books. Certainly there would have been time to prepare the archives in the basement before water got through. Keeping everything a foot off the floor would also have minimized the damage. A drainage system in the floor could have directed water away and prevented pooling, although it would probably be expensive and not cosmetically pleasing. Finally, I would be worried about theft during the rescue process. When everything is in disarray and chaos and staffers from all over the library come to help, it would be very easy to steal something and go unnoticed.
Hopefully, library staff would never do that, but there have been cases of insider theft from archives. However, there’s probably no good way to balance security with disaster recovery.
Cara Giaimo talks about codes, redactions, and otherwise illegible records in archives in her article, “How Archivists Deal With Redactions, Codes, and Scribbles.” She discusses several documents in archives in each category. She finds that in the case of redactions, it can be tricky to figure out why content is redacted. In some cases, the creator redacts material he or she thinks will have no historical value, such as salutations. Sometimes, others redact information that could be sensitive, such as the creator’s mother redacting information on the academic performance of the creator’s children. Of course, this is often speculation, as it is usually impossible to tell what the redacted content is, so archivists have to guess based on context. Some records also use codes, usually created by businesses to protect shipment details and the like. If the archives also has the codebook, they’re in luck; otherwise, it takes dedicated study of the codes to unravel them. And sometimes handwriting just is not legible. For example, the handwriting of explorer Charles Hall is perfect when he wrote at a desk, but when he wrote in the field, his writing is very difficult to read. That reminded me of my own handwriting, except mine isn’t very legible at the best of times and is even worse at others. When I’m taking notes, I often use a simple shorthand I developed in the army. We would receive operations orders read at a talking pace and need to record that information as it was passed down, so each of us developed our own way of writing very quickly. Mine is not legible to anyone but me, as I made extensive use of acronyms, abbreviations, and the like. I doubt my papers will ever make it to an archives, as most of them are scribbled on notepads or folded papers and barely make it out of the field, but a dedicated archivist with knowledge of what will by then be military history might be able to figure them out.
On the other hand, even the nicest handwriting of 18th-century writers is hard to read until you’ve read a lot of it, especially because many of them spelled the same word differently even within the same document.
One of the more unique items in Arizona State University’s (ASU) archival collection, discussed by Ken Fagan in “Bringing the Territorial Cup Back Home,” is the Territorial Cup, the trophy for the rivalry between ASU and the University of Arizona (UA). It was purchased in 1899 for ASU after they won the league, and then disappeared for 80 years. Found in a nearby church, it became the symbol of the rivalry between ASU and UA. Now, it goes to whichever school wins the rivalry game. This poses a multitude of archival challenges. First, the original obviously can’t be used as the show trophy after the game, as the celebrations are terrible for preservation, so a replica is used instead. More interestingly, the article discusses the physical move from school to school by the archivists but left me wondering: how is it archivally transferred? Does one of the archives have control and loan it to the other for victories? Do the two archives deaccession and transfer it every time? Or do they just exchange it with no paperwork? Intrigued, I sought out and found the Territorial Cup Protocol, which governs the transfer of the cup. As it turns out, ASU retains ownership and loans the cup to UA when they win. Security and preservation become the responsibility of whichever archivist has it at the time, and it must be publicly displayed with a label. It is transported between schools in a hard case. As is typical for metal objects, it must be handled with white cotton gloves. The protocol also stipulates that it should never be cleaned (their preservation assessment must have determined cleaning would do more harm than good, although I’ve been taught that dusting is okay for most metal objects if done properly). The good thing is, it seems the university presidents listened to the archivists when creating the protocol, which administrations don’t always do.
In “Dunkirk’s Stunning Basis in Archival Footage,” Jacob Oller presents a video from Titouan Ropert comparing archival footage and the movie Dunkirk side by side. He and the video argue that Christopher Nolan achieved high standards of historical accuracy in the movie in part by relying on archival research. It is surprising yet heartening to see a film studio actually doing historical research and using archives to produce a more accurate film. I’m used to them failing to do any research, and in some cases knowingly contorting history to their own ends. I understand allowing some artistic license in history-based movies, but it should be based on research, not just made up. I find the real story of events is often more interesting than whatever Hollywood can come up with anyway, and there are definitely great stories hidden in the archives that they never explore. Even those movies which mix history and fantasy, where viewers know they aren’t based on reality, could still benefit from historical research and can still mislead viewers about history. If you ask anyone about the battle of Thermopylae, they will know better than to think the events depicted in the movie 300 are strictly accurate. They will realize the depictions of the Persians are pure fantasy. However, they will still think that only 300 Spartans stayed behind and died to protect the pass, when in reality 700 Thespian and 400 Theban hoplites stayed and died with them, well outnumbering the Spartan contingent. The Thebans have a more interesting story than the Spartans as well: their city had officially allied with the Persians, and the Thebans at the pass, who opposed that decision, could not return home if it was conquered, so they chose to fight and die instead.
The entire Pirates of the Caribbean series has a similar effect: viewers know most of it is pure fiction, but they still leave with a romanticized view of pirates as some kind of heroes who took on the British Empire in the 1700s. In reality, the golden age of pirates ended in the 1600s, and they preyed almost exclusively on merchant ships, often forcing surrender before any combat took place. When confronted with warships, they generally tried to flee rather than fight. While I haven’t seen Dunkirk yet, there are probably some historical inaccuracies involved, but the fact that they used archives at all is still impressive and shows, at the very least, that they cared about accuracy. Using archives also benefits the movie, since the scenes and setting are more accurate, improving its ability to tell its story. Finally, I wonder if the archives consulted appeared in the credits. Perhaps they were cited in Chicago Manual of Style format?
In “How do you move a web archive?” Claire Newing discusses the UK National Archives’ migration of about 120TB of data. They had partnered with Internet Memory Research (IMR), which runs a service similar to Archive-It, to archive the UK government’s websites. While IMR built a data center for the archives to handle the massive amount of traffic the web archive received, it did not handle preservation of the data. The preservation work had to be done by the archives, so they had to transfer the 120TB of data from the data center in Paris to the archives’ location in Kew. They first attempted to do this over the internet, but it was extremely slow and unreliable. They then decided to use about 70 2TB hard drives and send them across the distance by courier. Once in possession of the data, they put it onto tapes, which are ideal for bit-level preservation but not for day-to-day access. In 2016, they had to find a way to move the whole archives to the cloud, in line with UK civil service policy. With their new partner, MirrorWeb, they decided to use Amazon Snowballs to move the data. These devices can ingest, encrypt, and hold massive amounts of data very quickly. MirrorWeb configured two computers to feed the Snowballs, and within 2 weeks they were able to move the data from the hard drives onto the Snowballs. The Snowballs were then shipped back to Amazon Web Services, who put the web archives on the cloud.
I think most people are not used to thinking of data as physical, but as a vague entity which lives in the air and can be accessed anywhere you have wifi or cell service. However, this story reinforces how physical it is: while anyone can access the web archives, moving it from Paris to Kew, and from Kew to the Amazon cloud, was best done in a very physical process involving moving hard drives and snowballs. Trying to transfer that amount of data over the web would simply have been foolish.
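The scale here is worth putting in numbers. As a rough illustration, assuming some typical link speeds (the speeds and efficiency factor below are my own assumptions, not figures from the post), a quick back-of-envelope calculation shows why couriering drives or Snowballs wins at 120TB:

```python
# Back-of-envelope: why shipping 120 TB physically beats sending it over a network.
# Link speeds and the efficiency factor are illustrative assumptions only.

def transfer_days(size_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Days to move `size_tb` terabytes over a `link_mbps` megabit/s link,
    assuming only `efficiency` of the nominal bandwidth is usable."""
    bits = size_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86_400                        # seconds -> days

ARCHIVE_TB = 120  # approximate size of the UK government web archive

for mbps in (100, 1_000, 10_000):
    print(f"{mbps:>6} Mbps link: {transfer_days(ARCHIVE_TB, mbps):7.1f} days")
```

On these assumptions, a sustained 100 Mbps link needs over four months, and even a dedicated gigabit link takes about two weeks, the same order of time as loading the Snowballs, without the courier's reliability. A van full of hard drives gets there in a day.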
This post does leave some unanswered questions, however. Who is doing the web archiving now, Amazon? How will the archives preserve the incoming data? Will they receive it on their end, put it on tape, and then ship it to Amazon on Snowballs? Or will they have to retrieve it from Amazon via Snowball and then put it on tape?
In “Push Underway to Keep Jewish Artifacts From Returning to Iraq,” Oren Peleg discusses the continuing case of the Jewish archives recovered from Iraq. The archives was discovered by US troops in the leaky basement of the former Iraqi Intelligence Service (IIS) building after the fall of Saddam Hussein. It was brought to the US and turned over to the National Archives and Records Administration (NARA) for conservation, preservation, and processing. The records were digitized and put on display as well. They consist mostly of papers and artifacts left behind by Jews fleeing Iraq, as well as items stolen from Iraqi Jews when they were forcibly removed from the country. The US agreed to return the archives to the Iraqi government, but the date of return has been extended several times. Now, however, there is a movement dedicated to ensuring the archives remains in the US. Proponents argue that the primary users of the archives, Iraqi Jews, no longer live in Iraq and would be barred by the Iraqi government from accessing their own history. I can’t figure out why Iraq would want those records anyway. The Iraqi government has a lot on its plate right now, and expending the resources needed to house this archives, whose user base is not in Iraq, makes little sense. Perhaps they are worried the archives could be used to paint a bad picture of Iraq, because it documents persecution of Jews. However, it would be easy enough to blame that on Saddam and his administration, and in any case it would take much more than a bad image from a defunct government to isolate the new Iraqi government from US support. Hopefully our government can work out an arrangement whereby NARA keeps the physical archives and perhaps gives the Iraqis full access to the digitized documents instead.
The FDC produces two other records critical to safe firing. One is a map overlay that can be used to determine whether the observers are calling in safe targets. It is usually drawn with map pens on acetate overlays and kept outside, so it can easily fade or be destroyed, especially in bad weather. The other is a safety T, a simple end product used to quickly determine whether firing data is safe. Safety T’s are kept by both the FDC and the guns to allow multiple independent checks at both levels; they can later be checked against stored firing data to see if the FDC or the guns fired data that was unsafe according to the T’s they had. However, the T’s are the product of a complicated math procedure with lots of potential for errors. Ideally, multiple people perform the procedure and then compare answers to ensure it was done correctly. Each performance generates a record, which needs to be kept in case a round impacts unsafely, so the calculations can be checked to see if the FDC performed them correctly. Firing data could’ve been safe according to the safety T’s, but if the T’s were calculated incorrectly, then that data might not have been safe after all. These records are often kept after the training for reference in future training. They should also be kept organized so that if the unit returns to the same firing point, they can be reused. The safety procedure can also be done digitally with a special computer, but that computer can only store the most recent set of safety T’s, so it is only useful for one firing point at a time unless the FDC records elsewhere what it calculated for the other firing points.
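The cross-check discipline described above, where several people compute the same safety values independently and only accept an answer everyone agrees on, can be sketched generically. The values and tolerance below are invented for illustration; real safety-T computation is far more involved:

```python
# Generic sketch of an independent cross-check: accept a computed value only
# if every independently computed result agrees within a tolerance.
# The numbers below are hypothetical, not real artillery safety data.

def cross_check(results: list[float], tolerance: float = 0.0) -> float:
    """Return the agreed value if all independent results match within
    `tolerance`; otherwise raise so the calculation is redone."""
    if not results:
        raise ValueError("no results to compare")
    if max(results) - min(results) > tolerance:
        raise ValueError(f"results disagree: {results}; recompute before use")
    return results[0]

# Two soldiers independently compute the same (hypothetical) safety value:
print(cross_check([267.0, 267.0]))   # agreement -> value accepted
try:
    cross_check([267.0, 271.0])      # disagreement -> redo the math
except ValueError as e:
    print(e)
```

The point of the pattern is that a single miscalculation is caught before it reaches the guns; the record of each independent computation is what makes an after-the-fact audit possible.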
None of this data is worth archiving, except perhaps a few copies for some future historian interested in how artillery functioned in the early 21st century. However, from a records management standpoint this data is critical, and without any records managers it’s up to us to see the importance of these records and preserve them ourselves.
The FDC is the biggest record creator in the artillery. The FDC’s firing computer automatically stores all incoming data from the observers and all outgoing data to the guns. It also contains a database of information on all the other units in the area of operations, all targets fired and planned, detailed meteorological information, detailed information on all the guns, and much more. Each piece of information has its own retention schedule: meteorological information, for example, needs to be updated fairly often because the weather changes frequently, and that affects ballistic trajectories. The database also contains a ton of information critical to establishing digital communications. This creates a huge preservation challenge for the artillery: we need that database to operate, and we all need to be running on similar databases. As a result, we save databases from each training event, hoping they will work for the next one. If not, we have to go through a time-consuming process of creating a new database. Because of this, there is often one copy of the newest database floating around, and when we can’t get the computer working, we find that copy, copy it for ourselves, load it into our computer, and hope it works.
The firing data stored in the computer is also useful if a round impacts in the wrong place. As on the guns, it can clear or convict the FDC of sending the wrong data or processing the mission incorrectly. It can also clear or convict the observers of sending incorrect data, although that would likely also convict the FDC of processing data it should’ve detected was wrong. This data is replicated on a standard Army form that is used both when running digital operations and when running by voice. The form records all the data sent by the observers as well as the data sent from the FDC to the guns. It can fulfill the same role as the computer data in clearing or convicting the FDC, although with varying degrees of accuracy. Sometimes the FDC sends digital data to the guns but tells them over the radio to fire slightly different, safer data. The FDC computer would not record this, but the paper copy would. On the other hand, fire missions move fast, and the recorder does not always get all the information for the paper copy, or might record the wrong data because of human error.
If the gunline is up digital, the guns’ computers will preserve records of all the firing data the guns have fired, at least until the guns move to a new firing point. If digital isn’t working, the guns receive their fire commands by voice, and the radio operator writes them down on a standard form, which is generally preserved until the end of the training and then thrown away. These records need to be kept at least until the guns leave the firing point. If a round impacts somewhere other than the intended target, these records can prove the guns fired the correct data and clear the gun crew, or prove the guns fired the wrong data and convict them. Digital records are better for this, because they record what data the gun actually fired, while the paper records what the gun was supposed to fire. If the gunner messed up and fired the wrong data, there will be a discrepancy between the gun’s record and the paper record, which could implicate the gunners and clear the FDC.
These records are technically legal documents, which means they must be filled out in black ink. If someone copies down the wrong information, they must strike it out, write down the right data, and date and initial the change. This can make for very messy, hard-to-read records, especially because artillery moves fast and recorders don’t usually have time to write in neat, careful handwriting.
Our conversations and readings about records have got me thinking about how archives relate to my job in the US Army. The army has a historical branch which of course maintains several archives, and army documents are also stored in other places, such as NARA. The army also encourages soldiers to keep records of their deployments, in some cases handing out cameras. However, in the artillery we are very prolific records creators, and most of our records never make it to an archives. So my title is a bit misleading as I’ll really be talking about records management.
The first thing to know about artillery is that there are three parts: the eyes, or observers, who see and call in targets; the brawn, or gunline, which loads and fires the rounds; and the brain, or fire direction center (FDC), which calculates firing data for the guns so they hit their target and determines how many and what type of projectiles to use. We create records in each part.
Ideally, we have digital communication between all three parts. When that is the case, the observers create a digital record for each fire mission of the information they send to the FDC, which includes target type (tanks, building, etc.) and target location (grid coordinates). If digital is down, they usually write down that information and call it in to the FDC over the radio. Neither kind of record is preserved: the digital record might survive on the device for a while, but only unintentionally, and the written records are usually scribbled on scrap paper and thrown away or lost. Given that observers move around the battlefield, often by walking or running, it’s no surprise these records aren’t kept. They can’t lug filing cabinets around.