I Love It When a Plan Comes Together!

Written by Annalise Berdini on November 6th, 2014

One of the lessons I’ve learned over the course of this project is that, despite your best efforts, processing will often hit snags that slow your pace and extend processing time. When you’re aiming for 4 hours per linear foot in order to stay under the minimal processing time requirements, that can definitely cause some problems. While my partner, Steve, and I have had collections that matched the MPLP requirements closely enough to stay within that deadline, there have been times when it was a struggle to make the timeline work. Some collections forced us into item-level processing; others surprised us with accessions that had been completely removed from their original homes or reordered for no apparent reason. These slowed our processing considerably.

How not to store blueprints.

But then there were the Hahnemann University Academic Affairs records at the Drexel University College of Medicine Legacy Center Archives and Special Collections. This collection has been by far the best match for the MPLP requirements at this point in the project, despite being the largest collection with which we’ve worked. It consists of 250 linear feet of Academic Affairs records from all the various iterations of Hahnemann University, including the Homeopathic College of Pennsylvania, Hahnemann University, and even a few records from its current Drexel University College of Medicine title. The collection also came to us quite disjointed, with multiple accessions often originating from various faculty members’ offices or departments within the college, which made for a lot of overlap. Despite this small challenge, however, the records themselves were in great shape for MPLP. None had been previously processed (aside from one small “collection,” whose enterprising owner had taken all the records out of their folders and stacked them loosely in a Xerox box, destroying most of the original order). Additionally, because the records came from specific offices and departments, they were often far more consistently organized than personal papers, making it easier to find links between the contents and to figure out what certain folders contained without excessive detective work.

Because we did not have to focus on item-level processing or on reworking previously written folder titles, we were free to focus on carefully constructing DACS-compliant folder titles, and physical processing was that much easier, as many of the separate “collections” were left intact and made into series or subseries. For example, Series II of this collection consists of administration and faculty records. We created subseries based on the faculty member or department from which the records came, which meant very little reorganization, since these records were already split this way.

A student from 1883 -- what fine hair!

Because we spent less time on archives “detective” work, we were able to come up with some methods to streamline the process even further. My favorite of these arose when it came time to create the container list. In the past, we did data entry first and wrote the scope note after all the physical arrangement was complete. This time, we wrote the scope notes as we created our container list. It seems like common sense now: our memory of what each series and subseries included was fresher, and we were able to make preservation and digitization notes as we went along. It also helped us track some of the connections between series and double-check records that were especially unique within their series. I thought that looking over all those records again would extend the data entry process, but this time around Steve and I worked separately on different series, cutting data entry time in half and allowing us to become ‘experts’ on certain sections of the collection. This reinforced the knowledge we had already gathered while working on the collection and made writing the scope note that much easier.

Aside from the collection being well suited to MPLP, Steve and I also divided our roles more efficiently this time around, splitting up data entry, re-boxing, and physical arrangement duties. Having more time in one institution was also helpful, although a variety of ‘snow days’ meant that, despite finishing about 8 weeks ahead of schedule, there were still a couple of wrenches thrown in that could have considerably stalled us had we been working with a less ideal collection.

250 feet of beauty!

The takeaway here is that minimal processing works much better for some collections than for others. Repositories looking to get through some of their backlog should carefully consider that not all collections will yield a 2-4 hours per linear foot result, even with MPLP methods applied. Previously processed collections in particular often make that result extremely difficult to achieve. If a processing archivist is given a previously processed, item-level collection with vague folder titles and no obvious original order, MPLP is probably not going to function the way one might hope. However, when the right collection is chosen, the result can be records ready for researchers in a fraction of the time.