
Giving 110%: Processing Completed!

Thursday, November 6th, 2014
Training the first cohort of processors.

I’m pleased to announce that the 12-month processing phase of the project has been completed successfully! Overall, the amazing project team efficiently processed 45 collections at 16 repositories in 50 weeks of work, totaling 1685.07 linear feet at an average rate of 3.4 hours per linear foot. We beat our goal rate of 4 hours per linear foot, which allowed us to process 145.86 linear feet more than we originally anticipated and promised to CLIR in the same timeframe. That’s fulfilling 110% of our linear footage commitment to CLIR!

I am immensely proud of the work that Jessica Hoffman, Annalise Berdini, Steven Duckworth, Megan Evans, Cortney Frank, Carey Hedlund, Alina Josan, Chase Markee, Amanda Mita, and Evan Peugh accomplished during our challenging and ambitious project. Please stay tuned to the blog in the coming weeks and months as I post some terrific reflections written by the team members, as well as my own project documentation, including discussions of the project’s challenges, what we learned this time around, and recommendations. Be sure to check out our Flickr page for action photos of processing, too. It’s been a wonderful and productive 12 months!

 

We’re back! Bootcamp, processing, and progress so far…

Friday, April 4th, 2014

New project team during minimal processing bootcamp at the University of Pennsylvania.

Hello again! Time has flown by, and we’re just getting the blog started again by recapping the current PACSCL/CLIR Hidden Collections Processing Project of 2013-2014. I assumed the responsibilities of Project Manager in August 2013, and it’s been a whirlwind of activity from the very first day. I had to quickly assess and plan how we would minimally process 46 collections containing materials from the 18th to 21st centuries, all specifically related to Philadelphia history. The project requires us to process at a rate of 4 hours per linear foot at 16 different repositories over the course of one year. In addition to 12 veteran participating repositories, we welcome four new institutions to the project, including two non-PACSCL members. With this project, we hope to refine, confirm, and better establish guidelines for applying minimal processing to a wide range of collections and types of institutions, and for creating high-quality finding aids for our ever-expanding collaborative site.

Surveying at Germantown Historical Society.

As you may recall, this project builds upon the predecessor processing project led by Holly Mengel and Courtney Smerz from 2009 to 2011. Having served as one of the processors on that project, I began my work as Project Manager already very familiar with the “PACSCL” methods and approaches established by the first team. My familiarity with these approaches, along with additional archives management experience, gave me a bit of a running start, but I immediately found that I had my work cut out for me. More about the challenges and lessons I’ve learned so far will be chronicled in later posts.

In August, I quickly got started by surveying the collections selected for the grant that had not been surveyed previously by the fabulous PACSCL Survey Initiative Project. I followed and expanded upon the guidelines established in earlier projects to assess these new collections. In September and October, I assembled a fabulous project team of six processors and one assistant, who all attended the bootcamp training week designed to provide a good overview of the PACSCL approaches to minimal processing and the Archivists’ Toolkit. After training, I assigned pairs of processors to our first three repositories (Temple University, the Philadelphia Museum of Art, and the Union League of Philadelphia) to kick off the year’s worth of processing work ahead of us.

First day of processing at Temple University.

We hit our six-month mark this week right on track, with many challenges and successes along the way that will be detailed further on the blog in the coming weeks! By our halfway point in mid-April, we will have processed approximately 762 total linear feet for 22 collections in 9 repositories, at an average rate of 3.45 hours per linear foot. Please stay tuned as we continue to add more frequent updates about our progress, lessons learned, and interesting finds!

Excel to EAD-XML to AT—the spreadsheet from heaven.

Monday, March 19th, 2012


Although it seems like a million years ago, it actually was not so long ago that our students were processing at the Independence Seaport Museum.  While we were there, we were faced with one of the limitations of our minimal processing time frames.  The archivist there, Matt Herbison (now at Drexel University College of Medicine Legacy Center), had a few spreadsheets detailing information on ships’ plans—information that made the collections truly useful to researchers.  Problem was, there were thousands of entries in the spreadsheets, and we knew that our processors could never re-key or copy/paste that information into the Archivists’ Toolkit in the time allotted for the processing.

Because we knew that this information would really make a difference for users, we thought and thought of ways to make this work, but our best solution involved saving the spreadsheet as a PDF and linking to it from the finding aid—not very elegant. And then Matt, who really is extraordinarily techie, created this amazing spreadsheet that solved the problem.  To sweeten the deal even more, he offered Courtney and me the use of the spreadsheet for the project.

I will now make a very bold statement:  this spreadsheet made it possible for us to finish the project within the time frame.  Not only did we use it at the Seaport, our processors used it for original data entry at repositories that had spotty internet connections, technical troubles, and/or did not adopt the Archivists’ Toolkit.  Our Archivists’ Toolkit cataloger used it as a starting point for almost all electronic legacy finding aids.

Matt has offered to share this spreadsheet with everyone.  It is available here, and we have created a guide for using the spreadsheet.  In a nutshell, each column in the spreadsheet maps to a specific field in the Archivists’ Toolkit.  It has three levels of hierarchy below the collection level, so it is not the tool of choice if your finding aid has sub-subseries and items, but for most modern finding aids, it is the ticket.  I should say, though, that it is not necessarily a quick process if you are starting with existing data … time needs to be taken to combine columns, format data, and check for errors.  If you know how to use regular expressions, you can really streamline some of this work.  If you are doing original data entry, the spreadsheet is an incredibly efficient way to get container lists into the Archivists’ Toolkit.

This means that anyone with knowledge of MS Excel can create finding aids and take legacy information from an electronic format to XML.  Pretty awesome! I will say that a little knowledge of EAD is very useful, and understanding the Archivists’ Toolkit will make data entry decisions easier.  Many of our students preferred working with the spreadsheet rather than the Archivists’ Toolkit, but it is a matter of preference.  I think it is a little harder to see the hierarchy when using the spreadsheet, but it is a thousand times easier to fix errors in Excel than in the Archivists’ Toolkit.  Check it out, try it out, and see if it changes your life.
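For the curious, the basic column-to-EAD idea can be sketched in a few lines of Python. This is only a toy illustration of the approach, not Matt’s spreadsheet: the column names, levels, and sample rows below are invented for the example, and the real spreadsheet maps many more Archivists’ Toolkit fields.

```python
"""Sketch: turn a spreadsheet-style container list into minimal EAD XML.

Hypothetical columns ('level', 'title', 'date', 'box', 'folder') stand in
for the kind of column-to-field mapping described above.
"""
import csv
import io
import xml.etree.ElementTree as ET

SAMPLE = """level,title,date,box,folder
series,Correspondence,1770-1818,,
file,Letters of Susanna Dillwyn Emlen,1795-1805,1,1
file,Letters of William Dillwyn,1770-1790,1,2
"""

def rows_to_ead(csv_text):
    """Build a flat two-level <dsc>: series rows open a series, file rows nest inside it."""
    dsc = ET.Element("dsc")
    parent = dsc  # the current series component, once one is seen
    for row in csv.DictReader(io.StringIO(csv_text)):
        target = parent if row["level"] == "file" else dsc
        c = ET.SubElement(target, "c", level=row["level"])
        did = ET.SubElement(c, "did")
        ET.SubElement(did, "unittitle").text = row["title"]
        if row["date"]:
            ET.SubElement(did, "unitdate").text = row["date"]
        if row["box"]:
            ET.SubElement(did, "container", type="box").text = row["box"]
        if row["folder"]:
            ET.SubElement(did, "container", type="folder").text = row["folder"]
        if row["level"] == "series":
            parent = c  # subsequent file rows nest under this series
    return ET.tostring(dsc, encoding="unicode")
```

Even this toy version shows why the spreadsheet route pays off: the tedious part (repetitive tagging) is mechanical, while the thinking (titles, dates, hierarchy) stays in the spreadsheet where it is easy to edit.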

Yes, I did say that … I think it could change your life!

Thanks SO much to Matt Herbison!

Legacy finding aids: a trial (by any definition)!

Monday, February 13th, 2012



77 “substandard” or legacy guides are now in the Archivists’ Toolkit and final editing is underway.  And I am happy about that … however, almost none of these look as good as they could or should.  Garrett Boos, Archivists’ Toolkit cataloger, and I spoke many times about the limitations of this part of the project.

We decided that there were several problems:  working remotely from the collections; the format, structure and quality of the finding aids that were given to us; and, to be perfectly honest, our own expectations for the final product.

Before Garrett started, I decided that working remotely was going to be the most logical way to approach this part of the project.  Garrett worked in our office at Penn and entered the collections into our own instance of the Archivists’ Toolkit.  We then exported the finding aids from his AT and imported them into each repository’s instance of the Archivists’ Toolkit.  I decided to have Garrett work at Penn primarily because of logistics—otherwise, he would have had to work at 18 different repositories and, as we have learned, technology and space are two of the greatest challenges of the project.  Not to mention the instances when security clearances would need to be run, etc.  However, now that Garrett is done with the project, I have been trying to decide if it would have been better for him to work on-site, and I am torn.  On the one hand, it would have made a lot of factors easier—especially checking on locations, vague titles and missing dates, to name only a few.  On the other hand, it would almost certainly have stopped being a “legacy finding aid conversion” project and turned into a “reprocessing” project. So I guess I need to stand by my decision to work off-site, even if it was limiting.


The reason I say that it would have turned into a “reprocessing” project is that Garrett and I think that at least 60% of the collections should have had some physical and intellectual work before the finding aid was considered final.  As with all aspects of this project, the legacy finding aid component was an experiment, and therefore the grant allowed repositories to send us any “substandard finding aids.” This resulted in several types of “tools.”  Garrett took them all on:  lists, card catalogs, databases and more traditional finding aids.  The biggest problem we found was that very few of these guides were organized hierarchically, which meant that we had to do a lot of guessing—was something a folder, or was it an item?  Should the paragraph connected to a folder title be added as a scope note, or was it actually part of the folder title?  What to do with the information about the contents of a letter, or the condition of the material?  What happens when there is no biographical/historical note and no scope and content note?  Thank goodness for email and helpful repository staff!

I should say that there were a number of finding aids that came to us in absolutely perfect shape … putting such a finding aid into the Archivists’ Toolkit was a piece of cake, and the resulting finding aid was beautiful. Others, written before finding aids were standardized, did not work nearly so well. Because we forced non-hierarchical guides into AT, a system designed to organize information hierarchically, some of the finding aids are actually less user-friendly than the originals. Many of these legacy guides had item level description, something our stylesheet doesn’t handle well, resulting in what Garrett and I have termed “really ugly finding aids.” Moreover, of 77 finding aids, only 15 did not require some enhancement of biographical/historical or scope and contents notes—which is pretty tricky when working off-site. Titles and dates almost always needed to be reformatted for DACS compliance. Our primary goal was to maintain every bit of information that was in the original, but it worries me that we have created online guides that are potentially overwhelming and off-putting to researchers.

Some repositories have told me that I should not worry—that getting the guide online is enough.  Others, though, I know are really disappointed with the result. We surveyed our participating repositories about the effectiveness of the project and their satisfaction, and while we have not heard from all, the component of the project that proved least satisfying is the legacy finding aid component. I know that it is, by far, the part of the project with which I am least pleased.

Does this mean that you should not do a legacy finding aid conversion project?  No!  Do a legacy finding aid conversion, but do it with some structure and guidelines!  In order to have a successful legacy finding aid conversion project, we learned that repository staff will have to do some (or a lot of) front-line work prior to unleashing the guide on the cataloger.

Before handing over a finding aid, repository staff should identify (in pencil is okay):

• Folder title (underlined in one color)
• Folder date (underlined in another color)
• Box number
• Folder number
• If there is additional material, into what field in the Archivists’ Toolkit/EAD should it be entered?
• Biographical/historical note (does not need to be narrative, but the information should be provided by an “expert”)
• Scope and content note (same as the bio note)
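The checklist above lends itself to a mechanical completeness check before a guide goes to the cataloger. Below is a minimal Python sketch of that idea; the field names (folder_title, bioghist, and so on) are my own illustrative choices, not part of the project’s actual workflow.

```python
# Sketch: a pre-conversion completeness check for a prepared legacy finding
# aid, following the checklist above. Field names are illustrative only.

REQUIRED_ROW_FIELDS = ("folder_title", "folder_date", "box", "folder")
REQUIRED_NOTES = ("bioghist", "scopecontent")

def ready_for_cataloger(rows, notes):
    """Return a list of problems; an empty list means the guide is ready to convert."""
    problems = []
    for i, row in enumerate(rows, start=1):
        for field in REQUIRED_ROW_FIELDS:
            if not row.get(field):
                problems.append(f"row {i}: missing {field}")
    for note in REQUIRED_NOTES:
        if not notes.get(note):
            problems.append(f"missing {note} note")
    return problems
```

A check like this catches the gaps (no dates, no scope note) before they become the cataloger’s guessing game, which is exactly the front-line work the checklist is asking repository staff to do.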

If, as you go through this process, it becomes obvious that reprocessing is necessary, take the collection off your conversion list and place it on a priority list for processing.  Processing the collection may be speedy, and your result will almost certainly be better! In fact, I think, in some cases, we spent more time forcing data into AT than it would have taken to reprocess the collection.

Identifying these essentials should result in finding aids that are more standardized and allow researchers greater access to your awesome stuff. Don’t count on it being a quick process, however: the prep work is time consuming, the conversion is time consuming, and the proofing and editing are REALLY time consuming. This is not a task that can be placed only on the person converting the finding aid … even after the finding aid was in AT, Courtney and I, with fresh pairs of eyes, found lots of mistakes in spelling, hierarchy and grammar which would have been embarrassing and, even worse, could have prevented people from finding that for which they were looking. Which is, of course, the whole point of all our work!

Description in MPLP is counter-intuitive

Tuesday, February 7th, 2012

Courtney and I both felt strongly, from the very beginning of the project, that sacrificing description for speed was a risk in this project.  Although we know that every collection could still use additional work, we worked hard to ensure that repositories did not feel additional work was necessary before making the collections public.  Moreover, we knew from the start that many of the collections would NEVER be worked on again.  Unfortunately, that is just how it is.


So what have we learned about description?  We learned that description takes a lot of time—in fact, that is probably the first thing we learned in this project, when we tested the manual and discovered that even an experienced processor could not arrange and describe a fairly straightforward collection from start to finish in 2 hours per linear foot.  As a result, Courtney and I created processing plans that included a preliminary biographical/historical note before processing started.  We have learned that it generally takes roughly the same amount of time to describe a collection as it does to arrange it.

I’m not going to lie … I am pro-description … few things give me more professional pleasure than a beautifully crafted folder title or a paragraph in a scope and content note that I know will help a user determine if this collection is going to help them with their research.  That is the whole point—letting researchers know that we have the stuff that they need.  As a result, the PACSCL/CLIR team took it seriously.  Description is the one part of training that has probably evolved most over the course of the project.  We developed exercises to help our processors write better and more descriptive folder titles and structure notes so that they are both concise and informative.  The project didn’t have a lot of time, so we trained our processors to think like a user and to quickly assess the contents of a folder.  For the most part, we are really pleased with our finding aids, and I think, nine times out of ten, researchers will be able to determine from the finding aid whether the collection is worth their time.

One of the really interesting things we learned is, to me, still the most counter-intuitive.  A collection with extremely tidy existing arrangement usually results in a collection with less thorough description.  I am going to use two specific collections to illustrate this issue.

The first collection is the Dillwyn and Emlen family correspondence, 1770-1818, housed at the Library Company of Philadelphia (unquestionably one of my favorite collections in this project—as well as being one of my biggest disappointments, archivally speaking).  When I sat down to process this collection, I was really confident—the collection was 2 linear feet and was already arranged.  At one point in time, it had been bound in volumes, and at another point, the letters were removed from the volumes and placed in very acidic folders.  Every letter had a catalog number written on the document.  While a few of the letters were out of chronological order, the vast majority of the collection was arranged very effectively, with each folder containing letters from a span of dates.


This collection desperately needed to be re-foldered.  Not only were the folders highly acidic, but they were too small and some of the letters were showing a bit of damage.  I re-foldered the 130 folders in the collection which took about 2.5 hours.  Then I entered the folder list into the Archivists’ Toolkit which probably took only about 15 to 20 minutes.   So in roughly 3 hours (three quarters of my allotted time), I had the collection rehoused and the folder list in the Archivists’ Toolkit, which left me 1 hour to write a scope and content note.  Should have been easy, right? Well, no. Because this collection was perfectly arranged, I did not need to look at even one document in order to create the container list.  Moreover, the container list is not very helpful to a researcher.  All it contains is a list of dates which means that the scope and content note should be full of the subjects addressed in the correspondence.  Problem is, I did not know anything about the letters.  There was no way that I could read enough of the letters in an hour to discover all the topics addressed in the letters that will almost certainly be interesting to researchers.  I did my best—I valiantly scanned through as many letters as I could and wrote down key topics that popped up more than once or twice, and as each minute passed, my heart sank just a little more—I knew perfectly well that I could never do this extraordinary collection justice, even with twice the time.  Prior to beginning processing, I had performed my research for the biographical note and I had discovered that several authors had used portions of the collection in their published works … so I turned to them for expertise on this collection.  They wrote about only a tiny portion of the collection, Susanna Dillwyn Emlen’s bout with breast cancer.  
I soaked up every bit of information in their books and included it in my scope note in order to give users the most information possible, but I feel like the project failed this collection.  Perhaps I feel this so strongly because I had been so confident in significantly improving access to it.


I have beheld the second collection, the Belfield collection, 1697-1977, housed at the Historical Society of Pennsylvania, with equal amounts of awe, excitement and horror since I first laid eyes on it.  Never have I seen such a mess of a collection—please see just a few photographs as words cannot effectively describe the condition of this collection.  Courtney and I spoke with Matthew Lyons of HSP and he said that he was not expecting much more than good box level descriptions of the contents.  Even with these reduced expectations, we thought it wise to double our forces and therefore, Michael, Celia, Courtney and I all worked together on this collection.  I am happy to say that this collection will, for quite a few series, contain folder level description, but even more than that, the scope and content note for this collection is rich, deep and full of the flavor of the four generations of family who lived at Belfield.

So why does a collection that was the biggest (filthiest) mess of all time result in a better finding aid than a small and beautifully arranged collection?   I know it is because we were forced to sift through the messy collection in order to create any order, and it is amazing how much one absorbs simply by looking at the collection.  In the end, I feel that this is one of the biggest rapid maximal processing successes of the entire project.  We took the collection from utterly unusable chaos to an order that could certainly be refined, but is beyond serviceable.

When selecting collections for a minimal/rapid maximal processing project, consider your time frames and what result you want from the project.  If you want a container list in a hurry, select a well-organized collection.  If you want fuller description, a collection that needs some arrangement will probably be the best choice.  From a purely selfish perspective, I would pick a wreck of a collection over a tidy one every time—the sense of accomplishment and success is so much sweeter than that despair I still feel when I think of Dillwyn and Emlen letters.

I mentioned in an earlier blog post that there are about 3 collections that I don’t feel benefited enormously from this project.  In every case, the collections had existing arrangement that either prevented me from starting from scratch or was in good enough order that I did not learn valuable content that I could then share with researchers.

The decision to minimally process should be a collection-by-collection decision …

Friday, January 27th, 2012

Fairly early in this project, Courtney and I determined that “MPLP 2 Hours” was not going to be a wholesale success—most collections simply cannot be processed in that time frame, regardless of the shortcuts taken (our average across the board is 3.2 hours per linear foot).  And in some cases, those shortcuts resulted in a product that we did not feel was more useful to a researcher post-processing.  What we have determined is essentially this … it is difficult, if not impossible, to say that collections can be processed in a set or determined amount of time, but it is possible to make educated estimates allowing us to allocate human resources to process collections efficiently.

There are several factors that allow us to better determine a time frame for the processing of collections:  age, type of collection, and original arrangement of the collection are the three biggies. None of these factors work independently—they are all intertwined to help determine the time frame.  So, based upon the data collected for 125 collections, processors have physically processed collections with the oldest material dating from the:

17th century at an average of 4.1 hours per linear foot;

18th century at an average of 3.3 hours per linear foot;

19th century at an average of 3.4 hours per linear foot;

20th century at an average of 2.9 hours per linear foot.

Processors have processed:

artificial collections at an average of 3.6 hours per linear foot;

institutional/corporate records at an average of 2.5 hours per linear foot;

personal papers at an average of 3.7 hours per linear foot;

family papers at an average of 4.2 hours per linear foot.

Age seems like it should be the most logical factor, but in fact, it has proven to be the least certain factor in our ability to judge the time frame for processing.  We thought originally that old collections (pre-1850s for certain) would take us significantly longer to process, but this is not necessarily the case.  Age does not seem to prevent us from efficiently processing an “old” collection.  Age does, however, quite frequently prevent us from describing the collections well.  Quickly skimming for content in folders of 17th, 18th and 19th century handwritten material is not easy—and it absolutely results in less thorough description.  However, if the collection is arranged and available for research use, perhaps this is where we ask for help … as researchers use the collections, we can ask them to provide more robust description of what the correspondence, journals, etc. contain.  Finding aids CAN be iterative … especially with technology such as the Archivists’ Toolkit.  “Newer” collections may or may not be easier to process … certainly there is more typewritten material, which makes it immediately easier to categorize series/subseries/folders and to describe the contents of the folders more thoroughly.  However, in the end, the ease of the processing relies more heavily on the type of collection than the age.

For this project, we have divided collections into four basic types:  institutional/corporate records, personal papers, family papers and artificial collections.  Again, there is no one-size-fits-all … each collection is unique (is that not why archival collections are so awesome?).  Generally speaking, though, an institution or company’s records can be processed most quickly, followed by personal papers and then family papers.  Artificial collections are usually the fastest or the slowest, depending entirely upon the collector.  Usually, they are speedy—the collector is in love with the topic they are collecting, and as a result, they arrange the collection for their own personal satisfaction and use—all the letters of a children’s book author are arranged chronologically by date sent or alphabetically by the recipients’ names, for example.  If this is the case, the artificial collection is a dream to process, and it usually requires only description.  In a few instances, however, we have found collections where the collector simply collects … they probably know that the stuff is important, but they are not organizers.  At that point, trying to create a system out of a group of randomly acquired material can be quite difficult.

Institutional and business records are usually quick and easy and this is because the functions of a business or an institution generally follow the same basic structures and are fairly predictable.  Usually, you will find financial records, minutes, committee records, administrative records, subject files, correspondence, etc.  Because the function generates the records, it is logical and easy to determine a good organizational scheme for the papers.  But as always, the collections are unique and we have found that different creators generate different levels of tidiness, logical order, and structure.

Personal papers are the next quickest to process (generally speaking), especially if the creator was involved in several major movements, careers, and/or activities.  However, the ability to efficiently process a person’s personal collection often depends upon how intermingled those pursuits are with family, friends, and work.

Family papers have been, fairly consistently, the most time-consuming collections to process.  The problems that arise with family papers that generally do not exist with personal papers are the intertwining relationships that make determining to whom a certain group of materials belongs challenging, and sometimes, impossible.  When every generation in a family has a woman named Sarah, determining generations becomes a trial.   Many a day passed at the Historical Society of Pennsylvania with the following conversation: “So wait, this is Sarah Logan Wister Starr?”  “No, this is Sarah Logan Starr Blaine!”  Or:  “Here is a letter to Grandma Sarah from Sarah … does that mean it is Sarah Logan Starr Blaine?”  “No!  It could be Sarah Logan Starr Blaine OR Sarah Logan Wister Starr OR Sarah Tyler Boas Wister!”  Egads … I wanted to buy a baby name book for this family!  Not surprisingly, this kind of questioning takes time … lots of time.

The third main factor in determining time for processing a collection is existing arrangement.  A collection of 20th century business records thrown into boxes will take longer than a collection of 18th century business records that are housed in volumes.  A collection of family papers organized by the donor into distinct family members’ papers can probably be processed more quickly than a collection of personal papers that is completely unsorted.  I have intentionally not used the term original order, which implies that the order was generated by the creator.  Existing arrangement may have been generated by the creator, but in many cases, it is generated by an archivist who starts processing the collection but does not complete the project.  Unfortunately, the hardest collections to process efficiently are often collections that someone else has started to process.  Trying to understand an undocumented order that has been imposed, or to continue with an arrangement scheme that does not seem logical, is much more difficult than imposing order on absolute chaos.  And without question, the collections that take the absolute longest are ones in which parts of the collection have received item level treatment.  The next blog post will address how this type of existing arrangement affects the description of collections.

So, basically, what we have said here is that every collection is different and unique, and there is absolutely no way to say that one time will work even within a date frame or a type of record. Our observations are backed by Greene and Meissner, who say that “MPLP … advises vigorously against adopting cookie-cutter approaches … and [recommends] flexible approaches” (page 176).  In order to make educated estimates for allocating resources, we believe that a base-line starting time frame is needed:  institutional/corporate collections should be given 3 hours per linear foot.  Based upon the existing arrangement, tack on another hour per linear foot if it is in a shambles.  If the bulk of the material is from the 18th century, tack on yet another hour per linear foot for increased perusal time, which will result in more effective description.  So, in this case, your estimated processing time is 5 hours per linear foot.  Could you do it in three?  Yes, probably.  However, with allowances for age and existing arrangement, you will almost unquestionably have a better product, still at just over ½ the rate of traditional processing.

Based upon our experience, the PACSCL/CLIR project believes that the following base-line processing time estimates would work well:

Artificial collections:  3 hours per linear foot

Institutional/corporate collections:  3 hours per linear foot

Personal papers:  4 hours per linear foot

Family papers:  6 hours per linear foot

Our averages clearly show how quickly collections can be processed … but the base-line estimate with upgrades allows us to provide the best possible product while being mindful of available resources.
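As a rough sketch of how these base-line numbers combine with the add-ons described above, here is the estimating logic in a few lines of Python. The function name and flags are illustrative only; the project itself used judgment, not a script.

```python
# Base-line rates from the post, in hours per linear foot.
BASE_HOURS_PER_FOOT = {
    "artificial": 3.0,
    "institutional": 3.0,
    "personal": 4.0,
    "family": 6.0,
}

def estimate_hours(linear_feet, collection_type,
                   poor_arrangement=False, pre_1800_bulk=False):
    """Estimate total processing time using the base-line-plus-upgrades rule."""
    rate = BASE_HOURS_PER_FOOT[collection_type]
    if poor_arrangement:   # the collection is "in a shambles"
        rate += 1.0
    if pre_1800_bulk:      # the bulk of the material is 18th century or earlier
        rate += 1.0
    return rate * linear_feet

# The worked example from the post: an institutional collection in a shambles
# with 18th-century bulk material comes out to 5 hours per linear foot, so a
# 10-linear-foot collection would be budgeted 50 hours.
print(estimate_hours(10, "institutional",
                     poor_arrangement=True, pre_1800_bulk=True))  # → 50.0
```

The point of the sketch is the structure, not the numbers: a base rate by collection type, plus a flat hour-per-foot surcharge for each complicating factor, yields an estimate you can defend when allocating processors.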

Historic recipes: always a great way to celebrate!

Tuesday, January 24th, 2012


At the end of 2011, the PACSCL/CLIR “Hidden Collections” project gathered our processors, our repository staff, and our extraordinary helpers together to celebrate the successful completion of the project.  It was a way for Courtney and me to thank everyone who worked so hard and made this project work!

We celebrated in the beautiful Ewell Sale Stewart Library and Archives at the Academy of Natural Sciences, Philadelphia, thanks to the generosity of the archivist, Clare Fleming.  And many of our project team brought dishes, straight out of the past!  As we processed, the foodies among us took photographs of recipes we found in the collections, so it turned out that we had quite a nice pile of historic recipes to choose from when selecting our fabulous menu.  Photographs of our recipes can be found in our Flickr set Eating in the Archives.

I made five recipes, and it occurred to me as I was running off to the grocery what a different world we live in from the late 1700s and 1800s.  For example, to make my five dishes, my ingredient list included:  butter, shortening, flour, eggs, baking powder (lots of it!), a little sugar, milk, rice and a few spices.  I tend to think of myself as rather in touch with history, but I remember sitting for a few moments staring at the list and thinking, “what of my fabulous vanilla from Mexico?  what of cocoa?  what of lemon zest?”  I also remember thinking, in a cold sweat, of what I would have to eat in the midst of December if it were not for grocery stores, airplanes, ships, railroads, commercial farms with irrigation systems, etc., bringing fresh fruit and exotic ingredients from around the world.  The cold sweat returned as I baked–I have a whole new appreciation for Epicurious and cookbooks with instructions … I decided not to make “soft gingerbread” because the recipe included a list of ingredients, but no other instructions.  I am pleased to say that my braver colleague, Sarah, made the gingerbread with great success.

Now that I have talked about food (one of two conversations everyone eventually has with me–and usually sooner rather than later), I would like to publicly thank a few people:  our amazing project team; repository staff, who took us in and trusted us with their world-class collections; UPenn for hosting Courtney and me; Laura Blanchard, PACSCL staff member extraordinaire; Delphine Khanna, who is responsible for our fantastic PACSCL Finding Aids Site; Matt Herbison, who created a spreadsheet of wonder that helped make our project succeed (blog post on this forthcoming); Christa Williford from CLIR for all her support throughout the last 2.5 years; and Christine DiBella, who was responsible for the PACSCL Survey Initiative and helped me out, so much, particularly at the beginning of the project.  And finally, Courtney Smerz, who has brought her archival skill, pride of work, and enthusiasm to this project.

We have until the end of March to pull together all our loose ends and then we will leave behind this amazing project.  Thanks PACSCL and CLIR for this amazing opportunity!  I have enjoyed every minute!

Last minute holiday gift for the archivist in your life …

Thursday, December 22nd, 2011

Over the last two years, Courtney and I have had 17 graduate students work for us and we appreciated every minute of the time and energy that they extended to the project. So, as a gift at the end of their service to the project, we gave them what we like to call the Archivists’ Kit Bag (the obvious Archivists’ Toolkit being taken). We hoped that this bag of tools would make them ready for whatever job came their way–and a job came for each and every one of them!

I personally want one of these kit bags, as do Courtney and a few other repository staff members who have seen them, so if you are still looking for the perfect gift for the archivist in your life, consider putting together this bag of goodies. We bought some tools from Gaylord, but you can actually go to craft stores (Dick Blick, Michaels, A.C. Moore, etc.) to get the bulk of it, if you are in a hurry.

Our kit bags included:

  • Bone folder
  • Micro spatula
  • Mechanical pencil with extra lead and erasers
  • Eraser
  • pH pen
  • Knife
  • Measuring tape
  • Plasti-clips
  • Gloves
  • Notebook

Our original bags were made by John Armstrong, surveyor during the PACSCL Survey Initiative, but after he moved to New England, we had bags made by an artist on Etsy who is, unfortunately, no longer able to make them for the price he had originally quoted.  They are waxed cotton and pretty awesome; however, a pencil case would work just as well–just make certain the bag is big enough to hold the micro spatula.

Your favorite archivist will love it!

100 Collections Processed: Rapid Maximal Processing

Wednesday, May 18th, 2011

Today is an exciting day—we have completed processing our 100th collection!  And a collective sigh of relief is escaping our lips as we become more and more certain that we will complete the project by August 31!

So … 100 collections!  Over the next few weeks, I plan to write a few posts about what we have learned via the project.  With a hundred collections that range across 5 centuries, 4 “types” of collections, and too many topics to name, we have enough data to really talk about lessons learned!

Today, though, I want to talk about what minimal processing has meant during the project.  Thus, the first thing I am going to talk about is the term “minimal processing.”  Over the last few months, I have reread Greene & Meissner’s original and follow-up articles.  Their second article, which reinforces and further explains their first, states that an archivist must examine the resources available and then use them wisely to carry out the ethical/moral responsibilities of the profession:  to make collections available to researchers.  I have also reread Rob Cox’s Maximal Processing, or, Archivist on a Pale Horse.  Cox’s goals match Greene & Meissner’s (to make collections available to researchers as quickly as possible), but one of the main differences in their philosophies seems to be with regard to description.

The PACSCL/CLIR project’s current approach blends Greene & Meissner’s “minimal” physical work with Cox’s “maximal” descriptive work.  Like so many other institutions, we have created, from two amazing philosophies, a workflow that works for us.  We have borrowed liberally from both Greene & Meissner, who state that MPLP does not require or recommend a cookie-cutter approach to processing, and Cox, who states, “the term maximal processing is intended to frame our activities in terms of our highest aspirations—to provide the maximum support for our researchers—to emphasize what we can accomplish rather than lament what we cannot” (Cox, page 147). If we are not minimally or maximally processing collections, what ARE we doing?

Rapid Maximal Processing:

I am going to argue that we are doing “rapid maximal processing.” We are looking at every collection individually and determining, on a case-by-case basis (as recommended by Greene & Meissner), how we can provide the maximal support for our researchers (as recommended by Cox) using the available resources (which, in our case, are bare bones).  We have determined, for the most part, that we want our resources to go towards description, not physical care of the collection, and so we ask ourselves:  What series need more attention, and what series need less?  If we do a little more with the series that we anticipate will receive the most research, what sacrifice is made when we necessarily do a little less with a series that we think provides less unique or helpful information?  Most importantly, are we using our available resources–2 hours of student processor mind and body power for each linear foot–to efficiently create the most useful and accurate guide we can?

Description in a Rapid Maximal Processing setting:

Courtney and I have seen description for the PACSCL/CLIR “Hidden Collections” Processing Project as one of the most important final products of the project.  Again, we tend to lean towards Rob Cox’s Maximal Processing, where he encourages his staff to “seldom skimp on description–the Velcro of the archival world” (Cox, page 145).  Greene & Meissner, however, state that the narrative segments of finding aids are less desired by researchers than the container lists—and that “extended narratives are created not for the users, but for the archivist authors” (Greene & Meissner, page 213).  I believe that this may be true, but I am not sure that the archival author should be ignored here—writing a concise and well-thought-out biographical/historical note and scope and content note is a way for an archivist to organize the knowledge and collection information that they absorbed while processing the papers and to share it with the researcher, other archivists, and reference staff.  I feel that this is particularly important when the bulk of processing is done by project staff who move on after the processing is completed.

Even with brief exposure to a collection, it is amazing how much the processor learns—and as a researcher, I would want to know where the gaps and the strengths of the collection exist.  We have found that a well-structured scope and content note reinforces the logical structure of the physical and intellectual arrangement.  When training our processors, we tell them that the container list needs to have some sort of arrangement and as they organize the collection, they should think about writing the scope note.  If they cannot explain the arrangement they are imposing or that already exists, it is almost certainly not legitimate.  We also remind them that the only reason to write a finding aid is so that a researcher can find the material listed therein.  Having the processors justify their description is an important part of processing, especially in a rapid maximal processing setting.

Project Accomplishments … and what we could do better in a future project!

Student processors (who deserve so much credit in this project) have processed institutional/corporate records, personal papers, family papers, and artificial collections ranging from the 17th to 21st centuries at an estimated average rate of 2.5 to 3 hours per linear foot.  100 collections in, the project has processed 2,443 linear feet in roughly 6,000 hours.  At a traditional processing rate (8 hours per linear foot), this linear footage would have taken 19,544 hours … which is about 9 years of dedicated processing work for a full time professional archivist.
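The arithmetic behind those figures can be checked in a few lines.  This is just an illustrative sketch; the 2,080-hour full-time work year is my assumption for converting hours to years.

```python
linear_feet = 2443

# Project rate: roughly 2.5 hours per linear foot
project_hours = linear_feet * 2.5        # about 6,100 hours

# Traditional rate: 8 hours per linear foot
traditional_hours = linear_feet * 8      # 19,544 hours

# Full-time years at an assumed 2,080-hour work year (40 h x 52 wk)
years = traditional_hours / 2080         # about 9.4 years
```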

There is no question that, with possibly 3 exceptions (to be addressed in a forthcoming blog post), the collections processed by this project are significantly more accessible to researchers despite the limited amounts of time spent on them.  As I have said in every public statement (written and verbal), 2 hours per linear foot is too short a time to be allotted to collections wholesale!  The amount of time needs to be assessed, along with the level of processing, on a collection-by-collection basis.  For the PACSCL/CLIR project, every collection could use more work.  This project is ideally a first step, although in many cases, it will almost certainly be the only step taken. Despite this, I hope that archivists and users will be able to identify the true gems in each collection.  At that point, archivists can re-evaluate their available resources and make educated and use-based decisions about the best allocations for their resources.

Researchers will need to work a little harder, in many collections, to try to find the desired material—but at least they have access to the collection! Reference staff may have to work a little harder to help researchers, but again, they have access to a finding aid that will hopefully provide a framework within which to work.  In the end, though, if we look at the results of the project through the researcher’s eyes and the staff’s eyes, everyone wins!  The gains absolutely outweigh the sacrifices.  And when I think of what collections we would have cut to spend more time on a select few—it is like Sophie’s Choice!  I love them all!  If we did not work at the speed we did, the unavoidable result would be that some of these amazing collections would be sitting on shelves and researchers would be unable to use them.  Whenever I regret the speed at which we need to work, I remember that more than 100 collections will be available to the public by August 31 and I accept the limitations with a smile.

Sources:

Cox, Robert S.  “Maximal Processing, or, Archivist on a Pale Horse,” Journal of Archival Organization, 2010 November 24.

Meissner, Dennis and Mark A. Greene.  “More Application while Less Appreciation:  The Adopters and Antagonists of MPLP,” Journal of Archival Organization, 2011 February 26.

It turns out that business records are FASCINATING

Monday, March 14th, 2011

When I was preparing to process the Thomas Leiper and family business records at the Library Company of Philadelphia, I was a little less excited than I usually am—although one would think that I have learned not to judge a collection by its type (in this case, business records).  This collection is an absolute treasure trove—and will be amazingly useful for so many different researchers, especially those interested in early American business, the tobacco and quarrying businesses, workers, estate management, and the American Revolution.

There were a couple of volumes in this collection that I found particularly fascinating.  First, there are the letter books, which are largely business related, but are peppered with copies of more personal letters.  Leiper, in addition to being an intrepid businessman, was also a patriot.  Based upon some of the letters, he was clearly an advocate of independence, and in order to prepare for this dramatic step, he helped found and later served in the first Light Troop of the City of Philadelphia.  He was actively involved in the city’s goings-on and as a result, his letters are full of news and updates on the events of the day.

As mentioned before, Leiper was quite the businessman.  He owned businesses in the tobacco and quarrying fields, and as a result of his success, he purchased land for further business developments and worked extensively for improved transportation in Pennsylvania.  If that is not enough to make the collection pretty amazing, Leiper’s business interests seem to have been inherited by his descendants, and some form of these businesses as well as a few new ones continued into the 20th century.  One of the volumes relating to Leiper’s quarrying business contains a roster of early American stone masons and builders.  As a historian interested in how the “common man” (and woman) lived, I was quite enthralled with volumes entitled “Wage Book” and “Work Book,” which can be found with the quarrying and tobacco business records, respectively.  The quarry business is documented via the “Wage Book,” which effectively shows the cost of running a business from 1833 to 1839 with information on the cost of boarding workers, wages, freight bills, vessel charges, and expenses for the business and the people who supplied services.  The “Work Book” contains information about Leiper’s workers in the tobacco business from 1776 to 1795:  their names, the type of work they performed, their hours and their wages.  Both are a great snapshot of what it was like to own a business in the 19th century and serve as a laborer in the 18th century.

All in all, this collection was a surprise for me and in the small amount of time I was able to look through the volumes, I was excited to find a few of the many hidden gems located in this collection.  Also, I love collections where I can go into the community and find remnants of their work.  The Thomas Leiper and Sons quarrying efforts live on … you can see their quarried stone at Girard College, Swarthmore College and the Leiper Church.  It would take quite an expert to locate, but apparently, his stone is also found throughout Philadelphia in curbstones and steps for city homes. We may even thank him (or curse him) for some cross-Pennsylvania roads.

Ahh, history … it is all about us … we just need to use archival collections to know where to look!