In case you missed it, there was a very provocative session on “More Product, Less Process” (MPLP) at the recent SAA Annual Meeting. (You can learn more by ordering a recording of session 501, or any session, here.) I was not in this session, but I was following the conversation about it on Twitter, which got quite animated. Recently, a conversation has started on the Archives & Archivists listserv about how MPLP has affected people’s decisions regarding processing. (If you’re not on the list, you can see the messages by searching for “Processing decisions/MPLP” in the Web archives.)
Below are some thoughts on these issues from Dan Santamaria, who teaches SAA’s “Implementing More Product, Less Process” workshop. When he’s not doing that, he serves as the Assistant University Archivist for Technical Services at the Seeley G. Mudd Manuscript Library at Princeton University, where he oversees accessioning, processing, and descriptive practices. My thanks to Dan for sharing his perspective on these issues.
I’ve been somewhat surprised by the recent vigorous discussions of MPLP and processing. I know from teaching a workshop on pragmatic processing techniques that there is still a good deal of opposition to MPLP-influenced processing strategies, but it’s been quite a while since I’ve heard such impassioned criticism and discussion of the article. It’s good that these discussions are happening; constructive discussion about processing techniques can only be helpful. There are, however, several issues that I’ve been thinking about since the SAA session and since reading the recent thread on the A&A listserv. Most of them seem to stem from misconceptions about “More Product, Less Process” and the recommendations contained in the article. This is not a response to any one specific listserv post, SAA session, or conversation that I have had. It’s a reaction to the discussions of the last week and to what I’ve heard in workshops and in hallways over the last several years.
While calls for discussion are beneficial, I feel the need to point out that there have been numerous MPLP-related case studies in the professional literature and at conferences over the last five years. In fact, the extended discussion about processing that occurred after the initial publication of MPLP was one of the most useful and beneficial outcomes of the original article. The most notable of the case studies, by Donna McCrea and Chris Weideman, appeared in the Fall/Winter 2006 issue of the American Archivist (the A*CENSUS issue with the jelly beans on the cover) [note: this is available online here], but there are many others. There have also been MPLP-related sessions at each SAA meeting since 2004, many of which included case studies or addressed specific practical topics such as reference, privacy, and confidentiality, not to mention numerous sessions at MAC, MARAC, and other regional associations. (Anyone interested in one instance of the application of MPLP-informed processing at my institution can read my presentation from the Fall 2008 MARAC meeting, which also discusses common objections to MPLP.) Discussions about the usefulness of processing techniques should use the professional literature as a starting point.
It is natural to discuss the content of individual collections when discussing processing. There are certainly specific record types that generally lend themselves to less granular description without significant adverse effects on the research process. One of the problems with coming up with a global set of rules for “MPLP” or “minimal” processing, however, is that processing decisions depend on the resources available to specific repositories. Institutional support, outside funding, staffing levels (both processing staff and public services staff), reading room and remote public services policies, photoduplication procedures, even digitization infrastructure all matter as much as the content of individual collections themselves. As Dennis Meissner himself said at SAA’s most recent MPLP session, in the end MPLP is about resource allocation rather than the specifics of arrangement, description, or preservation. It’s hard to come up with specific recommendations that apply in all cases. I also suspect that we spend too much time worrying about the specifics of each collection and not enough time establishing appropriate policies and procedures for our respective environments. When processing, we cannot attempt to account for all possibilities. Thoughtful analysis is useful, but overanalyzing and an unwillingness to take any risks have long been contributing factors to the growth of our backlogs in the first place.
Often missed in the discussions about MPLP is that the article’s most important recommendation is to provide a baseline level of access to ALL collections in a repository before moving on to more detailed processing. The impact of ‘minimally’ processing one collection is negligible. It’s very easy as a processing archivist to focus on individual collections, but we need to think about the big picture of providing access to our entire holdings.
I am sometimes disappointed when archivists refer to the “MPLP approach” or “minimal processing.” These terms seem to suggest that there is a single approach, usually focused on performing less physical processing and producing less detailed container lists, that applies in all cases. Several institutions have instead implemented more flexible, tiered levels of processing. At my institution, the levels run from collection-level description to more traditional folder-level description. Opposition to this approach strikes me as opposition to efficient processing in general.
If there is an MPLP approach, I would describe it as first providing access to the entirety of your holdings, then making decisions, given all the institutional factors mentioned above, about which collections need more detailed processing or description. One of the benefits of this iterative approach is that once all collections are described, statistics can be collected on the use of individual collections, and the collections can be prioritized accordingly. An MPLP approach, in my view, can sometimes have nothing to do with physical processing and arrangement. Dickinson College’s reference blog and the Municipal Archives of Amsterdam’s on-demand digitization program are both practical implementations of Max Evans’ vision of extensible description and digitization. They are just two examples of using resources creatively in order to increase access to collections.
I am often asked how archivists should make decisions about processing. Processing priorities, and even processing decisions about individual collections, are simply a form of appraisal: a matter of assigning value to collections and portions of collections. We have decades of appraisal theory and strategies to draw on. We also have more recent survey methodologies, such as the one developed at the Historical Society of Pennsylvania and in use at the Philadelphia Area Consortium of Special Collections Libraries (PACSCL). An inability to make decisions about processing levels, essentially assuming that we will eventually have the time and resources to process everything to an ideal level, strikes me as analogous to assuming that archives can eventually collect everything. It seems to me a failure to perform our core function as archivists.
I will not argue that all of this is easily done. Mistakes will be made in processing and description, just as they are in appraisal (though I would argue that processing mistakes are often not particularly difficult to rectify). It is certainly harder to make these decisions than to process all collections to an ideal level. It is also harder to look at the big picture of providing broad access than to focus on individual collections. But this is our professional and ethical responsibility as archivists.