
Chapter One

Initiating an Editorial Project

A “project” in documentary editing once meant a group of researchers working full-time to identify and gather sources from many repositories for eventual publication in a series of printed volumes. This edition of the Guide, like its predecessors, is organized in terms of a rough chronology of the physical and intellectual tasks that might confront the editor of such a large-scale, long-term edition. These begin with the collection of sources and end with the final review of materials set in type for print publication or tagged and coded for electronic transmission.

It was always a polite fiction to pretend that there was such a hard and fast timeline in editorial work. Even twenty-five years ago, when the first edition of the Guide was prepared, few if any editions confronted each and every problem discussed in the Guide. And certainly all editors found that a problem supposedly “solved” at the outset of their work reemerged, demanding a fresh solution as new documents or new technologies appeared. It was more accurate to say that this sequence was representative of the order in which editors think about projects they are going to undertake rather than the order in which they perform the tasks.

In truth, the considerations of any edition must be regarded as a whole. A project’s plan for collecting and cataloging (chap. 2) is inevitably influenced by the projected scope of the planned edition (chap. 1). That scope soon becomes irretrievably connected to the organization of the published work. Methods of transcription (chap. 4) must take into account the standards of the final editorial text (chaps. 5 and 6). Editors of long-term editions will still be overseeing the collection of sources (chap. 2) as they supervise publication of early sections of the edition (chap. 8). Error or miscalculation in one area can guarantee disaster in another, and the editor’s failure to assign or to assume responsibility for discharging each aspect of the plan will delay or even doom the whole edition.

I. The Guide’s Electronic Component

Changing patterns in editing make the model of a large-scale “project” with a sequence of editorial procedures even more fictional, but it’s still a useful organizing principle for this Guide. Fortunately, the existence of an electronic component for the Guide’s third edition makes it easier to deal with the problems such an organization presents. The most notable advantage of the online Guide is its ability to keep current in its description of technological advances and new literature in the field. Any advice or information we offer in the Guide must be constantly supplemented by reference to current literature. Each chapter in the print edition ends with a section entitled “Suggested Readings” that includes not only specific books and articles but the names of journals that contain studies of interest in a particular area and of agencies likely to furnish information on standards and practices in specific fields. Readers should bear in mind that these “Suggested Readings” were compiled at the end of 2006 and are only selections of studies. A fuller, more up-to-date list of readings and examples of models will be found on the Web site.

Further, some elements of the Guide can be presented more effectively online than on paper. The most obvious example may be our discussion, in chapter 5, of the application of various methods of textual treatment to the same document. Users of the book edition have to become page-flipping contortionists to follow the tables and texts that make clear these methods and their results. Online, hypertext links smooth the way enormously.

A book’s static number of printed pages also limits what we can offer in the way of examples of changing editorial techniques. The second edition of the Guide could do no better than a four-page sample of data-entry forms in the appendix on databases. Online, readers can follow links to dozens of “screenshots” of data-entry forms provided by editorial projects around the world.

II. Old Lessons in a New Century

Some things, however, remain unchanged in the third edition of the Guide. Like those who inaugurated the modern version of the craft sixty years ago, documentary editors in the twenty-first century have three goals:

  1. They work to create a verified, trustworthy text that can be read by modern audiences.
  2. They strive to make documents more available to a wider audience than the small group of people who might be able to view originals in their home archives. In earlier decades, books and microforms broadened the world of documentary access. Today, Internet publication dramatically expands the potential audience for an editor’s handiwork.
  3. They provide contextual aids that make the documents more readily understandable. Whether it is annotation or searchable XML tags in online texts, editors bring their scholarly expertise to select documents and offer readers the historical, literary, or technical context in which to make the best use of them.

Any edition’s success is still directly related to the degree to which its editors have planned and anticipated potential problems. That planning and organization must take into account what is both the delight and the curse of the trade—the appearance of the unanticipated problem or unexpected scholarly bonus. In the twenty-first century, just as in the twentieth, scholarly editors remain the most practical and hardheaded scholars in their disciplines. Such ruthless pragmatism might be unexpected in academic editors, who would appear to be the purest of modern scholars. They focus their attention on original sources, not merely on the conclusions stated by others in secondary works. Theirs is a professional obsession with the best evidence to be had, and their goal is communicating that evidence to others in a clear and accurate form. But every editor of the old school soon learned that all the scholarly dedication and critical insight in the world could not compensate for inattention to the practical considerations involved in preparing a documentary edition. The same hard lesson holds true for documentary editors and publishers today—no matter what their academic credentials.

An editor must establish clear lines of responsibility for any project at its outset. It is not enough to outline an exhaustive search for source materials unless the editor ensures that each step in that process takes place. The most meticulous scheme for proofreading the transcription of a source is useless unless the editor establishes pedestrian bookkeeping procedures to record each step in such verification. Finally, editors must be prepared to expose the details of the planning and execution of their editorial decisions to public view. Printed volumes, electronic editions, or microform publications must provide an explanation of the edition’s methods for establishing documentary texts, setting the scope of the search for materials, achieving standards for annotation, and proofreading and other quality control measures.

When beginning a project, editors use skills demanded of any responsible scholar. They anticipate problems and make decisions at an early stage of research so that work can be completed as efficiently and thoroughly as possible. Documentary editors, however, exercise effective planning and organization more intensively than conventional scholars, and they often sustain careful scrutiny and meticulous planning for a longer period of time than writers of historical or literary monographs. For the documentary editor, planning can never begin too early, responsibility for implementing plans never ends, and discharging these responsibilities demands the assessment of a variety of constantly evolving technological aids. In earlier decades editors might fulfill such duties by weighing the costs and advantages of manual typewriters versus electrically powered office machines. Modern editors purchase computers and software, determine the most efficient method for either digitizing or photocopying documents, and still address the nontechnological issues faced by every American editor since Jared Sparks.

III. What Kind of Edition Will It Be?

Many—perhaps most—editors begin a project with some idea of just how many documents they will need to address. There are editors who deal with an archival collection of sources (a discrete group of original documents in a single location) or a single document such as a diary or journal, but most need to assemble the materials that will be the basis of their editions. Among “collecting” editors, some have a subject individual or organization that left behind one or two central collections of records that will form the core of the edition. While the editor is bound to find additional materials, this central core collection is invaluable in guiding early decisions.

At the other extreme, a few editors begin their projects with absolutely no idea of how many documentary texts they will find. A classic example of this was the experience of the editors of the Papers of the War Department, 1784–1800. A fire in the newly rented quarters of the United States War Department in Washington in November 1800 destroyed every scrap of paper that had formed the official files of the American army (and for many years the navy as well) from the end of the Revolution. This disaster had constrained military scholarship on the Confederation and early national periods for nearly two hundred years. In 1994, when the project’s editors began work at East Stroudsburg University, they predicted that they might recover 12,000–15,000 documents through a worldwide search. Today, as the project completes its work at George Mason University’s Center for History and New Media, they are processing images of more than 50,000 items for an electronic edition.

Whatever the size of the collection from which you’ll eventually work, many of the most crucial editorial decisions will concern the scope of the edition that you publish. Simply knowing what you will not do can save time, money, and effort. The two factors that an editor is most likely to know about in advance are the edition’s degree of comprehensiveness and the way it is likely to be made available to the public.

A. The Edition’s Scope: Comprehensive and Selective Editions

Sharing the resources collected or cataloged by a documentary editing project can be one of the greatest intellectual challenges a scholar faces. Most editors feel obliged to communicate with their readers as much as possible of the data compiled in reconstructing the writings, works, or papers of their subjects so that future researchers can benefit from their experience. Some are even under explicit directions from their sponsoring agencies to publicize their findings in a systematic way. Whatever your situation, it’s helpful to know what some common options have been.

Few editors claim that theirs is a truly comprehensive edition. The editor of the published works of a well-known literary figure may achieve this goal, and so may the editor of a small archival edition of documents, but most editors frankly state their standards of selection and do their best to give readers clues to both the nature and the location of the materials that do not appear in print. In “The Canons of Selection,” John Y. Simon identified three methods of selective publication:

  • comprehensive publication of a series of letters chosen from some larger group (such as all letters exchanged by Thomas Jefferson and John Adams)
  • comprehensive coverage of some narrowly defined topic
  • truly selective publication, in which editorial judgment, not some predetermined factual criterion, is responsible for every choice.

There are many reasons for determining the degree of selectivity in an edition. Some materials uncovered in a search for documents simply don’t warrant the time and expense of publication. Routine form letters or duplicate copies with no claims to historical significance fall into this category. For editors of twentieth- and twenty-first-century materials, there may be complicated legal reasons that large segments of their collection must remain unpublished. Aside from copyright issues, editors of government documents must now deal with national security issues.

An interesting example of the last situation is that faced by the editors of the Presidential Recordings Program at the University of Virginia’s Miller Center. As this project has a strong electronic component, the editors have the capacity to add recordings of presidential conversations as security restrictions are lifted.

For most editions, considerations of time and money will shape the size and nature of an edition. The willingness of a print publisher to produce ten volumes instead of five or the willingness of a sponsoring institution to pay four editors instead of two will dictate how many volumes or microforms or electronic records can be made available. Once the sheer size of the edition is determined, the same factors of time and money may help editors fine-tune the selection process. An editor with little money for international travel, for instance, might focus on publishing documents for which essential research can be conducted without air travel.

One editor said of the process of planning her edition’s physical size and intellectual range: “It was probably the most complicated and important phase of editing the volumes, and had the most impact on what our volumes look like and how they will be used.”

B. Facsimile Editions and Supplements

You may decide that your edition will be a hybrid, with one selective element that provides annotated transcribed texts of a selected group of documents and a supplement of some kind that provides readers with access to the larger body of the editorial collection, presented in less polished but still usable form. Some editions, such as the Cornell edition of Yeats’s works, publish facsimiles of original manuscripts as parallel texts facing annotated printed versions of the same handwritten inscriptions. This form of facsimile supplement to editorially supplied texts is an old and honored tradition and is discussed below, in chapter 5. Here we’ll consider the more modern practice of supplementing a selective annotated edition, usually in print rather than online, with a physically separate publication of materials that did not meet selection criteria for the book edition.

Aside from a brief flurry of interest in microfiche publication in the 1970s, microfilm has been the method of choice for such image supplements for the last half century. Although electronic publication of images may replace microforms for some editions, the older technique still plays a decisive role in facsimile editions. Indeed, until questions of the archival stability of digital image files are settled, microfilm will hold an important place in any system of facsimile publication, since it provides the most secure method of backup storage.

The advantages of facsimile supplements for a selective edition are obvious. When a facsimile series precedes the book edition, the need for a calendar of unpublished documents vanishes, since scholars already have access to an indexed edition of the imaged versions of these sources. Other editors choose to publish microform facsimiles of the documents or their transcriptions after completion of the book edition. While this method ensures that all the project’s collected materials are available in the supplement, it can delay publication of the microform for a decade or more. In the interim scholars may nurse the lingering suspicion that editorial enthusiasm for compiling a microform will vanish after the last printed volume appears. A third group of projects uses the convenience of microforms to publish supplements once their edition is completed, presenting documents that came to the editors’ attention after the appearance of the volumes that should have included them.

Thus an electronic companion edition of digitally scanned images may precede, follow, or accompany the edited, annotated texts it supplements. The Papers of Abraham Lincoln project has followed the practice of preceding select book editions of annotated documents with electronic image editions. This procedure began with the DVD (later online) publication of Lincoln’s legal papers and now continues with the broader collection and publication of Lincoln’s personal and political papers.

Because it is far easier to update an electronic edition than a microform, documentary editors can issue a preliminary electronic facsimile series in advance of their annotated series and then provide a final facsimile version when editorial work is completed. The editor must ensure that such a supplement, like any electronic publication, will remain accessible to the edition’s audience.

Web-based supplements, however, may demand more editorial labor and attention than traditional microforms. These electronic supplements will reach a far wider audience than did the annotated printed texts. With the Internet, tens of thousands of readers around the world who could never touch the volumes of a print edition can and do gain access to the digital images and notes to letters and other manuscripts. Thus editors who issue such supplements must provide adequate tools for access and retrieval, and they must ensure that the Web-based publication has a home on an institutional server that can guarantee access for decades to come. If a project’s editors cannot find a Web site willing to commit the resources necessary to host the supplement or to maintain its interface, some more traditional form may be required.

Questions of format and technical quality help determine the design of a facsimile edition. Traditionally, editors published microfilm supplements on 35-millimeter film meeting the standards of the NHPRC for such editions. While there is now general consensus on standards for electronic facsimiles (whether issued on a self-contained DVD or mounted on a free or fee-based Web site), it is still a bit more difficult to find convenient statements of these standards. One excellent summary of these issues can be found online in “Image Scanning: A Basic Helpsheet,” at the University of Virginia’s Electronic Text Center’s Web site: http://etext.virginia.edu/services/helpsheets/scan/scanimage.html.

If your edition will include an electronic image supplement, you must know in advance what limits the storage capacity of your DVD or Web server may place on the size and quality of the image files you publish. If a commercial organization or institution will publish or “host” an edition on disc or Web site, their wishes, not the editor’s, may dictate choices here. If you wish to make sure that a permanent, archival record of the images exists, you may have to make a microfilm of the digital images once the supplement is completed.

The first example of electronic publication in documentary editing was The Law Practice of Abraham Lincoln, published by the University of Illinois Press in 2000. Anyone who compares using this DVD, with its easy links between case files and subject indexes, with a traditional multiple-reel microfilm edition such as that of the Aaron Burr legal papers immediately sees the advantages of digital technology for a project of this kind.

Even with the older technology, editors were regularly reminded that modern documentary editing has irrevocably raised the standards of expectation among members of their audience. Reviewers of such microform editions soon demanded good indexing and a statement of principles for selection for the microform and even of the scope of the search that produced the archive of images. Reviewers are becoming just as demanding where Web-based editions are concerned. The section of the CSE Guidelines entitled “Medium (or Media) in Which the Edition Will Be Published” provides a useful checklist of questions every editor should ask in making such choices: http://www.mla.org/cse_guidelines#d0e323.

Projects outside the tradition of historical editing are increasingly aware of their responsibility to provide students with an independent, interim published record of their collections while the annotated volumes are being prepared. The Mark Twain Papers project, for instance, published a printed Union Catalog of Clemens Correspondence before issuing the first volume of the modern edition of Mark Twain’s Letters in 1988. This catalog is now available as two databases, easily accessible through the project’s Web site: http://bancroft.berkeley.edu/MTP/databases.html. Similarly, seven years in advance of the appearance of a two-volume edition of The Complete Letters of Henry James, 1855–1872 (2006), the University of Nebraska Press mounted an “Online Calendar of Henry James’s Letters and a Biographical Register of Henry James’s Correspondents” compiled by Steven Jobe and Susan Elizabeth Guntert (http://jamescalendar.unl.edu/). And the editors of Selected Letters of Louisa May Alcott (1987) issued “A Calendar of the Letters of Louisa May Alcott” in Studies in the American Renaissance in 1988. Such ventures in print and on the Web enabled scholars to pursue their own research interests while the editors of Clemens’s and James’s letters prepared printed, annotated texts, and Alcott scholars were given easy access to the locations of the nearly 600 Alcott letters that did not appear in the book edition. Automated methods removed the creation of such finding aids from the luxury category for documentary editors. Even the print edition of the Mark Twain Papers’ Union Catalog would have been impossible had the project not converted its files into a computerized database.

More modest projects with word processors can produce equally sophisticated supplementary finding aids, whether they be indexes of a facsimile edition or a catalog of unpublished documents. Modern electronic techniques offer more choices in the area of scope and organization, but they don’t obviate the need for careful consideration of the nature of the text and of its expected audience. Computers enable the editor to plan more accurately in terms of both subject and organization. A control file database (see below) that includes a field for the number of pages in each source text allows preliminary calculation of the collection’s physical bulk and likely length in print or facsimile form. This can provide the editor and sponsoring agencies with some sense of the degree of selectivity that will be required in the published text.
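To make the arithmetic concrete, a brief sketch of such a calculation follows. The page counts and the assumed capacity of a printed volume are invented for illustration and are not drawn from any project’s actual files:

```python
# Illustrative sketch: estimating an edition's physical bulk from the
# page-count field of a control file. All records and the assumed
# capacity of a printed volume below are hypothetical.
records = [
    {"accession": 1, "pages": 3},
    {"accession": 2, "pages": 1},
    {"accession": 3, "pages": 12},
    {"accession": 4, "pages": 2},
]

# Sum the page-count field across every cataloged item.
total_pages = sum(r["pages"] for r in records)

PAGES_PER_VOLUME = 700  # assumed capacity of one printed volume
volumes_needed = total_pages / PAGES_PER_VOLUME

print(total_pages)    # total manuscript pages in the collection
print(volumes_needed)  # fraction of a printed volume, before any selection
```

A real control file would of course hold thousands of records, but the principle is the same: once page counts live in a sortable field, estimates of bulk, and hence of the selectivity required, fall out of a one-line query.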

Rudimentary subject indexing during the stage of collecting materials and creating the control files can expedite many decisions for selection and organization. The editors of the Edison Papers invested time and money in a sophisticated relational database for their project precisely because they knew that theirs would be a highly selective edition. As an example, an automated indexing system with subject entries for an individual’s correspondence can produce reports showing the time periods in which certain topics are discussed most frequently as well as identifying correspondents most likely to have debated specific topics with the project’s subject. Materials identified in this way can then be reviewed for evaluation as “representative” or “routine” material for the edition.
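A minimal sketch of the kind of report just described appears below. The correspondents and subject terms are hypothetical and are not taken from the Edison Papers’ actual database:

```python
from collections import Counter

# Hypothetical catalog entries: (year, correspondent, subject term).
entries = [
    (1878, "Batchelor", "telephone"),
    (1879, "Batchelor", "electric light"),
    (1879, "Upton", "electric light"),
    (1880, "Upton", "electric light"),
    (1880, "Batchelor", "telephone"),
]

# How often is each topic discussed in each year?
by_year = Counter((year, topic) for year, _, topic in entries)

# Which correspondents most often discussed "electric light"?
by_person = Counter(name for _, name, topic in entries
                    if topic == "electric light")

print(by_year[(1879, "electric light")])  # 2
print(by_person.most_common(1))           # [('Upton', 2)]
```

Reports of this shape let an editor spot the periods in which a topic peaks and the correspondents most likely to have debated it, exactly the screening the selection process requires.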

To complicate planning further, the knowledge gained in searching for materials for a modern scholarly edition is usually the best basis for determining how selective that edition should be and identifying the most useful criteria for the selection process. By the time most editors complete the bibliographic research, personal visits to repositories, and physical processing of the records discovered in this process, they have a knowledge of the papers, writings, or records of their subject that no other scholar can match. One of their most challenging tasks is putting that knowledge to work in analyzing what the users of their editions will consider significant or balanced and in explaining how they have come to these conclusions.

IV. The Control File and Its Descendants

The key to this planning is an edition’s “control file” or central database or group of databases. Even before they begin collecting materials or, in the case of an archival undertaking, cataloging materials, the editors need to design some system of physical and intellectual control over the collected materials that ensures that none of their work will be wasted or unnecessarily duplicated.

Databases, by any dictionary’s definition, have always been part of a documentary editor’s tool kit. In earlier decades, a project’s database was a set of paper files modeled on the one devised sixty years ago, when Julian Boyd launched the Jefferson project. The first control file was an ingenious system of three-by-five-inch slips on which pertinent information about each known Jefferson document was typed in distinct, carefully arranged visual fields. Today, any editor who collects materials for an edition or catalogs a large archival collection uses a computerized database that serves all the purposes met by Boyd’s paper control file—and permits pleasant frills that no manual system could provide. The electronic control file, like the one contained in file drawers, must provide access not only to information about the materials that the editor may want to locate during the collecting phase of the project but also to information on the physical locations of the original documents or photocopies that form the collection amassed for the edition.

A. Choosing and Designing an Editorial Database

While spreadsheet software, which produces multicolumn lists of data, may be adequate for a very small editorial project, most projects are better served by computer databases. The choice of database software and the design of the programs the software must support bring us to one of the most practical issues in modern editing: the editor’s need to question closely those who sell these wares or the instructional technology specialists at their institutions who offer their assistance. Editors should be fearless in demanding explanations of technical and commercial jargon and in showing skepticism at the claims of technical advisers who insist that they intuitively understand an edition’s special problems. These guidelines may serve you well:

  • Always explain clearly to an equipment vendor or computer technician just what tasks the project needs the computer to perform. Make sure that the vendor or technician understands that explanation. Never take at face value the word of a vendor or technician who promises that a new toy can do everything that the project requires; demand a demonstration.
  • Whenever possible, consult an editor or other scholar who has already worked with the computer system being considered.
  • Never stop thinking of new ways in which computerized equipment can assist the editorial process.

In general terms, the preceding guidelines apply to any scholar investigating computer software, but editors have special problems. Each item processed in the edition’s system is associated with a unique identifying label, traditionally a number assigned in the order in which the items are processed, thus earning it the label “accession number.” Once an item has been given its number (whether stamped on the verso of a photocopy or the edge of a file folder or entered in a data field for an image file), the editor creates a cataloging record in a database that provides access to several types of information:

1. The identifying document number. Automated databases can either assign an accession number as each entry is processed (if photocopies or originals are conveniently at hand for numbering) or accept a number assigned by the cataloger (if processing is done in several stages). Before the first records are entered in the database, the editor must decide which method will better serve the project at hand. This identifying number may be the most important piece of information in the database, so weigh your methods carefully.

2. The location (name of the private individual or institution who owns each item) of the original of each photocopy or scanned image that is part of the edition’s files. This information should be as specific as possible, since it may be necessary to recite such particulars back to owner institutions when seeking permission to publish or to relocate an item whose original must be inspected.

If necessary, an additional field in the database can provide more detailed location information, such as the name of a collection or subrepository and identifying numbers on containers such as boxes, folders, or volumes. If the edition collects materials from many sources, the control file must generate a record of the photocopied or digitally imaged materials furnished by each repository owning materials of interest to the project. This list will be used when the time comes to seek permissions or to double-check a repository’s holdings for new accessions. When editors themselves search for documents and can note the box or volume or folder where each was found, this is simple; when institutions provide images with incomplete descriptions of their physical locations, the editor can do little to supply the missing detail.

3. The document’s date. Sooner or later, the control file database will have to generate a chronological list of cataloged materials. Most documentary editions are chronologically based, and such a list facilitates planning breaks between volumes or computer files. Thus each item record contains a single date, inclusive dates, or an approximate date. The date must be entered in a form that supports computerized sorting and also allows the entry of incomplete dates or the designation of material as “no date.” In traditional paper control files, square brackets ([]) enclosed editorially supplied dates, while doubtful dates and spellings of names were signaled by “?” Such marks of punctuation can confuse and confound many a database. As one editor put the matter succinctly, “Editors like square brackets and question marks, but computers don’t.” If your database will only be used within the office, brackets and question marks can be omitted, but if you plan to use that database to provide access to a microform or electronic image edition, you’ll need help to create a more complex system that allows you to retain those key symbols without sacrificing your ability to sort data. The Abraham Lincoln project simply uses color codes for inferred dates and “xx” for unknown dates or years.

4. Fields that provide some form of intellectual access to the documents’ titles or contents. These may include, but are not limited to, the names of all the correspondents, proxies, and amanuenses, or primary subject-matter terms. Names, like dates, must be entered in a form that supports sorting. Most databases allow you to use a restricted list of terms so that you’ll always enter a given name or subject term uniformly. This index enables editors to see emerging patterns of frequent correspondents, important organizational activities, and the like.

5. The document’s size. For most projects, this will be the number of physical pages, images, or photocopied pages. This information ensures that an editor has retrieved the entire file for review. For planning purposes, it enables the editor to estimate the size of the completed edition. Absolute precision may be impossible if owner repositories neglect to provide copies of blank pages or similar details. It is up to the editor to make such special requirements clear to libraries and other repositories.

6. The document’s version. If a document survives in more than one stage (draft, final version, later transcription), each document record should indicate what form it represents. For letters, this means indicating which is a preliminary draft version and which the final copy read by a recipient. Printed materials may demand descriptors such as book, galleys, pamphlet, newspaper, magazine article, or the like. It’s best to use in the control file the same symbols that will appear in the published edition. (For a discussion of these symbols, see pp. 227–31, below.) Computerized databases provide the flexibility to reconsider these codes as a project progresses, and to make global changes in the file if second thoughts win out.

7. Remarks on the original’s physical appearance. It’s always a good idea for even the smallest edition to include a field for “notes”—comments of one kind or another that don’t fit into any other category—and this may be a convenient place for remarks about the document’s being “torn” or “incomplete.” Notes can always be refined later into categories like “repository notes” or “publication notes,” etc.

This is only the minimum amount of information the control file should contain. With modern database software, it’s a comparatively easy matter to add or modify fields. Even so, it’s easier to do it right from the beginning. Computerized databases can also make rudimentary subject indexing an easy addition rather than a cumbersome luxury. We address this topic in chapter 7. The Papers of Abraham Lincoln processing system includes the ability to provide subject headings during document processing, while the Jefferson Retirement edition (covering a narrower range of years, correspondents, and topics) does not.

Improvements in the features of modern personal computers mean that most editing projects will be able to create a computerized control file without recourse to a database program beyond the one contained in their word-processing software. Some, however, will need to investigate more sophisticated systems. Free “shareware” products available on the Web will meet the needs of some editions; others will have to choose among commercially available programs. Most of these databases have more than enough capacity to handle an editorial project’s control files, but they were designed for the needs of businesses, not scholarly editing, and common sense and prudence must be exercised. Remember that any database used in documentary editing should have powerful “report capability,” so that the data can be formatted and reformatted in printouts organized in a variety of modes: lists of relevant documents arranged by location or author, lists of repositories that include the sources each holds, etc. Beyond this, bear these considerations in mind:

The software should permit entry of a large number of subject terms and almost unlimited opportunities for entries recording names mentioned in the documents.

The database’s text fields should permit entries of substantial length, not limited by a database designer’s assumptions that digits, not text and notes, will be entered in these fields.

If the database is not part of the project’s word-processing software, make sure it’s compatible with the software already installed on the editorial office’s computers.

If the database must generate encoded lists, catalogs, or formatted indexes, the software package must allow the editor to write “scripts” or programs that will permit output of such encoded text.

A project that will process document copy orders or scanned document images “on the fly” at various institutions may need a database system that can move with the editor. Most database software allows the import of data entered on another computer, but some do this more smoothly and efficiently than others. There are Web-based systems that enable editors to add materials via an Internet connection.
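The "report capability" demanded above is, in database terms, simply the ability to query and regroup the same records in different ways. A brief sketch using Python's built-in `sqlite3` module shows the idea; the table layout and sample rows are illustrative assumptions, not a real project's design:

```python
# Sketch: one control file, two "reports" -- the same data regrouped
# by author and by repository. Table layout and rows are illustrative.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE control_file (
    doc_id INTEGER PRIMARY KEY,
    author TEXT, repository TEXT, sort_date TEXT)""")
con.executemany(
    "INSERT INTO control_file (author, repository, sort_date) "
    "VALUES (?, ?, ?)",
    [("Lincoln, Abraham", "Library of Congress", "1848-02-01"),
     ("Herndon, William H.", "Huntington Library", "1848-xx-xx"),
     ("Lincoln, Abraham", "Huntington Library", "1842-07-04")])

# Report 1: documents arranged by author, then chronologically.
for row in con.execute("SELECT author, sort_date FROM control_file "
                       "ORDER BY author, sort_date"):
    print(row)

# Report 2: repositories with the number of sources each holds.
for row in con.execute("SELECT repository, COUNT(*) FROM control_file "
                       "GROUP BY repository"):
    print(row)
```

Commercial database programs package the same queries behind menu-driven "report" or "layout" tools, so an editor evaluating software should test whether those tools can produce each of the lists the edition will need.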

B. Flat-File and Relational Databases and Content Management Systems

Databases can be either “flat-file” or “relational” systems. Flat-file databases are usually easier to use, because all the information about a particular document is gathered in a single record and the database consists of a single file. If each record contains extensive information about the documents, flat-file databases can become unwieldy as the number of documents increases. The editor may then need to investigate the use of a relational database, which allows the user to link several different databases. Relational systems are essential to large-scale, long-term projects like the Lincoln Legal Papers (and now the Papers of Abraham Lincoln) and the Thomas A. Edison Papers, which have exceptionally large amounts of data to be manipulated and very complex needs in terms of indexing and retrieval. The design of an effective relational database usually demands the assistance of an outside consultant. Smaller projects can get along nicely without this expensive option. To be on the safe side, however, editors should anticipate the need for occasional technical help by putting a line item in their grant proposals for just this kind of thing, along with allowances for software upgrades and technical consultants.

Some larger editorial projects now store all the edition’s electronic files in one central system, such as a content management system, rather than in stand-alone databases. These central systems perform all the necessary tasks of a relational or flat-file database but have the advantage of storing as well transcriptions, work-flow information, and even document images. Some of these projects have converted their databases to an XML-based system in which the field data are dumped into a content management system storing the documents by projected volumes. Such systems require a substantial investment of editorial time and funds for an outside consultant designer.
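The distinction between flat-file and relational designs can be made concrete in a few lines. In the sketch below (again using Python's built-in `sqlite3`; the schema and sample data are illustrative, not any project's actual design), a documents table is linked to a names table through a third table, so that a correspondent's name is stored once rather than repeated in every flat record:

```python
# Sketch: a minimal relational layout. Instead of one "flat" record
# repeating every name, a documents table links to a persons table
# through a join table; correcting a name in `persons` fixes it
# everywhere at once. Schema and data are illustrative only.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE documents (doc_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE persons   (person_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE mentions  (doc_id INTEGER REFERENCES documents,
                        person_id INTEGER REFERENCES persons);
INSERT INTO documents VALUES (1, 'Letter, 4 July 1842');
INSERT INTO persons   VALUES (1, 'Speed, Joshua F.');
INSERT INTO mentions  VALUES (1, 1);
""")

# A join retrieves every document connected with a given person.
for (title,) in con.execute("""
        SELECT d.title FROM documents d
        JOIN mentions m ON m.doc_id = d.doc_id
        JOIN persons  p ON p.person_id = m.person_id
        WHERE p.name = 'Speed, Joshua F.'"""):
    print(title)
```

A flat-file system would instead carry the name (and any change to it) in every record, which is exactly why large projects with tens of thousands of documents and shared names, places, and cases outgrow the flat model.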

Still, modern editors must now consider the degree to which the database will be used for more than just raw data about the documents and, in fact, be used to track the document’s life through the entire project. The next chapter describes the roles the control file can play in the most preliminary phases of an edition.

Suggested Readings

The body of literature in the field of selection and organization of materials in documentary editions is generous, though scattered. Introductions to selective and topically organized editions cited in this chapter should be supplemented by the entries indexed under “selection” and “organization” in Beth Luey, Editing Documents. Also helpful is Editorial Specifications: The Papers of George Catlett Marshall, an internal publication of the Marshall Papers project.

Reviewers of specific editions often raise helpful issues regarding the scope of a project, and references to individual editions in Luey’s index will produce useful results. More recent literature in the area includes Thornton Miller’s review of vol. 6 of the John Marshall Papers; and Judith Giblin James, “ ‘I Know my Worth’: Lillian Smith’s Letters from the Modern South.”

Luey’s index references under “microforms” ably cover writings on the subject before 1989. For an independent microform supplement to a “literary” edition, see the Bruccoli-Clark edition of the facsimile of Stephen Crane’s The Red Badge of Courage (A Facsimile Edition of the Manuscript), edited with an introduction and apparatus by Fredson Bowers, 2 vols. (Washington, D.C., 1972–73). More recent discussions of microforms include Marc Rothenberg, “Documenting Technology: The Selective Microfilm Edition of the Thomas A. Edison Papers.”

Editors needing guidance in the preparation of a large-scale facsimile edition, whether microform or digital images, can turn to the NHPRC and the NEH. These agencies can refer them to projects that have anticipated some of their problems and also provide up-to-date technical standards for such publications. Excellent surveys of the problems of creating modern hypertext archives are Jerome McGann, “The Rossetti Archive and Image-Based Electronic Editing,” and Kenneth M. Price, “Dollars and Sense in Collaborative Digital Scholarship: The Example of the Walt Whitman Hypertext Archive.”

There are some excellent recent surveys of the use of digital sources in the humanities. You may want to start with these studies—each of which is available in print and online form:

Daniel J. Cohen and Roy Rosenzweig, Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web (http://chnm.gmu.edu/digitalhistory/authors.php).

Susan Schreibman, et al., eds. A Companion to Digital Humanities (http://www.digitalhumanities.org/companion/).

National Initiative for a Networked Cultural Heritage (NINCH), Guide to Good Practice in Digital Representation and Management of Cultural Heritage Materials (http://www.nyu.edu/its/humanities/ninchguide/).

For useful examples exploiting the full array of the library and archival professions’ work with modern technology, see Ronald J. Zboray’s “Computerized Document Control and Indexing at the Emma Goldman Papers”; and Cathy Moran Hajo’s “Computerizing Control over Authority Names at the Margaret Sanger Papers.” Earlier editors confined their writings on the technical problems of collection and control of sources to memoranda that circulated only within the editorial office. The most exhaustive series of this kind was Lyman Butterfield’s “Directives” for every aspect of the operations of the Adams Papers at the Massachusetts Historical Society. Those interested in precomputerized methods as well as early computer-based control files of the 1960s and 1970s should read the second chapters of the first two editions of this Guide. The truly dedicated antiquarian can even visit an editorial project that still maintains manually generated “paper” control files.

On the subject of control files, the experience of other editors can be especially helpful. The most exhaustive list of such projects, past and present, is located on the Web site of the NHPRC: http://www.archives.gov/nhprc/projects/publishing/alpha.html.

The ADE Web site lists member projects by era of subject: http://www.documentaryediting.org/projects/subject.html.

Although documentary editors have been remiss in publishing detailed accounts of the design of control file databases, their experiences have been shared at several scholarly conferences. For summaries of papers, see “Using Databases in Editorial Projects: A Workshop.” Martha Benner delivered a paper on the Lincoln Legal Papers project’s experience with a relational database at the Conference on Computing and History, Montreal, August 1995 (published as “The Lincoln Legal Papers and the New Age of Documentary Editing”); and the Lincoln Legal Papers project, like others with experience in computerized databases, is generous in sharing advice and samples of screen designs. The Guide Web site will provide up-to-date samples of database design in the form of “screenshots” and other data.
