From Transcriptions to Editorial Texts
Inevitably, the time comes when all editors must convert statements of practice and procedure into editorial products. Transcriptions will evolve into established texts using whichever sets of textual conventions have been chosen and whatever scheme of proofreading and verification is appropriate. Translations of materials in foreign languages, shorthand, or ciphers will join those transcriptions as working copy for the edition. And, of course, earlier confident statements of what is “right” for the edition will have to be modified as the editors become ever better acquainted with those documentary texts.
I. Putting Transcriptions to Work
Even for editors who are their own transcribers, transcribing a source text is only the first step in a very long process. An editor-transcriber needs to concentrate on rendering the originals as conservatively as possible instead of assuming, in advance, that he or she knows what textual methods will finally be applied to the initial transcription. When the duties of transcriber and editor are divided, responsibility for emendation belongs to the editor, not to the initial transcriber. Making those transcriptions useful working copies begins with choosing their appropriate format and location.
It may be necessary to provide hard-copy printouts rather than expect that proofreading and annotation can be conducted with an on-screen image. This is a function of the visual capacities of those who’ll have to read them. If transcriptions are printed out, they need to be filed carefully in notebooks or kept in file drawers. Some find it convenient to separate each transcript from its neighbors by a sheet of tinted paper, this leaf to be used by the editor in drafting annotation. Whatever the method chosen, each transcription should be accompanied by some sort of separate annotation sheet or other convenient place for special queries or cautions concerning the source text to which it refers. On this sheet the transcriber can remind the editor that the source text was an enclosure in another item or indicate special reasons for any apparent novelties in treatment.
Computers have simplified the process enormously. Word processing makes the design of special headings for such annotation sheets a simple matter. Editors who can handle on-screen editing will provide equivalent electronic files for annotation, and transcriptions will be saved with appropriately named files for these additions. Projects using a content management system can relegate all this to the electronic file that accompanies the document, eliminating much of the paperwork that needs to travel with the document.
A. Establishing the Editorial Texts
No matter the form of the transcription files, the initial transcriptions must be reviewed and reviewed again to ensure that they meet the standards of a documentary series. We’ll begin with a brief glossary of the terms and techniques that you’ll need in the process:
1. Proofreading traditionally indicates oral proofreading involving two or more people, where one member of the editorial staff “holds” the newer version of the text while a senior member of the team reads aloud, word for word (or letter for letter in an old-spelling edition) and punctuation mark by punctuation mark, from the earlier textual version on which the later transcription, galley proof, or page proof is based. Thus, when proofreading a source text’s initial transcription, one editor reads aloud from the source text while the second follows the characters on the typescript.
2. Visual collation occurs when a single editor compares two versions of the text visually. To increase accuracy, it is customary to place a ruler or other straight-edged device under each line in both versions so that one’s eye will not accidentally skip an inscribed, typed, or printed line in either copy. Some projects refer to this as “verification.”
3. Mechanical collation originally required a device that detected variants between two printed texts presumed to be identical. The earliest machine of this kind, the Hinman Collator, made possible the convenient collation of sample copies from each printing of a published work. Today, mechanical collation is more likely to take the form of more accurate electronic collation, in which two machine-readable versions of an edition’s text or notes are compared by computer.
4. Verification is a term with two distinct meanings: either checking the accuracy of editorial transcriptions made from photocopied source texts against the originals for each text, or checking the accuracy of the contents of informational annotation. Verification of texts may have to consist of a visual collation, although team proofreading is preferable. Verification of quotations in notes follows the edition’s general policies for proofreading or visual collation of documents printed in their entirety. The verification of other elements in the notes should always be performed by someone other than the original annotator.
5. A record of corrections. Any project should have a well-established system for recording corrections to transcriptions and the textual forms that follow. That record not only documents the changes that have been made but who made them—and when. This record should be established as soon as documentary transcriptions begin their trip through the editorial process.
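The electronic collation described in the glossary above is, at bottom, an automated comparison of two machine-readable versions of the same text. A minimal sketch using Python’s standard difflib module (the passage and its variant are invented for illustration; working projects rely on dedicated collation software):

```python
import difflib

def collate(version_a, version_b):
    """Compare two machine-readable versions line by line and report
    each variant reading (a bare-bones form of electronic collation)."""
    matcher = difflib.SequenceMatcher(None, version_a, version_b)
    return [(op, version_a[a1:a2], version_b[b1:b2])
            for op, a1, a2, b1, b2 in matcher.get_opcodes()
            if op != "equal"]

# Two versions of the same passage, differing in a single word.
galley = ["It was a dark and stormy night;", "the rain fell in torrents."]
proof  = ["It was a dark and stormy night;", "the rain fell in torrent."]
for op, old, new in collate(galley, proof):
    print(op, old, "->", new)
```

A tool of this kind can report where two versions diverge, but it cannot judge which reading is correct; that remains the editor’s task.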
Some of the terms and practices described above refer to processes employed by editors in the CEAA/CSE tradition. Editors without such aspirations can ignore some of these stipulations, but not the reasons for their existence.
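The record of corrections mentioned above need be nothing more elaborate than an append-only log noting what was changed, where, by whom, and when. A hypothetical sketch (the document identifier, location, and editor’s initials are all invented):

```python
import csv
import datetime
import io

def log_correction(log, doc_id, location, before, after, editor):
    """Append one correction to the project record: document, location,
    old and new readings, the responsible editor, and the date."""
    csv.writer(log).writerow([doc_id, location, before, after, editor,
                              datetime.date.today().isoformat()])

# In practice the log would be a file opened in append mode; an
# in-memory buffer stands in for one here.
log = io.StringIO()
log_correction(log, "JM-17870910", "p. 3, l. 14",
               "recieve", "receive", "ABC")
print(log.getvalue().strip())
```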
1. No one denies that team proofreading is a more effective insurance against error than visual collation. It’s especially useful in the transcription of nonprinted source texts, for oral proofreading ensures that the person who reads aloud (preferably the senior editor responsible for the edition’s consistency) will not be influenced by the interpretation of the transcriber, even when a solo editor was the transcriber. Whenever the oral reading produces a character, word, or phrase at variance with the transcription, the editor naturally pauses to reevaluate his or her interpretation; but that reassessment should come with a fresh eye, uninfluenced by what someone else has seen in the source text.
2. For printed source texts, the differences in accuracy between proofreading and visual collation may be less marked, for fewer instances of subjective interpretation arise. But any veteran of the process can attest that the visual collation of printed sources easily leads to skipping lines in either the source or its transcription. Those who have experimented with both methods report that oral proofreading is far less tiring to the eye (although not, obviously, to the voice) than visual collation, where editors must continuously switch their fields of vision as they move from primary to secondary textual version while remembering what they’ve just seen.
3. Considerations of time and staff size often make impossible the number of independent proofreadings the editor might wish for projects that have externally produced page proofs. For projects that must compromise on the matter of independent, team proofreadings, the use of cassette tape recorders offers a useful supplement. The member of the team reading from the source can record that part of the proofreading process at his or her convenience, while the second team member can check those recorded, spoken words against the transcription or page proof later. Two or more staff members may do such checking against the same tape, thus producing semi-independent proofreading sessions. Tape-recorded proofreading has an advantage beyond that of easing scheduling problems. Any member of the team can interrupt proofreading when voice or eyes begin to falter and return to the task later without inconveniencing a colleague.
4. Even when the editorial text will not receive multiple formal proofreadings, the editor should perform at least one proofreading against the source text. This should be done as late as possible in the editorial process, preferably immediately before copy is formatted for printing. This rule merely recognizes the fact that editors become progressively more familiar with the peculiarities of their source texts. In many cases the preparation of informational annotation will also make the source more intelligible. Proofreading against the source text immediately after transcription will be markedly less accurate than proofreading performed weeks or months later.
5. Ideally, all transcriptions should be perfected against the originals of their source texts, not merely against photocopied versions, but this is often impractical. When it is not feasible, the edition’s introduction should make this omission clear. Whenever verification of a given document or group of documents is performed by someone other than one of the editors, this fact should also be noted. Verification of transcriptions against originals is required of editions aspiring to CSE approval.
Traditionally, some editors preferred to proofread or collate page proofs of documentary texts against the source texts instead of against transcriptions. This practice provided a last chance to catch errors of interpretation or mistranscription. Editors who adopted this method usually did so only if the same kind of proofreading had been applied to the printer’s copy before it was sent to the compositor. Publishers do not look kindly on the long lists of author’s alterations to page proofs that could otherwise arise from the technique. Of course, computer technology has now so revolutionized American publishing that many editors would be hard pressed to define what constitutes “page proofs” for their editions. Some are themselves the producers of the page proofs that go to the press’s production department, while others use publishers who prefer to prepare materials for publication themselves. Each editor will have to identify the appropriate point at which to make such a final check.
Proofreading should be supplemented by visual collation and simply reading both documentary texts and annotation for sense. Transcriptions, compositor’s copy, galleys, and page proof should be read by as many people as possible for accuracy, clarity, and consistency. Solo editors and projects with small staffs are wise to enlist professional colleagues as reviewers of the edition.
B. Back-of-Book Textual Notes
For many editors, textual responsibility does not end with proofreading and perfecting the editorial reading text. Whenever an edition relegates all or part of its record of emendations or details of inscription to the back of the book in a print edition, or to a separate screen in an electronic one, the editor must establish an accurate reporting system for that apparatus as early as possible. Even though these records will be checked again and again, the possibility of error is reduced substantially if their format is established in advance.
The process of such reporting may begin with the transcriber. If instructed to keyboard only final authorial intentions in letters, journals, or draft works, the transcriber must also initiate a record of what is omitted—authorial deletions or interlineations or nonauthorial contributions in the source text. Word-processing equipment now provides the luxury of keying the deleted or interlineated material within a set of tags reserved for each detail of inscription, for instance a matched pair of “∫” symbols marking the beginning and end of interlined material, instead of creating physically separate files. When an edition has already established a method for reporting such details (that is, through symbols, narrative description, descriptive abbreviations, or a combination of these methods), the transcriber can enter remarks in their proper predetermined form.
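The practice of keying suppressed matter within reserved delimiters can be sketched as follows. The tag syntax here ({interlined:...}) is invented for illustration, standing in for whatever symbols an edition actually reserves:

```python
import re

def reading_text(transcription):
    """Final reading text: interlined words are incorporated silently."""
    return re.sub(r"\{interlined:(.*?)\}", r"\1", transcription)

def interlineations(transcription):
    """The record kept for the textual apparatus."""
    return re.findall(r"\{interlined:(.*?)\}", transcription)

t = "the {interlined:young} man rode to town"
print(reading_text(t))     # the young man rode to town
print(interlineations(t))  # ['young']
```

Because both the reading text and the apparatus record derive from one tagged file, the dual record keeping described below becomes unnecessary.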
When the editor reviews the transcriptions, the process of correcting minor authorial errors or slips of the pen begins. Once, changes such as suppressed inscriptional details had to be recorded on index cards with references to the transcription’s page and line number. Editions that provided separate records of inscriptional details and editorial emendations had to distinguish between the two groups from the moment record keeping began. Word-processing equipment eliminates such dual record keeping, for the editor can designate separate codes for each sort of textual record in advance, and the appropriate codes can be entered for each note.
No matter what their policies in reporting emendations and inscriptional details, all CSE editors have traditionally supplied one kind of textual record in the back of the book—the report of ambiguous end-of-line hyphenations. This report lists all such ambiguous authorial hyphenations (possible compound words whose end-of-line division coincides with the position of the hyphen) in the source as well as any new ambiguous hyphenations introduced in a modern printed edition. Such a record allows scholars to quote accurately from the new text. To identify such ambiguities in the source, the editor can refer both to dictionaries contemporary with the document’s inscription and to the author’s customary usage. Most projects create an in-house record of their author’s preferences in hyphenating specific compounds. Complete consistency is too much to expect, but useful patterns emerge. Obviously, keeping such a lexicon in a computer file expedites processing of the entries, and patterns can be recognized earlier.
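The in-house hyphenation lexicon described above lends itself to a simple lookup. A sketch under the assumption that the lexicon records the author’s customary form for each compound (all entries invented for illustration):

```python
# A small in-house lexicon of the author's customary forms.
lexicon = {
    "steamboat": "steam-boat",   # author habitually hyphenates
    "notebook": "notebook",      # author writes the compound solid
}

def resolve_line_end_hyphen(first_half, second_half):
    """Decide whether a hyphen at line's end is a true compound hyphen
    (retain) or a mere line division (drop), by the author's customary
    usage; unknown compounds are flagged for the editor's judgment."""
    key = (first_half + second_half).lower()
    if key not in lexicon:
        return None  # ambiguous: record it in the hyphenation list
    return lexicon[key] == first_half + "-" + second_half

print(resolve_line_end_hyphen("steam", "boat"))   # True: retain hyphen
print(resolve_line_end_hyphen("note", "book"))    # False: drop hyphen
print(resolve_line_end_hyphen("rattle", "snake")) # None: flag as ambiguous
```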
Once the editor has established that line-end hyphenation occurs at a point where the author would ordinarily have hyphenated a compound, that hyphen is marked in the transcription for retention in the print edition should typesetting place the word in the middle of the line. The Thoreau edition reduced the scope of its hyphenation record by forgoing a justified, or consistently even, right-hand margin in printed volumes of Thoreau’s private writings, thus ensuring that no new hyphens were added when the documentary texts were typeset. The Thoreau editors needed to report only the retention or omission of ambiguous line-end hyphens that appeared in the source texts themselves. Editors whose editions employ justified right-hand margins still need to check page proofs (or their electronic equivalents) for new and potentially misleading hyphens.
Since all printed back-of-book textual records are keyed to typeset lines, not to footnote numbers within the texts, the final preparation of their contents must await page proofs. Work on these lists will have begun far earlier, and records of emendations and inscriptional details can be keyboarded and proofread well ahead of time. The editor should first devise a format for such reports, generally one in which the first column for each entry in a list bears a reference to the line and page of the transcription or printer’s copy. After page proofs are reviewed, new page-line references to the print edition can be substituted.
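The substitution of new page-line references after page proofs arrive is mechanical once a concordance between the two keying systems exists. A schematic sketch (the entries and references are invented; the “page.line” key format is only one possible convention):

```python
# Each apparatus entry is first keyed to the printer's copy; after
# page proofs arrive, a concordance maps each old key to the print
# edition's page and line.
entries = [("12.4", "jorney] journey"), ("12.9", "recieve] receive")]
concordance = {"12.4": "87.15", "12.9": "87.20"}

# Substitute the print edition's page-line references wholesale.
updated = [(concordance[key], note) for key, note in entries]
print(updated)
```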
Editors should remember, too, that they are responsible for making their textual records easier for readers to use, not only easier for editors to compile. When an edition is a documentary one, readers are best served by having as much pertinent textual information as possible easily available, either within the text or in adjacent footnotes. When back-of-book records are required, they should demand no more of the reader than necessary. Instead of asking their readers to consult a separate section for records of ambiguous hyphenations, for instance, the editors of Mark Twain’s Notebooks and Journals categorized these problems as a special form of editorial emendation and included them in the general emendations lists for each section of the Notebooks. In the Howells Letters, where textual issues are far simpler than in the Twain texts, a combined record of both emendations and “details” serves the reader equally well.
II. Exceptions to Some General Rules
Although American documentary editors have worked hard to create guidelines for the creation of documentary editorial texts for almost every form of recorded evidence, situations arise in which editors must admit defeat. After designing an editorial technique appropriate to the bulk of the sources for an edition, editors follow that technique until they encounter a situation in which the standard documentary formula of “one source equals one editorial text” doesn’t apply. Whenever the equation proves invalid, they turn to other traditions for appropriate solutions. Even here they tend to apply such borrowed techniques conservatively, pointing out to their readers where editorial judgment or guesswork has been employed rather than concealing that fact in the name of elegance or readability.
A. Documentary Problems with Textual Solutions
Even the editor following the most conservative general policies on emendation will encounter situations in which one source text will not provide one editorial text. The summary of methods of inscription and forms of documentary records in chapter 4 hints at some of the occasions on which a documentary editor borrows the methods of specialists in related fields. Any editor of orally communicated texts, for instance, may have to deal with the theoretical implications of dissonant witnesses to a lost archetype, the central problem of the classicist. Even editors spared this special form of documentary record face the possibility that the best method for editing a specific source text may be one from the tradition of critical and not documentary publication.
In recent years, the debate among literary scholars over the value of eclectic clear texts versus editorial methods that retain more “documental” elements has presented useful lessons for the editor of historical documents that have little claim to literary merit. These discussions help determine just when noncritical texts serve the purposes of an audience and when critical methods are more appropriate. Donald Reiman discusses the question in his essay “ ‘Versioning’: The Presentation of Multiple Texts.” Looking back over decades as an editor of the writings of Shelley and His Circle, Reiman remarks, “I have become less and less confident that an eclectic critical edition is the best way to present textual information to scholars.” Even for literary works, he argues, the public may be better served by “enough different primary textual documents and states of major texts . . . that readers, teachers, and critics can compare for themselves two or more widely circulated basic versions of major texts” (167).
Reiman advocates “versioning” rather than “editing”: giving the reading public equally convenient access to more than one version of a text rather than a single clear text from which the various prior versions would need to be laboriously reconstructed from textual notes. Electronic publication, of course, offers one form of versioning, as do parallel texts in bound volumes. Versioning has a long and honorable history among editors of American historical documents. Paradoxically, scholars who bristled at suggestions that critically achieved, idealized clear texts showing the final intentions of the authors of letters or state papers served any useful purpose for students of American history showed little reluctance to provide their readers with editorial reconstructions of preliminary versions of these same documents. Documentary editors in the “historical” tradition long ago re-created drafts of legislative records to show the genetic levels of their evolution and the specific contributions of collaborators and revisers along the way.
1. Genetic Elements in Source Texts
Any document—letter, state paper, literary work, scientific essay, laboratory model—can survive in a form that reflects the development of the author’s intentions, preserving not only a final text but also the false starts, preliminary wording, and stylistic evolution of that text. Few editors escape confronting source texts that carry intrinsic clues to their genesis. These are encountered most commonly when an editor’s source text is a manuscript obviously revised during composition. An edition of a draft letter or an author’s holograph corrections and additions to galley proofs or the pages of an early printing used to prepare a revised edition demand identification of original, intermediate, and final versions of the same document.
Genetic editions of texts try to offer the reader access to more than one level of textual creation within a single inclusive page. While the term genetic text edition came into usage in German textual studies more than fifty years ago (see Hans Walter Gabler, “The Synchrony and the Diachrony of Texts”; and Louis Hay, “Genetic Editing, Past and Future”), it did not become current in American studies until the 1960s. In editing the successive draft versions of Billy Budd, Melville’s editors adopted the term genetic text to describe their diplomatic transcription of the manuscripts of that work, which had been left unpublished at Melville’s death. The genetic elements of the transcription were the result of their painstaking efforts to devise a system of symbols and descriptive abbreviations that would allow the reader to understand the order in which the changes were made by the author. In a single set of pages, a densely packed trail of symbols led the reader through two, three, sometimes four versions of the same passage.
The genetic text of Billy Budd is one of the most complicated and sophisticated products of modern scholarly editing. Simpler genetic texts have been with us since the first editor presented an inclusive or conservatively expanded text of a handwritten draft. Any editorial method that includes the use of symbols for deletions, insertions, and interlineations can present a genetic text for individual documents. Editors who eschew the use of textual symbols can instead give their readers clear texts of the final version and supply notes that permit the readers to construct their own genetic version.
a. Synoptic Genetic Texts
More sophisticated problems of conflation arise when the genetic stages of a document are recorded in not one but several source texts. If the variants between these preliminary versions are wide, the editor may print each document separately in parallel texts or treat each one as a distinct version. It may be, however, that these separately inscribed evolutionary stages of the text are so closely related that they represent a direct intellectual line of revision. In this case, the editor has the option of creating a “synoptic” text, another term and technique from classical scholarship.
This form of editing is as old as the synoptic Gospels, but the term synoptic text was borrowed for modern works by the editors of the James Joyce edition when they described their methods in editing Joyce’s Ulysses. Joyce’s revisions of the novel survived, not in a single draft manuscript, but in manuscript fragments, corrected galleys, and other forms. The Joyce editors combined the information contained in these separate documents to create a synthetic genetic text, a synopsis of information from several source texts combined in one new editorial text. The process of textual synopsis is not confined to biblical scholarship and editions of great literary works. The first volume of the Documentary History of the Ratification of the Constitution includes two synoptic texts that trace the evolution of the articles of the American Constitution through the debates of the Philadelphia Convention of 1787. The editors did not have at hand separate copies of the Constitution reflecting its wording at every stage of its consideration. Instead, they worked from four source texts: the draft constitution submitted to the Convention on 6 August 1787; the text of the Constitution recorded in the Convention journals on 10 September; a printed copy of the report of the Committee on Style, which revised the articles between 10 and 12 September; and the text of the Constitution as adopted on 17 September. To supplement these sources, the editors analyzed James Madison’s notes of debates in the Convention, records that indicated the date and nature of each revision of the frame of government.
The successive surviving versions of the Constitution, like those of Joyce’s Ulysses, qualified as source texts for a synopsis because they were similar enough to allow the editors to draw valid conclusions about the sequence of revisions at each point. The editors reprinted the articles of the Constitution as adopted by the Convention, thus supplying a reading text of the final stage of the evolution to parallel their synoptic texts.
Synoptic treatment of the records of deliberative political bodies is fairly common among editions dealing with governmental history. The Documentary History of the First Federal Congress: Legislative Histories provides three volumes of legislative histories of the bills considered by the U.S. Congress, 1789–91. They contain a calendar recording every action on a given bill or resolution, as well as transcriptions of surviving manuscript or printed sources of the texts for these measures as introduced to the Congress. Footnotes record amendments to the originals. In a few cases, when the original version of the item has vanished but its final text and a complete record of amendments survive, the editors re-created the original’s text by taking the final version and adding or subtracting words or phrases recited in the amendments. In cases of such heavy editorial intervention, the re-created text is accompanied by notes clearly tracing the work involved. In other instances, when a committee report of amendments to a bill has survived, the Legislative Histories provide readers with both a literal transcription of the committee report and the amendments identified in that report, reprinted as footnotes to any previous versions of the bill.
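The re-creation of a lost original from its final text and a complete record of amendments, as practiced in the Legislative Histories, can be illustrated schematically. A sketch under the simplifying assumption that each amendment is recorded as a pair of adopted and original wordings (the bill text and amendment here are invented):

```python
def reconstruct_original(final_text, amendments):
    """Recover a lost original by undoing each recorded amendment,
    listed as (adopted_wording, original_wording) pairs."""
    text = final_text
    for adopted, original in amendments:
        text = text.replace(adopted, original)
    return text

final = "The Congress shall assemble at least once in every year."
amendments = [("at least once in every year", "annually")]
print(reconstruct_original(final, amendments))
```

In practice, of course, the work is far less tidy: amendments may overlap, recite their targets imprecisely, or depend on order of adoption, which is why such re-created texts must be accompanied by notes clearly tracing the editorial work involved.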
b. Collaborative Source Texts
The attentive reader has noticed that the examples of synoptic genetic texts from the Ratification and First Congress projects not only draw on multiple source texts but also represent the intellectual contributions of more than one author. Editors of the records of American political and military history have long been concerned with the need to identify not merely the stages of a document’s evolution but the separate authors of each revision. In recent years, editors of literary works have come to share this interest.
Source texts that represent collaboration between two or more persons are often more challenging than genetic documents by a single author, and such collaborations received close scrutiny and analysis in the decade following Jerome McGann’s discussion of the socialization of literary texts. Ideally, both the identity and the specific contributions of each reviser should be recorded. Modern authors often work closely with publishers’ editors. Scholarly editors employing traditional copy-text theory to construct a single text reflecting final authorial intentions face serious difficulties in such situations, for they must somehow determine which suggestions from an author’s contemporary editor were imposed upon the author and which were accepted freely, perhaps with the author’s thanks. If authorial intentions are the overriding criteria, the critical editor must then exclude from the new edition passages added without the author’s full approval while retaining those to which the author gave hearty consent. The pitfalls of this approach are obvious. Attempts to impose it at inappropriate times and places provided critics of the application of emended copy-text to modern writings with some of their most telling attacks and led to the notion of “socialized” texts.
Editors of historical documents, especially in legislative and professional affairs, are veterans of dealing with such collaborative documents, for such records frequently represent action by committee. A manuscript report or public paper may contain passages in the hands of two or more legislators assigned to prepare that document. The rough draft of a state paper may reflect the fact that an executive assigned its preparation to one aide and circulated the draft to other advisers, while still others approved or vetoed their suggestions—leaving the record of all these actions on the same scribbled, dog-eared set of pages.
The collaborative aspects of a document’s composition can often be represented quite easily if one contributor had primary responsibility for its drafting. This fact can be stated in the document’s source note, and the editor can focus on additions, revisions, or deletions in other hands. Such records may be provided by using a special form of symbol enclosing such additions (e.g., “The document was originally inscribed by AB. All revisions by CD appear in the text within square brackets.”). But if the collaborators’ contributions are fairly equal, or if a third or fourth writer is involved, the editor must consider descriptive notes to supplement the text. Each addition, deletion, or revision might be keyed to a footnote explaining that the words or phrases in question were “added above the line by CD” or “entered by EF in space left in the MS” or “rewritten by GH.” This is the method chosen by the editors of the Eisenhower Papers in the document discussed in chapter 3. Electronic editions of such source texts, of course, could represent these elements in different colors of type or special typefaces, with hyperlinks to any necessary editorial explanations.
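The electronic rendering suggested above, with each collaborator’s contributions distinguished typographically and annotated, might be sketched like this. The inline {CD:...} tags, the hands named, and the color assignments are all invented for illustration:

```python
import re

# Hypothetical assignment of a display color to each collaborator's hand.
COLORS = {"AB": "black", "CD": "darkred", "EF": "darkblue"}

def to_html(transcription):
    """Render each tagged contribution as a colored HTML span whose
    tooltip names the responsible hand."""
    def span(m):
        hand, words = m.group(1), m.group(2)
        return (f'<span style="color:{COLORS[hand]}" '
                f'title="added by {hand}">{words}</span>')
    return re.sub(r"\{([A-Z]{2}):(.*?)\}", span, transcription)

print(to_html("The committee {CD:unanimously} approved the draft."))
```

A real edition would more likely generate such markup from TEI-style encoding, with hyperlinks to fuller editorial explanations, but the principle of mapping each hand to a distinct visual treatment is the same.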
c. The Physical Presentation of Genetic Texts
Obviously, electronic editions of any form of genetic text—single source, synoptic, or collaborative—can solve many problems of presentation gracefully and effectively. But even these new solutions demand careful thought, planning, and testing of formats, while book editions of such sources remain one of the cruelest tests of an editor’s ingenuity. Clearly, the choice among a truly genetic text, an inclusive text supplemented by textual notes, and a clear text is both a theoretical and a practical one. The editor will need to find the device that enables the reader to reconstruct inscriptions in the source text with the finest distinctions possible. The use of numbered footnotes with a clear text can serve this purpose, although the multiplication of superscripts necessary to record numerous significant revisions runs counter to the purposes of the editor who wishes to avoid “disfiguring” the documentary text. Clear text with a back-of-book record cannot serve the reader when the collaborative aspects of the source are central.
Some documents defy efforts to reproduce all the details of their genesis in comprehensible form: the proliferation of symbols or footnote numbers would make the result unreadable. In such cases, it has been traditional to offer the reader parallel texts of the same material. In the classic form, the two texts were truly parallel, printed in two columns on the same page or on facing pages (as in Bowers’s edition of Leaves of Grass), but modern editors use the term and technique more broadly.
The need to communicate genetic textual elements inspired much of the early investigation of electronic versions of source texts. Hypertext or hyperlink publication was an obvious supplement to or substitute for a printed text in these instances. Any editor who used word-processing software with the ability to switch between documents in “windows” easily saw the potential of electronic methods for linking the editorial transcription to different levels of a document’s genesis. This form of publication is a simple, effective, and logical way to communicate the genetic process to a reader.
2. Multiple-Text Documents
Similar treatment can be accorded multiple-text documents, a term coined by David Nordloh to describe sources inscribed in such a way that the reader could reasonably extract the texts of two or more distinct documents from the characters that appear on the same page or set of pages. In multiple-text documents, the entries on the same page are so widely separated in time, intention, and even authority that they must be regarded as separate examples of one author’s writing or as examples of the writing of two or more authors. Their textual problems are distinct from those in ordinary drafts, in which authors leave records of their evolving intentions for works before their completion.
Notebooks used as literary commonplace books are an obvious example. Thoreau first inscribed entries in his journals in ink. Months or even years later, he reworked many of these passages for publication, considerately making his revisions and emendations in pencil and ink that could be distinguished from the original inscriptions. The Thoreau edition provides a clear reading text of Thoreau’s original entries. Textual notes in the Thoreau Journals reproduce only those revised passages that never achieved print publication, and the same notes can refer the reader to journal entries that served as the basis for works published in Thoreau’s lifetime and appearing in other volumes of the edition.
a. “Second Thoughts”: Authors Who Try to Rewrite History
While most authors of letterbooks or diaries are content to leave those documents untouched once their pages have been inscribed, others can’t resist the temptation to go back to improve youthful lapses in style or delete compromising passages. Historical figures who use their years of leisure and retirement to rewrite diary entries and other portions of their personal papers create a special form of purgatory for their editors. When these revisions were made for a published memoir, the editor can at least refer readers to an authoritative text of the final version of the material in another section of the edition or in the printed source. But when the revised sources remained unpublished in their own day, the modern editor is left with documents whose texts have been deliberately corrupted by their own creators. Providing access to such revised variant passages was a consistent problem for the editors of the Madison, Lafayette, and Washington papers.
When Madison set out to compile autobiographical material, he rewrote his own retained copies of correspondence (or recovered addressees’ versions from his correspondents or their heirs), revising the pages to suit his matured notions of style and discretion and adding marginal comments to the documents. Fortunately, Madison’s later emendations are usually distinguishable from his original inscriptions. When one of these “corrected” manuscripts had to serve as a source text, the Madison editors could recover the original words and marks of punctuation, discussing later revisions in footnotes. This method of emendation provides readers immediate access to the texts of letters and state papers in a form that gives them validity as documents of American political history. At the same time, the notes enable readers to determine areas where Madison felt correction or suppression was necessary to make these materials ready for publication to the world.
The Marquis de Lafayette was a more systematic memoirist. In the early nineteenth century he revised not only personal letters but also his 1779 manuscript “Memoir.” These revisions were incorporated into transcribed copies that Lafayette then sent to Jared Sparks, and most of Lafayette’s emendations were reflected in the published version, Mémoires. The Lafayette editors collated all printed versions of the Mémoires against the emended manuscripts to establish the pattern of the author’s revisions, disregarding later revisions of the letters and the manuscript “Memoir” that were “purely stylistic.” “Significant passages” deleted in the manuscripts or omitted from the printed Mémoires appear in the new edition within angle brackets. Any other changes deemed significant by the modern editors are treated in footnotes.
George Washington’s motives in revising his early letterbooks are less clear than Madison’s and Lafayette’s. The volumes contained autograph copies of outgoing correspondence that Washington laboriously inscribed during the Braddock campaign on the western frontier in 1755. Some thirty years later, he began to emend the letterbooks, directing a clerk to copy the revised texts of the letters into a new set of letterbooks. All but one of the original letterbooks vanished, and few correspondents saved the letters they had received from the young Washington. For those months for which only the emended, later copies of letterbooks survived, the Washington edition had no choice but to use those as source texts. For the summer of 1755, however, the editors had recourse to the original letterbook, which they used as source text and supplemented with notes reproducing Washington’s later notions of “improvement” as extrapolated from the later letterbooks.
One of the most remarkable recent feats in documentary editing was a volume presenting a previously unpublished manuscript that combined many of the problems faced by these other projects with a few unique to itself: Rosemarie Zagarri’s edition of David Humphreys’s “Life of General Washington” with George Washington’s “Remarks.” Humphreys set out to write the earliest and only authorized biography of his former military chief. Washington provided his former aide with source materials, as well as manuscript “remarks” for his guidance. Humphreys never completed his book, although he did incorporate many passages from his draft into shorter pieces about the first president. Questions of synoptic genetic texts, multiple-text sources, and authorial second-guessing arose as Zagarri reconstructed the results from fragments now scattered physically among several repositories and intellectually among other Humphreys writings, incorporating Washington’s comments in the text in angle brackets. George Billias, one of the sharpest contemporary reviewers of documentary editions of the Revolutionary era, remarked in wonder, “Incredible as it may seem, this book actually contains new material about George Washington, one of the most thoroughly researched figures in all American history” (review of Humphreys’s “Life of General Washington”).
b. Other Multiple-Text Documents
Several forms of multiple-text documents may present problems in transcribing the source but do not pose deep theoretical questions. For example, a writer may have used an existing document as scratch paper for drafting another letter or report. A sheet of paper may carry a letter received by John Smith on one side and Smith’s draft of his reply on the other. Some frugal eighteenth-century figures carried this practice to the extreme of drafting replies to a letter over that letter’s own lines, inscribing the new draft at right angles to the old lines. Such practices result in two documents that are part of the same physical whole. Although the textual notes that describe the provenance of each item must indicate that it is physically a part of the other, no special textual problems will arise.
A more common form of multiple-text document is created when an author becomes a reader, making notes or comments in the margins of the reading matter at hand. While marginalia are more commonly found on printed works, some writers were equally eager to record their own comments on newly received letters or other unprinted documents. Frederick Burkhardt discusses the problem of publishing both the letters sent to Charles Darwin and the naturalist’s autograph marginal remarks on these communications in “Editing the Correspondence of Charles Darwin.” Here at least, the editors could easily justify printing the comparatively brief texts to which Darwin’s comments referred.
Editors faced with the more conventional form of marginalia, a subject’s comments inscribed on the pages of books or articles that he or she was reading, have a greater challenge. Samuel Taylor Coleridge left behind a body of marginalia in copies of his own works as well as in books written by others, and that body of material was so complex and independently significant that his editors treated the Marginalia separately in a special series of five volumes within The Collected Works.
c. Nonauthorial Emendations and Additions
In many cases, additions to a manuscript made by someone other than the author or the document’s recipient may be ignored. These can include dealers’ notations, symbols entered by archivists who have cataloged the materials, and notes by collectors through whose hands the manuscripts have passed. It may not be necessary to reproduce them verbatim in an authoritative edition, but many readers will need to be alerted to their existence by a summary or description in an editorial note. To some readers, these may seem no more an intellectual part of the document’s text than an owner’s signature on the flyleaf of a rare book or pamphlet. Editors must remember, however, that to bibliographers or historians of collecting, such owners’ signatures, dealers’ notations, and archivists’ symbols are important historical evidence. With unprinted documents and rare printed materials, such entries can be helpful in determining the item’s provenance, and they may need to be included in the description of the source text’s history.
Still other categories of nonauthorial inscription require more careful notice and may even warrant reproduction in the edition itself. Few members of an author’s family have resisted the temptation to edit literary memorabilia with pen, pencil, or scissors. Perhaps the worst offender in this category was Sophia Peabody Hawthorne, whose contributions to her husband’s posthumous literary image are immortalized in the defaced notebooks and other manuscripts that she prepared for publication in her years of widowhood. Mrs. Hawthorne’s activities as editor and censor had such a pervasive influence on her husband’s reputation that they could not be ignored by the editors of the Centenary Edition of his works. No physical description of the notebooks would be complete without reference to Sophia’s emendations and mutilation, and students of American cultural history would be ill served by an edition that did not report them. Thus, the textual notes to the clear texts of Hawthorne’s writings include detailed descriptions of Sophia’s handiwork, as well as notes recording similar revisions by the Hawthornes’ son, Julian.
It is the degree of historical significance of any nonauthorial additions—their independent documentary value—that determines how fully they need to be recorded in a scholarly edition or whether they should be mentioned at all. If a writer’s spouse marked the deceased’s letters and papers for a print edition that was never published, the nonauthorial emendations clearly have less importance than they would if bowdlerized texts had appeared and influenced a wide reading public.
Some examples of posthumous editing by friends, relatives, and publishers can be ignored because the resulting printed texts have not been as influential as Sophia Hawthorne’s. The editors of the George Washington Papers, for instance, ignore Jared Sparks’s “styling” of punctuation and spelling on the pages of the Washington manuscripts entrusted temporarily to his care while he prepared his selected edition of Washington’s writings. Had Sparks’s Washington volumes been the only ones available to scholars and laymen in the century and a half before the inauguration of the new George Washington project, an argument could have been made for recording his emendations in the new edition. Luckily, in those intervening decades, scholars had access to the original Washington manuscripts on which Sparks based his texts. The source texts were used for several other, and better, editions of Washington’s letters and papers, and the public was not left at the mercy of Sparks’s version of the documentary record.
In some ways, the treatment of such nonauthorial revisions in manuscripts is comparable to a critical editor’s approach to the works of an author whose publisher demanded or imposed changes in a manuscript for the sake of literary style or public acceptance. In these cases, the modern editor must offer readers both the text that the author originally considered final and the revised version that the public actually read. When an author accepted such revisions, they bear directly on her or his sense of craftsmanship and aesthetic convictions. Even if they were imposed over the author’s objections, it was they, and not the original words, that were circulated to the world and became known as that author’s text. Just as the writer’s original intentions form one historically significant document, so does the revised and published version that became a part of literary history through its influence on those who read it.
3. Conflated Texts
Just as a single manuscript can contain many versions of the same document or even the texts of distinct documents, two or more sources may be combined to produce a single editorial text. Few documentary editors will entirely escape the task of conflating, or combining, the elements of two or more sources into one reading text, although some of their methods of presenting the new texts may differ from those traditional to critical textual editing.
a. Fragmentary Source Texts
Conflation occurs most frequently when the best source text survives in fragmentary form, while less authoritative versions exist with a more complete text. It is no novelty to catalog a manuscript letter whose last page has been lost but for which a contemporary copy, later transcription, or even printed text will furnish the missing material. David Nordloh and Wayne Cutler debated the problem of conflating fragmentary sources in the Newsletter of the Association for Documentary Editing in 1980 and 1982. Nordloh questioned Cutler’s treatment of a letter from Andrew Jackson to James Polk for which two manuscript sources survived. The first was a draft in the hand of Jackson’s secretary, revised and signed by Jackson. The second was a copy of the letter made by Polk from the version he had actually received. Polk’s copy contained a postscript added when the fair copy was made from the draft. Three editorial choices were available. Nordloh argued for clear text, in which the postscript would have been printed as part of the letter, with the change in authority indicated in a back-of-book note. An inclusive- or expanded-text editor might have printed the postscript as part of the editorial text, noting the change in authority in a footnote. Cutler chose the most conservative solution to the problem, printing only the contents of the draft as the letter’s reading text, with the postscript transcribed verbatim in a note adjacent to the text. Nordloh defended his position with a discussion of the primacy of authorial final intentions. Cutler explained his own decision by analyzing the special reverence of documentary editors for their source texts.
In any documentary edition, conservative methods of conflation best suit the reader. Even if the conflated passages appear in one reading text, notes adjacent to the letter alert the reader to editorial intervention and provide easier and more convenient access to the necessary information. In clear text, without a superscript number to indicate that annotation is present, readers would be ignorant of the crucial textual and evidentiary problem at hand and the need to consult the back-of-book textual record.
When overlapping fragments of the text of the same document survive, and when each version can be considered reliable in terms of substantives if not in terms of accidentals, overt conflation of the sources into one editorial text may be preferable. The fact and location of such conflation can be indicated using numbered notes or other devices. Even here, documentary editors resist the temptation to impose a single pattern of accidentals on the resulting conflated text, although this can produce a text in which one three-paragraph section represents the author’s usage in a surviving eighteenth-century manuscript, while another three-paragraph passage shows the style imposed on the text by a late nineteenth-century transcriber.
If all the pages of a manuscript source text survive in mutilated form—as with documents damaged by fire or water or defaced by descendants or collectors—the editor may have to supply missing words or phrases at regular intervals throughout the editorial text rather than conflate the texts at a single point where one source ends and another continues. If this is a consistent problem in the edition, editors often devise a system of symbols that indicate such routine conflation. It would be needlessly intrusive to accomplish this purpose with dozens of footnotes indicating the source of words or phrases from the supplementary source text. The editors of Mark Twain’s Letters give their readers a chance to evaluate supplied material in mutilated manuscripts by providing photo facsimiles of such pages in their back-of-book textual records.
Frequent conflation may also be required when the author’s drafts were routinely copied for transmittal as letters or other communications by a scribe who was less than conscientious. Some editors solve this problem by adopting a special bracket to enclose words in final versions of letters and state papers supplied from the more authoritative draft versions. Such simple devices give the reader simultaneous knowledge of authorial intentions and the text of the document as read by its intended recipient.
A remarkable instance of what might be termed “facsimile conflation” can be found in Dickinson W. Adams’s twentieth-century facsimile edition of Jefferson’s Extracts from the Gospels. Jefferson arranged two compilations of New Testament texts, one known as “The Philosophy of Jesus of Nazareth,” and the other as “The Life and Morals of Jesus of Nazareth.” While “The Life and Morals,” a compilation of Greek, Latin, French, and English versions of Gospel verses, survived and was eventually preserved at the Smithsonian Institution, “The Philosophy of Jesus” collection was lost. All that remained were the two mutilated copies of the King James Version of the Gospels at the University of Virginia from which Jefferson had clipped selections, and what appears to be a copy of the list Jefferson followed in removing these passages from the New Testaments. Working with photostats of intact copies of the same editions of the Gospels used by Jefferson, Adams created a new body of clipped photocopies. Annotated, indexed versions of the facsimile compilations were published as the first volume of the second series of The Papers of Thomas Jefferson, Jefferson’s Extracts from the Gospels.
b. Reconciling Accounts of Independent Witnesses
In classical scholarship, the surviving witnesses to a lost archetype are usually in the form of scribal copies. Each must be collated with the others to isolate patterns of error that indicate transcriptional descent, to determine whether one or more was a copy made from an earlier and thus more reliable copy. Once this process of textual filiation is complete, variants among the witnesses are used to reconstruct the missing archetype so that the editorial text can represent the best readings provided by the imperfect witnesses.
For editors of modern documentary materials, the problem of reconciling discordant witnesses is most likely to appear when only verbatim, even shorthand, accounts survive of words communicated orally in the form of a speech, conversation, or interview. Editors who confront this textual challenge may wish to consult the descriptions of widely differing treatments of such records in the Woodrow Wilson Papers (24:viii-xiii) and the Douglass Speeches (1:lxxv ff.). Remember, though, that these editorial procedures were adopted in the 1960s and 1970s. Today, computer-assisted collation of machine-readable transcriptions of such accounts makes their comparison and analysis far easier and more accurate.
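A minimal sketch of what such computer-assisted collation can look like, using Python’s standard difflib module to align two machine-readable transcriptions of the same passage and flag their variants. The two “newspaper reports” below are invented for illustration and stand in for any pair of verbatim accounts:

```python
import difflib

# Two hypothetical transcriptions of the same orally delivered passage.
report_a = "I am here tonight to speak of liberty and of the union".split()
report_b = "I am here to speak of liberty and of union".split()

# SequenceMatcher aligns the word sequences and reports where they diverge.
matcher = difflib.SequenceMatcher(a=report_a, b=report_b)
variants = []
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":  # "replace", "delete", or "insert" marks a variant
        variants.append((tag, " ".join(report_a[i1:i2]), " ".join(report_b[j1:j2])))

for tag, a_text, b_text in variants:
    print(f"{tag}: report A reads {a_text!r}, report B reads {b_text!r}")
```

Run against full transcriptions, a list like `variants` gives the editor every crux at once, which is the comparison that had to be performed by hand in the 1960s and 1970s.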
In both the Wilson and the Douglass series, the longest surviving verbatim account was chosen as the basic text when the editors needed to conflate variant accounts. (If nothing else, the longest report was usually made by the reporter who stayed alert after rival scribes had lost interest.) Collating each variant version against this basic text often showed that in some cases one variant was based on another. The editors could establish patterns of textual filiation and ignore those reports that were obviously the scribal descendants of another stenographic version. From that point in the editorial process, however, the two groups of editors followed different courses.
Once the Wilson editors identified those verbatim accounts with claims to authenticity, they isolated and analyzed every crux, or unaccountable variant between the texts. Wilson himself helped his later editors, for he often reviewed transcriptions of the shorthand reports of his speeches prepared by his personal secretary and corrected his aide’s inaccurate reporting. The pattern of variants and cruxes determined the final editorial treatment of Wilson’s oral texts. Variants were often comparatively minor in length, and many cruxes were easily explained in terms of the mishearing of similar spoken words or the misreading of a reporter’s shorthand notes. Here the Wilson editors silently combined the accurate words and phrases from two or more reports into one text. Only when variants were substantial and cruxes inexplicable did they intervene with brackets or numbered footnotes.
While the editors of Wilson’s oral communications turned to the techniques of classical and literary scholarship to solve this problem, other documentary editors performed conflations more overtly. In the Douglass Speeches, for instance, the editors could not afford the luxury of silent conflation and emendation, even with the use of a full textual record. Shorthand-based newspaper reports of Douglass’s speeches contained variants that were not only cruxes (in orally transmitted documents, anomalies in reporting the same words) but also reflections of inconsistent reporting of the same passages in the speeches. Such inconsistencies resulted in one newspaper’s publishing long passages, which must have taken twenty or thirty minutes to deliver, while another paper ignored this section of an oration completely.
The Douglass editors could not conflate such variant texts as gracefully as editors whose cruxes were largely confined to minor, easily explained anomalies. Whenever it was necessary to add passages to the basic text from a version that had reported a section more fully, the conflated material was added to the basic text in angle brackets, and its source indicated immediately in a note. If the basic text contained a summary of the sentences or paragraphs reported more completely in the second text, a dagger in the editorial text leads the reader to a note where the basic text’s summary version is reported verbatim.
Most editions of nineteenth-century documents follow the patterns used by the Douglass edition. It is rare to find even one complete stenographic report for a speech in this era, and the modern editor is spared the process of collating variants in two independent witnesses of comparable length and detail. Most, like the editors of the papers of Thaddeus Stevens, use the most complete and reliable report of a speech as their basic source text. Footnotes supply significant variants from other reports of the same speech, that is, different versions of text in the source text or passages omitted from the source text but included in other reports.
Once they have achieved fame, twentieth- and twenty-first-century figures usually provide their editors with fairly reliable source texts for prepared speeches. Press releases containing advance copies of speeches were routinely supplied to newspapers by World War II. The files of wire services and major newspapers usually hold these documents even if they are missing from the records of the woman or man who delivered the speech. Radio and television recordings of speeches as actually delivered often survive to complicate the matter, providing future editors with two versions of an author’s “final intentions”—one on paper, the other on tape or a disc.
The experience of Margaret Sanger’s editors is instructive. Their approach to the various records of her speeches is a lesson plan in practical ingenuity. Although some of Sanger’s orally presented remarks are recorded in their entirety with both images and words (see above, chap. 3), others fared less well. Often, the editors could rely on the typescript of a speech that Sanger had delivered, but in one notable instance, there was neither a recording nor a typed reading copy for Sanger’s radio address at Vassar College, 5 August 1926. What survived were an abbreviated version published in the Birth Control Review and excerpts published in several Poughkeepsie newspapers. No two versions were close enough to permit conflation, so the editors sensibly printed the longest version available (that in the Poughkeepsie Evening Star), supplementing this text with material from other newspaper accounts absent from the source text (Sanger Papers, 1:447–49).
II. Translations
Initial decisions on handling foreign-language, shorthand, and ciphered materials are discussed, above, in chapter 3. A few lucky editors enjoy access to the foreign-language translations actually used by their subjects to read letters or other documents written in an unfamiliar language or code. Here the appropriate source text is that contemporary translation, whatever its shortcomings as an accurate rendering of the original.
The George Washington Papers editors had this luxury for documents in French sent to Washington during the Confederation and his presidency. In early volumes of the edition’s Presidential Series, the editorial reading texts for such foreign-language materials were transcriptions of the translations read by Washington, with the French texts transcribed in the last footnote to the document. In later volumes of the Presidential Series, the French texts could be omitted, as they would be readily available in the electronic edition of the Washington Papers.
Most editors, however, will have to find a way to translate such sources.
A. Foreign-Language Materials
Realistically, few editors will be completely fluent in every language used to create the materials relevant to their editions. (Editors working on the Papers of Jacob Leisler, for instance, dealt with documents in German, French, Dutch, and pidgin English-Dutch.) The preparation of these documents for publication will be a cooperative and iterative process among editors and translators.
The translator must be expert not simply in the language in question but in the language as spoken and written at the time the document was created. Spelling, punctuation, and other usages do not stand still in any tongue. Facility in twenty-first-century Spanish is inadequate preparation for dealing with seventeenth- or eighteenth-century Castilian.
Because of the collaborative nature of preparing translations for a documentary series, the translator who is not a native English speaker must also have exceptionally high English-language skills. The translator must be able to understand not only the editors’ questions voiced in modern English but also the appropriate patterns of English usage for the period in which the documents were created.
Translating into English for a documentary edition raises many issues. While some editors might be tempted to retain the punctuation, capitalization, or sentence structure of the foreign-language source in the translation, this may create a less-than-readable document in English. A more reasonable course is creating in the translation a document that parallels the original in its style and tone. A formal document can be translated into formal English; an informal one, into more colloquial phrases. If translations will be published side by side with documents of the same period inscribed in English, translators can use the English-language documents as a guide in translating nouns, forms of address, or other diction that appears in both languages. The translation cannot replace the original foreign-language document, but it can give the English-speaking reader the original’s substance and a sense of the language in which that meaning was conveyed.
Editors intending to publish translations must decide on how to treat irregular spellings of proper nouns—whether to give them spellings that are correct and consistent in English usage or to retain unusual spellings from the foreign-language document.
The choice may be easy for geographical place-names: it would, for example, be disconcerting to retain the Spanish “Londres” for “London” in an English translation of a document. Other proper nouns will raise more difficult questions. Editors have made different choices in this matter, basing their decisions on the nature of their documents and their audience. Some use the correct English spellings of proper nouns in translation (when these can be determined), providing in a footnote variant spellings used in the original; others retain the author’s usage from the original, untranslated document. These and any other translation policies should be made clear in an edition’s introduction or in source notes to the specific translations. Readers can then evaluate the translation’s relationship to the original.
Some of the original’s texture can be conveyed in a translation with little effort. Formal elements such as underlining and block letters can easily be retained in a translation. But if the editor’s goal is a readable text in English, other features may have to be ignored or relegated to notes.
Authorial methods of rewriting or correcting the text create especially difficult problems. Interlineations or insertions may have to be recorded in footnotes when significant. An attempt to reproduce them in the translated text itself may defeat the purpose of providing a convenient, readable English version, for words or groups of words in one language seldom translate into exactly the same number of words or phrases in another tongue. One word in English may be the perfect equivalent of a several-word phrase in a foreign language, or it may take half a line of English words to convey the meaning of a single word in Greek.
It may be impossible to find an appropriate place to insert translations of interlineations or substitutions when they differ in length from the original. For example, the common French term il y a (literally, “it there has”) is rendered in English as “there is” or “there are.” If, however, a French author carelessly omitted the “y” in drafting the phrase and then inserted it into the text with a caret, there would be no way to show this insertion in the translated phrase. An editorial note could dispose of the matter clearly and concisely.
A final issue is documents that intermingle languages. Again, the goal should be a readable text for the intended audience. In Family Letters of Victor and Meta Berger, letters that Victor Berger wrote entirely in German are printed entirely in English translation, but when Berger wrote in English and dropped in an occasional German phrase, the German words remain in the text, with a translation in a footnote. The same system is used by most editors whose subjects sprinkle their correspondence with mottoes and sayings in modern and classical languages. For more complex language problems in a collection of documents, it may be necessary to experiment with several of the most difficult documents. Only after trying out a number of different solutions can an editor settle on the conventions for translation that will produce the most usable texts for the audience likely to consult the edition. The same conventions should be kept in mind if the editor later finds it necessary to provide a translation in annotation for an unfamiliar foreign-language word or phrase appearing in a text.
As a convenience for the readers of most editions, translation should immediately follow the foreign-language original. The edition will “privilege” that source text while giving readers the text most will need to understand it.
For source texts in some standardized form of shorthand or code, the translator will probably find it more convenient to work from an image of the source text, not a transcription. Experts in the Gregg system are still fairly easy to find, but the Woodrow Wilson editors’ search for someone familiar with Wilson’s favorite Graham shorthand remains a scholarly legend. Once translations from Graham to English were completed, more challenges arose.
Most of the source texts that served the Woodrow Wilson Papers were printed in a very conservatively emended form. There was no question of publishing hand-set printed facsimiles of the notes and drafts that Wilson made in Graham shorthand. The translations, however, still demanded editorial attention. Since Wilson usually indicated his preferred marks of punctuation and paragraph breaks in his shorthand materials, these could be honored in the editorial texts. Still, the editors never pretended that they could guess all of the author’s intentions for translation from Graham, and their standard for the treatment of shorthand materials differs from that for materials drafted by Wilson in clear English. A statement of these special methods appears in editorial notes (1:19, 128–31). Notes to individual documents alert the reader to the use of translated shorthand source texts.
Other writers use personal “shorthand” that may now be comprehensible only to their editors. Only these experts have the opportunity to compare and analyze a wide selection of such idiosyncratic forms; scholars without access to the project’s editorial archives are denied this luxury. This imposes a special duty on the editor to determine the meaning of such forms and to represent them verbally in the editorial text for the reader’s convenience and enlightenment. If the nature of the shorthand allows the expansion of alphabetic contractions within square brackets, the editor’s responsibility is fulfilled. If the symbols make such expansion impractical, the editor may need to use a special typeface to represent translation of the author’s do-it-yourself shorthand, with footnotes (or hyperlinks in an electronic edition) explaining the symbols in the source.
Whatever policy is adopted, the initial transcription and deciphered version should reflect the peculiarities of the original. The editor can decide on any necessary emendations at a later stage. The reader must be warned that all such emendations have been supplied by the editor, and in the substance of texts translated from shorthand, the reader deserves to know when editorial guesswork or imagination has been employed.
C. Codes and Ciphers
With ciphered materials, as with shorthand, photocopies of the source text are a more reliable basis for translation than a transcription of numbers and symbols. The modern editors of personal or diplomatic codes and ciphers from earlier centuries will find no ready-made experts to assist them. The editorial staff itself will have to learn to work from surviving keys to these ciphered materials to create an understandable reading text.
Editors with access to the key to a cipher or code have a concern beyond the choice of typeface for encoded passages: the author’s accuracy in enciphering the passages and the recipient’s skill in decoding them. The translated clear text of coded documents cannot stop with enabling the reader to see just which sections were entered in code and cipher and what those codes meant. The text or the accompanying notes must also record what the editor has been able to determine about the recipient’s success in mastering the ciphered passages. Indicating which words, phrases, or sentences were significant enough to deserve encoding allows the reader to see exactly which information in the letter was judged confidential and which facts the writer felt free to leave open to prying eyes. Noting both the author’s skill in encoding his or her own words and the correspondent’s accuracy in using the key to the same code is critical in showing the effectiveness of the transmission of the ciphered information.
Madison’s editors found that a recipient’s errors in deciphering the coded text had led earlier, less conscientious editors to publish inaccurate versions of significant political correspondence (see Madison Papers, 6:177–79). John Jay’s editors discovered that the inventor of one code misused his own system so badly that his correspondent was unable to decipher his letters (Papers of John Jay, 2:117–18). The editor should indicate instances in which a significant difference exists between what the author intended and what the second party was, in fact, able to comprehend. The simplest solution is an editorial text that approximates the author’s intentions, no matter how badly fulfilled or how poorly the recipient or other readers managed to grasp the writer’s meaning. Numbered footnotes can describe discrepancies between intentions and perceptions.
With ciphers, as with shorthand, it may be impossible to guess the author’s intentions as to capitalization and punctuation. Writers in cipher and code often deliberately omit marks of punctuation or paragraph breaks to avoid assisting the efforts of enemy cryptologists. Many editors have decided against supplying any of these omissions. They argue that since the coded message’s recipient had to guess at punctuation, it may be more accurate to print the newly decoded text in the same ambiguous fashion. Other editors punctuate the deciphered text based on the author’s known patterns.
Whatever policy is adopted, the initial transcription and deciphered version should reflect the peculiarities of the original, and the editor can decide on any necessary emendations at a later stage. As with shorthand translations, the reader must be warned when and why such emendations have been supplied by the editor. For ciphers, the usual device is to print ciphered passages in small capital letters or some other typeface that does not appear elsewhere in the editorial texts of handwritten sources. This method eliminates the need for additional footnotes and ensures that the reader cannot confuse it with any other textual device.
The textual problems presented by telegraphic communications are intimately related to those of other forms of coded transmission and translation. Editors of nineteenth-century archives that contain a substantial number of telegrams follow a conservative policy in emending received telegraph texts. In this era, the decoded message was written out by hand in the receiving telegraph office, and the words and phrases were usually copied in conventional form, using upper- and lowercase letters and marks of punctuation.
Editors of cables received in the modern era face new decisions. The twentieth century brought automated printing of decoded telegraph messages in uppercase characters only, with the additional convention of writing out the names of marks of punctuation (e.g., “stop”) instead of translating the words back into the symbols themselves. Editors of World War II leaders like Marshall and Eisenhower chose readability over documentary fidelity, translating “stop” to a period and supplying appropriate patterns of uppercase and lowercase letters. This decision was further justified by the fact that these generals customarily saw the incoming messages as summaries neatly retyped by their aides, not in their original form as telegrams.
IV. Current “Good Practice”
Again and again in our survey of editorial treatment of the problems discussed in this chapter, we’ve used the terms “cautious” and “conservative” to describe the appropriate approach. These are now the accepted watchwords among documentary editors for considering all textual matters. The reasons are ruthlessly practical.
As a practical matter, the chosen textual method should be the one that best serves the majority of the sources being edited. This spares readers unnecessary announcements of exceptions to some editorial rule or other. Beyond this, American editors now generally agree that conservative emendation is more effective than liberal emendation. A survey of veteran editors in the early 1990s revealed that nearly all had adopted less intrusive editorial policies than the ones announced in their original statements of editorial methods. While several scholars who had become directors of editions initiated by earlier editors had revised textual policies in the direction of more conservative emendation, no successor editor had instituted a new policy of more generous editorial intervention. Few editors today would endorse without question the liberal practices of editorial intervention commonly accepted a decade or two ago.
G. Thomas Tanselle’s 1978 evaluation of the methodology of documentary editions prompted some to reexamine their methods, but most editors who revised their methods did so on the basis of their day-to-day experience. This underscores the point that no statement of principles is likely to cover all the issues and problems to be encountered in any documentary edition and that the fledgling editor is well advised to examine in detail the practices of a wide range of editions before launching a new documentary project. In particular, the lessons of editors who modified their method in midstream are instructive.
The best-known example of evolving editorial methods among editions of famous American authors is probably the Mark Twain project, but the experience of other editors is equally helpful. Among literary series, the first volume of the Howells Letters was emended far more heavily than any other number in the series. In part, this is because Howells himself standardized his letter-writing style as he approached middle age; the later source texts simply needed less emendation. But the frequency of emendation also decreased as the editors themselves grew more accustomed to Howells’s usage. Patterns of punctuation or spelling that appeared odd or ambiguous in the early years of the project no longer seemed to need correction or explanation because they had become familiar to the project’s staff.
Historical editions, too, have noticeably modified their editorial methods. From its inception, the Ratification of the Constitution series gave conservative, almost diplomatic treatment to certain documents (such as government records), labeled “LT” to indicate a literal transcription of the source. Other materials, such as private correspondence, diary entries, and the like, were heavily emended in one of the most liberal applications of expanded transcription on record. As the project continued, this dual standard of textual treatment was abandoned in favor of a single general policy described in volume 13 of the series: “With only a few exceptions all documents are transcribed literally.” When the Woodrow Wilson editors reached volume 31, they provided a revised introduction to explain to readers that they had come to follow an increasingly rigorous adherence to the rule of verbatim et literatim. The Henry Laurens Papers, once an example of expanded transcription, adopted policies that presented text in near diplomatic form. The Franklin Papers editors’ textual policies, while still far from literal or diplomatic transcription, became more deliberately conservative in the late 1980s, and the same pattern has been seen in the editions of the papers of Jefferson and the Adams family in more recent years.
From the beginning, one group of editors resisted the temptation to emend or “improve” their source texts—those who dealt with the records of less-educated people. Historians and historian-editors have become sensitive to the significance of the evidence available in the writings and records of the poorly educated. Political, social, and economic historians now focus on the less educated as well as on the literate elite in our society, and the documentary records of such groups and individuals are the subject of scholarly editing as well as general scholarly interest.
It should be no surprise that the methods employed in editions such as the papers of black Abolitionists and Southern freedmen differ from those designed to serve the correspondence of Adams and Jefferson. Editors of the traditional statesmen’s papers are concerned with documents that fall within the realm of conventional scholarly research. The methods of emendation and normalization used in the earliest historical papers projects were chosen to illuminate the writings of individuals who were not merely literate but exceptionally well educated. As newer editorial projects confronted the texts of documents that recorded the words and thoughts of the ill educated, even the wholly illiterate, they discovered that these traditional conventions were unsatisfactory. The documents demanded different methods and even different skills. Randall M. Miller pointed out in his instructive 1985 essay “Documentary Editing and Black History” that the evaluation of historical documents generated by the African American experience might require expertise in fields as diverse as cultural anthropology, folklore, linguistics, and musicology.
The imposition of normalized punctuation, for instance, is based on the assumption that a source text’s author understands the functions of such marks and that he or she would approve such repunctuation if given the chance to be his or her own editor. But correction of spelling errors in the writings of an ill-educated writer imposes a false sense of authorial intentions. Worse still, it can destroy much of the special value inherent in documents inscribed by the semiliterate—the phonetic rendering of colloquial language and dialect that makes such documents useful to philologists and cultural historians.
In many instances, the records of the less educated suffered nonauthorial normalization long before they became the subjects of scholarly editing. Illiterate individuals had no choice but to dictate letters or memoirs to second parties, who imposed their own notions of correct spelling and punctuation and even syntax upon them. The only way to emend such a dictated source text to bring it closer to authorial intent would be to make it less correct syntactically and to introduce phonetic misspellings to match the author’s dialect. Luckily, no editor has been tempted to follow such a course. Documents dictated by the illiterate, like documents inscribed laboriously if incorrectly by the semiliterate, must be allowed to stand, even though they may reflect a degree of elegance superimposed by the amanuensis and completely foreign to the authors themselves. This phenomenon is addressed squarely by editors of the WPA “Slave Narratives,” in which patronizing white interviewers affected both the form and content of the reminiscences they elicited from former slaves (Documentary Editing 18:92).
In earlier decades, there was a flurry of debate over whether the writings of certain categories of professional authors or historical figures demanded special treatment in terms of textual methods. For a time, this centered on the desirability of special requirements for editing the writings of women. It has now been agreed that the author’s sex in and of itself dictates no special methods in establishing a text. It is the subject’s level of education or habits and patterns of writing and the needs and expectations of the edition’s audience that should mold such methods.
Reconstructions of vanished early versions of historic documents or sources that demand translation remain exceptions to the general rule of conservative treatment for documentary sources. Still, the important elements of such sources are more likely to survive the flexible application of a conservative editorial approach than more liberal editorial intervention. The reader with a taste for watchwords may be reminded of A. E. Housman’s comments on the “science” and the “art” of textual criticism:
A textual critic engaged upon his business is not at all like Newton investigating the motions of the planets: he is much more like a dog hunting for fleas. If a dog hunted for fleas on mathematical principles, basing his researches on statistics of area and population, he would never catch a flea except by accident. They require to be treated as individuals; and every problem which presents itself to the textual critic must be regarded as possibly unique. . . . If a dog is to hunt for fleas successfully he must be quick and he must be sensitive. It is no good for a rhinoceros to hunt for fleas: he does not know where they are, and could not catch them if he did. (“The Application of Thought to Textual Criticism,” 132–33)
Documentary editors, too, must be knowing and sensitive flea hounds. The fact that their editorial products will be used as documentary evidence imposes a special responsibility. Their imaginations should be directed toward reconstructing inscribed truth, not toward distracting their readers with uninformed guesses. Liberal policies of editorial emendation and intervention represent the “rhinoceros” approach to documentary editing, for they miss the fleas and crush the source texts under their own weight.
For a discussion of the nature of an established text see Fredson Bowers, “Established Texts and Definitive Editions.” Anecdotal but useful accounts of the lessons to be learned in establishing texts are found in Ronald Gottesman and David Nordloh, “The Quest for Perfection: Or Surprises in the Consummation of Their Wedding Journey.” As for the textual record required in an edition of private writings, see Nordloh’s “Substantives and Accidentals vs. New Evidence: Another Strike in the Game of Distinctions.” G. Thomas Tanselle addresses the problems of editorial records for literary works in “Some Principles for Editorial Apparatus,” but there are no comparable studies of the special problems arising in creating such an apparatus for documentary sources.
For cautionary words on an edition’s failure to recognize the limitations of a microfilm as a source text without verification of a transcription against the original, see Jo Zuppan’s review of The Correspondence of John Bartram.
For a summary of the Douglass edition’s methods, see John McKivigan, “Capturing the Oral Event: Editing the Speeches of Frederick Douglass.” For the audio archive and transcriptions of the Presidential Recordings Program, go to their Web site at http://millercenter.virginia.edu/scripps/digitalarchive/presidentialrecordings/index.
Hans Walter Gabler, director of the Ulysses edition, describes their methods in “The Text as Process and the Problem of Intentionality.” Robert Spoo offers an excellent brief history of the controversy surrounding those methods in “Ulysses and the Ten Years War: A Survey of Missed Opportunities.”
For further discussion of Jerome McGann and the socialization of texts, see Jack Stillinger, Multiple Authorship and the Myth of Solitary Genius.
Sargent Bush discusses the array of “multiple-text” problems in Jonathan Edwards’s “Miscellanies,” the thirty-five-year record of Edwards’s reflections on theology and philosophy, in “Watching Jonathan Edwards Think.” The results of editorial work on these texts can be seen in vols. 13, 18, 20, and 23 of The Works of Jonathan Edwards.
Edward A. Levenston’s discussions of foreign-language translation for scholarly editions in The Stuff of Literature: Physical Aspects of Texts and Their Relation to Literary Meaning provide much food for thought on both a theoretical and practical level. A detailed analysis of a cipher document representing the full array of such problems of communication can be seen in the Madison Papers, 6:177–79.
For still more arguments in favor of conservative methods in emending source texts with documentary elements, see Hershel Parker’s review of two Hawthorne volumes in Nineteenth-Century Fiction; Ernest W. Sullivan, “The Problem of Text in Familiar Letters”; and G. Thomas Tanselle’s “The Editing of Historical Documents,” as well as his remarks on the overrated virtues of readability in his essay “Literary Editing.” Examples of clear reading texts without the use of textual symbols can be found in such editions as the Howells Letters and the Franklin Papers.
Excellent discussions of the many facets of the documentary records of the ill educated can be found in John W. Blassingame’s introduction to Slave Testimony: Two Centuries of Letters, Interviews, and Autobiographies; and in C. Vann Woodward’s review essay “History from Slave Sources,” in the American Historical Review.
Finally, the examples of textual method and practice offered in Philip Gaskell, From Writer to Reader: Studies in Editorial Method, still remain among the most helpful available.