EDITING IN A DIGITAL HORIZON

Jerome McGann and Dino Buzzetti
Just as the machinery of the codex opened radically new ways to store, organize, study, transform, and disseminate knowledge and information, digital technology represents an epochal watershed for anyone involved with semiotic materials. For scholars of books and texts, and in particular for editorial scholars, digital tools have already begun their disciplinary transformations, and we can see as well the promise of further, perhaps even more remarkable, changes on the near horizon of our work.
In this essay, we describe and reflect on these changes, but first we briefly review the forms and procedures of scholarly editing that are licensed by codex technology. This survey is important because present work and future developments in digital scholarship evolve from critical models that we have inherited. The basic procedures and goals of scholarly editing will not change because of digital technology. True, the scale, range, and diversity of materials that can be subjected to scholarly formalization and analysis are all vastly augmented by these new tools. Besides, the emergence of born-digital artifacts creates entirely new critical opportunities, as well as problems, for librarians, archivists, and anyone interested in the study and interpretation of works of culture. Nonetheless, the goals of the scholar remain unaltered—preservation, access, dissemination, and analysis-interpretation—as does the basic critical method, formalization.
If our traditional goals remain, however, these new technologies are forcing us to revisit and rethink some of the most basic problems of textuality and theory of text. We address these matters in the two central sections of this essay, and in the final section we reflect on certain practical methodological implications of these reflections. First, however, we must step back and make a brief review of the current state of text-editing theory and method.
CODEX-BASED SCHOLARSHIP AND CRITICISM

Scholarly editing is the source and end and test of every type of investigative and interpretational activity that critical minds may choose to undertake.1 Well understood by scholars until fairly recently, the foundational status of editorial work is now much less surely perceived. Hermeneuts of every kind regularly regard such work, in René Wellek’s notoriously misguided description, as “preliminary operations” in literary studies (Wellek and Warren 57).
Odd though it may seem, that view is widely shared even by bibliographers and editors, who often embrace a positivist conception of their work.
Scholarly editing is grounded in two procedural models: facsimile editing of individual documents and critical editing of a set of related documentary witnesses. In the first case, the scholar’s object is to provide as accurate a simulation of some particular document as the means of reproduction allow. Various kinds of documentary simulation are possible, from digital images and photoduplications on one end to printed diplomatic transcriptions on the other. Facsimile editing is sometimes imagined as a relatively straightforward and even simple scholarly project, but in fact the facsimile editor’s task is every bit as complex and demanding as the critical editor’s. In certain respects it can be more difficult, precisely because of the illusions that come with the presence of a single documentary witness, which can appear as a simple, self-transparent, and self-identical object.2

Securing a clear and thorough facsimile brings with it more problems than the manifest and immediate technical ones, though they are real enough.
In addition, the facsimile editor can never forget that the edition being made comes at a certain place and time. At best, therefore, the edition is an effort to simulate the document at that chosen moment. The document bears within itself the evidence of its life and provenance, but that evidence, because of the document’s historical passage, will always be more or less obscure, ambiguous in meaning, or even unrecoverable.
Every document exhibits this kind of dynamic quality, and a good scholarly edition will seek to expose that volatility as fully as possible. Being clear about the dynamic character of a document is the beginning of scholarly wisdom, whatever type of work one may undertake (hermeneutical or editorial) and—in editorial work—whatever type of edition one has chosen to do.
The other foundational pole (or pillar) of scholarly editing is critical editing. This work centers in the comparative analysis of a set of documentary witnesses, each of which instantiates some form or state of the work in question. We name, for example, Dante Gabriel Rossetti’s “The Blessed Damozel” with that one name, as if it were a single, self-identical thing (which, in that special perspective, it in fact is—that is to say, is taken to be). But the work so named descends to us in multiple documentary forms. Critical editing involves the careful study of that documentary corpus. Its main purpose is to make various kinds of clarifying distinctions among the enormous number of textual witnesses that instantiate a certain named work.
The critical editor’s working premise is that textual transmission involves a series of translations. Works get passed on by being reproduced in fresh documentary forms. This process of reproduction necessarily involves textual changes of various kinds, including changes that obscure and corrupt earlier textual forms. Some of these changes are made deliberately, many others not.
A classical model of critical editing, therefore, has involved the effort to distinguish the corruptions that have entered the body of the work as a result of its transmission history. That model often postulates a single, authoritative, original state of the work. The scholar’s analytic procedures are bent on the effort to recover the text of that presumably pristine original.
A key device for pursuing such a goal is stemmatic analysis. This is a procedure by which the evolutionary descent of the many textual witnesses is arranged in specific lines. A stemma of documents exposes, simply, which texts were copied from which texts. Understanding the lines of textual transmission supplies a scholar with information that guides and controls the editorial work when decisions have to be made between variant forms of the text.
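To make the procedure concrete, here is a minimal sketch in Python (our illustration, not part of the original essay; the stemma and the readings are invented). It arranges five witnesses under a lost original and weighs a variant by the number of independent branches that attest it:

    from collections import defaultdict

    # Stemma: each witness names its exemplar; "O" is the lost original.
    stemma = {"A": "O", "B": "O", "C": "A", "D": "A", "E": "B"}
    # Readings of one disputed word in the surviving witnesses.
    readings = {"C": "damozel", "D": "damozel", "E": "damsel"}

    def branch_of(witness):
        """Climb the stemma to the independent branch (a child of 'O')."""
        while stemma[witness] != "O":
            witness = stemma[witness]
        return witness

    votes = defaultdict(set)
    for witness, reading in readings.items():
        votes[reading].add(branch_of(witness))

    for reading, branches in sorted(votes.items()):
        print(reading, "attested in branch(es)", sorted(branches))
    # damozel attested in branch(es) ['A']
    # damsel attested in branch(es) ['B']
    # C and D agree, but their agreement counts only once, since both
    # descend from A. The two independent branches disagree, so the
    # stemma alone cannot decide: editorial judgment must choose.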
The problematic character of every documentary witness remains a key issue for the critical editor. That difficulty emerges at the initial stage of editorial work—that is to say, at the point when a decision is made about which documents will come into the textual analysis. In no case can all the witnesses be included. On one hand, the number of actually available documents will be far too numerous; on the other, many documents that formed part of the transmission history will be inaccessible.
In some cases—they are a distinct minority—a relatively small and manageable set of documents offers itself for scholarly analysis. Print technology brought about a massive proliferation of textual works. These are passed on to us edition by edition, and of course each edition differs from every other; nor are the differences between editions always easy to see or understand.
But the play of these kinds of textual differences is still more extreme. An unschooled view, for example, will assume that every copy of a print edition of some work is identical to every other copy. Editorial scholars themselves often make this assumption, and sometimes deliberately (in order to simplify, for analytic purposes, the ordering of the editorial materials). But the textual scholar usually knows better, and in producing a critical edition from an analysis of printed documents, editors regularly understand that multiple copies of a single edition must be examined. (As we see below, even multiple copies that appear to be textually identical always incorporate material differences that can be, from the scholar’s point of view, crucial for anyone trying to understand the work in question.)

In recent years a special type of critical editing has gained wide currency: the so-called eclectic editing procedure, promoted especially by Fredson Bowers. This method chooses a copy-text as the editor’s point of departure.
The editor then corrects (or, more strictly, changes) this text on the basis of a comparative study of the available readings in the witnesses that are judged to be authoritative by the editor. When variant readings appear equally authoritative, the editor uses judgment to choose between them.
In considering these matters, the scholar must never lose sight of the fundamentally volatile character of the textual condition. The pursuit of a correct or authoritative text is what a poet might call “a hopeless flight” (Byron 4.70.5). Editors can only work to the best of their judgment, for the texts remain, in the last analysis, ambiguous. One sees how and why by reflecting on, for example, the editorial commitment to achieve an authoritative text—which is to say, a text that represents the author’s intention.
That pervasive editorial concept is fraught with difficulties. Authors regularly change their works, so that one often must wrestle with multiple intentions.
Which intentions are the most authoritative? first intentions? intermediate? final? Are we certain that we know each line of intentionality, or that we can clearly distinguish one from another? For that matter, how do we deal with those textual features and formations that come about through nonauthorial agencies like publishers? In respect to the idea of textual authority, more authorities sit at the textual table than the author.
Scholars have responded to that textual condition with a number of interesting, specialized procedures. Three important variations on the two basic approaches to scholarly editing are especially common: best-text editions, genetic editions, and editions with multiple versions. The best-text edition aims to generate a reproduction of the particular text of a certain work—let’s say, the Hengwrt manuscript of Chaucer’s Canterbury Tales—that will be cleared of its errors. Collation is used to locate and correct what are judged to be corrupt passages. Unlike the eclectic edition, a best-text edition does not seek to generate a heteroglot text but one that accurately represents the readings of the target document. If a facsimile or diplomatic approach to the editorial task is taken, editors will even preserve readings that they judge to be corrupted. If the approach is critical, editors will try to correct such errors and restore the text to what they judge to have been its (most) authoritative state.
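Collation itself is mechanical enough to sketch. The following Python fragment (ours; both texts are invented) aligns a witness against a base text with the standard difflib library and reports the points of variation an editor would then have to judge:

    import difflib

    base    = "the blessed damozel leaned out from the gold bar of heaven".split()
    witness = "the blessed damsel leaned out from the golden bar of heaven".split()

    matcher = difflib.SequenceMatcher(None, base, witness)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # report only the disagreements
            print(f"{tag}: {base[i1:i2]} ~ {witness[j1:j2]}")
    # replace: ['damozel'] ~ ['damsel']
    # replace: ['gold'] ~ ['golden']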
Genetic editing procedures were developed in order to deal with the dynamic character of an author’s manuscript texts. These editions examine and collate all the documents that form part of the process that brought a certain work into a certain state. Usually these editions aim to expose and trace the authorial process of composition to some point of its completion (for example, to the point where the text has been made ready for publication).3

Multiple version editions may take a best-text, an eclectic, or a genetic approach to their work. They aim, in any of these cases, to present multiple-reading versions of some particular work. (Paull Baum edited Rossetti’s “The Blessed Damozel” in this way, and Wordsworth’s The Prelude as well as Coleridge’s “The Rime of the Ancient Mariner” have regularly been treated in a versioning editorial approach.)4

Finally, we should mention the proposal for social text editing that was especially promoted in recent years by D. F. McKenzie. In McKenzie’s view, the scholar’s attention should be directed not only at the text—the linguistic features of a document—but at the entirety of the material character of the relevant witnesses. McKenzie regarded documents as complex semiotic fields that bear within themselves the evidence of their social emergence. The critical editor, in his view, should focus on that field of relations and not simply on the linguistic text. Unfortunately, McKenzie died before he could complete the project he had in mind to illustrate his editorial approach—his edition of William Congreve.
TEXTUAL AND EDITORIAL SCHOLARSHIP WITH DIGITAL TOOLS

The advent of information technology in the last half of the twentieth century has transformed in major ways the terms in which editorial and textual studies are able to be conceived and conducted. This transformation has come about because the critical instrument for studying graphical and bibliographic works, including textual works, is no longer the codex (see McGann, “Rationale”). Because the digital computer can simulate any material object or condition in a uniform electronic coding procedure, vast amounts of information that are contained in objects like books can be digitally transformed and stored for many different uses. In addition, information stored in different kinds of media—musical and pictorial information as well as textual and bibliographic information—can be gathered and translated into a uniform (digital) medium and of course can be broadcast electronically. We go online and access the card catalogs, and often the very holdings, of major research archives, museums, and libraries all over the world.
The implications of this situation for scholarly editing are especially remarkable. For example, one can now design and build scholarly editions that integrate the functions of the two great editorial models, the facsimile and the critical edition. In a codex framework these functions are integrated only at the level of the library or archive, so that comparative analysis—which is the basis of all scholarship—involves laborious transactions among many individuals separated in different ways and at various scales. A complete critical edition of the multimedia materials produced by figures like Rossetti, Blake, or Robert Burns can be designed and built, and Shakespeare’s work need no longer be critically treated in purely textual and linguistic terms but can be approached for what it was and still is: a set of theater documents.
Digitization also overcomes the codex-enforced spatial limitations on the amount of material that can be uniformly gathered and re-presented. In short, digital tools permit one to conceive of an editorial environment incorporating materials of many different kinds that might be physically located anywhere.
The accessibility of these resources and the relative ease with which one can learn to make and use them have produced a volatile Internet environment. The Web is a petri dish for humanities sites devoted to every conceivable topic or figure or movement or event. Noncopyrighted texts are available everywhere, as well as masses of commentary and associated information. And of course therein lies the problem, for scholarship and education demand disciplined work. Scholars commit themselves to developing and maintaining rigorous standards for critical procedures and critical outcomes. The truth about humanities on the Internet, however, is that tares are rampant among the wheat. Nor do we have in place as yet the institutions we need to organize and evaluate these materials. Those resources are slowly being developed, but in the meantime we have metastasis.
Here is an example of the kind of problem that now must be dealt with.
We have in mind not one of the thousands of slapdash, if also sometimes lively, Web sites that can be found with a simple Google search. We rather choose the widely used (and in fact very useful, if also very expensive) English Poetry Full-Text Database (600–1900) developed and sold by Chadwyck-Healey. From a scholar’s point of view, this work is primarily an electronic concordance for the authors and works in question. While its texts have been for the most part carefully proofed, they are nearly all noncopyrighted. The status of the source text therefore must be a primary concern for any but those whose use is of the most casual kind. Thomas Lovell Beddoes, for example, comes in the 1851 Pickering edition—an edition no scholar now would use except in the context of inquiries about Beddoes’s reception history. In addition, although the database calls itself full-text, it is not. Prose materials in the books that served the database as copy-text are not part of the database. The prefaces, introductions, notes, appendixes, and so forth that accompany the poetry in the original books, and that are so often clearly an integral part of the poetry, have been removed.
Economic criteria largely determined the database’s choice of texts (and, presumably, the removal of the prose materials). The decision brings certain advantages, however. The 1851 edition of Beddoes, for example, while not a rare book, is not common (the University of Virginia, which has strong nineteenth-century holdings, does not own a copy). The database is of course far from a complete collection of all books of poetry written, printed, or published between 600 and 1900, but it does contain the (poetical) texts of many books that are rare or difficult to find.
Two further scholarly facts about the database are important. First, it is a proprietary work. This means that it does not lie open to Internet access, which would allow its materials to be integrated with other related materials.
The database is thus an isolated work in a medium where interoperability—the capacity to create and manipulate relations among scattered and diverse types of materials—is the key function. Second (and along the same fault line), its texts can only be string-searched: they have not been editorially organized or marked up for structured search and analysis operations or for analytic integration with other materials in something like what has been imagined as a semantic web (see below).
Scholars whose work functions within the great protocols of the codex—one of the most amazing inventions of human ingenuity—appear to think that the construction of a Web site fairly defines digital scholarship in the humanities. This view responds to the power of Internet technology to make materials available to people who might not otherwise, for any number of reasons, be able to access them. It registers as well the power of digitization to supply the user with multimedia materials. These increased accessibilities are indeed a great boon to everyone, not least of all to students of the humanities. But in a scholarly perspective, these digital functions continue to obscure the scholarly and educational opportunities that have been opened to us by the new technology.
Access to those opportunities requires one to become familiar with digital text representation procedures and in particular with how such materials can be marked and organized for formal analysis. That subject cannot be usefully engaged, however, without a comprehensive and adequate theory of textuality in general.
MARKING AND STRUCTURING DIGITAL TEXT REPRESENTATIONS

Traditional text—printed, scripted, oral—is regularly taken, in its material instantiations, as self-identical and transparent. It is taken for what it appears to be: nonvolatile. In this view, volatility is seen as the outcome of an interpretive action on an otherwise fixed text. The inadequacy of this view, or theory, of text must be clearly grasped by scholars, and especially by scholars who wish to undertake that foundational act of criticism and interpretation, the making of a scholarly edition.
We may usefully begin to deconstruct this pervasive illusion about text by reflecting on the view of text held by a computer scientist, for whom text is “information coded as characters or sequences of characters” (Day 1).
Coded information is data, and data is a processable material object. By processing data, we process the information it represents. But digital text, unlike the information it conveys, is not volatile. It is a physical thing residing in the memory cells of a computer in a completely disambiguated condition.
That precise physical structure matters for digital text, just as precise physical structure, though of a very different kind, matters for paper-based text. The digital form of the text defines it as an object on which computers can operate algorithmically to convey sense and information. A digital text is coded information, and a code has a syntax that governs the ordering of the physical signs it is made of. In principle, therefore, digital text is marked by the syntax of its code, by the arrangement of the physical tokens that stand for binary digits.
Any explicit feature of a text can be conceived as a mark. We may thus say that digital text is marked by the linear ordering of the string of coded characters that constitutes it as a data type, for the string shows explicitly its own linear structure. The primary semiotic code of digital text is cast by the structural properties of a string of characters. The linearity of digital text as a data type puts an immediate constraint on its semiotics. It is a stream of coded characters, and each character has a position in an ordered linear succession.
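The point can be put in a few lines of Python (our illustration): every character occupies a position in the linear order, and the encoding fixes the physical byte tokens that stand for it:

    # Each character of a digital text occupies a position in a linear
    # order; an encoding fixes the binary tokens behind the characters.
    text = "The blessed damozel"
    for position, character in enumerate(text[:5]):
        print(position, repr(character))
    print(text.encode("utf-8"))  # the byte tokens that store the string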
But in common technical parlance, a string of coded characters is regarded as unmarked text. Markup is a special kind of coding, one laid on a textual object that has already been coded in another textual order entirely—that is to say, in the textual order marked by bibliographic codes. When we mark up a text with TEI or XML code, we are actually marking the preexistent bibliographic markup and not the content, which has already been marked in the bibliographic object. This situation is the source of great confusion and must be clearly grasped if one is to understand what markup can and cannot do for bibliographically coded texts.
Markup is described, correctly, as “the denotation of specific positions in a text with some assigned tokens” (Raymond, Tompa, and Wood, Markup).
In this sense, it adds to the linear string of digital characters its “embedded codes, known as tags.” A marked-up text is then commonly and properly understood as a tagged string of characters. But what function do tags perform with respect to bibliographic text, which is most definitely not a linear character string (though it can appear to be that to a superficial view)?

Let us continue to approach the problem from a computational point of view. In first-generation procedural markup systems, tags were used to add formatting instructions to a string of characters. With the introduction of declarative markup languages, such as SGML and its humanities derivative TEI, tags came to be used as “structure markers” (Joloboff 87). By adding structure to the string, semiotic properties of the digital text emerge as dependent functions of the markup with respect to the linear string of characters. It has been observed that adding structure to text in this way—that is, to text seen as flat or unstructured character data—enables “a new approach to document management, one that treats documents as databases” (Raymond, Tompa, and Wood, “From Data Representation” 3). But what does that understanding mean in semiotic terms?

The answer depends on the status of markup in relation to the bibliographically coded text. Markup makes explicit certain features of an originally paper-based text; it exhibits them by bringing them forth visibly into the expression of the text. It is therefore essentially notational. It affects the text’s expression, both digital and bibliographic, adding a certain type of structure to both.
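A small Python sketch (ours; the TEI-like tags and the verse are illustrative, not drawn from the essay) shows what “documents as databases” means in practice: once structure markers are embedded, the document can be queried like data:

    import xml.etree.ElementTree as ET

    doc = """
    <poem>
      <stanza n="1">
        <line>The blessed damozel leaned out</line>
        <line>From the gold bar of Heaven</line>
      </stanza>
    </poem>
    """
    root = ET.fromstring(doc)
    # Query the structure the tags declare, as one would query a database:
    for line in root.findall("./stanza[@n='1']/line"):
        print(line.text)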
But then we want to ask, How is that structure related to the content of the bibliographic object it is meant to (re)mark?

To show the crucial status of that question, let us make a thought experiment. Suppose we choose SGML as the markup language. Its syntax, a context-free grammar expressed by a document’s DTD (document type definition), assigns a given hierarchical structure—chapters, sections, paragraphs, and so on—to the linear string of characters, the computer scientist’s text.
Text can thus be conceived as an “ordered hierarchy of content objects” (DeRose, Durand, Mylonas, and Renear 6; this is the OHCO textual thesis). But can textual content be altogether modeled as a mere set of hierarchically ordered objects? Are all textual relations between content elements hierarchical and linear? The answer is, clearly, no. Traditional texts are riven with overlapping and recursive structures of various kinds, just as they always engage, simultaneously, hierarchical and nonhierarchical formations. Hierarchical ordering is simply one type of formal arrangement that a text may be asked to operate with, and often it is not by any means the chief formal operative. Poetical texts in particular regularly deploy various complex kinds of nonlinear and recursive formalities.
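The classic symptom is the overlap problem. In the following sketch (ours), a sentence element crosses a verse-line boundary, so the fragment cannot be parsed as a single well-formed hierarchy:

    import xml.etree.ElementTree as ET

    # The sentence element <s> crosses the verse-line boundary, so the
    # fragment is deliberately not well-formed: no single tree can hold
    # both the metrical and the syntactic hierarchy at once.
    overlapping = ("<poem>"
                   "<line>And still she bowed herself <s>and stooped</line>"
                   "<line>Out of the circling charm;</s></line>"
                   "</poem>")
    try:
        ET.fromstring(overlapping)
    except ET.ParseError as error:
        print("not well-formed:", error)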
Whatever the complexity of a bibliographic text’s structure, however, that structure may be defined as “the set of latent relations” among the defined parts of the text (Segre and Kemeny 34). Only through markup does that formal structure show explicitly at the level of textual expression. In principle, markup must therefore be able to make evident all implicit and virtual structural features of the text. Much depends on the properties of the markup system and on the relation between the markup tags and the string of character data. The position of the tags in the data may or may not be information-bearing. Forms of inline markup, like those based in an SGML model, can exhibit only internal structure, that is, a structure dependent on “a subset of character positions” in textual data (Raymond, Tompa, and Wood, Markup 4). But textual structures, and in particular the content features of the text’s structure, are “not always reducible to a functional description of subcomponents” of a string of characters (7). Textual structure is not bound, in general, to structural features of the expression of the text.
From a purely computational point of view, in-line “markup belongs not to the world of formalisms, but to the world of representations” (4). A formalism is a calculus operating on abstract objects of a certain kind, whereas a representation is a format or a coding convention to record and to store information. Unlike a calculus, a format does not compute anything; it simply provides a coding mechanism to organize physical tokens into data sets representing information. We can say, then, that markup is essentially a format or, again, in a semiotic idiom, that markup is primarily notational. Inasmuch as it assigns structure to character strings or to the expression of textual information, it can refer only indirectly to textual content. In computational terms, it describes data structures but does not provide a data model or a semantics for data structures and an algebra that can operate on their values.
Attempts to use the DTDs in SGML systems as a constraint language, or formalism, to operate on textual data face a major difficulty in dealing with the multiple and overlapping hierarchical structures that are essential features of all textualities. Some circuitous ways out have been proposed, but in the end the solutions afforded provide “no method of specifying constraints on the interrelationship of separate DTDs” (Sperberg-McQueen and Huitfeldt, “Concurrent Document Hierarchies” 41). The use of embedded descriptive markup for managing documents as databases that can operate on their content is thus severely hampered by the dependence of SGML systems on internal structure. Content relations are best dealt with by forms of out-of-line markup, which “is more properly considered a specific type of external structure” (Raymond, Tompa, and Wood, Markup 4).
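A minimal sketch of out-of-line (standoff) markup, again ours: the text remains an unmarked character string, and the annotations live outside it as spans over character positions, where overlapping structures pose no problem:

    # The text stays an unmarked character string; annotations live
    # outside it as (start, end, label) spans over character positions.
    # Overlapping spans are unproblematic here, unlike nested inline tags.
    text = "And still she bowed herself and stooped Out of the circling charm;"
    cut = text.index(" Out")  # boundary between the two verse lines

    annotations = [
        (0, cut, "verse-line"),
        (cut + 1, len(text), "verse-line"),
        (text.index("she"), len(text), "sentence"),  # crosses the line break
    ]
    for start, end, label in annotations:
        print(f"{label:10} {text[start:end]!r}")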
In general, we may therefore say that adding structure to textual data does not necessarily imply providing a model for processing the content of a text. A model applicable to document or internal structures can be appropriate only for directly corresponding content relations. To process textual content adequately, an external data model that can implement a database or operate some suitable knowledge representation scheme is required. The crucial problem for digital text representation and processing lies therefore in the ability to find consistent ways of relating a markup scheme to a knowledge representation scheme and to its data model.
The Semantic Web project proceeds in that direction with its attempt to “bring structure to the meaningful content of Web pages” (Berners-Lee, Hendler, and Lassila). It is an effort to assign a formal model to the textual data available on the Web. The introduction of XML, a markup language profile that defines a generalized format for documents and data accessible on the Web, provides a common language for the schematic reduction of both the structure of documents—that is, their expression or expressive form—and the structure of their content. In this approach, the problem to solve consists precisely in relating the scheme that describes the format of the documents to the scheme that describes their content. The first would be an XML schema, “a document that describes the valid format of an XML data-set” (Stuart), and the second would be a metadata schema such as the resource description framework (RDF) being developed for the Semantic Web.5

An RDF schema can be described as an “assertion model” that “allows an entity-relationship-like model to be made for the data.” This assertion model gives the data the semantics of standard predicate calculus (Berners-Lee). Both an XML schema and an RDF schema can assign a data model to a document, but in the first case the model depends on internal relations among different portions of the document, whereas in the second case it consists in an external structure independent of the structure of the document. In this context, XML documents act “as a transfer mechanism for structured data” (Cambridge Communiqué). XML works as a transfer syntax to map document-dependent or internal data structures into semantic or external data structures and vice versa. It is through markup that textual structures show up explicitly and become processable.
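The contrast can be sketched in a few lines of Python (ours; the identifiers and property names are invented): the same information appears once as an XML document, whose model lies in the internal relations of its parts, and once as RDF-style subject-predicate-object assertions, an external model independent of the document’s structure:

    import xml.etree.ElementTree as ET

    doc = ET.fromstring(
        "<poem id='blessed-damozel'>"
        "<author>D. G. Rossetti</author><date>1850</date>"
        "</poem>"
    )
    subject = doc.get("id")
    # Internal document structure re-expressed as external assertions:
    triples = [(subject, "type", "poem")]
    triples += [(subject, child.tag, child.text) for child in doc]
    for triple in triples:
        print(triple)
    # ('blessed-damozel', 'type', 'poem')
    # ('blessed-damozel', 'author', 'D. G. Rossetti')
    # ('blessed-damozel', 'date', '1850')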
MARKUP AND THE GENERAL THEORY OF TEXTUALITY

In this context, an important question rises to clear view. Since text is dynamic and mobile and textual structures are essentially indeterminate, how can markup properly deal with the phenomena of structural instability? Neither the expression nor the content of a text is given once and for all. Text is not self-identical.6 The structure of its content very much depends on some act of interpretation by an interpreter, nor is its expression absolutely stable.
Textual variants are not simply the result of faulty textual transmission. Text is unsteady, and both its content and expression keep constantly quivering.
As Valentin Voloshinov has it, “what is important about a linguistic form is not that it is a stable and always self-equivalent signal, but that it is an always changeable and adaptable sign” (68).
Textual mobility originates in what has been described as “the dynamic[s] of structures and metastructures [that lie] in the heart of any semiotic activity” (Y. Neuman 67), and it shows up specifically in the semantic properties of those kinds of expression that set forth what linguists call reflexive metalinguistic features of natural language.7 Diacritical signs are self-describing expressions of this kind, and markup can be viewed as a sort of diacritical mark. A common feature of self-reflexive expressions is that they are semantically ambiguous. They are part of the text and they also describe it; they are at once textual representations and representations of a textual representation. Markup, therefore, can be seen either as a metalinguistic description of a textual feature or as a new kind of construction that extends the expressive power of the object language and provides a visible sign of some implicit textual content.
A diacritical device such as punctuation, for instance, can be regarded as a kind of markup (Coombs, Renear, and DeRose 935), and by adding punctuation to a medieval text, a modern editor actually marks it up. Editorial punctuation, therefore, can be considered either as part of the text or as an external description related to it. In the first case, it produces a textual variant; in the second, a variant interpretation. Accordingly, any punctuation mark is ambivalent: it can be seen as the mark of an operation or as the mark of an operational result. If it is regarded as part of the text, it brings in a variant reading and has to be seen as a value for an operation of rephrasing; at the same time, by introducing an alternative reading, it casts a new interpretation on the text and must be seen as a rule for an action of construing. Yet the very same punctuation mark can be regarded as an external description of the text. In that case, it assigns a meaning to the text and must be seen as a value for an operation of construal. By providing a new interpretation, however, it adds structure to the wording of the text and must be seen as a rule for an action of “deformance” (for a definition of this term, see McGann, Radiant Textuality). Marks of this kind, viewable either way, behave just like Ludwig Wittgenstein’s famous duck-rabbit picture (Philosophical Investigations 2.11).
This sort of semantic ambivalence enables any diacritical mark, or for that matter any kind of markup, to act as a conversion device between textual and interpretational variants. Far from stabilizing the text, the markup actually mobilizes it. Through markup, an interpretational variant assumes a specific textual form; conversely, that explicit form immediately opens itself to interpretive indeterminacy. Markup has to do with structure or logical form.
It describes the form or exposes it in the text. But the logical form of a textual expression is only apt to show or to express itself in language, and, as Wittgenstein puts it, “that which mirrors itself in language, language cannot represent.”8 The only way to represent a logical form is to describe it by means of a metalanguage. Markup, for its part, may either exhibit or describe a logical form, but it can perform both functions only by changing its logical status: it has to commute between object language and metalanguage, so as to frame either an external metalinguistic description or an object-language, self-reflexive expression. Markup, therefore, is essentially ambivalent and sets forth self-reflexive, ambiguous aspects of the text, which can produce structural shifts and make it unstable and mobile.
Text is thus open to indeterminacy, but textual indetermination is not totally unconstrained. Because of textual mobility, we may say that text is not self-identical. But putting things the other way around, we may also say that text is, virtually, identical with itself, because the whole of all its possible variant readings and interpretations makes up a virtual unity identical with itself. Text in this view is not an arbitrary unity, for if it were seen as such, no text would differ from any other. The entirety of all latent capacities of the text is virtually one and the same, and this self-identity imposes limiting conditions on mobility and indetermination. The latent unity of the text brings about phenomena of mutual compensation between the stability of the expression and the variety of its possible interpretations or, conversely, between the instability of the expression and the steadiness of its conceptual import. With any given expression comes an indefinite number of possible interpretations, just as for any given conceptual content we may imagine an indefinite number of possible concrete formulations. But for a given text, the variation of either component is dependent on the invariance of its related counterpart, and such variation can come about only under this condition.
Semantic ambiguity may be thought of as an obstacle to the automatic processing of textual information, but actually it can serve that very purpose. Markup can provide a formal representation of textual dynamics precisely on account of its diacritical ambivalence and its capacity to induce structural indeterminacy and compensation. The OHCO thesis about the nature of the text is radically insufficient, because it does not recognize structural mobility as an essential property of the textual condition. The OHCO view builds on the assumption of a syntactically well-determined expression, not acknowledging that a fixed syntactic structure leaves the corresponding semantic structure open to indetermination. A nonsemantically identifiable string of characters is thus regarded as the vehicle of a specific content. A digital text representation need not assume that meaning can be fully represented in a syntactic logical form.9 The automatic processing of the text does not depend on a condition of this kind and need not fall victim to the snares of classical artificial intelligence. A formal representation of textual information does not require an absolute coincidence between syntactic and semantic logical form. In this respect, the role of markup can be of paramount importance in bringing their interconnections to the fore. Markup, turning to account its operational dimension, can act as a transfer mechanism between one structure and the other. It can behave as a performative injunction and switch to a different logical function.
Viewing markup as an operator in this sense, we may say, as has been proposed, that “to describe the meaning of the markup in a document, it suffices to generate the set of inferences about the document which are licensed by the markup,” or, even more assertively, that “in some ways, we can regard the meaning of the markup as being constituted, not only described, by that set of inferences” (Sperberg-McQueen, Huitfeldt, and Renear 231). Actually, to describe markup in this way amounts to seeing it as a kind of “inference-ticket,” to use Gilbert Ryle’s locution—as an assertion belonging to a “different level of discourse” from the one to which the expressions it applies to belong (121). So described, markup functions as a higher-order object-language statement—as a rule that licenses the reader, or for that matter the machine, to interpret the text in a certain way and to assign dynamically a structure to its content. Markup can therefore be conceived as a transfer mechanism from a document’s structure to its semantic structure or, the other way around, from a semantic structure to a document’s structure (as in the implementations being imagined in the Semantic Web project).
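A toy example (ours) of markup meaning as licensed inference: walk a marked-up fragment and emit the statements that a reader, or a machine, is entitled to draw from its tags:

    import xml.etree.ElementTree as ET

    doc = ET.fromstring(
        "<line n='3'>And the stars in her hair were <emph>seven</emph>.</line>"
    )
    # Each tag licenses a statement about the passage it marks.
    inferences = [f"this passage is verse line {doc.get('n')}"]
    for child in doc:
        if child.tag == "emph":
            inferences.append(f"the word {child.text!r} is emphasized")
    for statement in inferences:
        print("licensed inference:", statement)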
Diacritical ambiguity, then, enables markup to provide a suitable type of formal representation for the phenomena of textual instability. By seeing markup in this way, we can regard it as a means of interpretation and deformance (see Samuels and McGann), as a functional device both to interpret and to modify the text. But in the OHCO view, the structure assigned to the expression of a text (by marking it up) and the structure assigned to its content coincide, with the result that the capacity of the markup to account for textual dynamics is prevented. Markup should not be thought of as introducing—as being able to introduce—a fixed and stable layer to the text. To approach textuality in this way is to approach it in illusion. Markup should be conceived, instead, as the expression of a highly reflexive act, a mapping of text back onto itself: as soon as a (marked) text is (re)marked, the metamarkings open themselves to indeterminacy. This reflexive operation leads one to the following formulation of the logical structure of the textual condition:

[The diagram given at this point is not reproduced here; see Buzzetti, “Digital Representation” 84.]
In this view markup (m) is conceived as the expression of an operation, not of its value, for the nature of text is basically injunctive. Text can actually be seen as the physical mark of a sense-enacting operation (an act of Besinnung).
But in its turn, the result of this operation, the expression of the text, must be seen not as a value but as an operation mark; otherwise its interpretation is prevented. The expression of the text is then regarded as a rule for an act of interpretation, an operation that is essentially undetermined. Interpretation, as an act of deformance, explicitly flags its result as a self-reflexive textual mark, which imposes a new structuring on the expression of the text. Again, the newly added structural mark, the value of the interpreting operation, converts back into an injunction for another, indeterminate act of interpretation.
Textual dynamics is thus the continual unfolding of the latent structural articulations of the text. Any structural determination of one of its two primary subunits, expression and content, leaves the other undetermined and calls for a definition of its correlative subunit, in a constant process of impermanent codetermination. In more detail, and referring to the interweaving of textual content and expression, we may say that an act of composition is a sense-constituting operation that brings about the formulation of a text.
The resulting expression can be considered as the self-identical value of a sense-enacting operation. By fixing it, we allow for the indetermination of its content. To define the content, we assume the expression as a rule for an interpreting operation. An act of interpretation brings about a content, and we can assume it as its self-identical value. A defined content provides a model for the expression of the text and can be viewed as a rule for its restructuring.
A newly added structure mark can in turn be seen as a reformulation of the expression, and so on, in a permanent cycle of compensating actions between determination and indetermination of the expression and the content of the text.
This continual oscillation and interplay between indetermination and determination of the physical and the informational parts of the text renders its dynamic instability very similar to the functional behaviour of self-organizing systems. Text can thus be thought of as a simulation machine for sense-organizing operations of an autopoietic kind. Text works as a self-organizing system inasmuch as its expression, taken as a value, enacts a sense-defining operation, just as its sense or content, taken as a value, enacts an expression-defining operation. Text provides an interpreter with a sort of prosthetic device to perform autopoietic operations of sense communication and exchange.
Textual indeterminacy and textual instability can thus be formally described, like most self-organization processes, through the calculus of indications introduced by George Spencer-Brown (see Buzzetti, “Ambiguità”). His “nondualistic attempt” to set proper foundations for mathematics and descriptions in general “amounts to a subversion of the traditional understanding on the basis of descriptions,” inasmuch as “it views descriptions as based on a primitive act (rather than a logical value or form).” In Spencer-Brown’s calculus “subject and object are interlocked” (Varela 110), just as expression and content are interlocked in a self-organizing textual system. Only an open and reversible deforming or interpreting act can keep them connected as in a continually oscillating dynamic process. Lou Kauffman’s and Francisco Varela’s extension of Spencer-Brown’s calculus of indications (Kauffman and Varela; Varela, ch. 12) accounts more specifically for the “dynamic unfoldment” (113) of self-organizing systems and may therefore be consistently applied to an adequate description of textual mobility.
FROM TEXT TO WORK: A NEW HORIZON FOR SCHOLARSHIP

Exposing the autopoietic logic of the textual condition is, in a full Peircean sense, a pragmatic necessity. As Varela, Humberto Maturana, and others have shown, this logic governs the operation of all self-organizing systems (Maturana and Varela; Varela et al.). Such systems develop and sustain themselves by marking their operations self-reflexively. The axiom that all text is marked text defines an autopoietic function. Writing systems, print technology, and now digital encoding license a set of markup conventions and procedures (algorithms) that facilitate the self-reflexive operations of human communicative action.
Scholarly editions are a special, highly sophisticated type of self-reflexive communication, and the fact is that we now must build such devices in digital space. This necessity is what Charles Sanders Peirce would call a “pragmatistic” fact: it defines a kind of existential (as opposed to a categorical) imperative that scholars who wish to make these tools must recognize and implement.
We may better explain the significance of this imperative by shifting the discussion to a concrete example. Around 1970, various kinds of social text theories began to gain prominence, pushing literary studies toward a more broadly cultural orientation. Interpreters began shifting their focus from the text toward any kind of social formation in a broadly conceived discourse field of semiotic works and activities. Because editors and bibliographers oriented their work to physical phenomena—the materials, means, and modes of production—rather than to the readerly text and hermeneutics, this tectonic shift in the larger community of scholars barely registered on bibliographers’ instruments.
A notable exception among bibliographic scholars was D. F. McKenzie, whose 1985 Panizzi lectures climaxed almost twenty years of work on a social text approach to bibliography and editing. When they were published in 1986, the lectures brought into focus a central contradiction in literary and cultural studies (Bibliography). Like their interpreter counterparts, textual and bibliographic scholars maintained an essential distinction between empirical-analytic disciplines on one hand and readerly-interpretive procedures on the other. In his Panizzi lectures McKenzie rejected this distinction and showed by discursive example why it could not be intellectually maintained.
His critics—most notably Thomas Tanselle and T. Howard-Hill—remarked that while McKenzie’s ideas had a certain theoretical appeal, they could not be practically implemented (Howard-Hill; Tanselle, “Textual Criticism and Literary Sociology”). The ideas implicitly called for the critical editing of books and other socially constructed material objects. But critical editing, as opposed to facsimile and diplomatic editing, was designed to investigate texts—linguistic forms—not books or (what seemed even more preposterous) social events.
In fact one can transform social and documentary aspects of a book into computable code. Working from the understanding that facsimile editing and critical editing need not be distinct and incommensurate critical functions, the Rossetti Archive proves the correctness of a social text approach to editing: it pushes traditional scholarly models of editing and textuality beyond the Masoretic wall of the linguistic object we call the text. The proof of concept would be the making of the Archive. If our breach of the wall was minimal, as it was, its practical demonstration was significant. We were able to build a machine that organizes for complex study and analysis, for collation and critical comparison, the entire corpus of Rossetti’s documentary materials, textual as well as pictorial. Critical, which is to say computational, attention was kept simultaneously on the physical features and conditions of actual objects (specific documents and pictorial works) as well as on their formal and conceptual characteristics (genre, metrics, iconography).10 The Archive’s approach to Rossetti’s so-called double works is in this respect exemplary. Large and diverse bodies of material that comprise works like “The Blessed Damozel” get synthetically organized: scores of printed texts, some with extensive manuscript additions; two manuscripts; dozens of pictorial works. These physical objects orbit around the conceptual thing we name for convenience “The Blessed Damozel.” All the objects relate to that gravity field in different ways, and their differential relations metastasize when subsets of relations among them are revealed. At the same time, all the objects function in an indefinite number of other kinds of relations: to other textual and pictorial works, to institutions of various kinds, to different persons, to varying occasions. With the archive one can draw these materials into computable synthetic relations at macro- as well as microlevels. In the process the archive discloses the hypothetical character of its materials and their component parts, as well as the relations one discerns among these things. Though completely physical and measurable (in different ways and scales), neither the objects nor their parts are self-identical; all can be reshaped and transformed in the environment of the archive.
The autopoietic functions of the social text can also be computationally accessed through user logs. This set of materials—the use records, or hits, automatically stored by the computer—has received little attention from scholars who develop digital tools in the humanities. Formalizing its dynamic structure in digital terms will allow us to produce an even more complex simulation of social textualities. Our neglect of this body of information reflects, we believe, an ingrained commitment to the idea of the positive text or material document. The depth of this commitment can be measured by reading McKenzie, whose social text editing proposals yet remain faithful to the idea of the primacy of the physical object as a self-identical thing (see “What’s Past”).
Reflecting on digital technology in his lecture “What’s Past Is Prologue,” McKenzie admitted that its simulation capacities were forcing him to rethink that “primary article of bibliographical faith” (00). He did not live to undertake an editorial project in digital form. Had he done so, we believe he would have seen his social text approach strengthened by the new technical devices. All editors engage with a work in process. Even if only one textual witness were to survive—say, that tomorrow a manuscript of a completely unrecorded play by Shakespeare were unearthed—that document would be a record of the process of its making and its transmission. Minimal as they might seem, its user logs would not have been completely erased, and those logs are essential evidence for anyone interested in reading (or editing) such a work. We are interested in documentary evidence precisely because it encodes, however cryptically at times, the evidence of the agents who were involved in making and transmitting the document. Scholars do not edit self-identical texts. They reconstruct a complex documentary record of textual makings and remakings, in which their own scholarly work directly participates.
No text, no book, no social event is one thing. Each is many things, fashioned and refashioned repeatedly in repetitions that often occur (as it were) simultaneously. The works evolve and mutate in their use. And because all such uses are always invested in real circumstances, these multiplying forms are socially and physically coded in and by the works themselves. They bear the evidence of the meanings they have helped to make.
One advantage digitization has over paper-based instruments comes not from the computer’s modeling powers but from its greater capacity for simulating phenomena—in this case, bibliographic and sociotextual. Books are simulation machines as well, of course. Indeed, the hardware and software of book technology have evolved into a state of sophistication that dwarfs computerization as it currently stands. In time this situation will change through the existential imperative—digitization—that now defines our semiotic horizon. That imperative is already leading us to design critical tools that organize our textual condition as an autopoietic set of social objects—that is to say, objects that are themselves the emergent functions of the measurements that their users and makers implement for certain purposes. Our aim is not to build a model of one made thing; it is to design a system that can simulate the system’s realizable possibilities—those that are known and recorded as well as those that have yet to be (re)constructed.
McKenzie’s central idea, that bibliographical objects are social objects, begs to be realized in digital terms and tools, begs to be realized by those tools and by the people who make them.
NOTES

1. The best introduction in English to this broad subject is Greetham, Textual Scholarship; see also his important Theories of the Text. A brief introduction can be found in Williams and Abbott.
2. On the nonself-identity of material objects, see McGann, Radiant Textuality.

3. This genetic work was initiated with Friedrich Beissner’s project (begun in 1943) to edit the work of Hölderlin. It was continued in the edition of D. E. Sattler, begun in 1975. The best-known English-language genetic edition is Hans Gabler’s Ulysses: A Critical and Synoptic Edition (1984; Gabler, Steppe, and Melchior).
4. A good example of versioning is provided by Wordsworth’s The Prelude: 1799, 1805, 1850.

5. For good bibliographic sources on RDF, see www.w3.org/RDF/.
6. For a more thorough discussion of this assertion, see McGann, Radiant Textuality, especially chapter 5 and the appendix to chapter 6.
7. Cf. Hjelmslev 132: “Owing to the universalism of everyday language, an everyday language can be used as metalanguage to describe itself as object language.”

8. Tractatus 4.121; see also 4.1212: “What can be shown cannot be said.”

9. “If you take care of the syntax, the semantics will take care of itself” (Haugeland).

10. Since its initial conception, the Rossetti Archive has been subjected to further digital transformations—most notably a translation into XML format—that extend the archive’s translinguistic critical functions. The digital logic of the archive’s structure leaves it open to more-comprehensive scales of interoperability, such as those being developed through the Semantic Web and the Open Knowledge Initiative (OKI). For an introduction to OKI see www.okiproject.org.
