Licensed under a Creative Commons Attribution ShareAlike 3.0 License
TEI by Example offers a series of freely available online tutorials walking individuals through the different stages of marking up a document in TEI (Text Encoding Initiative). Besides a general introduction to text encoding, step-by-step tutorial modules provide example-based introductions to eight different aspects of electronic text markup for the humanities. Each tutorial module is accompanied by a dedicated examples section, illustrating actual TEI encoding practice with real-life examples. The theory of the tutorial modules can be tested in interactive tests and exercises.
Computers can only process texts whose characters are represented by a system that relates to the binary system computers can interpret. This is called text encoding.
The Text Encoding Initiative (TEI) is a standard for the representation of textual material in digital form through the means of text encoding. This standard is the collaborative product of a community of scholars, chiefly from the humanities, social sciences, and linguistics, who are organised in the TEI Consortium (TEI-C, http://www.tei-c.org/).
This introductory tutorial first presents the concepts of text encoding and markup languages in the humanities and then introduces the TEI encoding principles. Next, it provides a brief historical survey of the TEI Guidelines and ends with a presentation of the Consortium's organisation.
Since the earliest uses of computers and computational techniques in the humanities at the end of the 1940s, scholars, projects, and research groups have had to look for systems that could provide representations of data which the computer could process. Computers, as Michael Sperberg-McQueen has reminded us, are binary machines that
"can contain and operate on patterns of electronic charges, but they cannot contain numbers, which are abstract mathematical objects not electronic charges, nor texts, which are complex, abstract cultural and linguistic objects" (Sperberg-McQueen 1991, 34). This is clearly seen in the mechanics of early input devices such as punched cards, where a hole at a certain coordinate actually meant a 1 or 0 (true or false) for the character or numerical value represented by that coordinate, according to the specific character set of the computer used. Because different computers used different character sets with a different number of characters, texts first had to be transcribed into that character set. All characters, punctuation marks, diacritics, and significant changes of type style had to be encoded with an inadequate budget of characters. This resulted in a complex of project-specific transcription and encoding conventions.
Although several projects were able to produce meaningful scholarly results with this internally consistent approach, the particular nature of each set of conventions or encoding scheme had many disadvantages. Texts prepared in such a proprietary scheme by one project could not readily be used by other projects; software developed for the analysis of such texts could hence not be used outside the project, due to the incompatibility of encoding schemes and the non-standardisation of hardware. As more and more texts were prepared in machine-readable format, however, the call for an economic use of resources grew as well. Already in 1967, Martin Kay argued in favour of a
"standard code in which any text received from an outside source can be assumed to be" (Kay 1967, 171). This code would behave as an exchange format which allowed users to apply their own conventions at input and at output (Kay 1967, 172).
When human beings read texts, they perceive both the information stored in the linguistic code of the text and the meta-information which is inferred from the appearance and interpretation of the text. By convention, italics are, for instance, used as a code signalling the title of a book, play, or movie; a foreign word or phrase; or emphatic use of language. Through their cognitive abilities, readers usually have no problem selecting the most appropriate interpretation of an italicised string of text. Computers, however, need to be informed about these distinctions in order to be able to process them. This can be done by way of a markup language that provides rules to formally separate information (the text in a document) from meta-information (information about the text in a document). Whereas the markup languages in use in the typesetting community were mainly of a procedural nature, indicating procedures that a particular application should follow (e.g., printing a string of text in italics), the humanities were also, and mainly, concerned with descriptive markup, which identifies the entity type of tokens (e.g., identifying that a string of text is the title of a book or a foreign word). Unlike procedural or presentational markup, descriptive markup establishes a one-to-one mapping between the logical elements in a text and their markup.
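The difference can be illustrated with a small, hypothetical fragment (the element names are merely illustrative): a procedural encoding records how the string should look, while a descriptive encoding records what it is.

  Procedural markup:   He was reading <i>Ulysses</i> on the train.
  Descriptive markup:  He was reading <title>Ulysses</title> on the train.

A typesetting application would render both in italics, but only the second tells a processing program that the string is the title of a book.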
Some sort of standardisation of markup for the encoding and analysis of literary texts was reached by the COCOA encoding scheme originally developed for the COCOA program in the 1960s and 1970s (Russel 1967), but used as an input standard by the Oxford Concordance Program (OCP) in the 1980s (Hockey 1980) and by the Textual Analysis Computing Tools (TACT) in the 1990s (Lancashire et al. 1996). For the transcription and encoding of classical Greek texts, the Beta-transcription/encoding system reached some level of standardised use (Berkowitz, Squitier, and Johnson 1986).
The call for a markup language that could guarantee reusability, interchange, system- and software-independence, portability and collaboration in the humanities was answered by the publication of the Standard Generalized Markup Language (SGML) as an ISO standard in 1986 (ISO 8879:1986) (Goldfarb 1990). Based on IBM’s Document Composition Facility Generalized Markup Language, SGML was developed mainly by Charles Goldfarb as a metalanguage for the description of markup schemes that satisfied at least seven requirements for an encoding standard (Barnard, Fraser, and Logan 1988, 28–29):
In order to achieve universal exchangeability and software and platform independence, SGML made exclusive use of ASCII codes. As mentioned above, SGML is not a markup language itself, but a metalanguage with which one can create separate markup languages for separate purposes. This means that SGML defines the rules and procedures to specify the vocabulary and the syntax of a markup language in a formal Document Type Definition (DTD). Such a DTD is a formal description of, for instance, the names of all elements, the names and default values of their attributes, rules about how elements can nest and how often they can occur, and names for re-usable pieces of data (entities). The DTD enables full control, parsing, and validation of SGML-encoded documents. By far the most popular SGML DTD is the Hypertext Markup Language (HTML), developed for the exchange of graphical documents over the internet.
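To give a flavour of what such a DTD looks like, the following sketch declares a tiny document grammar; the element and attribute names are invented for this example and do not belong to any actual standard.

  <!ELEMENT letter    (heading, paragraph+)>
  <!ELEMENT heading   (#PCDATA)>
  <!ELEMENT paragraph (#PCDATA)>
  <!ATTLIST letter date CDATA #IMPLIED>

A validating parser can check that every document claiming to follow this DTD indeed consists of a letter containing one heading followed by one or more paragraphs, optionally carrying a date attribute.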
A markup scheme with all these qualities was exactly what the humanities were looking for in their quest for a descriptive encoding standard for the preparation and interchange of electronic texts for scholarly research. There was a strong consensus among the computing humanists that SGML offered a better foundation for research oriented text encoding than other such schemes (Barnard, Fraser, and Logan 1988, 26–31; Barnard et al. 1988). From the beginning, however, SGML was also criticised for at least two problematic matters: SGML’s hierarchical perspective on text, i.e., the representation of text as a hierarchical tree structure, and SGML’s verbose markup system (Barnard et al. 1988). These two issues have since been central to the theoretical and educational debates on markup languages in the humanities.
The publication of the eXtensible Markup Language (XML) 1.0 as a W3C recommendation in 1998 (Bray, Paoli, and Sperberg-McQueen 1998) brought together the best features of SGML and HTML, and XML soon achieved huge popularity. Among the features XML borrowed from SGML are the explicitness of descriptive markup, the expressive power of hierarchical models, the extensibility of markup languages, and the possibility to validate a document against a DTD. From HTML it borrowed simplicity and the possibility to work without a DTD. Technically speaking, XML is a subset of SGML, and the recommendation was developed by a group of people with long-standing experience in SGML, many of whom were TEI members.
Because of its advantages and widespread popularity, XML became the metalanguage of choice for expressing the rules for descriptive text encoding in TEI.
XML is a metalanguage with which one can create separate markup languages for separate purposes. It is platform-, software-, and system-independent, and no one owns it.
The big selling point of XML is that it is text-based. This means that each XML encoding language is entirely built up in ASCII (American Standard Code for Information Interchange), or plain text, and can be created and edited using a simple text-editor like Notepad or its equivalents on other platforms. However, when you start working with XML, you will soon find that it is better to edit XML documents using a professional XML editor. While plain text-editors don’t know that you’re writing TEI, XML editors will help you write error-free XML documents, validate your XML against a DTD or a schema, force you to stick to a valid XML structure, and enable you to perform transformations.
Since ASCII only provides for characters commonly found in the English language, different character encoding systems have been designed, such as ISO Latin 1 (ISO-8859-1) for Western languages, and Unicode (UTF-8 and UTF-16). By using these character encoding systems, non-ASCII characters such as French accented letters (é, è, ç) can be represented directly in an XML document.
Any XML encoding language consists of five components:
For example, a simple two-paragraph document could be encoded as follows in XML:
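(A minimal sketch; the element names are illustrative.)

  <?xml version="1.0" encoding="UTF-8"?>
  <document>
    <p>This is the first paragraph.</p>
    <p>This is the second paragraph.</p>
  </document>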
An XML document is introduced by the XML declaration:
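  <?xml version="1.0" encoding="UTF-8"?>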
The question mark (?) in the XML declaration signals that this is a processing instruction. The rest of the declaration states that what follows is XML complying with version 1.0 of the recommendation and that the character encoding used is UTF-8 (a Unicode encoding). An XML declaration is optional, but if present it can only appear as the very first content of an XML file.
The two-paragraph document above is an example of an XML document, representing both information and meta-information. Information (plain text) is contained in XML elements, delimited by start tags (e.g., <p>) and end tags (e.g., </p>). Notes which are not part of the text proper can be recorded in comments, delimited by start markers (<!--) and end markers (-->). Everything inside comments is ignored by XML processing software: it is said to be commented out.
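For instance, the following line could be added anywhere in a document without affecting its parsed content:

  <!-- This note is ignored by any XML parser. -->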
Entity references are predefined strings of data that a parser must resolve before parsing the XML document.
An entity reference starts with an ampersand (&) and closes with a semicolon (;); the entity name is the string between these two symbols. For instance, the entity reference for the less-than sign (<) is &lt;. The entity reference for the ampersand is &amp;.
Not all computers necessarily support the Unicode encoding scheme XML works with. Portability of individual characters from the Unicode system, however, is supported by character references that use their hexadecimal or decimal notation. For example, the character ø has the hexadecimal value 00F8 and the decimal value 248. When exporting an XML document containing this character, it may be represented by the corresponding character references &#x00F8; or &#248; respectively, with the x indicating that what follows is a hexadecimal value. References of this type do not need to be predefined, since the underlying character encoding for XML is always the same.
For legibility purposes, however, it is also possible to refer to this character by use of a mnemonic name, such as &oslash;, provided that this entity has been declared.
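Such a declaration could look as follows (a sketch, mapping the mnemonic name to the hexadecimal character reference):

  <!ENTITY oslash "&#x00F8;">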
The ENTITY declaration uses a non-XML syntax inherited from SGML. It starts with an opening delimiter (<) followed by an exclamation mark (!) signalling that this is a declaration. The keyword ENTITY indicates that an entity is being declared here. What follows next is the entity name, here the mnemonic name oslash, for which a declaration is given, and then the declaration itself inside quotation marks; in this example, it is the hexadecimal character reference for the character.
The same character can also be declared in the following ways:
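For example, using the decimal character reference, or the literal character itself (assuming the file's own encoding supports it):

  <!ENTITY oslash "&#248;">
  <!ENTITY oslash "ø">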
Character entities must also be used in XML to escape the less-than sign (<) and the ampersand (&), which are illegal in character data because the XML parser could mistake them for markup; they are written as &lt; and &amp; respectively.
Entities are not only capable of referring to character declarations, but can also refer to strings of text of unlimited extent. This way, repetitive keying of repeated information can be avoided (string substitution), and standard expressions or formulae can be kept up to date. The first is useful, for instance, for expanding an abbreviated entity to the full string TEI by Example throughout a document before the text is validated. The corresponding ENTITY declaration is as follows:
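(The entity name tbe is illustrative.)

  <!ENTITY tbe "TEI by Example">

With this declaration in place, every occurrence of &tbe; in the document is expanded by the parser to the string TEI by Example.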
The second is used in contracts, books of laws, etc. in which updating would otherwise mean the complete re-keying of the same (extensive) string of text. For example, the expression:
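(A hypothetical illustration, with invented entity names supplier and customer:)

  The goods shall be delivered by &supplier; to &customer; within thirty days.

with, elsewhere in the document, the declarations

  <!ENTITY supplier "Acme Ltd.">
  <!ENTITY customer "John Doe">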
The substitution of the entities by their values in the given example results in the following expression:
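  The goods shall be delivered by Acme Ltd. to John Doe within thirty days.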
ENTITY declarations are placed inside a DOCTYPE declaration which follows the XML declaration at the beginning of the XML document:
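A sketch of the beginning of such a document, combining the declarations used above:

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE TEI [
    <!ENTITY oslash "&#x00F8;">
    <!ENTITY tbe "TEI by Example">
  ]>
  <TEI xmlns="http://www.tei-c.org/ns/1.0">
    ...
  </TEI>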
The DOCTYPE declaration starts with the opening delimiter <! which is followed by the keyword DOCTYPE. Next comes the name of the root element of the document; in the case of a TEI document, this will be TEI. The entity declarations which must be interpreted by the XML processor are put inside square brackets. An XML parser encountering this DOCTYPE declaration will expand the entity references in the document with the values given in the ENTITY declarations before the document itself is validated.
All text in an XML document will normally be parsed by a parser. When an XML element is parsed, the text between the XML tags is also parsed. The parser does that because XML elements can nest as in the following example:
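(The element names below are illustrative.)

  <p>The parser reads this <hi>nested</hi> element as well.</p>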
The XML parser will break such a string up into an outer element containing both plain text and a nested inner element, whose content is in turn parsed as well.
An XML document often contains data which need not be parsed by an XML parser. For instance, characters like < and & are illegal inside XML elements, because the parser will interpret them as the beginning of a new element or the beginning of an entity reference, which will result in an error message. Therefore, these characters can be escaped by the use of the entity references &lt; and &amp;. When a longer stretch of text full of such characters (for example a fragment of program code or sample XML containing many occurrences of < and &) is included in an XML document, it should not be parsed by the XML parser. We can avoid this by treating it as character data in a CDATA section.
A CDATA section starts with <![CDATA[ and ends with ]]>. Everything inside a CDATA section is treated as plain text by the parser.
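For example (the surrounding element name is illustrative):

  <example><![CDATA[Markup such as <p> and references such as &amp; are left unparsed here.]]></example>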
Depending on the nature of your XML documents and what you want to use them for, you will need different tools, ranging from freely available open source tools to expensive industrial software. In principle, the simplest plain text editor suffices to author or edit XML. In order to validate or transform XML, additional tools will be needed, which often come included in dedicated XML editors: a validating parser, an XSLT processor, a tree-structure viewer, etc. For publishing purposes, XML documents may be transformed to other XML formats, HTML, or PDF, to name just a few of the possibilities, using XSLT and XSL-FO stylesheets which are processed by an off-the-shelf or custom-made XSL processor. These published documents can be viewed in generic web browsers or PDF viewers. XML documents can further be indexed, excerpted, queried, and analysed with tools specifically designed for the job.
The conclusions and the work of the TEI community are formulated as guidelines, rules, and recommendations rather than standards, because it is acknowledged that each scholar must have the freedom of expressing their own theory of text by encoding the features they think important in the text. A wide array of possible solutions to encoding matters is demonstrated in the TEI Guidelines which therefore should be considered a reference manual rather than a tutorial. Mastering the complete TEI encoding scheme implies a steep learning curve, but few projects require a complete knowledge of the TEI. Therefore, a manageable subset of the full TEI encoding scheme was published as TEI Lite, currently describing 140 elements (Burnard and Sperberg-McQueen 2006). Originally intended as an introduction and a didactic stepping stone to the full recommendations, TEI Lite has, since its publication in 1995, become one of the most popular TEI customisations and proves to meet the needs of 90% of the TEI community, 90% of the time.
A significant part of the rules in the TEI Guidelines apply to the expression of descriptive and structural meta-information about the text. Yet, the TEI defines concepts to represent a much wider array of textual phenomena, amounting to a total of 580 elements and 265 attributes. These are organised into 21 modules, grouping related elements and attributes:
Each of these modules and the use of the elements they define are discussed extensively in a dedicated chapter of the TEI Guidelines.
Among other, more technical advantages, Steven DeRose pointed out a substantial benefit of XML for the TEI community: by allowing for more flexible automatic parsing strategies and easy delivery of electronic documents with cheap, ubiquitous tools such as web browsers, XML could spread the notion of descriptive markup to a wide audience that would thus become acquainted with the concepts articulated in the TEI Guidelines (DeRose 1999, 19).
In order to use TEI for the encoding of texts, users must make sure that their texts belong to the TEI namespace (http://www.tei-c.org/ns/1.0).
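In practice the namespace is declared on the root element. A minimal sketch of a complete TEI document, with the header content reduced to the bare minimum for illustration:

  <TEI xmlns="http://www.tei-c.org/ns/1.0">
    <teiHeader>
      <fileDesc>
        <titleStmt>
          <title>A sample document</title>
        </titleStmt>
        <publicationStmt>
          <p>Unpublished example.</p>
        </publicationStmt>
        <sourceDesc>
          <p>Born digital.</p>
        </sourceDesc>
      </fileDesc>
    </teiHeader>
    <text>
      <body>
        <p>Some text.</p>
      </body>
    </text>
  </TEI>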
Besides the common minimal TEI structure (a <TEI> root element containing a <teiHeader> and a <text>, as in the example above), users can select from the TEI modules whichever additional elements suit their texts, and record this selection in a formal TEI customisation.
In the vein of TEI Lite, such customisations define manageable subsets of the full TEI scheme, tailored to the needs of a specific project.
In order to accommodate the process of creating customised TEI schemas and prose documentation, the TEI has developed a dedicated piece of software called Roma, which generates both from a customisation file written in the TEI's own ODD (One Document Does it all) format.
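The heart of such an ODD file is a <schemaSpec> element listing the modules (and, optionally, individual elements) the customisation includes. A sketch, with an invented identifier:

  <schemaSpec ident="my_tei_schema" start="TEI">
    <moduleRef key="tei"/>
    <moduleRef key="header"/>
    <moduleRef key="core"/>
    <moduleRef key="textstructure"/>
  </schemaSpec>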
A TEI schema, declaring all structural conditions and constraints for the elements and attributes in TEI texts, can then be used to automatically validate actual TEI documents with an XML parser. Consider, for example, the following fragments:
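As a hypothetical illustration, suppose file [A] is the minimal TEI document shown earlier, while file [B] looks like this:

  <TEI xmlns="http://www.tei-c.org/ns/1.0">
    <text>
      <body>
        <paragraph>Some text.</paragraph>
      </body>
    </text>
  </TEI>

In this pair, [B] is well-formed XML but omits the mandatory <teiHeader> and uses an element (<paragraph>) that the schema does not define.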
When validated against a TEI schema derived from the previous ODD file, file [A] will be recognised as a valid TEI document, while file [B] won't.
After the concise overview of the most recent version of the TEI (P5) in the previous sections, the remainder of this tutorial sketches the historical development of the TEI Guidelines and the organisation of the TEI Consortium.
Shortly after the publication of the SGML specification as an ISO standard, a diverse group of 32 humanities computing scholars gathered at Vassar College in Poughkeepsie, New York, for a two-day meeting (11 and 12 November 1987) called for by the Association for Computers and the Humanities (ACH), funded by the National Endowment for the Humanities (NEH), and convened by Nancy Ide and Michael Sperberg-McQueen. The main topic of the meeting was the question of whether and how an encoding standard for machine-readable texts intended for scholarly research should be developed. Amongst the delegates were representatives of the main European text archives and of important North American academic and commercial research centres. Contrary to the disappointing outcomes of earlier such meetings in San Diego in 1977 and in Pisa in 1980, this meeting did reach its goal with the formulation of and agreement on the following set of methodological principles (the so-called Poughkeepsie Principles) for the preparation of text encoding guidelines for literary, linguistic, and historical research (Burnard 1988, 132–133; Ide and Sperberg-McQueen 1988, E.6–4; Ide and Sperberg-McQueen 1995, 6):
For the implementation of these principles the ACH was joined by the Association for Literary and Linguistic Computing (ALLC) and the Association for Computational Linguistics (ACL). Together they established the Text Encoding Initiative (TEI), whose mission it was to develop the text encoding guidelines envisaged by the Poughkeepsie Principles.
From the Poughkeepsie Principles the TEI concluded that the TEI Guidelines should:
A Steering Committee consisting of representatives of the ACH, the ACL, and the ALLC appointed Michael Sperberg-McQueen as editor-in-chief and Lou Burnard as European editor of the Guidelines.
The first public proposal for the TEI Guidelines was published in July 1990 under the title Guidelines for the Encoding and Interchange of Machine-Readable Texts (TEI P1).
The following step was the publication in 1994 of the TEI P3 Guidelines for Electronic Text Encoding and Interchange, the first full version of the Guidelines.
Recognising the benefits for the TEI community, the P4 revision of the TEI Guidelines was published in 2002 by the newly formed TEI Consortium in order to provide equal support for XML and SGML applications using the TEI scheme (Sperberg-McQueen and Burnard). The chief objective of this revision was to implement proper XML support in the Guidelines, while ensuring that documents produced to earlier TEI specifications remained usable with the new version. The XML support was realised by the expression of the TEI Guidelines in XML and the specification of a TEI-conformant XML DTD. TEI P4 generated a set of DTD fragments that can be combined to form either SGML or XML DTDs and thus achieved backwards compatibility with TEI P3 encoded texts. In other words, any document conforming to the TEI P3 SGML DTD was guaranteed to conform to the TEI P4 XML version of it. This backwards compatibility eased the transition from SGML to XML for existing TEI projects.
In 2003 the TEI Consortium asked its membership to convene Special Interest Groups (SIGs), whose aim could be to advise on the revision of certain chapters of the Guidelines and to suggest changes and improvements in view of P5. With the establishment of the new TEI Council, which superintends the technical work of the TEI Consortium, it became possible to agree on an agenda to enhance and modify the Guidelines more fundamentally, which resulted in a full revision of the Guidelines published as TEI P5 (TEI Consortium 2007). TEI P5 contains a full XML expression of the TEI Guidelines and introduces new elements, revises content models, and reorganises elements in a modular class system that facilitates flexible adaptation to users' needs. Contrary to its predecessor, TEI P5 does not offer backwards compatibility with previous versions of the TEI. The TEI Consortium did, however, maintain and correct errors in the P4 Guidelines for five more years, up to the end of 2012. Since that date, the TEI Consortium has ceased official support for TEI P4 and deprecated it in favour of TEI P5. The P5 version is being updated continuously with regular releases; the most up-to-date version can be found on the TEI Consortium website (http://www.tei-c.org/).
The TEI Consortium was established in 2000 as a not-for-profit membership organisation to sustain and develop the Text Encoding Initiative (TEI). The Consortium is supported by a number of host institutions. It is managed by a Board of Directors, and its technical work is overseen by an elected Technical Council, which takes responsibility for the content of the TEI Guidelines.
The TEI charter outlines the consortium’s goals and fundamental principles. Its goals are:
The Consortium honours four fundamental principles:
Involvement in the Consortium is possible in two categories: individual membership and institutional membership. Only members have the right to vote on Consortium issues and in elections to the Board and the Council; they have access to a restricted website with pre-release drafts of Consortium working documents and technical reports, announcements and news, and a database of members, Sponsors, and Subscribers with contact information; and they benefit from discounts on training, consulting, and certification. The Consortium members meet annually at a Members' Meeting where current critical issues in text encoding are discussed, and members of the Council and of the Board of Directors are elected.
Computers can only deal with explicit data. The function of markup is to represent textual material in digital form through the explicating act of text encoding. Descriptive markup reveals what the encoder thinks to be implicit or hidden aspects of a text, and is thus an interpretive medium which often documents scholarly research alongside structural information about the text. In order for this research to be exchangeable, analysable, re-usable, and preservable, texts in the field of the humanities should be encoded according to a standard which defines a common vocabulary, grammar, and syntax, whilst leaving the implementation of the standard up to the encoder. A result of communal efforts among computing humanists, the Text Encoding Initiative documents such a standard in the TEI Guidelines. These Guidelines are fully adaptable and customisable to the needs of a specific project, whilst enhancing that project's compatibility with other projects employing the TEI. For over two decades, the TEI has been used extensively in projects from different disciplines, fields, and subjects internationally. The ongoing engagement of a broad user community, organised in the TEI Consortium, consolidates the importance of this text encoding standard and informs its continuous development and maintenance.
You have reached the end of this tutorial module providing an introduction to the TEI and text encoding for the humanities. You can now either test your knowledge of this module in the interactive tests and exercises, or move on to another tutorial module.