Content, Format, and Interpretation
The connection between notation and the content it expresses is always contingent, and mediated through complex layers of interpretation. Some content bears directly on the encoder's intention to convey a particular meaning, while other content concerns the structures in and through which that meaning is expressed and organized. Interpretive frames are abstractions that serve as context for symbolic expressions. They form a backdrop of dependencies for data management and preservation strategies. Situation semantics offers a theoretical grounding for interpretive frames that integrates them into a general theory of communication through markup and other notational structures.
The caBIG™ Annotation and Image Markup Project
Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM carries extensive metadata about who acquired an image, and where and how it was acquired, it says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the set of graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotations to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with either of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers to use AIM.
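The abstract's central distinction — an annotation as descriptive content versus a markup as graphical symbols over the pixels — can be sketched as a small serialization. The element names below are invented for illustration and do not reproduce the actual AIM schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical illustration only: these element names are invented and do
# NOT reproduce the actual AIM schema. The point is the separation between
# an annotation (descriptive information about the pixel data) and a
# markup (graphical symbols placed over the image to depict it).
record = ET.Element("ImageAnnotationRecord")

annotation = ET.SubElement(record, "Annotation")
ET.SubElement(annotation, "Finding").text = "mass, spiculated margin"
ET.SubElement(annotation, "Observer").text = "human"

markup = ET.SubElement(record, "Markup", {"shape": "circle"})
ET.SubElement(markup, "Center", {"x": "241", "y": "180"})
ET.SubElement(markup, "RadiusPx").text = "35"

# A machine-readable serialization of both parts together.
serialized = ET.tostring(record, encoding="unicode")
print(serialized)
```

Because both parts live in one structured record rather than in free text plus a proprietary overlay, they can be extracted and computed with together, which is the gap the AIM project addresses.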
Multiple hierarchies: new aspects of an old solution
In this paper, we present the Multiple Annotation approach, which solves two problems: annotating overlapping structures, and annotating documents according to different, possibly heterogeneous tag sets. The approach has several advantages: it is based on XML, it can model alternative annotations, each level can be viewed separately, and new levels can be added at any time. The files can be regarded as an interrelated unit, with the text serving as the implicit link between them. Two representations of the information contained in the multiple files (one in Prolog and one in XML) are described; these representations serve as a basis for several applications.
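The idea of separate annotation levels linked implicitly through the text can be sketched with character offsets. This is a minimal standoff-style illustration with invented element names, not the paper's actual file format:

```python
import xml.etree.ElementTree as ET

# A minimal sketch (names invented) of the multiple-annotation idea: the
# same base text is annotated in several separate XML trees, one per
# level, so overlapping structures never collide. The raw text is the
# implicit link between the levels, here via character offsets.
text = "Peter likes Mary"

def layer(name, spans):
    """Build one annotation level as its own XML tree over `text`."""
    root = ET.Element("level", {"name": name})
    for start, end, label in spans:
        ET.SubElement(root, "seg",
                      {"start": str(start), "end": str(end), "type": label})
    return root

# Two heterogeneous tag sets over the same text; their spans overlap
# (the prosodic phrase crosses the syntactic NP/VP boundary), which a
# single XML hierarchy could not express directly.
syntax = layer("syntax", [(0, 5, "NP"), (6, 16, "VP")])
prosody = layer("prosody", [(0, 11, "intonation-phrase")])

for lvl in (syntax, prosody):
    for seg in lvl.iter("seg"):
        s, e = int(seg.get("start")), int(seg.get("end"))
        print(lvl.get("name"), seg.get("type"), repr(text[s:e]))
```

Each level remains well-formed XML on its own and can be viewed separately, while the shared text makes the set of files an interrelated unit.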
Legislatures, Agencies, Courts and Advocates: How Laws are Made, Interpreted and Modified
This chapter explains the nature and practice of lawmaking, legal advocacy, and legal research as they relate to the field of work and family. Through reference to the Family and Medical Leave Act of 1993 as a case study, the authors explain the dynamic processes by which laws are made, interpreted and modified by legislatures, administrative agencies and courts, with the help of legal advocates. Their goal is not to provide substantive analysis of laws related to work and family, but rather to enable researchers from a range of disciplines to understand and access the legal system, as it currently exists and as it is evolving. In addition, for those inclined to change the current system through legal advocacy, this chapter provides a window into how advocates may use the lawmaking process to promote their preferred work and family policies.
Putting the Text back into Context: A Codicological Approach to Manuscript Transcription
Textual scholars have tended to produce editions which present the text without its manuscript context. Even though digital editions now often present single-witness editions with facsimiles of the manuscripts, the text itself is still transcribed and represented as a linguistic object rather than a physical one. Indeed, this is explicitly stated as the theoretical basis for the de facto standard of markup for digital texts, the Guidelines of the Text Encoding Initiative (TEI): these treat texts as semantic units such as paragraphs, sentences, and verses, rather than physical elements such as pages, openings, or surfaces, and some scholars have argued that this is the only viable model for representing texts. In contrast, this chapter presents arguments for considering the document as a physical object in the markup of texts. The theoretical arguments about what constitutes a text are first reviewed, with emphasis on those used by the TEI and other theoreticians of digital markup. A series of cases is then given in which a document-centric approach may be desirable, with both modern and medieval examples. Finally, a step forward in this direction is presented: the results of the Genetic Edition Working Group in the Manuscript Special Interest Group of the TEI. These include a proposed standard for documentary markup, whereby aspects of codicology and mise en page can be included in digital editions, putting the text back into its manuscript context.
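The contrast the chapter draws can be sketched by encoding the same words two ways. The documentary elements below (sourceDoc, surface, zone, line) follow the TEI documentary-markup proposal that grew out of the genetic-editing work, but the fragment is schematic only, not a validated TEI document:

```python
import xml.etree.ElementTree as ET

# Text-centric encoding: semantic units (paragraph, sentence), no record
# of where on the page the words sit.
textual = "<p><s>In the beginning was the word.</s></p>"

# Document-centric encoding: physical units (page surface, written zone,
# written line), preserving the manuscript's mise en page. Schematic,
# not a validated TEI document.
documentary = """
<sourceDoc>
  <surface n="1r">
    <zone type="main">
      <line>In the beginning</line>
      <line>was the word.</line>
    </zone>
  </surface>
</sourceDoc>
"""

doc = ET.fromstring(documentary)
# The physical view keeps page (surface) and written line intact:
for surface in doc.iter("surface"):
    for line in surface.iter("line"):
        print(surface.get("n"), line.text)
```

Note that the line break after "beginning" exists only in the documentary view; the text-centric encoding, treating the sentence as a linguistic object, has no place for it.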