    The use of technical metadata in still digital imaging by the newspaper industry

    Newspapers are increasingly capturing images digitally. Included with these digital files is technical information about the image and the conditions surrounding its capture. Technical metadata has the potential to be a valuable resource in image reproduction, management, and archiving. Nevertheless, even though digital devices capture a large amount of technical metadata, the use of such data in the digital imaging workflow is not widespread. The use of technical metadata requires a uniform set of technical metadata standards and an open encoding scheme to embed data. From their inception, image file formats such as TIFF and JPEG have allowed the inclusion of technical metadata tags. The Exif schema has extended the metadata inclusion capabilities of both of these formats. Additionally, XML has emerged as a standard for users to add metadata to image files, and organizations such as the World Wide Web Consortium and Adobe Systems support it. Moreover, organizations such as the Digital Imaging Group (DIG35) and the National Information Standards Organization (NISO) are defining standards for technical metadata inclusion.

    The purpose of this study was to answer two fundamental questions about technical metadata in the newspaper industry: first, whether technical metadata can improve the newspaper digital imaging workflow; and second, how technical metadata can be used to preserve the integrity of newspaper digital images. The study examined five large newspaper organizations: The Chicago Tribune, The New York Times, The Rochester Democrat & Chronicle, USA Today, and The Washington Post. Based on interviews and questionnaire responses, each organization's use of technical metadata in the digital imaging workflow was examined through case studies. Interviews were conducted with the individuals responsible for image capture, adjustment, database management, and output, and participants were asked to rank the importance of selected fields of technical metadata through a questionnaire.

    It was found that the use of technical metadata classified by NISO as Basic Image Parameters, which includes file size, type, and compression, was universal in newspaper organizations. The use of Image Creation metadata was not widespread, with the exception of two fields that established the date and time of capture and assigned each image a unique identifier. Image Performance Assessment metadata, such as test targets, was not widely used except by The Rochester Democrat & Chronicle. Change History fell victim to the short cycle time of the newspaper industry; for the most part, a history of change was kept at various handoffs in the digital workflow through versioning. The use of technical metadata to improve the digital workflow was, to an extent, at cross-purposes with newspapers' need to visually examine each image to determine its usefulness. However, software designed to present technical metadata visually through a well-designed graphical user interface was popular. Technical metadata appeared to have the potential to benefit newspapers when repurposing images for other media. Additionally, large newspaper organizations were creating their own image databases; while the use of technical metadata in these archives was unclear, it would be prudent to include too much technical metadata rather than too little.

    The foremost concern of all organizations was preservation of the editorial integrity of the image. Newspapers defined editorial integrity as the ability to capture as much detail of an event as possible and then present that information to their readers in a truthful, unambiguous way. Research pointed out that image reproduction quality was only one of a series of variables that determined newspaper image quality. With the advent of digital photography, photographers are editing more in the field, and as a result they are making decisions regarding image content. The use of technical metadata has the potential to provide greater traceability of these outtakes. Additionally, the industry is moving toward the Camera Raw file format to acquire image data that is unprocessed by camera software. The adjustment of Camera Raw files through a GUI, and their subsequent conversion to another file format, represented a de facto use of technical metadata to preserve editorial integrity.
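
    Since much of the study turns on what device-written fields such as capture date/time actually contain, a short sketch of reading embedded Exif technical metadata may be useful. It is a minimal illustration assuming the Python Pillow library; the file name photo.jpg is a placeholder, and the code is not drawn from the study itself.

        # Minimal sketch: read embedded Exif technical metadata from a
        # digital image using Pillow (an assumption, not part of the study).
        from PIL import Image, ExifTags

        def read_technical_metadata(path):
            """Map human-readable Exif tag names to their values."""
            with Image.open(path) as img:
                exif = img.getexif()  # raw tag-id -> value mapping
                return {ExifTags.TAGS.get(tag_id, hex(tag_id)): value
                        for tag_id, value in exif.items()}

        # e.g. the capture date/time and camera model fields, the kind of
        # device-written Image Creation metadata the study discusses.
        meta = read_technical_metadata("photo.jpg")  # placeholder path
        print(meta.get("DateTime"), meta.get("Model"))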

    Documents as functions

    Treating variable data documents as functions over their data bindings opens opportunities for building more powerful, robust, and flexible document architectures to meet the needs arising from the confluence of developments in document engineering, digital printing technologies, and marketing analysis. This thesis describes a combination of several XML-based technologies, both to represent and to process variable documents and their data, leading to extensible, high-quality, and 'higher-order' document generation solutions. The architecture (DDF) uses XML uniformly throughout the documents and their processing tools, with interspersing of different semantic spaces achieved through namespacing. An XML-based functional programming language (XSLT) is used to describe all intra-document variability and to implement most of the tools. Document layout intent is declared within a document as a hierarchical set of combinators attached to a tree-based graphical presentation. Evaluating a document bound to an instance of data involves using a compiler to create an executable from the document, running this with the data instance as argument to create a new document with its layout intent described, followed by resolution of that layout by an extensible layout processor. The use of these technologies, together with design paradigms and coding protocols, makes it possible to construct documents that not only have high flexibility and quality, but also perform in higher-order ways. A document can be partially bound to data and evaluated, modifying its presentation while still remaining variably responsive to future data. Layout intent can be re-satisfied as presentation trees are modified by programmatic sections embedded within them. The key enablers are described and illustrated through examples.
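
    The core idea above, a variable document behaving as a function applied to data bindings, can be illustrated compactly. The sketch below uses Python with lxml, which is an assumption; the stylesheet and data are invented for illustration and are not the thesis's DDF format. An XSLT transform is compiled once and then applied, function-like, to successive data instances.

        # Sketch: a "document as function" - compile an XSLT template once,
        # then evaluate it against different data bindings. Assumes lxml.
        from lxml import etree

        TEMPLATE = b"""\
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/customer">
            <letter>Dear <xsl:value-of select="name"/>,</letter>
          </xsl:template>
        </xsl:stylesheet>"""

        document_fn = etree.XSLT(etree.fromstring(TEMPLATE))  # the "function"

        for name in ("Ada", "Grace"):  # two data bindings, one document
            data = etree.fromstring(f"<customer><name>{name}</name></customer>")
            print(etree.tostring(document_fn(data)).decode())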

    Optimised editing of variable data documents via partial re-evaluation

    With the advent of digital printing presses and the continued development of associated technologies, variable data printing (VDP) is becoming increasingly common. VDP allows a series of data instances to be bound to a single template document in order to produce a set of result document instances, each customized according to the data provided. As VDP gradually enters the mainstream of digital publishing, there is a need for appropriate and powerful editing tools suitable for use by creative professionals. This thesis investigates the problem of representing variable data documents in an editable visual form, and focuses on the technical issues involved in supporting such an editing model. Using a document processing model where the document is produced from a data set and an appropriate programmatic transform, this thesis considers an interactive editor developed to allow visual manipulation of the result documents. It shows how the speed of the reprocessing necessary in such an interactive editing scenario can be increased by selectively re-evaluating only the required parts of the transformation, including how these pieces of the transformation can be identified and subsequently re-executed. The techniques described are demonstrated using a simplified document processing model that closely resembles variable data document frameworks. A workable editor is also presented that builds on this processing model and illustrates its advantages. Finally, an analysis of the performance of the proposed framework is undertaken, including a comparison to a standard processing pipeline.
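
    The optimisation described, re-running only the affected parts of the transform after an edit, can be sketched independently of the thesis's actual framework. The Python below is a hedged illustration of the general technique (per-piece caching plus input-change detection), not the system the thesis implements.

        # Sketch of partial re-evaluation: cache each record's rendered
        # output and re-run the transform only where the inputs changed.
        def render_record(record):
            # Stand-in for one slice of the document transform.
            return f"<p>Dear {record['name']}, balance: {record['balance']}</p>"

        class IncrementalRenderer:
            def __init__(self, transform):
                self.transform = transform
                self.cache = {}  # record key -> (input snapshot, output)

            def render(self, records):
                pages = []
                for key, record in records.items():
                    snap = tuple(sorted(record.items()))
                    hit = self.cache.get(key)
                    if hit and hit[0] == snap:
                        pages.append(hit[1])          # inputs unchanged: reuse
                    else:
                        out = self.transform(record)  # re-evaluate this piece
                        self.cache[key] = (snap, out)
                        pages.append(out)
                return pages

        r = IncrementalRenderer(render_record)
        docs = {1: {"name": "Ada", "balance": 10}, 2: {"name": "Grace", "balance": 7}}
        r.render(docs)
        docs[2]["balance"] = 9   # an interactive edit touches one record,
        r.render(docs)           # so only record 2 is re-transformed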

    Dynamically generated multi-modal application interfaces

    This work introduces a new UIMS (User Interface Management System), which aims to solve numerous problems in the field of user-interface development arising from hard-coded use of user-interface toolkits. The presented solution is a concrete system architecture based on the abstract ARCH model, consisting of an interface abstraction layer, a dialog definition language called GIML (Generalized Interface Markup Language), and pluggable interface rendering modules. These components form an interface toolkit called GITK (Generalized Interface ToolKit). With the aid of GITK one can build an application without explicitly creating a concrete end-user interface. At runtime GITK can create these interfaces as needed from the abstract specification and run them. GITK thereby equips one application with many interfaces, even kinds of interfaces that did not exist when the application was written. It should be noted that this work concentrates on providing the base infrastructure for adaptive/adaptable systems and does not aim to deliver a complete solution. This work shows that the proposed solution is a fundamental concept needed to create interfaces for everyone, usable everywhere and at any time. The text further discusses the impact of such technology for users and on the various aspects of software systems and their development. The main target audience of this work is software developers and people with a strong interest in software development.
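
    The separation GITK relies on, one abstract interface specification realised by pluggable renderers, can be sketched briefly. The Python below is an invented illustration of that architecture; it does not use GIML syntax, which the abstract names but does not specify here.

        # Sketch: one abstract interface description, many renderers.
        # The spec format is invented for illustration (not GIML).
        ABSTRACT_UI = {"label": "Volume", "intent": "range",
                       "min": 0, "max": 100}

        def render_text(spec):
            # A console back end: one possible pluggable renderer.
            return f"{spec['label']} [{spec['min']}..{spec['max']}]: "

        def render_html(spec):
            # The same abstract intent realised for another modality.
            return (f"<label>{spec['label']}<input type='range' "
                    f"min='{spec['min']}' max='{spec['max']}'/></label>")

        for renderer in (render_text, render_html):
            print(renderer(ABSTRACT_UI))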

    Software documentation

    This thesis report was submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2006; cataloged from the PDF version of the thesis report; includes bibliographical references (page 92). The main objective of the thesis is to generate a user manual that is comprehensible, well structured, and acts as an effective navigator. Documentation is mainly requisite for better communication among the different members of a software development team, such as designers of finer-grained components, builders of interfacing systems, implementers, testers, performance engineers, technical managers, analysts, and quality specialists. In order to develop very comprehensive documentation, certain conventions need to be observed, and those conventions and rules have been highlighted extensively. There are different types of documentation based on the requirements of each individual associated with the software development life cycle; design, code, user, architectural, trade study, and marketing documentation are a few to mention. However, the focus area here is user documentation. Unlike code documents, user documents are usually far removed from the source code of the program, and instead simply describe how it is used. XML and DocBook are used: DocBook simply provides a framework, and all presentation issues are devolved to stylesheets.

    Investigating the Efficacy of XML and Stylesheets to Render Electronic Courseware for Multiple Learning Styles

    The objective of this project was to test the efficacy of using Extensible Markup Language (XML), in particular the DocBook 5.0b5 schema, and Extensible Stylesheet Language Transformation (XSLT) to render electronic courseware that can be dynamically re-formatted according to a student’s individual learning style. The text of a typical lesson was marked up in XML according to the DocBook schema, and several XSLT stylesheets were created to transform the XML document into different versions, each according to particular learning needs. These learning needs were drawn from the Felder-Silverman learning style model. The notes had links to trigger JavaScript functions that allowed the student to reformat the notes to produce different views of the lesson. The dynamic notes were tested on twelve users, who filled out a feedback questionnaire. Feedback was largely positive and suggested that users were able to navigate according to their learning style. There were some usability issues caused by the program's lack of compatibility with some browsers. However, the user test was not the most critical part of the evaluation; it served to confirm that the notes were usable, but the analysis of the use of XSLT and DocBook is the key aspect of this project.

    It was found that XML, and in particular the DocBook schema, was a useful tool in these circumstances, being easy to learn, well supported, and having the appropriate structure for a project of this type. The use of XSLT, on the other hand, was not so straightforward. Learning a declarative language was a challenge, as was using XSLT to transform the notes as this project required. A particular problem was the need to move content from one area of the document to another, hiding it in some cases and revealing it in others. The solution was not straightforward to achieve using XSLT and does not take proper advantage of the strengths of this technology. The fact that the XSLT processor uses the DOM API, which necessitates loading the entire XML document into memory, is particularly problematic here, where the document is constantly transformed and re-transformed. The manner in which stylesheets are assigned, as well as the need to use DOM objects to edit the source tree, required the use of JavaScript to provide the necessary usability. These mechanisms limited compatibility with browsers and caused the program to freeze on older machines. The problems with browser compatibility and the synchronous loading of data are not insurmountable, and can be overcome with the appropriate use of JavaScript and of asynchronous data retrieval as made possible by AJAX.
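
    The project's central mechanism, one marked-up lesson reformatted per learning style by stylesheets, can be sketched with an XSLT parameter that hides or reveals content. The sketch assumes Python with lxml and invents a simplified lesson vocabulary; the project itself used DocBook and browser-side processing.

        # Sketch: one XML lesson, two renderings - a stylesheet parameter
        # decides whether worked-example sections appear. Assumes lxml; the
        # markup is a simplified stand-in for the project's DocBook source.
        from lxml import etree

        STYLESHEET = b"""\
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:param name="detail" select="'yes'"/>
          <xsl:template match="lesson"><div><xsl:apply-templates/></div></xsl:template>
          <xsl:template match="summary"><p><xsl:value-of select="."/></p></xsl:template>
          <xsl:template match="example">
            <xsl:if test="$detail = 'yes'">
              <p class="example"><xsl:value-of select="."/></p>
            </xsl:if>
          </xsl:template>
        </xsl:stylesheet>"""

        LESSON = etree.fromstring(b"<lesson><summary>Key point.</summary>"
                                  b"<example>Worked example.</example></lesson>")

        transform = etree.XSLT(etree.fromstring(STYLESHEET))
        for flag in ("yes", "no"):  # verbose vs. terse view of the same lesson
            view = transform(LESSON, detail=etree.XSLT.strparam(flag))
            print(etree.tostring(view).decode())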

    Presenting multi-language XML documents: an adaptive transformation and validation approach

    EThOS - Electronic Theses Online Service, United Kingdom.