
    Post-Relational Databases

    These methodological guidelines were developed on the basis of the working programme for the credit module "Post-Relational Databases" and are intended to support the effective organisation of students' independent work while studying the module, to raise students' engagement in learning, and to improve learning outcomes


    A Multivariate Analysis of the Human Factors and Preferences Towards Digital Publishing Platforms for the iPad

    Tablet computers have been widely adopted in America today, with 34% of American adults ages 18+ owning this type of digital device (PEW, 2013). With the emergence of new portable computer technology, reading on digital devices has become more popular than ever before. In particular, tablet computers have enabled users to read enhanced e-book material that, while still text-driven, incorporates all facets of multimedia and technology. With many different digital publishing solutions available for publishers to deploy their content, the goal of this research study was to determine whether there are significant differences in user preferences and comprehension for a publication re-created with three different digital publishing solutions (i.e., Adobe DPS, iBooks Author, and EPUB). The methodology was a human factors experiment testing for a significant difference in the reading experience of subjects exposed to one of three digital publications. A field experiment consisting of ninety subjects assessed these publications, thirty for each of the three output formats. No significant difference among the publications was found for readers' pleasure with the overall experience or for their interaction with the multimedia elements. A marginally significant difference was found for the value added by the multimedia elements of the publication, and a significant difference was found for the readers' ability to recognize information and comprehend material from the publication. Ultimately, these results showed a trend: readers of the digital publishing platforms that allowed for greater interactivity saw more value added by the multimedia elements and showed an increased ability to recognize information from the publication. Pleasure with the overall experience and interaction with the multimedia elements, by contrast, did not differ significantly between the publications. Therefore, while readers did not tend to interact differently with the multimedia content or experience greater pleasure based on the publication they read, readers of more interactive publications did tend to see more value added by the multimedia elements and were better able to recognize the information they had experienced

    Extending document models to incorporate semantic information for complex standards

    This paper presents the concept of hybrid semantic-document models to aid information management when using standards for complex technical domains such as military data communication. These standards are traditionally text-based documents for human interpretation, but prose sections can often be ambiguous, leading to discrepancies and subsequent implementation problems. Many organisations will produce semantic representations of the material to ensure common understanding and to exploit computer-aided development. In developing these semantic representations, no relationship is maintained to the original prose. Maintaining relationships between the original prose and the semantic model has key benefits, including assessing conformance at the semantic level rather than the prose level, and enabling original content authors to explicitly define their intentions, thus reducing ambiguity and facilitating computer-aided functionality. A framework of relationships is proposed which can integrate with common document modeling techniques and provide the necessary functionality to allow semantic content to be mapped into document views. These relationships are then generalised for applicability to a wider context
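The core idea of maintaining traceable relationships between prose sections and semantic model elements can be sketched in a few lines. This is a minimal illustration, not the paper's actual framework; all class and field names (and the sample standard clause) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ProseSection:
    """A clause of the original text-based standard."""
    section_id: str
    text: str

@dataclass
class Concept:
    """An element of the semantic representation."""
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class TraceLink:
    """Explicit relationship between prose and the semantic model."""
    section: ProseSection
    concept: Concept
    link_type: str  # e.g. "defines", "constrains", "references"

def sections_for(concept_name, links):
    """Map semantic content back into a document view: which prose
    clauses underpin a given concept?"""
    return [l.section.section_id for l in links if l.concept.name == concept_name]

# Hypothetical standard clause defining a message field
sec = ProseSection("4.2.1", "The TRACK message shall contain a 16-bit track number.")
con = Concept("TrackNumber", {"width_bits": 16})
links = [TraceLink(sec, con, "defines")]
print(sections_for("TrackNumber", links))  # ['4.2.1']
```

Because each link carries its type, conformance checks can be run against the semantic model while still pointing a reviewer back at the exact prose clause involved.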

    Reasoning & Querying – State of the Art

    Various query languages for Web and Semantic Web data, both for practical use and as an area of research in the scientific community, have emerged in recent years. At the same time, the broad adoption of the internet, where keyword search is used in many applications such as search engines, has familiarized casual users with keyword queries as a way to retrieve information. Unlike this easy-to-use querying, traditional query languages require knowledge of the language itself as well as of the data to be queried. Keyword-based query languages for XML and RDF bridge the gap between the two, aiming to enable simple querying of semi-structured data, which is relevant e.g. in the context of the emerging Semantic Web. This article presents an overview of the field of keyword querying for XML and RDF

    Multiple hierarchies: new aspects of an old solution

    In this paper, we present the Multiple Annotation approach, which solves two problems: the problem of annotating overlapping structures, and the problem that occurs when documents should be annotated according to different, possibly heterogeneous tag sets. This approach has many advantages: it is based on XML, the modeling of alternative annotations is possible, each level can be viewed separately, and new levels can be added at any time. The files can be regarded as an interrelated unit, with the text serving as the implicit link. Two representations of the information contained in the multiple files (one in Prolog and one in XML) are described. These representations serve as a base for several applications
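The key trick behind multiple-annotation (standoff) approaches is that each annotation level lives in its own file and points into the shared text by offsets, so overlapping structures that cannot coexist in a single XML tree cause no conflict. A minimal sketch, with invented layer names and offsets:

```python
# One base text serves as the implicit link between annotation levels.
text = "He said hello there"

layers = {
    # syntactic layer: a verb phrase spanning "said hello"
    "syntax": [{"tag": "vp", "start": 3, "end": 13}],
    # prosodic layer: an intonation unit spanning "hello there",
    # which overlaps the vp - impossible in one well-formed XML tree
    "prosody": [{"tag": "iu", "start": 8, "end": 19}],
}

def spans(layer_name):
    """View one annotation level separately, resolved against the text."""
    return [(a["tag"], text[a["start"]:a["end"]]) for a in layers[layer_name]]

print(spans("syntax"))   # [('vp', 'said hello')]
print(spans("prosody"))  # [('iu', 'hello there')]
```

New layers can be added at any time without touching existing ones, and the files can still be processed as an interrelated unit by resolving all offsets against the same text, whether the merged representation is built in Prolog or XML.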

    Studying Micro-Processes in Software Development Stream

    In this paper we propose a new streaming technique to study software development. As we observed, software development consists of a series of activities such as editing, compilation, testing, debugging and deployment. All these activities contribute to the development stream, which is a collection of software development activities in time order. The development stream lets us replay and examine the software development process at a later time without too much hassle. We developed a system called Zorro to generate and analyze development streams at the Collaborative Software Development Laboratory (CSDL) at the University of Hawaii. It is built on top of Hackystat, an in-process automatic metric collection system developed in the CSDL. Hackystat sensors continuously collect development activities and send them to a centralized data store for processing. Zorro reads in all data of a project and constructs a stream from them. Tokenizers are chained together to divide the development stream into episodes (micro-iterations) for classification with a rule engine. In this paper we demonstrate the analysis of Test-Driven Development (TDD) with this framework
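The tokenizer-chain idea can be illustrated with a toy stream: a time-ordered list of activities is cut into episodes at some boundary event, such as a test invocation. The event shapes and the boundary rule below are illustrative, not the actual Zorro/Hackystat API.

```python
# A development stream: (timestamp, activity, artifact) tuples in time order.
events = [
    ("09:00", "edit", "Stack.java"),
    ("09:05", "edit", "TestStack.java"),
    ("09:06", "test", "TestStack.java"),   # episode boundary
    ("09:10", "edit", "Stack.java"),
    ("09:12", "test", "TestStack.java"),   # episode boundary
]

def test_boundary_tokenizer(stream):
    """Yield episodes (micro-iterations), closing one after each 'test'
    activity. Further tokenizers could be chained after this one to
    split episodes on other criteria."""
    episode = []
    for event in stream:
        episode.append(event)
        if event[1] == "test":
            yield episode
            episode = []
    if episode:          # flush a trailing, unterminated episode
        yield episode

episodes = list(test_boundary_tokenizer(events))
print(len(episodes))  # 2
```

A rule engine would then classify each episode, e.g. tagging an edit-test sequence that starts with a test file as TDD-conformant.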

    Investigating the Efficacy of XML and Stylesheets to Render Electronic Courseware for Multiple Learning Styles

    The objective of this project was to test the efficacy of using Extensible Markup Language (XML) - in particular the DocBook 5.0b5 schema - and Extensible Stylesheet Language Transformation (XSLT) to render electronic courseware that can be dynamically re-formatted according to a student’s individual learning style. The text of a typical lesson was marked up in XML according to the DocBook schema, and several XSLT stylesheets were created to transform the XML document into different versions, each according to particular learning needs. These learning needs were drawn from the Felder-Silverman learning style model. The notes had links to trigger JavaScript functions that allowed the student to reformat the notes to produce different views of the lesson. The dynamic notes were tested on twelve users who filled out a feedback questionnaire. Feedback was largely positive. It suggested that users were able to navigate according to their learning style. There were some usability issues caused by lack of compatibility of the program with some browsers. However, the user test is not the most critical part of the evaluation. It served to confirm that the notes were usable, but the analysis of the use of XSLT and DocBook is the key aspect of this project. It was found that XML, and in particular the DocBook schema, was a useful tool in these circumstances, being easy to learn, well supported and having the appropriate structure for a project of this type. The use of XSLT on the other hand was not so straightforward. Learning a declarative language was a challenge, as was using XSLT to transform the notes as necessary for this project. A particular problem was the need to move content from one area of the document to another - to hide it in some cases and reveal it in others. The solution was not straightforward to achieve using XSLT, and does not take proper advantage of the strengths of this technology. 
The fact that the XSLT processor uses the DOM API, which necessitates loading the entire XML document into memory, is particularly problematic in this instance, where the document is constantly transformed and re-transformed. The manner in which stylesheets are assigned, as well as the need to use DOM objects to edit the source tree, required JavaScript to provide the necessary interactivity. These mechanisms introduced browser-compatibility limitations and caused the program to freeze on older machines. The problems with browser compatibility and the synchronous loading of data are not insurmountable, and can be overcome with appropriate use of JavaScript and of asynchronous data retrieval as made possible by AJAX
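The awkward step described above (moving content between areas of the document, hiding it in some views and revealing it in others) is easy to see in miniature with any tree-based XML API. The sketch below reorders a tiny DocBook-like lesson per learning style; the element names and the two styles are illustrative, not the project's actual markup or stylesheet logic.

```python
import xml.etree.ElementTree as ET

# A minimal lesson fragment with two content areas.
lesson = ET.fromstring(
    "<section>"
    "<theory>Queues are FIFO structures.</theory>"
    "<example>q.append(1); q.popleft()</example>"
    "</section>"
)

def view(root, style):
    """Produce a per-style view: a hypothetical 'active' learner sees
    the example first, a 'sequential' learner sees theory first."""
    order = ["example", "theory"] if style == "active" else ["theory", "example"]
    return [root.find(tag).text for tag in order]

print(view(lesson, "active"))
# ['q.append(1); q.popleft()', 'Queues are FIFO structures.']
```

In a declarative XSLT stylesheet the same reordering needs a template per view pulling nodes from elsewhere in the source tree, which is exactly where the project found the technology strained against its grain.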