
    Automatic translation of formal data specifications to voice data-input applications

    This thesis introduces a complete solution for the automatic translation of formal data specifications into voice data-input applications. The objective of the research is to automatically generate applications for inputting data through speech from specifications of the structure of that data. The formal data specifications are XML DTDs. A new formalization called Grammar-DTD (G-DTD) is introduced as an extended DTD that contains grammars describing the valid values of the DTD's elements and attributes. G-DTDs facilitate the automatic generation of VoiceXML applications that correspond to the original DTD structure. The development of the automatic application generator included identifying constraints on the G-DTD to ensure a feasible translation, using predicate calculus to build a knowledge base of inference rules that describes the mapping procedure, and writing an algorithm for the automatic translation based on the inference rules.

    Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2006 .H355. Source: Masters Abstracts International, Volume: 45-01, page: 0354. Thesis (M.Sc.)--University of Windsor (Canada), 2006.
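    The structure-to-dialogue mapping can be pictured with a small sketch. The following Python toy is only an illustration: the Element class, the simplified G-DTD representation, and the emitted prompts are hypothetical stand-ins, not the thesis's actual formalization or algorithm.

```python
# Toy sketch: map a simplified, G-DTD-like element list to a VoiceXML form.
# Each element carries a grammar (the G-DTD extension) constraining spoken input.
from dataclasses import dataclass

@dataclass
class Element:
    name: str       # DTD element name
    grammar: str    # grammar describing valid spoken values

def to_vxml(elements):
    """Emit one VoiceXML <field> per element, reusing its grammar."""
    fields = []
    for e in elements:
        fields.append(
            f'  <field name="{e.name}">\n'
            f'    <prompt>Please say the {e.name}.</prompt>\n'
            f'    <grammar>{e.grammar}</grammar>\n'
            f'  </field>'
        )
    body = "\n".join(fields)
    return f'<vxml version="2.1">\n <form>\n{body}\n </form>\n</vxml>'

print(to_vxml([Element("date", "date_grammar"), Element("amount", "number_grammar")]))
```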

    Teaching Ciarán Carson: Classroom Approaches to the Postdigital, Conflict-Zone Text

    In recent decades, Belfast writer Ciarán Carson has emerged as one of the most inventive of contemporary literary voices, in part for his unique style of textualizing space. Driven in some ways by the very specific technological challenges of the conflict zone of Troubles-era Belfast, Carson's poetry and prose are marked by what we might describe as tech paranoia; but, in a constructive poetic answer, his texts create new logics for using tech materials, machines, and high-tech spaces in ways that privilege creativity. It is no coincidence, notes literary and technology theorist Katherine Hayles, that "the condition of virtuality is most pervasive and advanced" where centers of power are most concentrated and conflicted intersections most frequently occur. Carson's oeuvre illustrates the point, employing the technology of the printed page to simulate and process the zone of conflict in new, postdigital ways. This article presents Carson's texts as ideal for exploring issues that connect regional identities, technology, and the arts, including highly topical issues around terrorism and nationhood, all of which are relevant for contemporary students of literature.

    Developing an in-house vulnerability scanner for detecting Template Injection, XSS, and DOM-XSS vulnerabilities

    Web applications are becoming an essential part of today's digital world. However, with the increase in the usage of web applications, security threats have also become more prevalent. Cyber attackers can exploit vulnerabilities in web applications to steal sensitive information or take control of a system. To prevent these attacks, web application security must be given due consideration. Existing vulnerability scanners fail to detect Template Injection, XSS, and DOM-XSS vulnerabilities effectively. To bridge this gap in web application security, a customized in-house scanner is needed to quickly and accurately identify these vulnerabilities, enhancing manual security assessments of web applications. This thesis focused on developing a modular and extensible vulnerability scanner to detect Template Injection, XSS, and DOM-based XSS vulnerabilities in web applications. Testing the scanner against other free and open-source solutions on the market showed that it outperformed all of them on Template Injection vulnerabilities and nearly all of them on XSS-type vulnerabilities. While the scanner has limitations, focusing on specific injection vulnerabilities can yield better performance.
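    To make the detection idea concrete, here is a minimal sketch of one widely used template-injection probe, assuming a hypothetical target URL and parameter name. It injects arithmetic expressions and looks for their evaluated result in the response; this is only a first signal, far short of what a full scanner does (and DOM-XSS detection additionally requires a headless browser to observe sink execution).

```python
# Minimal template-injection probe sketch: if the server evaluates an injected
# expression, its result (e.g. 49) appears in the response instead of the raw
# payload. Real scanners use many engine-specific payloads and sandbox escapes.
import requests

PROBES = {
    "{{7*7}}": "49",   # Jinja2/Twig-style expression syntax
    "${7*7}": "49",    # EL/FreeMarker-style expression syntax
}

def probe_template_injection(url, param):
    for payload, marker in PROBES.items():
        resp = requests.get(url, params={param: payload}, timeout=10)
        # Evaluated marker without the raw payload suggests server-side rendering.
        if marker in resp.text and payload not in resp.text:
            return payload
    return None

if __name__ == "__main__":
    hit = probe_template_injection("http://localhost:8000/search", "q")  # hypothetical target
    print(f"possible template injection via payload: {hit}" if hit else "no hit")
```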

    New Architectural Models for Visibly Controllable Computing: The Relevance of Dynamic Object Oriented Architectures and Plan Based Computing Models

    Traditionally, we've focused on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to do. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only one million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this lesson can inform tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access-control model and how new layers of abstraction could further enrich this model.
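    A rough software analogy of that key claim, hypothetical and not the paper's actual design: the Lisp Machine dispatched on hardware type tags, so even primitive operations were generic at the lowest level. The sketch below mimics this in Python, with every value carrying a tag and addition dispatching on the tags at runtime.

```python
# Analogy only: tagged values with generic dispatch, loosely in the spirit of
# Lisp Machine hardware type tags. Names and representation are hypothetical.
from dataclasses import dataclass

@dataclass
class Tagged:
    tag: str       # e.g. "fixnum" or "string", carried with every value
    value: object

def generic_add(a: Tagged, b: Tagged) -> Tagged:
    # Dispatch on runtime tags, as tag-checking hardware would.
    if a.tag == b.tag == "fixnum":
        return Tagged("fixnum", a.value + b.value)
    if a.tag == b.tag == "string":
        return Tagged("string", a.value + b.value)
    raise TypeError(f"add undefined for tags {a.tag}, {b.tag}")

print(generic_add(Tagged("fixnum", 2), Tagged("fixnum", 3)).value)   # 5
print(generic_add(Tagged("string", "ab"), Tagged("string", "cd")).value)  # abcd
```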

    Visualization for Biological Models, Simulation, and Ontologies

    In this dissertation, I present three browsers that I have developed for the purpose of exploring, understanding, and analyzing models, simulations, and ontologies in biology and medicine. The first browser visualizes multidimensional simulation data as an animation. The second browser visualizes the equations of a complex model as a network and puts structure and organization on top of equations and variables. The third browser is an ontology viewer and editor, directly intended for the Foundational Model of Anatomy (FMA) but applicable to other ontologies as well. This browser makes two contributions. First, it is a lightweight deliverable that lets someone easily dabble with the FMA. Second, it lets the user edit an ontology to create a view of it. For the ontology browser, I also conduct user studies to refine and evaluate the software.
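    The second browser's idea of putting network structure on top of equations can be sketched in a few lines, assuming hypothetical example equations and a crude variable extractor (the dissertation's actual models and parsing are far richer): equations become nodes, and two equations are linked whenever they share a variable.

```python
# Sketch: build an equation-dependency network from shared variables.
# The equations and the regex-based "parser" are hypothetical simplifications.
import itertools
import re
import networkx as nx

equations = {
    "eq1": "dV/dt = (I - g*(V - E)) / C",
    "eq2": "I = g_syn * (V - E_syn)",
    "eq3": "g_syn = w * s",
}

def variables(expr):
    """Crudely extract identifier-like tokens as the equation's variables."""
    return set(re.findall(r"[A-Za-z_]\w*", expr))

G = nx.Graph()
G.add_nodes_from(equations)
for (a, ea), (b, eb) in itertools.combinations(equations.items(), 2):
    shared = variables(ea) & variables(eb)
    if shared:
        G.add_edge(a, b, shared=sorted(shared))

for a, b, data in G.edges(data=True):
    print(a, "--", b, "via", data["shared"])
```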

    Managing the consistency of distributed documents

    Many businesses produce documents as part of their daily activities: software engineers produce requirements specifications, design models, source code, build scripts and more; business analysts produce glossaries, use cases, organisation charts, and domain ontology models; service providers and retailers produce catalogues, customer data, purchase orders, invoices and web pages. What these examples have in common is that the content of documents is often semantically related: source code should be consistent with the design model, a domain ontology may refer to employees in an organisation chart, and invoices to customers should be consistent with stored customer data and purchase orders. As businesses grow and documents are added, it becomes difficult to manually track and check the increasingly complex relationships between documents. The problem is compounded by current trends towards distributed working, either over the Internet or over a global corporate network in large organisations. This adds complexity as related information is not only scattered over a number of documents, but the documents themselves are distributed across multiple physical locations.

    This thesis addresses the problem of managing the consistency of distributed and possibly heterogeneous documents. "Documents" is used here as an abstract term, and does not necessarily refer to a human-readable textual representation. We use the word to stand for a file or data source holding structured information, like a database table, or some source of semi-structured information, like a file of comma-separated values or a document represented in a markup language like XML [Bray et al., 2000]. Document heterogeneity comes into play when data with similar semantics is represented in different ways: for example, a design model may store a class as a rectangle in a diagram whereas a source code file will embed it as a textual string; and an invoice may contain an invoice identifier that is composed of a customer name and date, both of which may be recorded and managed separately.

    Consistency management in this setting encompasses a number of steps. First, checks must be executed in order to determine the consistency status of documents. Documents are inconsistent if their internal elements hold values that do not meet the properties expected in the application domain, or if there are conflicts between the values of elements in multiple documents. The results of a consistency check then have to be accumulated and reported back to the user. Finally, the user may choose to change the documents to bring them into a consistent state.

    The current generation of tools and techniques is not always sufficiently equipped to deal with this problem. Consistency checking is mostly tightly integrated or hardcoded into tools, leading to problems with extensibility with respect to new types of documents. Many tools do not support checks of distributed data, insisting instead on accumulating everything in a centralized repository. This may not always be possible, due to organisational or time constraints, and can represent excessive overhead if the only purpose of integration is to improve data consistency rather than to derive any additional benefit.

    This thesis investigates the theoretical background and practical support necessary to support consistency management of distributed documents. It makes a number of contributions to the state of the art, and the overall approach is validated in significant case studies that provide evidence of its practicality and usefulness.
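    As a concrete illustration of such a cross-document check, here is a minimal Python sketch; the documents, field names, and the consistency rule are hypothetical stand-ins for the thesis's framework. The rule checked is the one from the abstract: an invoice is consistent only if its customer reference resolves to a stored customer record.

```python
# Illustrative cross-document consistency check. Each "document" is structured
# data that may live at a different physical location (database table, CSV, XML).
customers = [{"id": "C042", "name": "Acme Ltd"}]           # e.g. a database table
invoices = [{"invoice": "INV-7", "customer_id": "C042"},   # e.g. XML/CSV files
            {"invoice": "INV-8", "customer_id": "C999"}]

def check_invoices(invoices, customers):
    """Report invoices whose customer_id has no matching customer record."""
    known = {c["id"] for c in customers}
    return [i["invoice"] for i in invoices if i["customer_id"] not in known]

for inv in check_invoices(invoices, customers):
    print(f"inconsistent: {inv} references an unknown customer")  # flags INV-8
```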