
    Quality of Data Entry Using Single Entry, Double Entry and Automated Forms Processing–An Example Based on a Study of Patient-Reported Outcomes

    Background: The clinical and scientific use of patient-reported outcome measures is increasing in the health services, and paper forms are often used. Manual double-key data entry is regarded as the gold standard for transferring data to an electronic format, but the process is laborious. Automated forms processing may be an alternative, but further validation is warranted. Methods: 200 patients were randomly selected from a cohort of 5777 patients who had previously answered two different questionnaires. The questionnaires were scanned using an automated forms processing technique and were also processed by single and double manual data entry using the EpiData Entry program. The main outcome measure was the proportion of correctly entered numbers at the question, form and study level. Results: Manual double-key data entry (error proportion per 1000 fields = 0.046, 95% CI 0.001–0.258) performed better than single-key data entry (error proportion per 1000 fields = 0.370, 95% CI 0.160–0.729; p = 0.020). There was no statistical difference between Optical Mark Recognition (error proportion per 1000 fields = 0.046, 95% CI 0.001–0.258) and double-key data entry (p = 1.000). With the Intelligent Character Recognition method, there was no statistical difference compared to single-key data entry (error proportion per 1000 fields = 6.734, 95% CI 0.817–24.113; p = 0.656), nor compared to double-key data entry (error proportion per 1000 fields = 3.367, 95% CI 0.085–18.616; p = 0.319).
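
    The error proportions above can be reproduced from raw counts. The Python sketch below shows how an error proportion per 1000 fields and an exact (Clopper-Pearson) 95% confidence interval might be computed; the error and field counts used are hypothetical, since the abstract does not report them.

        from scipy.stats import beta

        def error_rate_per_1000(errors: int, fields: int, alpha: float = 0.05):
            """Error proportion per 1000 fields with an exact (Clopper-Pearson) CI."""
            rate = 1000 * errors / fields
            # Clopper-Pearson bounds on the underlying proportion
            lower = beta.ppf(alpha / 2, errors, fields - errors + 1) if errors > 0 else 0.0
            upper = beta.ppf(1 - alpha / 2, errors + 1, fields - errors) if errors < fields else 1.0
            return rate, 1000 * lower, 1000 * upper

        # Hypothetical counts: 1 error found in roughly 21,600 entered fields
        print(error_rate_per_1000(1, 21_600))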

    Multimission Modular Spacecraft Ground Support Software System (MMS/GSSS) state-of-the-art computer systems/ compatibility study

    The compatibility of the Multimission Modular Spacecraft (MMS) Ground Support Software System (GSSS), currently operational on a ModComp IV/35, with the VAX 11/780 system is discussed. Compatibility is examined in key areas of the GSSS through the results of in-depth testing performed on the VAX 11/780 and ModComp IV/35 systems. The compatibility of the GSSS with the ModComp CLASSIC is presented based upon projections from ModComp-supplied literature.

    Digging by Debating: Linking massive datasets to specific arguments

    We will develop and implement a multi-scale workbench, called "InterDebates", with the goal of digging into data provided by hundreds of thousands, eventually millions, of digitized books, bibliographic databases of journal articles, and comprehensive reference works written by experts. Our hypotheses are: that detailed and identifiable arguments drive many aspects of research in the sciences and the humanities; that argumentative structures can be extracted from large datasets using a mixture of automated and social computing techniques; and that the availability of such analyses will enable innovative interdisciplinary research, and may also play a role in supporting better-informed critical debates among students and the general public. A key challenge tackled by this project is thus to uncover and represent the argumentative structure of digitized documents, allowing users to find and interpret detailed arguments in the broad semantic landscape of books and articles.

    Development of an Expert System to Convert Knowledge-based Geological Engineering Systems into Fortran

    A knowledge-based geographic information system (KBGIS) for geological engineering map (GEM) production was developed in GoldWorks, an expert system development shell. GoldWorks allows the geological engineer to develop a rule base for a GEM application. Implementation of the resultant rule base produced a valid GEM, but took too much time. This showed that knowledge-based GEM production was possible, but that the GoldWorks implementation was impractical as a production system. To solve this problem, a Conversion Expert System was developed which accepted a KBGIS as input and produced the equivalent Fortran code as output. This allowed the engineer to use GoldWorks for development of the rule base while implementing the rule base in a more practical manner (as a Fortran program). Testing of the Fortran program generated by this Conversion System confirmed that the GEMs produced were identical to those from the KBGIS, and execution time was significantly reduced. There was an additional benefit: since use of the Fortran program did not require access to the GoldWorks system, a single GoldWorks package could be used with the Conversion System to develop several Fortran production systems, which could then be used at remote production sites. However, each Fortran production system still required access to the Earth Resources Data Analysis System (ERDAS), which supplied the GIS input and output files. Thus, this Conversion System achieved two major objectives: it dramatically reduced GEM production time, and it added versatility.
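
    The abstract does not reproduce the GoldWorks rule base or the generated Fortran, so the Python sketch below only illustrates the general idea behind such a conversion: compiling a small declarative rule base into plain procedural code. The attributes, thresholds and hazard classes are invented for illustration.

        # Hypothetical rule base mapping terrain attributes of one map cell to a
        # hazard class, in the spirit of a KBGIS rule base compiled to procedural code.
        RULES = [
            # (condition over one map cell, resulting class), highest priority first
            (lambda c: c["slope_deg"] > 30 and c["soil"] == "clay", "high"),
            (lambda c: c["slope_deg"] > 15,                         "moderate"),
            (lambda c: True,                                        "low"),
        ]

        def classify_cell(cell: dict) -> str:
            """Apply the rules in priority order; the first match wins."""
            for condition, hazard_class in RULES:
                if condition(cell):
                    return hazard_class
            return "unclassified"

        # Example: one raster cell from a hypothetical GIS layer
        print(classify_cell({"slope_deg": 22, "soil": "sand"}))  # -> "moderate"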

    What is the influence of genre during the perception of structured text for retrieval and search?

    This thesis presents an investigation into the value of structured text (or form) in the context of genre within Information Retrieval. In particular, how are these structured texts perceived, and why are they not more heavily used within the Information Retrieval & Search communities? The main motivation is to show the features through which people can exploit genre within Information Search & Retrieval, in particular in categorisation and search tasks. To do this, it was vital to record and analyse how and why this was done during typical tasks. The literature review highlighted two previous studies (Toms & Campbell 1999a; Watt 2009) which reported pilot studies consisting of genre categorisation and information searching. Both studies, and other findings from the literature review, inspired the work contained within this thesis. Genre is notoriously hard to define, but a very useful framework of Purpose and Form, developed by Yates & Orlikowski (1992), was used to design the two user studies reported in this thesis. The two studies consisted of, first, a categorisation task (e-mails) and, second, a set of six simulated situations in Wikipedia, both of which collected quantitative data from eye-tracking experiments as well as qualitative user data. The results of both studies showed the extent to which participants used the form features of the stimuli presented: in particular, how these were used, which ocular behaviours (skimming or scanning) and actual features were used, and which were most important. The main contributions of this thesis are, first, that the task-based user evaluations employing simulated search scenarios revealed how and why users make decisions while interacting with the textual features of structure and layout within a discourse community, and, second, that an extensive evaluation of the quantitative data revealed the features used by participants in the user studies, the effects of the interpretation of genre in the search and categorisation process, and the perceptual processes used in the various communities. This will be of benefit for the re-development of information systems. As far as is known, this is the first detailed and systematic investigation into the types of features, value of form, perception of features, and layout of genre using eye tracking in online communities such as Wikipedia.

    From Big Data to Argument Analysis and Automated Extraction: A Selective Study of Argument in the Philosophy of Animal Psychology from the Volumes of the Hathi Trust Collection

    The Digging by Debating (DbyD) project aimed to identify, extract, model, map and visualise philosophical arguments in very large text repositories such as the Hathi Trust. The project has: 1) developed a method for visualizing points of contact between philosophy and the sciences; 2) used topic modeling to identify the volumes, and pages within those volumes, which are ‘rich’ in a chosen topic; 3) used a semi-formal discourse analysis technique to manually identify key arguments in the selected pages; 4) used the OVA argument mapping tool to represent and map the key identified arguments and provide a framework for comparative analysis; 5) devised and used a novel analysis framework applied to the mapped arguments, covering the role, content and source of propositions, and the importance, context and meaning of arguments; 6) created a prototype tool for identifying propositions, using naive Bayes classifiers, and for identifying argument structure in chosen texts, using propositional similarity; 7) created tools to apply topic modeling to the task of rating the similarity of papers in the PhilPapers repository. Methods 1 to 5 above have enabled us to locate and extract the key arguments from each text. It is significant that, in applying the methods, a non-expert with limited or no domain knowledge of philosophy has both identified the volumes of interest from a key ‘Big Data’ set (Hathi Trust) and identified key arguments within those texts. This provided several key insights about the nature and form of arguments in historical texts, and is a proof-of-concept design for a tool that will be usable by scholars. We have further created a dataset with which to train and test prototype tools for both proposition and argument extraction. Though at an early stage, these preliminary results are promising given the complexity of the task. Specifically, we have prototyped a set of tools and methods that allow scholars to move between macro-scale, global views of the distributions of philosophical themes in such repositories, and micro-scale analyses of the arguments appearing on specific pages in texts belonging to the repository. Our approach spans bibliographic analysis, science mapping, and LDA topic modeling conducted at Indiana University, and machine-assisted argument markup in the Argument Interchange Format (AIF) using the OVA (Online Visualization of Argument) tool from the University of Dundee; the latter has been used to analyse and represent arguments by the team based at the University of East London, who also performed a detailed empirical analysis of arguments in selected texts. This work has been articulated as a proof-of-concept tool, linked to the PhilPapers repository, designed by members linked to the University of London. This project is showing for the first time how big data text processing techniques can be combined with deep structural analysis to provide researchers and students with navigation and interaction tools for engaging with the large and rich resources provided by datasets such as the Hathi Trust and PhilPapers. Ultimately our efforts show how the computational humanities can bridge the gulf between the “big data” perspective of first-generation digital humanities and the close readings of text that are the “bread and butter” of more traditional scholarship in the humanities.
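
    Item 6 above mentions naive Bayes classifiers for identifying propositions. The project's actual features and training data are not given in this abstract, so the following Python sketch only shows a generic bag-of-words naive Bayes classifier of the kind that could separate proposition-bearing sentences from other text; the example sentences and labels are invented.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Invented training sentences: 1 = proposition-bearing, 0 = not
        sentences = [
            "Animals are capable of reasoning by association.",
            "Therefore instinct alone cannot explain this behaviour.",
            "Chapter three discusses experiments on dogs.",
            "See the preceding volume for details.",
        ]
        labels = [1, 1, 0, 0]

        # Bag-of-words features feeding a multinomial naive Bayes classifier
        model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
        model.fit(sentences, labels)

        print(model.predict(["Hence the capacity for inference is not uniquely human."]))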

    Evaluation of automated decision-making methodologies and development of an integrated robotic system simulation, appendix A

    A generic computer simulation for manipulator systems (ROBSIM) was implemented, and the specific technologies necessary to increase the role of automation in various missions were developed. The specific items developed were: (1) capability for definition of a manipulator system consisting of multiple arms, load objects, and an environment; (2) capability for kinematic analysis, requirements analysis, and response simulation of manipulator motion; (3) postprocessing options such as graphic replay of simulated motion and manipulator parameter plotting; (4) investigation and simulation of various control methods, including manual force/torque and active compliance control; (5) evaluation and implementation of three obstacle avoidance methods; (6) video simulation and edge detection; and (7) software simulation validation. This appendix is the user's guide and includes examples of program runs and outputs as well as instructions for program use.
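
    The abstract does not describe ROBSIM at code level, so the short Python sketch below only illustrates the kind of kinematic analysis listed under item (2): forward kinematics for a hypothetical planar two-link arm, with link lengths and joint angles chosen arbitrarily.

        import numpy as np

        def planar_2link_fk(l1: float, l2: float, q1: float, q2: float):
            """End-effector position of a planar two-link arm (angles in radians)."""
            x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
            y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
            return np.array([x, y])

        # Hypothetical link lengths (m) and joint angles (rad)
        print(planar_2link_fk(0.5, 0.4, np.deg2rad(30), np.deg2rad(45)))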

    The engineering design integration (EDIN) system

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, database maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user-established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

    An Assistive Tool for Authoring Visualization Thumbnails

    Data visualization is a compelling medium in the digital era for describing the narratives inside data and expressing its insights. In recent times, news organizations have increasingly adopted storytelling in data-driven articles, a form that falls under data journalism. However, current visualization thumbnail tools either lack support for extracting information from documents containing unstructured text, tables and graphical data and telling a story with it, or require extensive technical expertise. Therefore, I introduce an integrated authoring tool that combines a model and a user interface. The objective of this study is to simplify the creation of informative thumbnails in the field of journalism. Prevailing systems involve manually selecting and formatting entities from textual or tabular sources, a process that is tiresome and error-prone. Furthermore, no existing tool extracts insights from a document's unstructured text, tables and graphics simultaneously and provides graphical visuals for a thumbnail or static visualization. With VTComp, the contents of data-driven news articles are automatically extracted and converted into graphics and a formatted textual layout, enabling journalists to make further use of the results. We present a user interface consisting of all the essential components required for narrative visualization. Our system expresses data insights in separate categories, such as summary text and a graphical response view that contains chart visuals and document-related visuals differentiated by label text; the final output of the system is an interactive static visual graphic contributed by both machine and user. By enabling storytelling without programming, the VTComp interface closes the interaction gap between the user and system-generated results. We evaluated VTComp through multiple measures, such as a benchmark comparison of automatic extraction of target entities against manual extraction, and system compatibility with data-driven articles from different news organizations. In addition, an introductory evaluation of the user experience of thumbnail authoring using an iPad and touch pencil was carried out through a user study session and a follow-up quantitative and qualitative analysis. Finally, the results of the user study indicate that VTComp helps journalists create data-rich, informative graphics and thumbnails from an unstructured text document within a short period, without special expertise or effort.
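
    VTComp's implementation is not described in detail here, so the following Python sketch only illustrates the general pipeline the abstract describes: pulling numeric entities out of unstructured article text and rendering them as a compact chart image of the kind a thumbnail might use. The regular expression, example sentence and chart styling are all assumptions, not part of the original system.

        import re
        import matplotlib
        matplotlib.use("Agg")  # render off-screen
        import matplotlib.pyplot as plt

        article = ("Unemployment fell to 5.2% in March, 4.8% in April "
                   "and 4.5% in May, the statistics office reported.")

        # Naive extraction of (percentage, month) pairs from the text
        pairs = re.findall(r"(\d+(?:\.\d+)?)% in (\w+)", article)
        values = [float(v) for v, _ in pairs]
        labels = [m for _, m in pairs]

        # Render a compact bar chart suitable for a thumbnail
        fig, ax = plt.subplots(figsize=(3, 2), dpi=100)
        ax.bar(labels, values, color="#4878b0")
        ax.set_ylabel("%")
        ax.set_title("Unemployment")
        fig.tight_layout()
        fig.savefig("thumbnail.png")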