
    Metadata Extraction in Database Testing

    Automated testing tools for verifying the correctness of database applications are crucial today, since databases play an important role in almost all organizations. Database behavior must also be verified in order to avoid costly errors and the extraction of false information. The main aim of this paper was to create a component-based tester called DBSoft that tests the correctness of database application systems. The DBSoft toolkit consists of five tools: information collection with the Parser tool, test case generation with the Input Generator tool, test case implementation with the Output Generator tool, test case validation with the Output Validator tool, and report generation with the Report Generator tool.
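
    As a rough illustration of how such a five-stage tester could be wired together, the sketch below chains hypothetical Parser, Input Generator, Output Generator, Output Validator, and Report Generator steps against a throwaway SQLite database; the function names and interfaces are assumptions for illustration, not DBSoft's actual API.

        # Illustrative five-stage database-testing pipeline in the spirit of DBSoft;
        # names and interfaces are hypothetical, not the tool's real API.
        import sqlite3
        from dataclasses import dataclass


        @dataclass
        class TestCase:
            query: str       # SQL exercised by the test case
            expected: list   # expected result rows


        def parse_schema(conn):
            # Parser: collect table metadata from the database under test.
            rows = conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
            return [r[0] for r in rows]


        def generate_cases(tables):
            # Input Generator: derive simple test cases from the collected metadata.
            return [TestCase(query=f"SELECT COUNT(*) FROM {t}", expected=[(0,)]) for t in tables]


        def execute(conn, case):
            # Output Generator: run the test case against the application database.
            return conn.execute(case.query).fetchall()


        def validate(actual, case):
            # Output Validator: compare actual output with the expected rows.
            return actual == case.expected


        def report(results):
            # Report Generator: summarise pass/fail counts.
            print(f"{sum(results)}/{len(results)} test cases passed")


        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
        cases = generate_cases(parse_schema(conn))
        report([validate(execute(conn, c), c) for c in cases])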

    EXODUS: Integrating intelligent systems for launch operations support

    Kennedy Space Center (KSC) is developing knowledge-based systems to automate critical operations functions for the space shuttle fleet. Intelligent systems will monitor vehicle and ground support subsystems for anomalies, assist in isolating and managing faults, and plan and schedule shuttle operations activities. These applications are being developed independently of one another, using different representation schemes, reasoning and control models, and hardware platforms. KSC has recently initiated the EXODUS project to integrate these stand-alone applications into a unified, coordinated intelligent operations support system. EXODUS will be constructed using SOCIAL, a tool for developing distributed intelligent systems. EXODUS, SOCIAL, and initial prototyping efforts using SOCIAL to integrate and coordinate selected EXODUS applications are described.
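
    The abstract does not detail SOCIAL's interfaces; the toy sketch below only illustrates the general pattern of coordinating stand-alone monitoring, fault-management, and scheduling applications through a shared message layer. The agent names, event formats, and in-process queue are invented for illustration and are not SOCIAL's API.

        # Hypothetical sketch of coordinating independent applications through a
        # shared message layer; the queue stands in for a distributed transport.
        import queue

        bus = queue.Queue()  # stand-in for a distributed message-passing layer


        def monitor_agent():
            # Subsystem monitor: publish an anomaly event when a mock reading drifts.
            telemetry = {"tank_pressure_psi": 312}
            if telemetry["tank_pressure_psi"] > 300:
                bus.put({"type": "anomaly", "subsystem": "LOX", "data": telemetry})


        def fault_agent():
            # Fault manager: consume anomaly events and propose a recovery action.
            pending = []
            while not bus.empty():
                event = bus.get()
                if event["type"] == "anomaly":
                    pending.append({"type": "action", "plan": f"isolate {event['subsystem']} feed line"})
            for action in pending:
                bus.put(action)


        def scheduler_agent():
            # Planner/scheduler: fold proposed actions into the operations schedule.
            while not bus.empty():
                event = bus.get()
                if event["type"] == "action":
                    print("Scheduled:", event["plan"])


        monitor_agent()
        fault_agent()
        scheduler_agent()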

    Strategies for Creating MIS Technology to Improve Social Work Practice and Research

    This paper illustrates the potential for management information system (MIS) technology to integrate information collection, management, and reporting within a single program or network of organizations. Properly devised and created, MIS applications improve administration, service delivery, and practice evaluation. Three strategies are offered to guide the design and development of MIS software. This paper is based on lessons from the production and implementation of MIS software that serves as a management and evaluation tool for a nationwide policy demonstration. Data from the MIS have helped to shape state and federal policy.

    Hypothesis exploration with visualization of variance.

    Background: The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes, exploring whether they are linked to syndromes including ADHD, bipolar disorder, and schizophrenia. An aim of the consortium was to move from traditional categorical approaches to psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics, the wide-scale, systematic study of phenotypes, to neuropsychiatry research. Results: This paper reports on a system for exploring hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. One of these methods, called VISOVA, combines visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles, patterns of values across phenotypes that characterize groups. Visualization enables screening and refinement of hypotheses about the variance structure of sets of phenotypes. Conclusions: The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports 'natural selection' on a pool of hypotheses and permits deeper understanding of the statistical architecture of the data. A large-scale perspective of this kind could lead to better neuropsychiatric diagnostics.
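
    A minimal sketch of the kind of variance screening described above, assuming fabricated phenotype scores: a one-way ANOVA per phenotype plus group-mean profiles. The group labels, phenotype names, and simulated data are invented, are not drawn from the LA2K, LA3C, or LA5C studies, and this is not the VISOVA implementation.

        # Screen each phenotype for group differences and print its group-mean profile.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        groups = ["control", "ADHD", "bipolar"]
        phenotypes = ["working_memory", "response_inhibition", "verbal_fluency"]

        # Simulated subject-by-phenotype scores, one block of 40 subjects per group.
        data = {g: rng.normal(loc=i * 0.3, scale=1.0, size=(40, len(phenotypes)))
                for i, g in enumerate(groups)}

        for j, name in enumerate(phenotypes):
            samples = [data[g][:, j] for g in groups]
            f, p = stats.f_oneway(*samples)  # one-way ANOVA across the three groups
            profile = {g: round(float(data[g][:, j].mean()), 2) for g in groups}
            print(f"{name}: F={f:.2f} p={p:.3f} group means={profile}")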

    Software Reuse Issues

    NASA Langley Research Center sponsored a Workshop on NASA Research in Software Reuse on November 17-18, 1988 in Melbourne, Florida, hosted by Software Productivity Solutions, Inc. Participants came from four NASA centers and headquarters, eight NASA contractor companies, and three research institutes. Presentations were made on software reuse research at the four NASA centers; on Eli, the reusable software synthesis system designed and currently under development by SPS; on Space Station Freedom plans for reuse; and on other reuse research projects. This publication summarizes the presentations made and the issues discussed during the workshop.

    An approach for testing the extract-transform-load process in data warehouse systems

    Enterprises use data warehouses to accumulate data from multiple sources for data analysis and research. Since organizational decisions are often made based on the data stored in a data warehouse, all its components must be rigorously tested. In this thesis, we first present a comprehensive survey of data warehouse testing approaches, and then develop and evaluate an automated testing approach for validating the Extract-Transform-Load (ETL) process, which is a common activity in data warehousing. In the survey we present a classification framework that categorizes the testing and evaluation activities applied to the different components of data warehouses. These approaches include both dynamic analysis as well as static evaluation and manual inspections. The classification framework uses information related to what is tested in terms of the data warehouse component that is validated, and how it is tested in terms of various types of testing and evaluation approaches. We discuss the specific challenges and open problems for each component and propose research directions. The ETL process involves extracting data from source databases, transforming it into a form suitable for research and analysis, and loading it into a data warehouse. ETL processes can use complex one-to-one, many-to-one, and many-to-many transformations involving sources and targets that use different schemas, databases, and technologies. Since faulty implementations in any of the ETL steps can result in incorrect information in the target data warehouse, ETL processes must be thoroughly validated. In this thesis, we propose automated balancing tests that check for discrepancies between the data in the source databases and that in the target warehouse. Balancing tests ensure that the data obtained from the source databases is not lost or incorrectly modified by the ETL process. First, we categorize and define a set of properties to be checked in balancing tests. We identify various types of discrepancies that may exist between the source and the target data, and formalize three categories of properties, namely, completeness, consistency, and syntactic validity, that must be checked during testing. Next, we automatically identify source-to-target mappings from ETL transformation rules provided in the specifications. We identify one-to-one, many-to-one, and many-to-many mappings for tables, records, and attributes involved in the ETL transformations. We automatically generate test assertions to verify the properties for balancing tests. We use the source-to-target mappings to automatically generate assertions corresponding to each property. The assertions compare the data in the target data warehouse with the corresponding data in the sources to verify the properties. We evaluate our approach on a health data warehouse that uses data sources with different data models running on different platforms. We demonstrate that our approach can find previously undetected real faults in the ETL implementation. We also provide an automatic mutation testing approach to evaluate the fault-finding ability of our balancing tests. Using mutation analysis, we demonstrate that our auto-generated assertions can detect faults in the data inside the target data warehouse when faulty ETL scripts execute on mock source data.
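
    The flavor of such balancing tests can be shown with two toy assertions, one for completeness (record counts) and one for consistency (an aggregated attribute), between a mock source database and a mock target table. The table and column names below are invented, and the thesis's generated assertions cover many more properties and mappings than this sketch.

        # Toy balancing-test assertions between a source database and a target warehouse.
        import sqlite3

        source = sqlite3.connect(":memory:")
        target = sqlite3.connect(":memory:")

        source.execute("CREATE TABLE visits (id INTEGER, charge REAL)")
        source.executemany("INSERT INTO visits VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

        # Pretend an ETL job loaded the same records into the warehouse fact table.
        target.execute("CREATE TABLE fact_visits (visit_id INTEGER, charge REAL)")
        target.executemany("INSERT INTO fact_visits VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

        # Completeness: no records lost or spuriously added by the ETL process.
        src_count = source.execute("SELECT COUNT(*) FROM visits").fetchone()[0]
        tgt_count = target.execute("SELECT COUNT(*) FROM fact_visits").fetchone()[0]
        assert src_count == tgt_count, "record counts differ between source and target"

        # Consistency: an aggregated attribute survives the transformation unchanged.
        src_sum = source.execute("SELECT SUM(charge) FROM visits").fetchone()[0]
        tgt_sum = target.execute("SELECT SUM(charge) FROM fact_visits").fetchone()[0]
        assert abs(src_sum - tgt_sum) < 1e-9, "charge totals differ between source and target"

        print("balancing assertions passed")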

    Using Design Research to Improve Data Modelling Performance among Novice End Users

    As a first foray into the design research area, the study described in this paper aimed to improve novice users’ understanding of data modelling. The paper commences with a brief description of design research, then continues with an explanation as to why design research might be a successful methodology to use in information systems. What follows is a description of the genesis of this research project, with reference to the first iteration of the design research project. The paper then proceeds to describe the development of the various components of the experiment, including the evaluation scheme and the artefact, and concludes with brief comments as to the implications of the results.

    Field-Testing a PC Electronic Documentation System using the Clinical Care Classification© System with Nursing Students

    Schools of nursing have been slow to train their students for the fast-approaching era of electronic healthcare documentation. This paper discusses the importance of nursing documentation and describes the field-testing of an electronic health record, the Sabacare Clinical Care Classification (CCC©) system. The PC-CCC©, designed as a Microsoft Access® application, is an evidence-based electronic documentation system available as a free download from the internet. A sample of baccalaureate nursing students from a mid-Atlantic private college used this program to document the nursing care they provided to patients during their sophomore-level clinical experience. This paper summarizes the design, training, and evaluation involved in using the system in practice.

    Benchmark and comparison between Hyperledger and MySQL

    In this paper, we report benchmarking results for Hyperledger, a distributed ledger derived from blockchain technology. A method to evaluate Hyperledger on a limited infrastructure is developed. The measured infrastructure consists of 8 nodes with a load of up to 20,000 transactions per second. Hyperledger consistently completed all evaluations; for 20,000 transactions, the run time was 74.30 s, the latency 73.40 ms, and the throughput 257 tps. The benchmarks show that Hyperledger performs better than a database system in a high-workload scenario. We found that the maximum data volume in one transaction on the Hyperledger network is around ten (10) times that of MySQL. Also, processing a single transaction on the blockchain network is 80-200 times faster than on MySQL. This initial analysis can provide an overview for practitioners making decisions about the adoption of blockchain technology in their IT systems.
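
    For orientation only, the small harness below shows how run time, throughput (tps), and mean latency are typically derived from a batch of transactions; it times plain SQLite inserts on one machine and is not the benchmark that produced the Hyperledger figures quoted above.

        # Time a batch of N inserts and report run time, throughput, and mean latency.
        import sqlite3
        import time

        N = 20_000
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE ledger (id INTEGER PRIMARY KEY, payload TEXT)")

        latencies = []
        start = time.perf_counter()
        for i in range(N):
            t0 = time.perf_counter()
            conn.execute("INSERT INTO ledger (payload) VALUES (?)", (f"tx-{i}",))
            latencies.append(time.perf_counter() - t0)
        conn.commit()
        elapsed = time.perf_counter() - start

        print(f"run time: {elapsed:.2f} s")
        print(f"throughput: {N / elapsed:.0f} tps")
        print(f"mean latency: {1000 * sum(latencies) / N:.3f} ms")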