
    Quality measures for ETL processes: from goals to implementation

    Extraction, transformation, and loading (ETL) processes play an increasingly important role in supporting modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and a more human-centric approach is needed to bring these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model for ETL process quality characteristics and quantitative measures for each characteristic, based on existing literature. Our model captures dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, employing a goal model that includes quantitative components (i.e., indicators) for the evaluation and analysis of alternative design decisions.
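    As a rough illustration of the idea, a quality characteristic with a quantitative indicator, a goal value, and dependencies on other characteristics might be encoded as in the sketch below; the characteristic names, measures, and thresholds are illustrative assumptions, not the model defined in the paper.

```python
# Minimal sketch of an ETL quality model: characteristics carry a
# quantitative indicator, a goal, and dependencies. All names and
# numbers are illustrative, not the paper's actual model.
from dataclasses import dataclass, field


@dataclass
class Characteristic:
    name: str
    indicator: str                 # the quantitative measure used to evaluate it
    goal: float                    # target value for the indicator
    higher_is_better: bool = True
    depends_on: list = field(default_factory=list)

    def satisfied(self, measured: float) -> bool:
        """Check a design alternative's measured indicator against the goal."""
        if self.higher_is_better:
            return measured >= self.goal
        return measured <= self.goal


performance = Characteristic("performance", "throughput (rows/s)", 50_000)
freshness = Characteristic("data freshness", "load latency (min)", 15,
                           higher_is_better=False, depends_on=[performance])

# Compare two alternative ETL designs against the freshness goal.
print(freshness.satisfied(12), freshness.satisfied(22))   # True False
```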

    An overview of virtual city modelling: emerging organisational issues

    This paper presents a recent overview of the increasing use of Virtual Reality (VR) technologies for the simulation of urban environments. It builds on previous research on the identification of three-dimensional (3D) city models and offers an analysis of the development, utilization and construction of VR city models. Issues pertaining to advantages, barriers and ownership are identified. The paper describes a case study of the development of a VR model for the city of Newcastle upon Tyne in the UK and outlines the role that academic institutions can play in both the creation and utilization of urban models. The study offers a new approach for the creation, management and update of urban models and reflects on emerging issues. Areas for future research are discussed.

    Introducing a novel model for simulating large loop excision of the transformation zone (LLETZ) using 3D printing technique

    Purpose: Electrosurgery is the gold-standard procedure for the treatment of cervical dysplasia. The quality of the outcome depends on the accuracy of performance, which underlines the role of adequate surgical training, especially as this procedure is often performed by novice surgeons. To our knowledge, medical simulation has until now lacked a model that focuses on realistically simulating the treatment of cervical dysplasia together with the relevant anatomy. Methods and results: In our work, we present a 3D-printed model for holistically simulating diagnostic as well as surgical interventions on the cervix as realistically as possible. Conclusion: This novel simulator is compared to an existing model and both are evaluated. In doing so, we aim to provide novice gynecologists with standardized, high-quality simulation models for practice to improve their proficiency.

    Formal Scenario-Based Testing of Autonomous Vehicles: From Simulation to the Real World

    We present a new approach to automated scenario-based testing of the safety of autonomous vehicles, especially those using advanced artificial-intelligence-based components, spanning both simulation-based evaluation and testing in the real world. Our approach is based on formal methods, combining formal specification of scenarios and safety properties, algorithmic test case generation using formal simulation, test case selection for track testing, execution of test cases on the track, and analysis of the resulting data. Experiments with a real autonomous vehicle at an industrial testing facility support our hypotheses that (i) formal simulation can be effective at identifying test cases to run on the track, and (ii) the gap between the simulated and real worlds can be systematically evaluated and bridged.
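    The core falsification loop behind this kind of testing can be sketched as follows: sample concrete scenarios from a parameterized specification, simulate each, and keep the runs that violate a formal safety property as candidates for track testing. The simulator stub, the toy dynamics, and the minimum-gap property below are placeholder assumptions, not the paper's actual toolchain.

```python
# Hedged sketch of scenario-based falsification: sample the scenario
# space, simulate, and select counterexamples for the track.
import random


def safe(trace, min_gap_m=2.0):
    """Safety property: ego never comes closer than min_gap_m to the pedestrian."""
    return all(step["distance_m"] > min_gap_m for step in trace)


def simulate(params):
    """Placeholder for a simulator run; returns a toy distance trace."""
    gap = params["ped_start_m"] - params["ego_speed_mps"] * 1.5  # toy dynamics
    return [{"distance_m": max(gap - t, 0.0)} for t in range(5)]


# Sample the scenario space and keep the falsifying cases.
track_candidates = []
for _ in range(1000):
    params = {"ego_speed_mps": random.uniform(5, 15),
              "ped_start_m": random.uniform(10, 40)}
    if not safe(simulate(params)):
        track_candidates.append(params)

print(f"{len(track_candidates)} counterexample scenarios selected for the track")
```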

    Mining and Verification of Temporal Events with Applications in Computer Micro-Architecture Research

    Computer simulation programs are essential tools for scientists and engineers to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that developers discover in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. The main goal of this research is therefore to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is the First-Order Logic Constraint Specification Language (FOLCSL), which enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part mines the temporal flow of events using a newly developed representation called the State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. This work improves computer architecture research and verification processes, as shown by the case studies and experiments that have been conducted.
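    A checker of the kind such a translator would synthesize can be sketched as a program that scans the simulator's event trace and signals whether an invariant holds; the event schema and the fetch-before-dispatch invariant below are illustrative assumptions, not FOLCSL's actual output.

```python
# Minimal sketch of a trace invariant checker for a micro-architecture
# simulator. Invariant: every "dispatch" of an instruction is preceded
# by its "fetch". Event schema is an illustrative assumption.

def check_dispatch_after_fetch(trace):
    fetched = set()
    for kind, insn_id in trace:            # events in temporal order
        if kind == "fetch":
            fetched.add(insn_id)
        elif kind == "dispatch" and insn_id not in fetched:
            return False, insn_id          # invariant violated
    return True, None


trace = [("fetch", 1), ("fetch", 2), ("dispatch", 1), ("dispatch", 3)]
ok, bad = check_dispatch_after_fetch(trace)
print("invariant holds" if ok
      else f"violation: instruction {bad} dispatched before fetch")
```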

    Towards Effective Context for Meta-Reinforcement Learning: an Approach based on Contrastive Learning

    Context, the embedding of previously collected trajectories, is a powerful construct for Meta-Reinforcement Learning (Meta-RL) algorithms. By conditioning on an effective context, Meta-RL policies can easily generalize to new tasks within a few adaptation steps. We argue that improving the quality of context involves answering two questions: (1) How can we train a compact and sufficient encoder that embeds the task-specific information contained in prior trajectories? (2) How can we collect informative trajectories whose corresponding context reflects the specification of tasks? To this end, we propose a novel Meta-RL framework called CCM (Contrastive learning augmented Context-based Meta-RL). We first focus on the contrastive nature of different tasks and leverage it to train a compact and sufficient context encoder. Further, we train a separate exploration policy and theoretically derive a new information-gain-based objective which aims to collect informative trajectories in a few steps. Empirically, we evaluate our approach on common benchmarks as well as several complex sparse-reward environments. The experimental results show that CCM outperforms state-of-the-art algorithms by addressing the two problems above.
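    The contrastive training idea can be sketched with an InfoNCE-style objective: context embeddings of trajectory segments from the same task are pulled together, while those from different tasks serve as negatives. The encoder architecture and dimensions below are illustrative assumptions, not CCM's exact design.

```python
# Hedged sketch of contrastive training for a context encoder (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy context encoder; sizes are illustrative, not the paper's.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))


def info_nce(anchors, positives, temperature=0.1):
    """Rows i of anchors/positives come from the same task; other rows are negatives."""
    a = F.normalize(encoder(anchors), dim=1)    # (B, 16)
    p = F.normalize(encoder(positives), dim=1)
    logits = a @ p.T / temperature              # (B, B) similarity matrix
    labels = torch.arange(a.size(0))            # matching index is the positive
    return F.cross_entropy(logits, labels)


# Two trajectory-segment batches, row i of each drawn from the same task.
loss = info_nce(torch.randn(8, 32), torch.randn(8, 32))
loss.backward()   # gradients flow into the context encoder
```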

    Sensitivity of air pollution exposure and disease burden to emission changes in China using machine learning emulation

    Machine learning models can emulate chemical transport models, reducing computational costs and enabling more experimentation. We developed emulators to predict annual-mean fine particulate matter (PM2.5) and ozone (O3) concentrations and their associated chronic health impacts from changes in five major emission sectors (residential, industrial, land transport, agriculture, and power generation) in China. The emulators predicted 99.9% of the variance in PM2.5 and O3 concentrations. We used these emulators to estimate how emission reductions can attain air quality targets. In 2015, we estimate that PM2.5 exposure was 47.4 μg m⁻³ and O3 exposure was 43.8 ppb, associated with 2,189,700 (95% uncertainty interval, 95UI: 1,948,000–2,427,300) premature deaths per year, primarily from PM2.5 exposure (98%). PM2.5 exposure and the associated disease burden were most sensitive to industry and residential emissions. We explore the sensitivity of exposure and health to different combinations of emission reductions. The National Air Quality Target (35 μg m⁻³) for PM2.5 concentrations can be attained nationally with emission reductions of 72% in industrial, 57% in residential, 36% in land transport, 35% in agricultural, and 33% in power generation emissions. We show that complete removal of emissions from these five sectors does not enable attainment of the WHO Annual Guideline (5 μg m⁻³) due to remaining air pollution from other sources. Our work provides the first assessment of how air pollution exposure and disease burden in China vary as emissions change across these five sectors and highlights the value of emulators in air quality research.
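    An emulator of this kind can be sketched as a fast regression surrogate mapping the five sector emission scalings to PM2.5 exposure; the training data below are synthetic stand-ins for the chemical transport model runs used in the paper, and the Gaussian process choice is an illustrative assumption.

```python
# Hedged sketch of emulating a chemical transport model with a GP surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
# Sector scalings: residential, industrial, land transport, agriculture,
# power generation (1.0 = 2015 emission level). Synthetic design points.
X = rng.uniform(0.0, 1.5, size=(50, 5))
weights = np.array([14.0, 16.0, 5.0, 5.0, 4.0])   # toy sector contributions
y = X @ weights + 3.0                              # μg m⁻³, plus other-source baseline

emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X, y)

# Query a candidate reduction scenario (57/72/36/35/33% cuts) in milliseconds
# instead of re-running the full chemical transport model.
scenario = np.array([[0.43, 0.28, 0.64, 0.65, 0.67]])
print(emulator.predict(scenario))
```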

    Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis

    Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high", analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in. This task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold does not trigger in an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely threshold shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.
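    The trade-off at the heart of the problem can be sketched as picking the cut point between nominal and off-nominal readings that balances false alarms against missed detections; the Gaussian class models and equal-cost criterion below are illustrative assumptions, not the paper's Bayesian shape-selection method.

```python
# Minimal sketch of threshold selection for FDD discretization.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
nominal = rng.normal(50.0, 3.0, 500)       # oil pressure, nominal runs
off_nominal = rng.normal(62.0, 4.0, 500)   # seeded-fault runs

mu0, sd0 = nominal.mean(), nominal.std()
mu1, sd1 = off_nominal.mean(), off_nominal.std()

# Sweep candidate thresholds; trade false alarms against missed detections.
candidates = np.linspace(mu0, mu1, 200)
false_alarm = 1.0 - norm.cdf(candidates, mu0, sd0)   # nominal above threshold
missed = norm.cdf(candidates, mu1, sd1)              # off-nominal below it
threshold = candidates[np.argmin(false_alarm + missed)]

print(f"discretize 'oil pressure too high' at {threshold:.1f}")
```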