
    GARDSim - A GPS Receiver Simulation Environment for Integrated Navigation System Development and Analysis

    Airservices Australia has recently proposed the use of a Ground-based Regional Augmentation System (GRAS) to improve the safety of using the NAVSTAR Global Positioning System (GPS) in aviation. The GRAS Airborne Receiver Development (GARD) project is being conducted by QUT in conjunction with Airservices Australia and GPSat Systems. The aim of the project is to further enhance the safety and reliability of GPS and GRAS by incorporating smart sensor technology, including advanced GPS signal processing and Micro-Electro-Mechanical-Sensor (MEMS) based inertial components. GARDSim is a GPS and GRAS receiver simulation environment which has been developed for algorithm development and analysis in the GARD project. GARDSim is capable of simulating any flight path using a given aeroplane flight model, simulating various GPS, GRAS and inertial system measurements, and computing high-integrity navigation solutions for the flight. This paper discusses the architecture and capabilities of GARDSim. Simulation results are presented to demonstrate the usefulness of GARDSim as a simulation environment for algorithm development and evaluation.
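As a minimal sketch of the kind of measurement simulation such an environment performs (not GARDSim's actual code; the function name, error budget and noise level are illustrative assumptions), a simulated GPS pseudorange can be modelled as the geometric satellite-receiver range plus the receiver clock bias expressed as a distance, plus measurement noise:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def simulate_pseudorange(sat_pos, rx_pos, rx_clock_bias_s,
                         noise_sigma_m=1.5, rng=None):
    """Simulate one GPS pseudorange measurement (metres).

    Pseudorange = geometric range + receiver clock bias (as a distance)
    + zero-mean Gaussian noise. Atmospheric and satellite clock errors
    are deliberately omitted in this sketch.
    """
    rng = rng or np.random.default_rng()
    geometric_range = np.linalg.norm(np.asarray(sat_pos, float)
                                     - np.asarray(rx_pos, float))
    return geometric_range + C * rx_clock_bias_s + rng.normal(0.0, noise_sigma_m)

# Example: satellite 20,200 km directly above a receiver at the origin,
# with a 1 microsecond receiver clock bias (about 300 m of apparent range).
pr = simulate_pseudorange([0, 0, 20_200_000], [0, 0, 0], rx_clock_bias_s=1e-6)
```

A simulator would generate one such measurement per visible satellite per epoch and feed them to the navigation filter under test.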

    Photoelastic Stress Analysis


    High-Integrity Performance Monitoring Units in Automotive Chips for Reliable Timing V&V

    As software continues to control more system-critical functions in cars, its timing is becoming an integral element in functional safety. Timing validation and verification (V&V) assesses software's end-to-end timing measurements against given budgets. The advent of multicore processors with massive resource sharing reduces the significance of end-to-end execution times for timing V&V and requires reasoning on (worst-case) access delays on contention-prone hardware resources. While Performance Monitoring Units (PMUs) support this finer-grained reasoning, their design has never been a prime consideration in high-performance processors, from which automotive-chip PMU implementations descend, since the PMU does not directly affect performance or reliability. Given the PMU's instrumental importance for timing V&V, we advocate for PMUs in automotive chips that explicitly track activities related to worst-case (rather than average) software behavior, are recognized as an ISO 26262-mandated high-integrity hardware service, and are accompanied by detailed documentation that enables their effective use to derive reliable timing estimates.

    This work has also been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by MINECO under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. Enrico Mezzetti has been partially supported by the Spanish Ministry of Economy and Competitiveness under Juan de la Cierva-Incorporación postdoctoral fellowship number IJCI-2016-27396.
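The worst-case-oriented accounting the abstract argues for can be illustrated with a toy calculation (event names and latencies below are invented for the sketch, not taken from any real PMU specification): each contention-prone event counted by the PMU is pessimistically charged its worst-case access latency, yielding an upper bound on a task's contention delay.

```python
# Hypothetical per-event worst-case latencies, in processor cycles.
WORST_CASE_LATENCY = {
    "bus_access":  8,    # longest stall a request can suffer on the shared bus
    "l2_miss":     40,   # longest contended L2 refill
    "mem_request": 100,  # longest contended DRAM access
}

def contention_delay_bound(pmu_counts):
    """Bound a task's contention delay from PMU event counts.

    Every counted event is charged its worst-case latency, so the result
    upper-bounds (rather than averages) the delay due to resource sharing.
    """
    return sum(pmu_counts[e] * WORST_CASE_LATENCY[e] for e in pmu_counts)

bound = contention_delay_bound({"bus_access": 1200, "l2_miss": 300,
                                "mem_request": 50})
# 1200*8 + 300*40 + 50*100 = 26,600 cycles
```

Reliable bounds of this kind are only as good as the counters and documented latencies behind them, which is precisely why the paper asks for PMUs designed and documented as a high-integrity service.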

    A Model-based transformation process to validate and implement high-integrity systems

    Despite numerous advances, building high-integrity embedded systems remains a complex task. They come with strong requirements to ensure safety, schedulability or security properties; one needs to combine multiple analyses to validate each of them. Model-Based Engineering is an accepted solution to address such complexity: analytical models are derived from an abstraction of the system to be built. Yet ensuring that all abstractions are semantically consistent remains an issue, e.g. when performing model checking to assess safety, then schedulability analysis using timed automata, and then code generation. Complexity stems from the gap between the high-level view of the model and the low-level mechanisms used. In this paper, we present our approach, based on AADL and its behavioral annex, to iteratively refine an architecture description. Both application and runtime components are transformed into basic AADL constructs which have a strict counterpart in classical programming languages or patterns for verification. We detail the benefits of this process for enhancing analysis and code generation. This work has been integrated into the AADL tool support OSATE2.

    Exploring the impact of different cost heuristics in the allocation of safety integrity levels

    Contemporary safety standards prescribe processes in which system safety requirements, captured early and expressed in the form of Safety Integrity Levels (SILs), are iteratively allocated to architectural elements. Different SILs reflect different requirement stringencies and consequently different development costs. Therefore, the allocation of safety requirements is not a simple problem of applying an allocation "algebra" as treated by most standards; it is a complex optimisation problem, one of finding a strategy that minimises cost whilst meeting safety requirements. One difficulty is the lack of a commonly agreed heuristic for how costs increase between SILs. In this paper, we define this important problem; then we take the example of an automotive system and, using an automated approach, show that different cost heuristics lead to different optimal SIL allocations. Without automation it would have been impossible to explore the vast space of allocations and to discuss the subtleties involved in this problem.
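The core effect, that the cost heuristic changes the optimal allocation, can be reproduced on a toy instance. The sketch below (my own illustration, not the paper's automated approach; both cost functions and the additive decomposition rule are simplifying assumptions) exhaustively searches allocations of a SIL-3 requirement over two redundant components:

```python
from itertools import product

SILS = range(5)  # SIL 0 (no requirement) .. SIL 4

# Two illustrative cost heuristics: 'overhead' charges a large fixed cost
# for any component that carries a non-zero SIL; 'exponential' makes each
# SIL step an order of magnitude more expensive. Values are made up.
HEURISTICS = {
    "overhead":    lambda s: 0 if s == 0 else 100 + 10 * s,
    "exponential": lambda s: 0 if s == 0 else 10 ** s,
}

def best_allocation(n_components, required_sil, cost):
    """Cheapest allocation whose decomposed SILs sum to the requirement,
    using an additive abstraction of the decomposition 'algebra'."""
    feasible = (a for a in product(SILS, repeat=n_components)
                if sum(a) >= required_sil)
    return min(feasible, key=lambda a: sum(cost(s) for s in a))

for name, cost in HEURISTICS.items():
    print(name, best_allocation(2, 3, cost))
# 'overhead' concentrates the requirement on one component, (0, 3);
# 'exponential' spreads it across both, (1, 2).
```

Even this two-component example shows why the choice of heuristic matters; real architectures with many elements make exhaustive enumeration infeasible, hence the paper's automated optimisation.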

    Geospatial Narratives and their Spatio-Temporal Dynamics: Commonsense Reasoning for High-level Analyses in Geographic Information Systems

    The modelling, analysis, and visualisation of dynamic geospatial phenomena has been identified as a key developmental challenge for next-generation Geographic Information Systems (GIS). In this context, the envisaged paradigmatic extensions to contemporary foundational GIS technology raise fundamental questions concerning the ontological, formal representational, and (analytical) computational methods that would underlie their spatial information theoretic underpinnings. We present the conceptual overview and architecture for the development of high-level semantic and qualitative analytical capabilities for dynamic geospatial domains. Building on formal methods in the areas of commonsense reasoning, qualitative reasoning, spatial and temporal representation and reasoning, reasoning about actions and change, and computational models of narrative, we identify concrete theoretical and practical challenges that accrue in the context of formal reasoning about `space, events, actions, and change'. With this as a basis, and within the backdrop of an illustrated scenario involving the spatio-temporal dynamics of urban narratives, we address specific problems and solution techniques chiefly involving `qualitative abstraction', `data integration and spatial consistency', and `practical geospatial abduction'. From a broad topical viewpoint, we propose that next-generation dynamic GIS technology demands a transdisciplinary scientific perspective that brings together Geography, Artificial Intelligence, and Cognitive Science. Keywords: artificial intelligence; cognitive systems; human-computer interaction; geographic information systems; spatio-temporal dynamics; computational models of narrative; geospatial analysis; geospatial modelling; ontology; qualitative spatial modelling and reasoning; spatial assistance systems.

    Comment: ISPRS International Journal of Geo-Information (ISSN 2220-9964); Special Issue on Geospatial Monitoring and Modelling of Environmental Change. IJGI. Editor: Duccio Rocchini. (Pre-print of article in press.)

    Towards The Development of Biosensors for the Detection of Microbiologically Influenced Corrosion (MIC)

    Corrosion is one of the biggest concerns for the mechanical integrity of infrastructure and infrastructural components, such as oil refineries, bridges and roads. The economic cost of corrosion is typically estimated to be between 1 and 5% of the gross national product (GNP) of countries, of which the contribution of microbiologically influenced corrosion (MIC) is estimated to be between 10% and 50%. Current state-of-the-art approaches for detecting MIC primarily rely on ex-situ tests, including bacterial test kits (bug bottles), corrosion coupons, pigging deposit analysis, and destructive analysis of MIC-affected sites using SEM, TEM, and XRD. These ex-situ measurements do not capture the complexities and time sensitivities underlying MIC. This is because the proliferation of microbial contamination is a dynamic and rapid process, and any delay can prove expensive: it is estimated that once biofilm formation takes place, the amount of biocide needed is orders of magnitude greater than when the bacteria are in planktonic form. Additionally, the field environment is a complex biotic and abiotic environment which is often difficult to replicate even in high-fidelity laboratory models. Hence a real-time/pseudo-real-time method of detection would greatly help reduce costs and optimize biocide-based mitigation of MIC. To overcome the above-mentioned shortcomings, this work is aimed at the development of a sensor substrate whereby highly specific detection can be carried out in the environment where the corrosion exists, on a real-time/pseudo-real-time basis. More specifically, the research is aimed at the development of sensors based on a nanowire matrix functionalized with biomolecules which can perform this specific, real-time detection of MIC in the pipeline environment.
    Here, the detection of MIC is based on the binding of specific MIC-causing biomolecules to organic molecules anchored on top of the nanowires. These sensors also need to be inexpensive (made of low-cost, earth-abundant materials), have low power consumption, and be robustly deployable. The primary components of the detection platforms are copper oxide nanowire arrays (CuONWs, 25 to 30 µm long and 50 to 100 nm in diameter) and silicon nanowire arrays (SiNWs, 5 to 8 µm long and 45 to 100 nm in diameter). They are synthesized using facile and scalable techniques and are selected for their robust electrical and mechanical properties. Electrochemical degradation studies of the NWs were performed in 3.5 wt.% NaCl solution and simulated produced water using polarization and electrochemical impedance spectroscopy (EIS). The NW systems showed robust resistance to degradation despite their higher surface area (compared to bulk counterparts), and both diffusion limitations and charge-transfer resistance were observed in the analysis of the impedance response. The ability to immobilize a variety of moieties on the nanowire platforms gives them the ability to detect a wide variety of MIC biomarkers. The biotin-streptavidin (SA) complex was used as a proof of concept to test the viability of the NW arrays as a substrate for sensing. A custom test bed was built for the functionalized NW thin films, and cyclic voltammetry studies revealed a current response that was stable over time for 10 nM and 10,000 nM SA concentrations. The use of different probes, from aptamers to larger immunoglobulin probes, provides the flexibility to detect the full spectrum of biomarkers. The development of these next-generation sensor platforms, along with the methodologies employed to stabilize them and assemble them into functional devices, is explored in detail in this dissertation.

    Utilizing RxNorm to Support Practical Computing Applications: Capturing Medication History in Live Electronic Health Records

    RxNorm was utilized as the basis for direct capture of medication history data in a live EHR system deployed in a large, multi-state outpatient behavioral healthcare provider in the United States serving over 75,000 distinct patients each year across 130 clinical locations. This tool incorporated auto-complete search functionality for medications and proper dosage identification assistance. The overarching goal was to understand if and how standardized terminologies like RxNorm can be used to support practical computing applications in live EHR systems. We describe the stages of implementation, the approaches used to adapt RxNorm's data structure for the intended EHR application, and the challenges faced. We evaluate the implementation using a four-factor framework addressing flexibility, speed, data integrity, and medication coverage. RxNorm proved to be functional for the intended application, given appropriate adaptations to address the high-speed input/output (I/O) requirements of a live EHR and the flexibility required for data entry in multiple potential clinical scenarios. Future research around search optimization for medication entry, user profiling, and linking RxNorm to drug classification schemes holds great potential for improving the user experience and utility of medication data in EHRs.

    Comment: Appendix (including SQL/DDL code) available by author request. Keywords: RxNorm; Electronic Health Record; Medication History; Interoperability; Unified Medical Language System; Search Optimization.
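One way the auto-complete medication search described above can be implemented is prefix lookup over a sorted name list (a hedged sketch: the tiny vocabulary below is illustrative, where a real deployment would load RxNorm concept names, e.g. from the RXNCONSO table, and would handle synonyms and ranking):

```python
from bisect import bisect_left

# Tiny illustrative medication vocabulary, lowercased and sorted.
MEDS = sorted(["amoxicillin", "aripiprazole", "atorvastatin",
               "lisinopril", "lithium carbonate", "lorazepam"])

def autocomplete(prefix, limit=5):
    """Return up to `limit` medication names starting with `prefix`.

    Binary search over a sorted list keeps each keystroke's lookup cheap,
    one way to meet the low-latency I/O needs of a live EHR search box.
    """
    prefix = prefix.lower()
    i = bisect_left(MEDS, prefix)  # first candidate >= prefix
    out = []
    while i < len(MEDS) and MEDS[i].startswith(prefix) and len(out) < limit:
        out.append(MEDS[i])
        i += 1
    return out

print(autocomplete("li"))  # → ['lisinopril', 'lithium carbonate']
```

In a production EHR the same interface would typically sit in front of an indexed database query rather than an in-memory list, but the prefix-search contract is the same.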

    Prototype of Fault Adaptive Embedded Software for Large-Scale Real-Time Systems

    This paper describes a comprehensive prototype of large-scale fault adaptive embedded software developed for the proposed Fermilab BTeV high energy physics experiment. Lightweight self-optimizing agents embedded within Level 1 of the prototype are responsible for proactive and reactive monitoring and mitigation based on specified layers of competence. The agents are self-protecting, detecting cascading failures using a distributed approach. Adaptive, reconfigurable, and mobile objects for reliability are designed to be self-configuring to adapt automatically to dynamically changing environments. These objects provide a self-healing layer with the ability to discover, diagnose, and react to discontinuities in real-time processing. A generic modeling environment was developed to facilitate design and implementation of hardware resource specifications, application data flow, and failure mitigation strategies. Level 1 of the planned BTeV trigger system alone will consist of 2500 DSPs, so the number of components and intractable fault scenarios involved make it impossible to design an `expert system' that applies traditional centralized mitigative strategies based on rules capturing every possible system state. Instead, a distributed reactive approach is implemented using the tools and methodologies developed by the Real-Time Embedded Systems group.

    Comment: 2nd Workshop on Engineering of Autonomic Systems (EASe), in the 12th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS), Washington, DC, April 2005.