
    Automatic instantiation of abstract tests on specific configurations for large critical control systems

    Computer-based control systems have grown in size, complexity, distribution and criticality. In this paper a methodology is presented to perform abstract testing of such large control systems efficiently: an abstract test is specified directly from the system functional requirements and is then instantiated into several test runs to cover a specific configuration, comprising any number of control entities (sensors, actuators and logic processes). This process is usually performed by hand for each installation of the control system, requiring considerable time and constituting an error-prone verification activity. To automate a safe passage from abstract tests, related to the so-called generic software application, to any specific installation, an algorithm is provided that starts from a reference architecture and a state-based behavioural model of the control software. The presented approach has been applied to a railway interlocking system, demonstrating its feasibility and effectiveness over several years of testing experience.
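
    As a rough illustration of the instantiation step, the sketch below expands one abstract test over every matching combination of control entities in a concrete configuration. The structures and names (AbstractTest, Configuration, instantiate) are assumptions made for illustration; the paper's algorithm works on a reference architecture and a state-based behavioural model of the control software rather than this simplified form.

```python
from dataclasses import dataclass
from itertools import product
from typing import Dict, List

# Hypothetical, simplified structures: the paper's algorithm works on a
# reference architecture and a state-based behavioural model instead.

@dataclass
class AbstractTest:
    name: str
    roles: List[str]    # entity roles referenced by the test, e.g. ["sensor", "actuator"]
    steps: List[str]    # parameterised steps such as "read {sensor}"

@dataclass
class Configuration:
    entities: Dict[str, List[str]]   # role -> concrete entity identifiers

def instantiate(test: AbstractTest, config: Configuration) -> List[List[str]]:
    """Expand one abstract test into concrete runs, one per combination of
    concrete entities that fills the test's roles."""
    candidates = [config.entities.get(role, []) for role in test.roles]
    runs = []
    for combo in product(*candidates):
        binding = dict(zip(test.roles, combo))
        runs.append([step.format(**binding) for step in test.steps])
    return runs

# Example: one abstract test over a configuration with two sensors and one actuator.
test = AbstractTest(
    name="route_setting",
    roles=["sensor", "actuator"],
    steps=["read {sensor}", "command {actuator}", "check interlock state"],
)
config = Configuration(entities={"sensor": ["S1", "S2"], "actuator": ["A1"]})
for run in instantiate(test, config):
    print(run)   # two concrete runs: one for (S1, A1), one for (S2, A1)
```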

    Observation Centric Sensor Data Model

    Management of sensor data requires metadata to understand the semantics of observations. While e-science researchers have high demands on metadata, they are selective when entering it. The claim in this paper is to focus on the essentials, i.e., the actual observations, described by location, time, owner, instrument, and measurement. The applicability of this approach is demonstrated in two very different case studies.
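
    A rough illustration of the "essentials-only" idea: the record below carries exactly the five elements named in the abstract. The field names and types are assumptions, not the paper's data model.

```python
from dataclasses import dataclass
from datetime import datetime

# Minimal observation record built around the five essentials named in the
# abstract: location, time, owner, instrument, measurement.
# Field names and types are assumptions, not the paper's schema.
@dataclass(frozen=True)
class Observation:
    location: tuple      # e.g. (latitude, longitude)
    time: datetime
    owner: str
    instrument: str
    measurement: float

obs = Observation(
    location=(52.37, 4.90),
    time=datetime(2008, 6, 1, 12, 0),
    owner="hydrology-group",
    instrument="water-level-gauge-7",
    measurement=1.42,
)
print(obs)
```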

    Ontology-based knowledge representation of experiment metadata in biological data mining

    According to the PubMed resource from the U.S. National Library of Medicine, over 750,000 scientific articles have been published in the ~5,000 biomedical journals worldwide in the year 2007 alone. The vast majority of these publications include results from hypothesis-driven experimentation in overlapping biomedical research domains. Unfortunately, the sheer volume of information being generated by the biomedical research enterprise has made it virtually impossible for investigators to stay aware of the latest findings in their domain of interest, let alone to assimilate and mine data from related investigations for purposes of meta-analysis. While computers have the potential to assist investigators in the extraction, management and analysis of these data, the information contained in the traditional journal publication still consists largely of unstructured, free-text descriptions of study design, experimental application and results interpretation, making it difficult for computers to access the content being conveyed without significant manual intervention. In order to circumvent these roadblocks and make the most of the output from the biomedical research enterprise, a variety of related standards in knowledge representation are being developed, proposed and adopted in the biomedical community. In this chapter, we explore the current status of efforts to develop minimum information standards for the representation of a biomedical experiment, ontologies composed of shared vocabularies assembled into subsumption hierarchical structures, and extensible relational data models that link the information components together in a machine-readable and human-usable framework for data mining purposes.
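
    One of the building blocks mentioned above, a subsumption hierarchy of shared vocabulary terms, can be pictured as a parent-child structure with an "is-a" query. The terms and the helper below are illustrative assumptions and do not come from any specific biomedical ontology.

```python
# Toy subsumption hierarchy: each term points to its parent ("is-a" relation).
# The terms are illustrative; real ontologies carry identifiers, synonyms and
# richer relations than a single parent link.
IS_A = {
    "flow cytometry assay": "cell-based assay",
    "cell-based assay": "assay",
    "ELISA": "immunoassay",
    "immunoassay": "assay",
    "assay": None,
}

def subsumed_by(term: str, ancestor: str) -> bool:
    """True if `ancestor` is reachable from `term` by following is-a links."""
    while term is not None:
        if term == ancestor:
            return True
        term = IS_A.get(term)
    return False

# A query over annotated experiments can then retrieve all "assay" records,
# regardless of which specialised term was used for the annotation.
print(subsumed_by("flow cytometry assay", "assay"))   # True
print(subsumed_by("ELISA", "cell-based assay"))       # False
```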

    Quality-aware model-driven service engineering

    Service engineering and service-oriented architecture, as integration and platform technologies, are a recent approach to software systems integration. Quality aspects ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence the quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is the process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.
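
    To make the idea of reasoning about quality at the model level a little more concrete, the sketch below annotates services with an expected latency and checks a composition against a performance budget before any implementation exists. The attribute names and the additive aggregation rule are assumptions, not the paper's technique.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model: services annotated with an expected latency, and a
# composition checked against a performance constraint at the model level.
@dataclass
class Service:
    name: str
    expected_latency_ms: float

@dataclass
class Composition:
    name: str
    services: List[Service] = field(default_factory=list)

    def predicted_latency(self) -> float:
        # Naive aggregation for a sequential composition (an assumption).
        return sum(s.expected_latency_ms for s in self.services)

checkout = Composition("checkout", [
    Service("inventory", 40.0),
    Service("payment", 120.0),
    Service("notification", 25.0),
])
budget_ms = 200.0
print(checkout.predicted_latency() <= budget_ms)   # True: 185.0 ms within budget
```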

    UK utility data integration: overcoming schematic heterogeneity

    In this paper we discuss the syntactic, semantic and schematic issues which inhibit the integration of utility data in the UK. We then focus on the techniques employed within the VISTA project to overcome schematic heterogeneity. A Global Schema based architecture is employed. Although automated approaches to Global Schema definition were attempted, the heterogeneities of the sector were too great, so a manual approach to Global Schema definition was employed. The techniques used to define and subsequently map source utility data models to this schema are discussed in detail. To ensure a coherent integrated model, sub-domain and cross-domain validation issues are then highlighted. Finally, the proposed framework and data flow for schematic integration are introduced.
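
    The mapping of source utility data models onto a manually defined Global Schema can be pictured as per-source field mappings followed by cross-domain validation, as in the sketch below. The schema fields, source layouts and checks are invented for illustration and are not the VISTA Global Schema.

```python
# Illustrative global-schema mapping: each source utility data model is mapped
# field-by-field onto a shared set of attributes and then validated.
# Field names are assumptions, not the actual VISTA Global Schema.
GLOBAL_FIELDS = ["asset_id", "asset_type", "x", "y", "depth_m", "owner"]

SOURCE_MAPPINGS = {
    "water":       {"asset_id": "PIPE_REF", "asset_type": "TYPE",
                    "x": "EASTING", "y": "NORTHING",
                    "depth_m": "DEPTH", "owner": "COMPANY"},
    "electricity": {"asset_id": "CABLE_ID", "asset_type": "CLASS",
                    "x": "X_COORD", "y": "Y_COORD",
                    "depth_m": "BURIAL_DEPTH", "owner": "OPERATOR"},
}

def to_global(source: str, record: dict) -> dict:
    """Translate one source record into the global schema."""
    mapping = SOURCE_MAPPINGS[source]
    return {field: record.get(mapping[field]) for field in GLOBAL_FIELDS}

def validate(global_record: dict) -> list:
    """Simple cross-domain checks on the integrated record (illustrative)."""
    issues = []
    if global_record["depth_m"] is not None and global_record["depth_m"] < 0:
        issues.append("negative burial depth")
    if not global_record["owner"]:
        issues.append("missing owner")
    return issues

rec = {"CABLE_ID": "C-1043", "CLASS": "LV cable", "X_COORD": 432100.0,
       "Y_COORD": 387650.0, "BURIAL_DEPTH": 0.6, "OPERATOR": "GridCo"}
g = to_global("electricity", rec)
print(g, validate(g))
```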

    Composite load spectra for select space propulsion structural components

    The objective of this program is to develop generic load models, with multiple levels of progressive sophistication, to simulate the composite load spectra that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, liquid oxygen (LOX) posts and system ducting. These models will be developed using two independent approaches. The first approach consists of using state-of-the-art probabilistic methods to describe the individual loading conditions and combinations of these loading conditions in order to synthesize the composite load spectra simulation. The methodology required to combine the various individual load simulation models (hot-gas dynamic, vibration, instantaneous position, centrifugal field, etc.) into composite load spectra simulation models will be developed under this program. A computer code incorporating the various individual and composite load spectra models will be developed to construct the specific load model desired. The second approach, covered under the options portion of the contract, will consist of developing coupled models for composite load spectra simulation which combine the deterministic models for dynamic, acoustic, high-pressure and high-rotational-speed load simulation using statistically varying coefficients. These coefficients will then be determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. This report covers the efforts of the third year of the contract. The overall program status is that the turbine blade loads have been completed and implemented. The transfer duct loads are defined and are being implemented. The thermal loads for all components are defined, and the corresponding coding is being developed. A dynamic pressure load model is under development. The parallel work on the probabilistic methodology is essentially complete. The overall effort is being integrated into an expert system code developed specifically for this project.
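
    The first, probabilistic approach can be pictured as drawing each individual loading condition from an assumed distribution and combining the draws into one composite sample, as in the Monte Carlo sketch below. The load components, distributions and additive combination rule are placeholders, not the program's load models.

```python
import random

# Illustrative Monte Carlo composite-load sketch: each individual loading
# condition is drawn from an assumed distribution and the draws are combined
# into one composite sample. Distributions, units and the additive rule are
# placeholders, not the program's validated load models.
random.seed(1)

def sample_composite_load() -> float:
    hot_gas_dynamic = random.gauss(mu=850.0, sigma=60.0)
    vibration       = random.gauss(mu=120.0, sigma=35.0)
    centrifugal     = random.gauss(mu=400.0, sigma=15.0)
    return hot_gas_dynamic + vibration + centrifugal

samples = [sample_composite_load() for _ in range(10_000)]
mean = sum(samples) / len(samples)
peak = max(samples)
print(f"mean composite load ~ {mean:.1f}, simulated peak ~ {peak:.1f}")
```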

    Trusted Launch of Generic Virtual Machine Images in Public IaaS Environments

    Cloud computing and Infrastructure-as-a-Service (IaaS) are emerging and promising technologies; however, their faster-paced adoption is hampered by data security concerns. At the same time, Trusted Computing (TC) is experiencing a revived interest as a security mechanism for IaaS. We address the lack of an implementable mechanism to ensure the launch of a virtual machine (VM) instance on a trusted remote host. Relying on Trusted Platform Module operations such as binding and sealing to provide integrity guarantees for clients that require a trusted VM launch, we have designed a trusted launch protocol for generic VM images in public IaaS environments. We also present a proof-of-concept implementation of the protocol based on OpenStack, an open-source IaaS platform. The results provide a basis for the use of TC mechanisms within IaaS platforms and pave the way for a wider applicability of TC to IaaS security.
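
    A rough shape of such a launch decision, reduced to its control flow, is sketched below. All helper functions are hypothetical placeholders and do not correspond to real OpenStack or TPM library APIs; the actual protocol relies on TPM binding and sealing for its integrity guarantees.

```python
# Conceptual sketch of a trusted VM launch decision. All helpers are
# hypothetical placeholders; they do NOT correspond to real OpenStack or
# TPM library APIs.

def request_attestation_quote(host: str) -> dict:
    # Placeholder: in a real deployment this would obtain a signed statement
    # of the host's measured boot state from its Trusted Platform Module.
    return {"host": host, "measurements": "placeholder", "signature": "placeholder"}

def verify_against_known_good(quote: dict) -> bool:
    # Placeholder: compare reported measurements with a whitelist of
    # known-good platform configurations.
    return bool(quote.get("signature"))

def launch_vm_if_trusted(image_id: str, host: str) -> str:
    """Launch the VM only if the target host attests to a trusted state."""
    quote = request_attestation_quote(host)
    if not verify_against_known_good(quote):
        raise RuntimeError(f"host {host} failed attestation; refusing to launch")
    # Placeholder for binding the launch token to the verified host so that
    # only that host can decrypt it (the role TPM sealing/binding plays above).
    return f"vm-instance-of-{image_id}-on-{host}"

print(launch_vm_if_trusted("generic-image-42", "compute-node-3"))
```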