30,151 research outputs found

    Developing a distributed electronic health-record store for India

    The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of India's more than one billion citizens.

    Evaluation of Cognitive Architectures for Cyber-Physical Production Systems

    Cyber-physical production systems (CPPS) integrate physical and computational resources, enabled by increasingly available sensors and processing power. This makes it possible to use data to create additional benefits such as condition monitoring or optimization. These capabilities can lead to cognition, such that the system is able to adapt independently to changing circumstances by learning from additional sensor information. Developing a reference architecture for the design of CPPS and standardizing machine and software interfaces is crucial to enable compatible data usage across different machine models and vendors. This paper analyzes existing reference architectures regarding their cognitive abilities, based on requirements derived from three different use cases. The results of the evaluation of the reference architectures, which include two instances stemming from the field of cognitive science, reveal a gap in the applicability of the architectures regarding generalizability and level of abstraction. While reference architectures from the field of automation are suitable for addressing use-case-specific requirements, they do not address the general requirements, especially with respect to adaptability; the examples from the field of cognitive science, in contrast, are well suited to reaching a high level of adaptation and cognition. It is desirable to merge the advantages of both classes of architectures to address the challenges of CPPS in Industrie 4.0.
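
    As a rough illustration of the kind of evaluation the abstract describes, the sketch below rates a few hypothetical reference architectures against use-case-derived requirements; the architecture names, requirement list, ordinal scale and scores are assumptions for demonstration, not the paper's data.

        # Illustrative only: rating assumed reference architectures against
        # use-case-derived requirements on an assumed 0-2 scale
        # (0 = not addressed, 1 = partially addressed, 2 = fully addressed).
        requirements = ["condition monitoring", "optimization",
                        "adaptability", "level of abstraction"]

        ratings = {
            "automation architecture A": {"condition monitoring": 2, "optimization": 2,
                                          "adaptability": 0, "level of abstraction": 1},
            "automation architecture B": {"condition monitoring": 2, "optimization": 1,
                                          "adaptability": 1, "level of abstraction": 1},
            "cognitive architecture C":  {"condition monitoring": 1, "optimization": 1,
                                          "adaptability": 2, "level of abstraction": 2},
        }

        def coverage(name):
            """Fraction of requirements at least partially addressed by an architecture."""
            scores = ratings[name]
            return sum(1 for r in requirements if scores[r] > 0) / len(requirements)

        for name in ratings:
            print(f"{name}: coverage {coverage(name):.0%}")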

    Conceptual graph-based knowledge representation for supporting reasoning in African traditional medicine

    Although African patients use both conventional (modern) and traditional healthcare simultaneously, it has been proven that 80% of people rely on African traditional medicine (ATM). ATM includes medical activities stemming from practices, customs and traditions which were integral to the distinctive African cultures. It is based mainly on the oral transfer of knowledge, with the risk of losing critical knowledge. Moreover, practices differ according to the regions and the availability of medicinal plants. Therefore, it is necessary to compile tacit, disseminated and complex knowledge from various Tradi-Practitioners (TP) in order to determine interesting patterns for treating a given disease. Knowledge engineering methods for traditional medicine are useful to model suitably complex information needs, formalize the knowledge of domain experts and highlight the effective practices for their integration into conventional medicine. The work described in this paper presents an approach which addresses two issues. First, it aims at proposing a formal representation model of ATM knowledge and practices to facilitate their sharing and reuse. Second, it aims at providing a visual reasoning mechanism for selecting the best available procedures and medicinal plants to treat diseases. The approach is based on the use of the Delphi method for capturing knowledge from various experts, which necessitates reaching a consensus. Conceptual graph formalism is used to model ATM knowledge with visual reasoning capabilities and processes. Nested conceptual graphs are used to visually express the semantic meaning of Computation Tree Logic (CTL) constructs that are useful for the formal specification of temporal properties of ATM domain knowledge. Our approach has the advantage of mitigating knowledge loss with conceptual development assistance to improve the quality of ATM care (medical diagnosis and therapeutics) as well as patient safety (drug monitoring).
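
    To make the representation concrete, the sketch below models a tiny conceptual graph of typed concept nodes linked by named relations and queries it for the treatments of a disease. It is a simplified illustration of conceptual-graph-style modeling, not the authors' formalism, and the concept and relation names are assumptions rather than ATM data.

        # Minimal sketch of a conceptual-graph-like structure: typed concepts
        # connected by named relations, with a simple query over the relations.
        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class Concept:
            type: str      # e.g. "Plant", "Disease", "Procedure"
            referent: str  # e.g. a specific plant or disease name

        @dataclass
        class ConceptualGraph:
            # Each relation is a (name, source concept, target concept) triple.
            relations: list = field(default_factory=list)

            def add(self, name, source, target):
                self.relations.append((name, source, target))

            def treatments_for(self, disease_name):
                """Concepts linked to the given disease by a 'treats' relation."""
                return [src for name, src, dst in self.relations
                        if name == "treats" and dst.referent == disease_name]

        # Illustrative (not medically validated) facts.
        g = ConceptualGraph()
        plant = Concept("Plant", "example plant")
        disease = Concept("Disease", "example disease")
        procedure = Concept("Procedure", "decoction of leaves")
        g.add("treats", plant, disease)
        g.add("prepared_as", plant, procedure)

        print([c.referent for c in g.treatments_for("example disease")])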

    Compositional Falsification of Cyber-Physical Systems with Machine Learning Components

    Cyber-physical systems (CPS), such as automotive systems, are starting to include sophisticated machine learning (ML) components. Their correctness, therefore, depends on properties of the inner ML modules. While learning algorithms aim to generalize from examples, they are only as good as the examples provided, and recent efforts have shown that they can produce inconsistent output under small adversarial perturbations. This raises the question: can the output of learning components lead to a failure of the entire CPS? In this work, we address this question by formulating it as a problem of falsifying signal temporal logic (STL) specifications for CPS with ML components. We propose a compositional falsification framework in which a temporal logic falsifier and a machine learning analyzer cooperate with the aim of finding falsifying executions of the considered model. The efficacy of the proposed technique is shown on an automatic emergency braking system model with a perception component based on deep neural networks.
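
    As a rough illustration of STL falsification in general (not the compositional framework proposed in the paper), the sketch below randomly searches a toy closed-loop braking model for an execution whose robustness with respect to a simple "always keep a safe distance" property is negative; the model dynamics, signal and parameters are assumptions.

        import random

        def robustness_always_ge(signal, threshold):
            """Robustness of 'G (signal >= threshold)': worst-case margin over the trace."""
            return min(value - threshold for value in signal)

        def simulate(initial_distance, initial_speed):
            """Toy stand-in for a CPS simulator: distance to a lead obstacle over time."""
            distance, speed = initial_distance, initial_speed
            trace = []
            for _ in range(100):                           # 10 s at a 0.1 s step
                braking = 5.0 if distance < 30.0 else 0.0  # crude perception + controller proxy
                speed = max(0.0, speed - braking * 0.1)
                distance -= speed * 0.1
                trace.append(distance)
            return trace

        def falsify(trials=1000, safe_distance=1.0):
            """Random search for an input violating 'always distance >= safe_distance'."""
            for _ in range(trials):
                d0 = random.uniform(20.0, 80.0)   # initial distance (m)
                v0 = random.uniform(5.0, 30.0)    # initial speed (m/s)
                rho = robustness_always_ge(simulate(d0, v0), safe_distance)
                if rho < 0:
                    return d0, v0, rho            # falsifying execution found
            return None

        print(falsify())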

    Requirement verification in simulation-based automation testing

    The emergence of the Industrial Internet results in an increasing number of complicated temporal interdependencies between automation systems and the processes to be controlled. There is a need for verification methods that scale better than formal verification methods and that are more exact than testing. Simulation-based runtime verification is proposed as such a method, and an application of Metric Temporal Logic is presented as a contribution. The practical scalability of the proposed approach is validated against a production process designed by an industrial partner, resulting in the discovery of requirement violations.
    Comment: 4 pages, 2 figures. Added IEEE copyright notice
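
    For illustration only, the sketch below checks a bounded-response requirement in the spirit of Metric Temporal Logic over a finite simulation trace; the signal names, bound and trace are assumed, and this is not the paper's implementation.

        def check_bounded_response(trace, trigger, response, bound):
            """Indices at which G (trigger -> F_[0,bound] response) is violated on a finite trace."""
            violations = []
            for i, state in enumerate(trace):
                if state.get(trigger):
                    window = trace[i:i + bound + 1]
                    if not any(s.get(response) for s in window):
                        violations.append(i)
            return violations

        # Illustrative trace of a controlled process: one dict of booleans per time step.
        trace = [
            {"start": True,  "done": False},
            {"start": False, "done": False},
            {"start": False, "done": True},   # responded within the bound
            {"start": True,  "done": False},
            {"start": False, "done": False},
            {"start": False, "done": False},  # no response in time -> violation at index 3
        ]

        print(check_bounded_response(trace, "start", "done", bound=2))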

    Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis

    Even with impressive advances in automated formal methods, certain problems in system verification and synthesis remain challenging. Examples include the verification of quantitative properties of software involving constraints on timing and energy consumption, and the automatic synthesis of systems from specifications. The major challenges include environment modeling, incompleteness in specifications, and the complexity of the underlying decision problems. This position paper proposes sciduction, an approach to tackle these challenges by integrating inductive inference, deductive reasoning, and structure hypotheses. Deductive reasoning, which leads from general rules or concepts to conclusions about specific problem instances, includes techniques such as logical inference and constraint solving. Inductive inference, which generalizes from specific instances to yield a concept, includes algorithmic learning from examples. Structure hypotheses are used to define the class of artifacts, such as invariants or program fragments, generated during verification or synthesis. Sciduction constrains inductive and deductive reasoning using structure hypotheses and actively combines the two: for instance, deductive techniques generate examples for learning, and inductive reasoning is used to guide the deductive engines. We illustrate this approach with three applications: (i) timing analysis of software; (ii) synthesis of loop-free programs; and (iii) controller synthesis for hybrid systems. Some future applications are also discussed.
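
    The sketch below illustrates the inductive/deductive interplay in a heavily simplified form: a structure hypothesis restricts candidates to small linear expressions, an inductive step fits a candidate to the examples seen so far, and a brute-force check stands in for a deductive engine that returns counterexamples. The target specification and search ranges are assumptions, not the paper's case studies.

        def spec(x):
            """Black-box specification to match on the domain 0..99 (illustrative)."""
            return 3 * x + 7

        def induce(examples):
            """Inductive step: first (a, b) in the structure class consistent with all examples."""
            for a in range(-10, 11):
                for b in range(-10, 11):
                    if all(a * x + b == y for x, y in examples):
                        return a, b
            return None

        def deduce(candidate):
            """Deductive step: exhaustive check standing in for a solver; returns a counterexample or None."""
            a, b = candidate
            for x in range(100):
                if a * x + b != spec(x):
                    return x, spec(x)
            return None

        # Counterexample-guided loop: induce a candidate, deduce a counterexample, repeat.
        examples = [(0, spec(0))]
        while True:
            candidate = induce(examples)
            counterexample = deduce(candidate)
            if counterexample is None:
                break
            examples.append(counterexample)

        print("synthesized:", candidate, "from", len(examples), "examples")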