
    Cooperation in Industrial Systems

    ARCHON is an ongoing ESPRIT II project (P-2256) which is approximately halfway through its five-year duration. It is concerned with defining and applying techniques from the area of Distributed Artificial Intelligence to the development of real-size industrial applications. Such techniques enable multiple problem solvers (e.g. expert systems, databases and conventional numerical software systems) to communicate and cooperate with each other to improve both their individual problem-solving behavior and the behavior of the community as a whole. This paper outlines the niche of ARCHON in the Distributed AI world and provides an overview of the philosophy and architecture of our approach, the essence of which is to be both general (applicable to the domain of industrial process control) and powerful enough to handle real-world problems.

    Meta-heuristic algorithms in car engine design: a literature survey

    Meta-heuristic algorithms are often inspired by natural phenomena, including the evolution of species in Darwinian natural selection theory, ant behaviors in biology, flocking behaviors of some birds, and annealing in metallurgy. Due to their great potential in solving difficult optimization problems, meta-heuristic algorithms have found their way into automobile engine design. Different optimization problems arise in different areas of car engine management, including calibration, control systems, fault diagnosis, and modeling. In this paper, we review the state-of-the-art applications of different meta-heuristic algorithms in engine management systems. The review covers a wide range of research, including the application of meta-heuristic algorithms in engine calibration, optimizing engine control systems, engine fault diagnosis, and optimizing different parts of engines and modeling. The meta-heuristic algorithms reviewed in this paper include evolutionary algorithms, evolution strategies, evolutionary programming, genetic programming, differential evolution, estimation of distribution algorithms, ant colony optimization, particle swarm optimization, memetic algorithms, and artificial immune systems.
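    Of the algorithms the survey covers, particle swarm optimization is among the simplest to state. Below is a minimal, self-contained sketch (parameter values and the test function are our own illustrative choices, not taken from the survey): each particle moves under inertia plus random attraction toward its personal best and the swarm's best position.

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)   # simple convex test function
best, best_val = pso(sphere)
```

    In an engine-calibration setting, `f` would instead be an expensive objective such as fuel consumption or emissions evaluated from a simulation or test-bench model.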

    A framework for effective management of condition based maintenance programs in the context of industrial development of E-Maintenance strategies

    CBM (Condition-Based Maintenance) solutions are increasingly present in industrial systems due to two main circumstances: an unprecedented, rapid evolution in the capture and analysis of data, and a significant cost reduction in supporting technologies. CBM programs in industrial systems can become extremely complex, especially when considering the effective introduction of new capabilities provided by the PHM (Prognostics and Health Management) and E-maintenance disciplines. In this scenario, any CBM solution involves the management of numerous technical aspects that the maintenance manager needs to understand in order to implement the solution properly and effectively, in line with the company's strategy. This paper provides a comprehensive representation of the key components of a generic CBM solution, presented as a framework, or supporting structure, for effective management of CBM programs. The concept "symptom of failure", its corresponding analysis techniques (introduced by ISO 13379-1 and linked with RCM/FMEA analysis), and other international standards for CBM open-software application development (for instance, ISO 13374 and OSA-CBM) are used in the development of the framework. An original template has been developed, adopting the formal structure of RCM analysis templates, to integrate the information of the PHM techniques used to capture failure mode behaviour and to manage maintenance. Finally, a case study describes the framework using the referred template. Funding: Gobierno de Andalucía P11-TEP-7303 M
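    To make the idea of an RCM-style template concrete, here is a minimal data-structure sketch. The field names (failure mode, symptom, descriptor, technique, alarm limit) are our own guesses at the kind of columns such a template would carry, in the spirit of ISO 13379-1; they are not the paper's actual template.

```python
from dataclasses import dataclass, field

@dataclass
class SymptomRow:
    """One row linking a failure mode to how it is monitored."""
    failure_mode: str
    symptom: str        # observable evidence of the failure mode
    descriptor: str     # measured quantity tracking the symptom
    technique: str      # PHM / condition-monitoring technique applied
    alarm_limit: float  # threshold that triggers a maintenance action

@dataclass
class CbmTemplate:
    asset: str
    rows: list = field(default_factory=list)

    def add(self, row: SymptomRow) -> None:
        self.rows.append(row)

    def techniques_for(self, failure_mode: str) -> list:
        """List the monitoring techniques assigned to a failure mode."""
        return [r.technique for r in self.rows if r.failure_mode == failure_mode]

tmpl = CbmTemplate("pump P-101")
tmpl.add(SymptomRow("bearing wear", "increased vibration", "RMS velocity (mm/s)",
                    "vibration spectrum analysis", 7.1))
```

    The point of such a structure is that maintenance managers can query, per failure mode, which PHM techniques and alarm limits the CBM program relies on.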

    The DLV System for Knowledge Representation and Reasoning

    This paper presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to Δ^P_3-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of DLV, and by deriving new complexity results we chart a complete picture of the complexity of this language and important fragments thereof. Furthermore, we illustrate the general architecture of the DLV system, which has been influenced by these results. As for applications, we overview application front-ends which have been developed on top of DLV to solve specific knowledge representation tasks, and we briefly describe the main international projects investigating the potential of the system for industrial exploitation. Finally, we report on thorough experimentation and benchmarking, which has been carried out to assess the efficiency of the system. The experimental results confirm the solidity of DLV and highlight its potential for emerging application areas like knowledge management and information integration. Comment: 56 pages, 9 figures, 6 tables
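    As an illustration of the declarative guess-and-check style that disjunctive datalog supports, here is a standard encoding of graph 3-colorability in DLV-style syntax (the predicate names are our own; the input graph is given as `node`/`edge` facts):

```
% Guess: each node gets exactly one of three colors (disjunctive rule).
col(X, red) v col(X, green) v col(X, blue) :- node(X).

% Check: an integrity constraint forbids adjacent nodes sharing a color.
:- edge(X, Y), col(X, C), col(Y, C).
```

    Each answer set of this program corresponds to a valid 3-coloring; adding weak constraints would turn such a check into an optimization problem, as the abstract describes.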

    Hypothesis exploration with visualization of variance.

    Background: The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes, exploring whether they are linked to syndromes including ADHD, bipolar disorder, and schizophrenia. An aim of the consortium was to move from traditional categorical approaches to psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics (the wide-scale, systematic study of phenotypes) to neuropsychiatry research. Results: This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. One of these methods, called VISOVA, combines visualization with analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles (patterns of values across phenotypes) that characterize groups. Visualization enables screening and refinement of hypotheses about the variance structure of sets of phenotypes. Conclusions: The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports 'natural selection' on a pool of hypotheses and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics.
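    The ANOVA machinery underlying VISOVA reduces, in the one-way case, to comparing between-group and within-group variance. A minimal sketch of that computation (a generic F statistic, not the CNP/ViVA code):

```python
from statistics import mean

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: ratio of between-group to within-group
    mean squares. `groups` is a list of lists of observations."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = mean(v for g in groups for v in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

    A large F indicates that group membership (e.g. a diagnostic category) explains much of the variance in a phenotype, which is exactly the kind of structure the visualization is meant to surface.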

    A General Approach for Securely Querying and Updating XML Data

    Over the past years, several works have proposed access control models for XML data in which only read-access rights over non-recursive DTDs are considered. Few works have studied access rights for updates. In this paper, we present a general model for specifying access control on XML data in the presence of the update operations of the W3C XQuery Update Facility. Our approach for enforcing such update specifications is based on the notion of query rewriting, where each update operation defined over an arbitrary DTD (recursive or not) is rewritten into a safe one, so that it is evaluated only over the XML data the user is allowed to update. In the second part of this report, we investigate the security of XML updating in the presence of read-access rights specified by security views. For an XML document, a security view represents, for each class of users, all and only the parts of the document those users are able to see. We show that an update operation defined over a security view can cause disclosure of sensitive data hidden by this view if it is not thoroughly rewritten with respect to both read and update access rights. Finally, we propose a security-view-based approach for securely updating XML in order to preserve the confidentiality and integrity of XML data. Comment: No. RR-7870 (2012)
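    The flavor of the rewriting idea can be shown in miniature: an unrestricted "delete all orders" request is rewritten so that it only touches nodes the requesting user is permitted to modify. This toy uses Python's `xml.etree` and an `owner` attribute as a stand-in access policy; the paper's actual mechanism rewrites XQuery Update expressions over DTDs, which is far more general.

```python
import xml.etree.ElementTree as ET

DOC = """<orders>
  <order id="1" owner="alice"/>
  <order id="2" owner="bob"/>
  <order id="3" owner="alice"/>
</orders>"""

def safe_delete(root, tag, user):
    """Rewrite 'delete all <tag> elements' into a safe update that removes
    only the elements the user is authorized to modify (here: @owner match)."""
    for el in list(root):                # copy the child list before mutating
        if el.tag == tag and el.get("owner") == user:
            root.remove(el)

root = ET.fromstring(DOC)
safe_delete(root, "order", "alice")      # alice's unrestricted delete, made safe
remaining = [el.get("id") for el in root]
```

    After the rewritten delete, only bob's order survives: the update was silently restricted to the updatable region instead of being rejected outright.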

    Prospective memory impairments in Alzheimer's Disease and behavioral variant frontotemporal dementia: Clinical and neural correlates

    BACKGROUND: Prospective memory (PM) refers to a future-oriented form of memory in which the individual must remember to execute an intended action either at a future point in time (Time-based) or in response to a specific event (Event-based). Lapses in PM are commonly exhibited in neurodegenerative disorders including Alzheimer's disease (AD) and frontotemporal dementia (FTD); however, the neurocognitive mechanisms driving these deficits remain unknown. OBJECTIVE: To investigate the clinical and neural correlates of Time- and Event-based PM disruption in AD and behavioral-variant FTD (bvFTD). METHODS: Twelve AD, 12 bvFTD, and 12 healthy older Control participants completed a modified version of the Cambridge Prospective Memory test, which examines Time- and Event-based aspects of PM. All participants completed a standard neuropsychological assessment and underwent whole-brain structural MRI. RESULTS: AD and bvFTD patients displayed striking impairments across Time- and Event-based PM relative to Controls; however, Time-based PM was disproportionately affected in the AD group. Episodic memory dysfunction and hippocampal atrophy were found to correlate strongly with PM integrity in both patient groups; however, dissociable neural substrates were also evident for PM performance across dementia syndromes. CONCLUSION: Our study reveals the multifaceted nature of PM dysfunction in neurodegenerative disorders, and suggests common and dissociable neurocognitive mechanisms underpinning these deficits in each patient group. Future studies of PM disturbance in dementia syndromes will be crucial for the development of successful interventions to improve functional independence in the patient's daily life.

    A review of clinical decision-making: Models and current research

    Aims and objectives: The aim of this paper was to review the current literature with respect to clinical decision-making models and the educational application of models to clinical practice. This was achieved by exploring the function and related research of the three available models of clinical decision making: the information-processing model, the intuitive-humanist model and the multidimensional clinical decision-making model. Background: Clinical decision-making is a unique process that involves the interplay between knowledge of pre-existing pathological conditions, explicit patient information, nursing care and experiential learning. Historically, two models of clinical decision making are recognised in the literature: the information-processing model and the intuitive-humanist model. The usefulness and application of both models have been examined in relation to the provision of nursing care and care-related outcomes. More recently, a third model of clinical decision making has been proposed. This new multidimensional model contains elements of the information-processing model but also examines patient-specific elements that are necessary for cue and pattern recognition. Design: Literature review. Methods: Evaluation of the literature generated from the MEDLINE, CINAHL, OVID, PUBMED and EBSCO systems and the Internet from 1980 to November 2005.

    Automated Observability Investigation of Analog Electronic Circuits using SPICE

    In the present paper, a computer-aided approach to fault observability investigation of linear analog circuits is developed. The method is based on sensitivity investigation of the test characteristics in the frequency domain. The test frequencies are selected by maximizing the sensitivity of the magnitude of the test characteristics. By postprocessing the simulation results using macro-definitions in the graphical analyzer Probe, a fault observability investigation of the circuit is performed. A number of sensitivity measures are defined in Probe for observability investigation of multiple faults using pre-defined macro-definitions. The sensitivity of S-parameters is obtained in order to investigate fault observability at RF.
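    The core step, picking test frequencies where the magnitude response is most sensitive to a component value, can be sketched numerically. This toy uses a first-order RC low-pass and central finite differences; the circuit, component values, and frequency grid are our own illustrative choices, a stand-in for the symbolic sensitivity measures the paper defines in Probe.

```python
import math

def mag(f_hz, R, C):
    """Magnitude of a first-order RC low-pass transfer function."""
    w = 2 * math.pi * f_hz
    return 1.0 / math.sqrt(1.0 + (w * R * C) ** 2)

def sensitivity(f_hz, R, C, rel_step=1e-6):
    """Normalized sensitivity S = (R/|H|) * d|H|/dR, estimated by a
    central finite difference on the component value R."""
    dR = R * rel_step
    d = (mag(f_hz, R + dR, C) - mag(f_hz, R - dR, C)) / (2 * dR)
    return (R / mag(f_hz, R, C)) * d

R, C = 1e3, 100e-9                  # 1 kOhm, 100 nF -> corner near 1.6 kHz
freqs = [100, 1_000, 10_000]        # candidate test frequencies (Hz)
best_f = max(freqs, key=lambda f: abs(sensitivity(f, R, C)))
```

    For this circuit the sensitivity magnitude grows with frequency, so the highest candidate frequency is selected; a deviation in R is therefore most observable there, which is the selection principle the paper automates.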