
    Derivation of diagnostic models based on formalized process knowledge

    © IFAC. Industrial systems are vulnerable to faults. Early and accurate fault detection and diagnosis in production systems can minimize downtime, increase the safety of plant operation, and reduce manufacturing costs. Knowledge- and model-based approaches to automated fault detection and diagnosis have proven suitable for fault cause analysis across a broad range of industrial processes and research case studies. However, implementing these methods demands a complex and error-prone development phase, chiefly due to the extensive effort required to derive and validate the models. To reduce this modeling complexity, this paper presents a structured causal modeling approach that supports the derivation of diagnostic models from formalized process knowledge. The method exploits the Formalized Process Description Guideline VDI/VDE 3682 to establish causal relations among key process variables, extends the Signed Digraph model with fuzzy set theory to allow more accurate causality descriptions, and proposes a representation of the resulting diagnostic model in CAEX/AutomationML targeting dynamic data access, portability, and seamless information exchange.
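
    As a rough illustration of the kind of model the abstract describes, the Python sketch below implements a toy fuzzy signed digraph: each edge carries a sign (direction of influence) and a fuzzy strength in [0, 1], and a deviation is propagated along causal paths. The variable names, weights, and min-composition propagation rule are illustrative assumptions, not the paper's actual formulation.

    # Hypothetical fuzzy signed digraph (SDG) for fault propagation.
    # Each edge maps (source, target) to (sign, fuzzy strength), where the
    # sign says whether a deviation propagates in the same (+1) or the
    # opposite (-1) direction, and the strength attenuates the effect.
    edges = {
        ("valve_position", "flow_rate"): (+1, 0.9),
        ("flow_rate", "tank_level"): (+1, 0.8),
        ("tank_level", "outlet_pressure"): (+1, 0.6),
        ("coolant_flow", "reactor_temp"): (-1, 0.7),
    }

    def propagate(source, deviation, strength=1.0, visited=None):
        """Propagate a signed deviation (+1 high, -1 low) through the graph,
        attenuating its fuzzy strength along each edge (min-composition)."""
        visited = visited or set()
        effects = {source: (deviation, strength)}
        for (u, v), (sign, w) in edges.items():
            if u == source and v not in visited:
                visited.add(v)
                effects.update(
                    propagate(v, deviation * sign, min(strength, w), visited)
                )
        return effects

    # Example: a stuck-open valve (positive deviation) and its predicted effects.
    for var, (dev, s) in propagate("valve_position", +1).items():
        print(f"{var}: {'high' if dev > 0 else 'low'} (strength {s:.1f})")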

    Learning to Localize and Align Fine-Grained Actions to Sparse Instructions

    Automatic generation of textual video descriptions that are time-aligned with video content is a long-standing goal in computer vision. The task is challenging due to the difficulty of bridging the semantic gap between the visual and natural-language domains. This paper addresses the task of automatically generating an alignment between a set of instructions and a first-person video demonstrating an activity. The sparseness and ambiguity of written instructions create significant alignment challenges. The key to our approach is the use of egocentric cues to generate a concise set of action proposals, which are then matched to recipe steps using object recognition and computational-linguistic techniques. We obtain promising results on both the Extended GTEA Gaze+ dataset and the Bristol Egocentric Object Interactions Dataset.
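
    To make the matching step concrete, the Python sketch below aligns a sequence of action proposals to instruction steps with a simple monotonic dynamic program over word overlap. The toy data, the Jaccard similarity, and the monotonic DP are plain stand-ins chosen for illustration; the paper's actual matching relies on object recognition and computational-linguistic features.

    # Hypothetical instruction steps and action proposals, each reduced to a
    # bag of verb/object labels (assumed inputs for this sketch).
    recipe_steps = [
        {"take", "bread"},
        {"spread", "peanut", "butter"},
        {"spread", "jam"},
        {"close", "sandwich"},
    ]
    proposals = [
        {"bread", "take"},
        {"knife", "peanut", "butter", "spread"},
        {"knife", "jam", "spread"},
        {"bread", "close"},
    ]

    def similarity(step, prop):
        """Jaccard overlap between step words and proposal labels."""
        return len(step & prop) / len(step | prop)

    def align(steps, props):
        """Assign each proposal to one step so step indices never decrease,
        maximizing total similarity (dynamic program over prefixes)."""
        n, m = len(steps), len(props)
        dp = [[float("-inf")] * m for _ in range(n)]
        back = [[0] * m for _ in range(n)]
        for i in range(n):
            dp[i][0] = similarity(steps[i], props[0])
        for j in range(1, m):
            for i in range(n):
                # Best predecessor: proposal j-1 assigned to any step <= i.
                prev_i = max(range(i + 1), key=lambda k: dp[k][j - 1])
                dp[i][j] = similarity(steps[i], props[j]) + dp[prev_i][j - 1]
                back[i][j] = prev_i
        # Backtrack from the best final step.
        i = max(range(n), key=lambda k: dp[k][m - 1])
        assignment = [0] * m
        for j in range(m - 1, -1, -1):
            assignment[j] = i
            i = back[i][j]
        return assignment

    for j, i in enumerate(align(recipe_steps, proposals)):
        print(f"proposal {j} -> step {i}")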