
    Derivation of diagnostic models based on formalized process knowledge

    Industrial systems are vulnerable to faults. Early and accurate fault detection and diagnosis in production systems can minimize downtime, increase the safety of plant operation, and reduce manufacturing costs. Knowledge- and model-based approaches to automated fault detection and diagnosis have been shown to be suitable for fault cause analysis in a broad range of industrial processes and research case studies. However, implementing these methods demands a complex and error-prone development phase, mainly because of the extensive effort required to derive and validate the models. To reduce this modeling complexity, this paper presents a structured causal modeling approach that supports the derivation of diagnostic models from formalized process knowledge. The method exploits the Formalized Process Description Guideline VDI/VDE 3682 to establish causal relations among key process variables, extends the Signed Digraph model with fuzzy set theory to allow more accurate descriptions of causality, and proposes a representation of the resulting diagnostic model in CAEX/AutomationML, targeting dynamic data access, portability, and seamless information exchange.
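
    The combination of a signed digraph with fuzzy causality strengths can be illustrated with a small sketch. The process variables, membership values, and propagation rule below are invented for illustration and are not the model defined in the paper.

        # Illustrative sketch: a signed digraph whose edges carry a sign (+1/-1)
        # and a fuzzy membership value in [0, 1] for the causality strength.
        # All variable names and numbers are hypothetical.
        edges = {
            ("pump_speed", "flow_rate"):      (+1, 0.9),
            ("valve_blockage", "flow_rate"):  (-1, 0.7),
            ("flow_rate", "tank_level"):      (+1, 0.8),
            ("tank_level", "overflow_alarm"): (+1, 0.6),
        }

        def propagate(source, deviation, threshold=0.3):
            """Trace candidate effects of a deviation (+1 high / -1 low) at `source`,
            keeping only causal paths whose combined fuzzy strength exceeds `threshold`."""
            results, frontier = {}, [(source, deviation, 1.0)]
            while frontier:
                node, dev, strength = frontier.pop()
                for (cause, effect), (sign, mu) in edges.items():
                    if cause == node and strength * mu >= threshold:
                        frontier.append((effect, dev * sign, strength * mu))
                        results[effect] = (dev * sign, round(strength * mu, 2))
            return results

        print(propagate("valve_blockage", +1))
        # {'flow_rate': (-1, 0.7), 'tank_level': (-1, 0.56), 'overflow_alarm': (-1, 0.34)}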

    Tracking Data-Flow with Open Closure Types

    Type systems hide data that is captured by function closures in function types. In most cases this is a beneficial design that favors simplicity and compositionality. However, some applications require explicit information about the data that is captured in closures. This paper introduces open closure types, that is, function types that are decorated with type contexts. They are used to track data-flow from the environment into the function closure. A simply-typed lambda calculus is used to study the properties of the type theory of open closure types. A distinctive feature of this type theory is that an open closure type of a function can vary in different type contexts. To present an application of the type theory, it is shown that a type derivation establishes a simple non-interference property in the sense of information-flow theory. A publicly available prototype implementation of the system can be used to experiment with type derivations for example programs. Comment: Logic for Programming Artificial Intelligence and Reasoning (2013).
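
    As a loose, dynamic analogue of the information that open closure types make explicit statically (this is not the type system of the paper), Python can report which variables a closure has captured from its environment; the `make_adder` example is invented.

        # Illustrative only: inspect which variables a closure captured from its
        # defining environment, i.e. the data-flow from environment into closure.
        import inspect

        def make_adder(secret, offset):
            def add(x):
                return x + offset   # captures `offset`, but not `secret`
            return add

        add = make_adder(secret=42, offset=3)
        print(inspect.getclosurevars(add).nonlocals)   # {'offset': 3}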

    A Hierarchy of Information Quantities for Finite Block Length Analysis of Quantum Tasks

    We consider two fundamental tasks in quantum information theory: data compression with quantum side information and randomness extraction against quantum side information. We characterize these tasks for general sources using so-called one-shot entropies. We show that these characterizations, in contrast to earlier results, enable us to derive tight second order asymptotics for these tasks in the i.i.d. limit. More generally, our derivation establishes a hierarchy of information quantities that can be used to investigate information theoretic tasks in the quantum domain: the one-shot entropies most accurately describe an operational quantity, yet they tend to be difficult to calculate for large systems. We show that they asymptotically agree up to logarithmic terms with entropies related to the quantum and classical information spectrum, which are easier to calculate in the i.i.d. limit. Our techniques also naturally yield bounds on operational quantities for finite block lengths. Comment: See also arXiv:1208.1400, which independently derives part of our result: the second order asymptotics for binary hypothesis testing.
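
    As an example of the kind of expansion meant by "tight second order asymptotics", the binary hypothesis testing result mentioned in the comment is commonly stated in the following form (quoted here from memory; see the paper and arXiv:1208.1400 for the precise statement and conditions):

        D_H^{\varepsilon}\!\left(\rho^{\otimes n}\,\middle\|\,\sigma^{\otimes n}\right)
            = n\,D(\rho\|\sigma) + \sqrt{n\,V(\rho\|\sigma)}\,\Phi^{-1}(\varepsilon) + O(\log n),

    where D_H^{\varepsilon} is the hypothesis testing relative entropy, D the quantum relative entropy, V the relative entropy variance, and \Phi^{-1} the inverse of the standard normal cumulative distribution function.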

    Partitioning of Distributed MIMO Systems based on Overhead Considerations

    Distributed Multiple-Input Multiple-Output (D-MIMO) networks are a promising enabler for addressing the challenges of high traffic demand in future wireless networks. A limiting factor that is directly related to the performance of these systems is the overhead signaling required for distributing data and control information among the network elements. In this paper, the concept of orthogonal partitioning is extended to D-MIMO networks employing joint multi-user beamforming, aiming to maximize the effective sum-rate, i.e., the actually transmitted information data. Furthermore, in order to comply with practical requirements, the overhead subframe size is considered to be constrained. In this context, a novel formulation of constrained orthogonal partitioning is introduced as an elegant knapsack optimization problem, which allows the derivation of quick and accurate solutions. Several numerical results give insight into the capabilities of D-MIMO networks and the actual sum-rate scaling under overhead constraints. Comment: IEEE Wireless Communications Letters.
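
    How constrained partitioning maps onto a knapsack problem can be sketched with a toy example that selects candidate partitions to maximize effective sum-rate under an overhead budget; the item values, overhead costs, and the plain 0/1 treatment of partitions are illustrative assumptions, not the formulation of the paper.

        # Illustrative 0/1 knapsack sketch: choose candidate partitions that
        # maximize effective sum-rate subject to an overhead subframe budget.
        def knapsack(items, capacity):
            """items: list of (sum_rate, overhead_cost); capacity: overhead budget (integer units)."""
            best = [0.0] * (capacity + 1)
            for rate, cost in items:
                for c in range(capacity, cost - 1, -1):
                    best[c] = max(best[c], best[c - cost] + rate)
            return best[capacity]

        # Hypothetical candidates: (effective sum-rate in bit/s/Hz, overhead symbols)
        candidate_partitions = [(3.2, 4), (2.1, 2), (4.0, 6), (1.5, 1)]
        print(round(knapsack(candidate_partitions, capacity=7), 2))   # 6.8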

    Deriving physical parameters of unresolved star clusters. II. The degeneracies of age, mass, extinction, and metallicity

    This paper is the second of a series that investigates the stochasticity and degeneracy problems that hinder the derivation of the age, mass, extinction, and metallicity of unresolved star clusters in external galaxies when broad-band photometry is used. While Paper I concentrated on deriving the age, mass, and extinction of star clusters for one fixed metallicity, here we derive these parameters in the case when metallicity is left free to vary. The results were obtained using several different filter systems (UBVRI, UBVRIJHK, GALEX+UBVRI), which made it possible to optimally reduce the different degeneracies between the cluster physical parameters. The age, mass, and extinction of a sample of artificial star clusters were derived by comparing their broad-band integrated magnitudes with the magnitudes of a large grid of cluster models with various metallicities. A large collection of artificial clusters was studied to model the different degeneracies in the age, mass, extinction, and metallicity parameter space when stochasticity is taken into account in the cluster models. We show that, without prior knowledge of the metallicity, the optical bands (UBVRI) fail to allow a correct derivation of the age, mass, and extinction because of the strong degeneracies between models of different metallicities. Adding near-infrared information (UBVRI+JHK) helps slightly in improving the parameter derivation, except for the metallicity. Adding ultraviolet data (GALEX+UBVRI) helps significantly in deriving these parameters and allows the metallicity to be constrained when the photometric errors have a Gaussian distribution with standard deviations of 0.05 mag for UBVRI and 0.15 mag for the GALEX bands. Comment: 8 pages, 9 figures.
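
    The comparison of observed broad-band magnitudes against a model grid can be sketched as a chi-square minimization over the grid; the band values, error model, and grid entries below are placeholders rather than the models or data used in the paper.

        # Illustrative sketch: pick the (age, mass, extinction, metallicity) grid
        # model whose predicted magnitudes best match the observations.
        def chi_square(observed, errors, model_mags):
            return sum(((observed[b] - model_mags[b]) / errors[b]) ** 2 for b in observed)

        observed = {"U": 18.2, "B": 18.0, "V": 17.6, "R": 17.3, "I": 17.0}
        errors = {b: 0.05 for b in observed}   # Gaussian errors of 0.05 mag per band

        model_grid = [   # (parameters, predicted magnitudes) -- made-up entries
            ({"age_Myr": 100,  "logM": 4.0, "Av": 0.3, "Z": 0.02},
             {"U": 18.3, "B": 18.1, "V": 17.7, "R": 17.4, "I": 17.1}),
            ({"age_Myr": 1000, "logM": 4.2, "Av": 0.1, "Z": 0.008},
             {"U": 19.0, "B": 18.4, "V": 17.8, "R": 17.4, "I": 17.0}),
        ]

        best_params, _ = min(model_grid, key=lambda m: chi_square(observed, errors, m[1]))
        print(best_params)   # {'age_Myr': 100, 'logM': 4.0, 'Av': 0.3, 'Z': 0.02}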

    Stochastic Dynamics of Bionanosystems: Multiscale Analysis and Specialized Ensembles

    An approach for simulating bionanosystems, such as viruses and ribosomes, is presented. This calibration-free approach is based on an all-atom description for bionanosystems, a universal interatomic force field, and a multiscale perspective. The supramillion-atom nature of these bionanosystems prohibits the use of a direct molecular dynamics approach for phenomena like viral structural transitions or self-assembly that develop over milliseconds or longer. A key element of these multiscale systems is the cross-talk between, and consequent strong coupling of, processes over many scales in space and time. We elucidate the role of interscale cross-talk and overcome bionanosystem simulation difficulties with automated construction of order parameters (OPs) describing supra-nanometer scale structural features, construction of OP-dependent ensembles describing the statistical properties of atomistic variables that ultimately contribute to the entropies driving the dynamics of the OPs, and the derivation of a rigorous equation for the stochastic dynamics of the OPs. Since the atomic scale features of the system are treated statistically, several ensembles are constructed that reflect various experimental conditions. The theory provides a basis for a practical, quantitative bionanosystem modeling approach that preserves the cross-talk between the atomic and nanoscale features. A method for integrating information from nanotechnical experimental data in the derivation of equations of stochastic OP dynamics is also introduced. Comment: 24 pages.
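
    Schematically, the rigorous stochastic equation for the OPs referred to above belongs to the family of Langevin equations; a generic textbook form (not the specific equation derived in the paper) is

        d\Phi_k = \mu_k(\Phi)\,dt + \sum_l \sigma_{kl}(\Phi)\,dW_l(t),

    where \Phi = (\Phi_1, ..., \Phi_K) collects the order parameters, the drift \mu_k and noise amplitudes \sigma_{kl} would be obtained as averages over the OP-constrained atomistic ensembles, and the W_l are independent Wiener processes.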

    JTorX: Exploring Model-Based Testing

    The overall goal of the work described in this thesis is: "To design a flexible tool for state-of-the-art model-based derivation and automatic application of black-box tests for reactive systems, usable both for education and outside an academic context." From this goal, we derive functional and non-functional design requirements. The core of the thesis is a discussion of the design, in which we show how the functional requirements are fulfilled. In addition, we provide evidence to validate the non-functional requirements, in the form of case studies and responses to a tool user questionnaire. We describe the overall architecture of our tool, and discuss three usage scenarios which are necessary to fulfill the functional requirements: random on-line testing, guided on-line testing, and off-line test derivation and execution. With on-line testing, test derivation and test execution take place in an integrated manner: the next test step is derived only when it is needed for execution. With random testing, test derivation performs a random walk through the model. With guided testing, additional (guidance) information is used during test derivation to guide it through specific paths in the model. With off-line testing, test derivation and test execution take place as separate activities. In our architecture we identify two major components: a test derivation engine, which synthesizes test primitives from a given model and from optional test guidance information, and a test execution engine, which contains the functionality to connect the test tool to the system under test. We refer to this latter functionality as the "adapter". In the description of the test derivation engine, we look at the same three usage scenarios, and we discuss support for visualization and for dealing with divergence in the model. In the description of the test execution engine, we discuss three example adapter instances, and then generalise this to a general adapter design. We conclude with a description of extensions to deal with symbolic treatment of data and time.
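
    Random on-line testing of the kind described above can be sketched against a toy transition-system model; the model encoding, the input/output labeling convention, and the `CoffeeMachine` stub below are illustrative assumptions and do not reflect JTorX's actual implementation.

        # Illustrative random on-line test loop: at each step either apply a random
        # input allowed by the model or observe an output and check it against the
        # model. Labels prefixed with "?" are inputs, labels prefixed with "!" outputs.
        import random

        model = {   # state -> {label: next state}
            "idle": {"?coin": "paid"},
            "paid": {"?button": "busy"},
            "busy": {"!coffee": "idle"},
        }

        class CoffeeMachine:
            """Stub system under test (a conforming implementation, for illustration)."""
            def __init__(self):
                self.state = "idle"
            def stimulate(self, label):
                if label == "?coin" and self.state == "idle":
                    self.state = "paid"
                elif label == "?button" and self.state == "paid":
                    self.state = "busy"
            def observe(self):
                if self.state == "busy":
                    self.state = "idle"
                    return "!coffee"
                return "!quiescence"   # nothing to output yet

        def random_online_test(steps=20):
            sut, state = CoffeeMachine(), "idle"
            for _ in range(steps):
                inputs = [l for l in model[state] if l.startswith("?")]
                if inputs and random.random() < 0.5:
                    label = random.choice(inputs)    # derive and apply a stimulus
                    sut.stimulate(label)
                else:
                    label = sut.observe()            # observe and check an output
                    if label == "!quiescence":
                        continue                     # nothing observed; try again
                    if label not in model[state]:
                        return "FAIL: unexpected " + label + " in state " + state
                state = model[state][label]
            return "PASS"

        print(random_online_test())   # PASS (the stub conforms to the model)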