1,426 research outputs found

    Improving performance through concept formation and conceptual clustering

    Get PDF
    Research from June 1989 through October 1992 focused on concept formation, clustering, and supervised learning to improve the efficiency of problem solving, planning, and diagnosis. These projects resulted in two dissertations on clustering, explanation-based learning, and means-ends planning, as well as publications in conference and workshop proceedings, book chapters, and journals; a complete bibliography of NASA Ames-supported publications is included. The following topics were studied: clustering of explanations and problem-solving experiences; clustering and means-ends planning; and diagnosis of space shuttle and space station operating modes.

    Towards knowledge-based gene expression data mining

    Get PDF
    The field of gene expression data analysis has grown in the past few years from being purely data-centric to being integrative, aiming to complement microarray analysis with data and knowledge from diverse available sources. In this review, we report on the plethora of gene expression data mining techniques and focus on their evolution toward knowledge-based data analysis approaches. In particular, we discuss recent developments in gene expression-based analysis methods used in association and classification studies, phenotyping, and reverse engineering of gene networks.
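
    As a concrete illustration of the classification studies surveyed here, the sketch below trains a phenotype classifier on a samples-by-genes expression matrix. It is a minimal illustration on synthetic stand-in data, not a method from the review; the L1 penalty is used only to show how a classifier can double as a crude gene selector.

```python
# Minimal sketch of a supervised gene expression classification study.
# The data are synthetic stand-ins; a real study would load a
# samples-by-genes expression matrix with phenotype labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_genes = 60, 500
X = rng.normal(size=(n_samples, n_genes))    # expression levels
y = rng.integers(0, 2, size=n_samples)       # phenotype labels
X[y == 1, :20] += 1.0                        # 20 informative genes

# L1-penalized logistic regression: nonzero coefficients mark genes
# associated with the phenotype, giving a simple gene-selection effect.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```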

    ๋งค๊ฐœ๋ถ„ํฌ๊ทผ์‚ฌ๋ฅผ ํ†ตํ•œ ๊ณต์ •์‹œ์Šคํ…œ ๊ณตํ•™์—์„œ์˜ ํ™•๋ฅ ๊ธฐ๊ณ„ํ•™์Šต ์ ‘๊ทผ๋ฒ•

    Get PDF
    Doctoral dissertation -- Seoul National University Graduate School: College of Engineering, Department of Chemical and Biological Engineering, August 2021. Advisor: Jong Min Lee. With the rapid development of measurement technology, process data of higher quality have become available in vast amounts. Nevertheless, process data are 'scarce' in many cases, as they are sampled only at certain operating conditions while the dimensionality of the system is large. Furthermore, process data are inherently stochastic due to the internal characteristics of the system or measurement noise. For these reasons, uncertainty is inevitable in process systems, and estimating it becomes a crucial part of engineering tasks, as prediction errors can lead to misguided decisions and cause severe casualties or economic losses. A popular approach is to apply probabilistic inference techniques that model the uncertainty in terms of probability. However, most existing probabilistic inference techniques are based on recursive sampling, which makes them difficult to use for industrial applications that require processing high-dimensional, massive amounts of data. To address this issue, this thesis proposes probabilistic machine learning approaches based on parametric distribution approximation, which can model the uncertainty of the system while circumventing the computational complexity. The proposed approach is applied to three major process engineering tasks: process monitoring, system modeling, and process design. First, a process monitoring framework is proposed that utilizes a probabilistic classifier for fault classification. To enhance the accuracy of the classifier and reduce the computational cost of its training, a feature extraction method called probabilistic manifold learning is developed and applied to the process data before fault classification. We demonstrate that this manifold approximation not only reduces the dimensionality of the data but also casts the data into a clustered structure, so that the classifier has little dependency on the type and dimension of the data. By exploiting this property, non-metric information (e.g., fault labels) is effectively incorporated and the diagnosis performance is drastically improved. Second, a probabilistic modeling approach based on Bayesian neural networks is proposed. The parameters of deep neural networks are transformed into Gaussian distributions and trained using variational inference. The redundancy of the parameters is inferred automatically during model training, and insignificant parameters are eliminated a posteriori. Through a verification study, we demonstrate that the proposed approach can not only produce high-fidelity models that describe the stochastic behaviors of the system but also identify an optimal model structure. Finally, a novel process design framework is proposed based on reinforcement learning. Unlike conventional optimization methods that recursively evaluate the objective function to find an optimal value, the proposed method approximates the objective function surface with parametric probability distributions. This allows a continuous action policy to be learned without any cumbersome discretization. Moreover, the probabilistic policy provides a means to control the exploration and exploitation rates effectively according to certainty information.
    We demonstrate that the proposed framework can learn process design heuristics during the solution process and use them to solve similar design problems.
    Contents: Chapter 1, Introduction; Chapter 2, Backgrounds and preliminaries (Bayesian inference, Monte Carlo, Kullback-Leibler divergence, variational inference, Riemannian manifolds, finite extended-pseudo-metric spaces, reinforcement learning, directed graphs); Chapter 3, Process monitoring and fault classification with probabilistic manifold learning; Chapter 4, Process system modeling with Bayesian neural networks (LSTM and Bayesian LSTM); Chapter 5, Process design based on reinforcement learning with distributional actor-critic networks (flowsheet hashing, behavioral cloning, neural Monte Carlo tree search, action masking); Chapter 6, Concluding remarks; Appendix; Bibliography.
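
    The monitoring framework can be pictured with the following minimal sketch: a generic dimensionality reduction step (PCA standing in for the thesis's probabilistic manifold learning) followed by a per-class Gaussian mixture classifier. Data, dimensions, and the number of mixture components are invented for illustration; this reflects the general pattern only, not the thesis's implementation.

```python
# Illustrative sketch: project process data to a low-dimensional space,
# then classify faults with a probabilistic (Gaussian mixture) classifier.
# PCA is a stand-in for probabilistic manifold learning; data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_per_class, n_vars, n_classes = 200, 30, 3   # fault classes incl. normal
X = np.vstack([rng.normal(loc=3.0 * k, size=(n_per_class, n_vars))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

Z = PCA(n_components=2).fit_transform(X)      # low-dimensional embedding

# One mixture model per fault class; classify by maximum log-likelihood.
models = [GaussianMixture(n_components=2, random_state=0).fit(Z[y == k])
          for k in range(n_classes)]
log_lik = np.column_stack([m.score_samples(Z) for m in models])
y_hat = log_lik.argmax(axis=1)
print(f"training accuracy: {(y_hat == y).mean():.2f}")
```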

    On Dynamic Monitoring Methods for Networks-on-Chip

    Get PDF
    Rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated on a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores as well as the other components in such systems. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues such as congestion and system control. Additionally, faults can cause problems in multiprocessor systems; these faults can be transient, permanent manufacturing faults, or faults that appear due to aging. To handle traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long, so the system should be designed to minimize energy-inefficient long-distance communication. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
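
    A toy sketch of the clustered monitoring idea follows, assuming a mesh topology, an invented congestion threshold, and plain Python in place of the thesis's SystemC environment: routers report load only to their own cluster head, and routing costs steer traffic away from congested clusters without any central monitor.

```python
# Toy sketch of distributed, clustered monitoring in a mesh NoC. Each
# router reports local buffer occupancy to its cluster head; cluster-level
# congestion then raises the routing cost of traversing that region.
# Mesh size, cluster size, and the 0.6 threshold are illustrative.
import random
from collections import defaultdict

random.seed(0)
MESH, CLUSTER = 4, 2                      # 4x4 mesh, 2x2 clusters

def cluster_of(x, y):
    return (x // CLUSTER, y // CLUSTER)

# Simulated per-router buffer occupancy (0.0 = idle, 1.0 = saturated).
occupancy = {(x, y): random.random() for x in range(MESH) for y in range(MESH)}

# Each cluster head aggregates only its own routers: no central monitor.
reports = defaultdict(list)
for node, load in occupancy.items():
    reports[cluster_of(*node)].append(load)

congested = {c for c, loads in reports.items()
             if sum(loads) / len(loads) > 0.6}

def route_weight(node):
    """Routing cost used by neighbouring clusters to avoid hotspots."""
    return 10.0 if cluster_of(*node) in congested else 1.0

print("congested clusters:", sorted(congested))
print("cost of traversing (0, 0):", route_weight((0, 0)))
```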

    CBR and MBR techniques: review for an application in the emergencies domain

    Get PDF
    The purpose of this document is to provide an in-depth analysis of current reasoning engine practice and of the integration strategies of Case-Based Reasoning (CBR) and Model-Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to: (a) provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions, and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning and Model-Based Reasoning technology to the management of emergency situations. This document is part of a deliverable for the RIMSAT project, and although it was written in close contact with the requirements of the project, it provides an overview broad enough to serve as a state of the art of the integration strategies between CBR and MBR technologies.
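
    One common CBR/MBR integration pattern of the kind such a review covers is "retrieve a case, fall back to the model": reuse the nearest past case if it is similar enough, otherwise reason from an explicit model. The sketch below illustrates that pattern only; the cases, features, and toy model are invented placeholders, not RIMSAT's knowledge base.

```python
# Minimal retrieve-then-fall-back sketch: nearest-neighbour case retrieval
# with a model-based fallback when no stored case is close enough.
import math

cases = [  # (features: severity, resources_nearby) -> recorded response
    ((0.9, 0.2), "evacuate and request external units"),
    ((0.4, 0.8), "contain locally with on-site resources"),
]

def model_based_response(severity, resources):
    """Fallback: derive a response from a (toy) first-principles model."""
    return "evacuate" if severity > resources else "contain locally"

def respond(query, threshold=0.5):
    # Case-based step: nearest neighbour in feature space.
    best_case, best_dist = None, float("inf")
    for features, response in cases:
        dist = math.dist(query, features)
        if dist < best_dist:
            best_case, best_dist = response, dist
    if best_dist <= threshold:
        return f"CBR: {best_case}"
    # Model-based step when no past case is similar enough.
    return f"MBR: {model_based_response(*query)}"

print(respond((0.85, 0.25)))   # close to a stored case -> reuse it
print(respond((0.10, 0.10)))   # novel situation -> model-based fallback
```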

    Computational intelligence techniques for HVAC systems: a review

    Get PDF
    Buildings are responsible for 40% of global energy use and contribute 30% of total CO2 emissions. The drive to reduce energy use and the associated greenhouse gas emissions from buildings has acted as a catalyst for the development of advanced computational methods for energy-efficient design, management and control of buildings and systems. Heating, ventilation and air conditioning (HVAC) systems are the major source of energy consumption in buildings and an ideal candidate for substantial reductions in energy demand. Significant advances have been made in the past decades in the application of computational intelligence (CI) techniques for HVAC design, control, management, optimization, and fault detection and diagnosis. This article presents a comprehensive and critical review of the theory and applications of CI techniques for prediction, optimization, control and diagnosis of HVAC systems. The analysis of trends reveals that the minimization of energy consumption was the key optimization objective in the reviewed research, closely followed by the optimization of thermal comfort, indoor air quality and occupant preferences. Hard-coded MATLAB programs were the most widely used simulation tool, followed by TRNSYS, EnergyPlus, DOE-2, HVACSim+ and ESP-r. Metaheuristic algorithms were the preferred CI method for solving HVAC-related problems, and genetic algorithms in particular were applied in most of the studies. Despite the low number of studies focusing on multi-agent systems (MAS) compared with the other CI techniques, interest in the technique is increasing due to its ability to divide and conquer an HVAC optimization problem with enhanced overall performance. The paper also identifies prospective future advancements and research directions.
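
    Since genetic algorithms were the most applied CI method in the reviewed studies, here is a minimal, assumption-laden sketch of one: a GA searching for a temperature setpoint that trades off energy use against a comfort penalty. The closed-form cost function is a toy stand-in for the building simulation (e.g., a TRNSYS or EnergyPlus run) that would normally evaluate each candidate.

```python
# Toy genetic algorithm for HVAC setpoint optimization: minimize energy
# use plus a weighted thermal-comfort penalty. All coefficients, bounds,
# and GA settings are illustrative.
import random

random.seed(0)

def cost(setpoint_c):
    energy = abs(setpoint_c - 16.0) * 1.2           # cooling effort (toy)
    discomfort = max(0.0, abs(setpoint_c - 22.0) - 1.0) ** 2
    return energy + 5.0 * discomfort                # weighted objective

pop = [random.uniform(16.0, 28.0) for _ in range(20)]
for generation in range(40):
    pop.sort(key=cost)
    parents = pop[:10]                              # truncation selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = (a + b) / 2.0                       # arithmetic crossover
        child += random.gauss(0.0, 0.3)             # Gaussian mutation
        children.append(min(28.0, max(16.0, child)))
    pop = parents + children

best = min(pop, key=cost)
print(f"best setpoint: {best:.2f} C, cost: {cost(best):.2f}")
```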

    Topology Recoverability Prediction for Ad-Hoc Robot Networks: A Data-Driven Fault-Tolerant Approach

    Full text link
    Faults occurring in ad-hoc robot networks may fatally perturb their topologies, disconnecting subsets of those networks. Optimal topology synthesis is generally too resource-intensive and time-consuming to be performed in real time for large ad-hoc robot networks. One should therefore only recompute a topology if the probability that the topology is recoverable after a fault surpasses the probability that it is irrecoverable. We formulate this problem as a binary classification problem and develop a two-pathway data-driven model, based on Bayesian Gaussian mixture models, that predicts the solution via two different pathways: pre-fault and post-fault prediction. The results, obtained by integrating the predictions of the two pathways, clearly indicate the success of our model in solving the topology (ir)recoverability prediction problem compared with the best current strategies in the literature.
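
    One pathway of such a classifier might look like the sketch below: a Bayesian Gaussian mixture fitted per class (recoverable vs. irrecoverable), with a fault labeled by the higher class likelihood. The two network features and all data are invented for illustration; the paper's actual features and model structure may differ.

```python
# Binary (ir)recoverability classification with one Bayesian Gaussian
# mixture per class; a query is labeled by the higher log-likelihood.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
# Toy features, e.g. (algebraic connectivity, fraction of nodes lost).
recoverable   = rng.normal([0.8, 0.1], 0.1, size=(150, 2))
irrecoverable = rng.normal([0.2, 0.5], 0.1, size=(150, 2))

gmm_rec = BayesianGaussianMixture(n_components=3, random_state=0).fit(recoverable)
gmm_irr = BayesianGaussianMixture(n_components=3, random_state=0).fit(irrecoverable)

def predict_recoverable(x):
    """True if recomputing the topology is worth attempting."""
    x = np.atleast_2d(x)
    return gmm_rec.score_samples(x) > gmm_irr.score_samples(x)

print(predict_recoverable([0.75, 0.15]))   # likely recoverable
print(predict_recoverable([0.15, 0.55]))   # likely irrecoverable
```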

    Integration and test strategies for complex manufacturing machines

    Get PDF
    Complex manufacturing machines, like ASML wafer scanners, consist of thousands of components such as electronic boards, software, mechanical parts and optics. These components of multiple disciplines are assembled or integrated into modules. The modules are integrated into sub-systems forming the system, according to an integration plan. Components as well as modules, sub-systems, and systems can be tested, diagnosed and fixed, according to a test-diagnose-fix plan. An increase in the number of components results in an increase in the number of tasks in these plans. Moreover, the effort required to obtain a sequence that describes in which order the tasks should be executed also increases. The duration and the cost of a sequence depend on the quality of the system. In this project we introduce a method to analyze the duration and the cost of sequences of integration and test-diagnose-fix tasks. The method uses test-diagnose-fix models to analyze the performance of sequences. The basic elements in such a model are: a) test, diagnose and fix tasks with their costs and durations, b) fault states, c) the coverage of test tasks on fault states, and d) failure probabilities of fault states. These elements can be obtained for components, modules or sub-systems of multiple disciplines. Three case studies have been performed using this method. The outcome of the analysis indicates that choosing a different test sequence can reduce the test duration by 30% to 70%. In addition, three techniques have been developed to improve integration and test-diagnose-fix sequences:
    - To reduce the execution time of test-diagnose-fix sequences, an algorithm has been developed that determines a new test task with optimal coverage of the fault states. The algorithm selects the new test task based on maximum information gain. A test sequence that includes the new test task has a shorter duration, because faults can be detected earlier.
    - To reduce the execution time of test-diagnose-fix sequences further, an adapted hypergraph partitioning algorithm has been developed. The algorithm partitions a test-diagnose-fix task into smaller tasks that can be executed in parallel. In a case study this reduced the test duration by 30%, with a concomitant increase of 30% in test cost.
    - The impact of the choice of system architecture on the execution time and planning effort of integration and test-diagnose-fix sequences is investigated.
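
    The information-gain selection of a new test task can be sketched as follows, under the simplifying assumptions that fault states are independent and a test fails exactly when at least one covered fault is present; the probabilities and coverages are illustrative, not from the case studies.

```python
# Greedy next-test selection by expected information gain: a deterministic
# test's pass/fail outcome carries H(p_fail) bits about the fault state.
import math

fault_p = {"F1": 0.05, "F2": 0.15, "F3": 0.02}   # independent fault-state priors
tests = {                                         # test task -> covered fault states
    "T_board": {"F1"},
    "T_software": {"F2", "F3"},
    "T_full": {"F1", "F2", "F3"},
}

def entropy(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def info_gain(covered):
    # The test fails iff at least one covered fault is present, so the
    # outcome's entropy equals the information gained about the fault state.
    p_fail = 1.0 - math.prod(1.0 - fault_p[f] for f in covered)
    return entropy(p_fail)

for name, covered in tests.items():
    print(f"{name}: expected information gain = {info_gain(covered):.3f} bits")
print("select next:", max(tests, key=lambda t: info_gain(tests[t])))
```

    Repeating this selection after conditioning the fault probabilities on each observed outcome yields a greedy test sequence in which the most informative tests run first, which is the intuition behind detecting faults earlier.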

    Timing Predictability in Future Multi-Core Avionics Systems

    Full text link