
    A data mining approach to incremental adaptive functional diagnosis

    This paper presents a novel approach to functional fault diagnosis that adopts data mining to exploit knowledge extracted from the system model. Such knowledge relates test outcomes to component failures and defines an incremental strategy for identifying the candidate faulty component. The diagnosis procedure is built upon a set of sorted, possibly approximate, rules that specify, given a (set of) failing tests, which component is the faulty candidate. The procedure iteratively selects the most promising rules and requests the execution of the corresponding tests, until a component is identified as faulty or no diagnosis can be performed. The proposed approach aims at limiting the number of tests to be executed in order to reduce the time and cost of diagnosis. Results on a set of examples show that the proposed approach allows for a significant reduction in the number of executed tests (the average improvement ranges from 32% to 88%).
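
    As an illustration of the incremental, rule-driven strategy described above, the sketch below shows one plausible shape for such a loop in Python. The rule format (required failing tests, candidate component, confidence) and the run_test callback are assumptions made for illustration; they are not taken from the paper.

        # Illustrative sketch of an incremental, rule-driven diagnosis loop.
        # Rules relate failing tests to a candidate faulty component and are
        # assumed to be pre-sorted by confidence; tests are executed lazily,
        # so only the tests a rule actually needs are run.
        def diagnose(rules, run_test):
            """rules: list of (required_failing_tests, candidate, confidence),
            sorted from most to least promising.
            run_test: callback returning True if the named test fails."""
            observed = {}                              # test name -> failed?
            for required_failing_tests, candidate, _confidence in rules:
                for test in required_failing_tests:
                    if test not in observed:           # request only unexecuted tests
                        observed[test] = run_test(test)
                if all(observed[t] for t in required_failing_tests):
                    return candidate                   # all predicted failures observed
            return None                                # no diagnosis can be performed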

    An Adaptive Design Methodology for Reduction of Product Development Risk

    Embedded systems' interaction with the environment inherently complicates the understanding of requirements and their correct implementation, and product uncertainty is highest during the early stages of development. Design verification is an essential step in the development of any system, especially an embedded system. This paper introduces a novel adaptive design methodology, which incorporates step-wise prototyping and verification. With each adaptive step the product-realization level is enhanced while the level of product uncertainty decreases, thereby reducing the overall costs. The backbone of this framework is the development of the Domain Specific Operational (DOP) Model and the associated Verification Instrumentation for Test and Evaluation, developed based on the DOP model. Together they generate functionally valid test sequences for carrying out prototype evaluation. The application of this method is sketched with the help of a case study, a 'Multimode Detection Subsystem'. Design methodologies can be compared by defining and computing a generic performance criterion such as Average design-cycle Risk. For the case study, computing the Average design-cycle Risk shows that the adaptive method reduces the product development risk for a small increase in the total design cycle time. Comment: 21 pages, 9 figures
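
    The abstract does not give the formula for Average design-cycle Risk, so the sketch below only illustrates one plausible way such a criterion could be computed: the mean over adaptive steps of remaining product uncertainty multiplied by the cost of reworking that step. Both the formula and the numbers are assumptions, not values from the paper.

        # Hypothetical Average design-cycle Risk: mean over adaptive steps of
        # remaining uncertainty multiplied by the cost of reworking that step.
        # Both the formula and the example figures are placeholders.
        def average_design_cycle_risk(steps):
            """steps: list of (uncertainty in [0, 1], rework_cost) per adaptive step."""
            return sum(u * cost for u, cost in steps) / len(steps)

        # Uncertainty drops as each prototype is built and verified.
        adaptive_steps = [(0.8, 10.0), (0.5, 10.0), (0.2, 10.0)]
        print(average_design_cycle_risk(adaptive_steps))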

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

    Cost modelling and concurrent engineering for testable design

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. As integrated circuits and printed circuit boards increase in complexity, testing becomes a major cost factor in the design and production of these complex devices. Testability has to be considered during the design of complex electronic systems, and automatic test systems have to be used in order to facilitate the test. This fact is now widely accepted in industry. Both design for testability and the use of automatic test systems aim at reducing the cost of production testing or, sometimes, at making it possible at all. Many design-for-testability methods and test systems are available which can be configured into a production test strategy in order to achieve high quality of the final product. The designer has to select from the various options when creating a test strategy, maximising the quality and minimising the total cost of the electronic system. This thesis presents a methodology for test strategy generation which is based on consideration of the economics over the life cycle of the electronic system. This methodology is a concurrent engineering approach which takes into account all effects of a test strategy on the electronic system during its life cycle by evaluating its related cost. This objective methodology is used in an original test strategy planning advisory system, which allows for test strategy planning for VLSI circuits as well as for digital electronic systems. The cost models which are used for evaluating the economics of test strategies are described in detail, and the test strategy planning system is presented. A methodology for making decisions based on estimated costing data is also presented. Results of using the cost models and the test strategy planning system to evaluate the economics of test strategies for selected industrial designs are presented.
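
    The thesis's detailed life-cycle cost models are not reproduced in the abstract; the sketch below merely illustrates the kind of economic comparison such a methodology performs, using assumed per-board test cost, fault coverage, and field repair cost figures.

        # Illustrative life-cycle economics of competing test strategies: pick
        # the strategy minimising test cost plus the expected cost of defects
        # that escape to the field. All figures are assumed placeholders.
        def life_cycle_cost(strategy, volume, incoming_defects_per_board=0.05):
            escapes = incoming_defects_per_board * (1.0 - strategy["fault_coverage"])
            return volume * (strategy["test_cost"]
                             + escapes * strategy["field_repair_cost"])

        strategies = {
            "functional_only": {"test_cost": 2.0, "fault_coverage": 0.70, "field_repair_cost": 300.0},
            "boundary_scan":   {"test_cost": 3.5, "fault_coverage": 0.95, "field_repair_cost": 300.0},
        }
        best = min(strategies, key=lambda s: life_cycle_cost(strategies[s], volume=10_000))
        print(best, {s: life_cycle_cost(strategies[s], 10_000) for s in strategies})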

    Deep-coverage whole genome sequences and blood lipids among 16,324 individuals.

    Large-scale deep-coverage whole-genome sequencing (WGS) is now feasible and offers potential advantages for locus discovery. We perform WGS in 16,324 participants from four ancestries at mean depth >29X and analyze genotypes with four quantitative traits: plasma total cholesterol, low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol, and triglycerides. Common variant association yields known loci except for a few variants that were previously poorly imputed. Rare coding variant association yields known Mendelian dyslipidemia genes, but rare non-coding variant association detects no signals. A high 2M-SNP LDL-C polygenic score (top 5th percentile) confers an effect size similar to a monogenic mutation (~30 mg/dl higher for each); however, among those with severe hypercholesterolemia, 23% have a high polygenic score and only 2% carry a monogenic mutation. At these sample sizes and for these phenotypes, the incremental value of WGS for discovery is limited, but WGS permits simultaneous assessment of monogenic and polygenic models of severe hypercholesterolemia.
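
    A polygenic score of the kind referred to above is, in general form, a weighted sum of risk-allele dosages across the scored variants; the sketch below shows that computation with placeholder weights and genotypes, not the 2M-variant panel or effect sizes used in the study.

        import numpy as np

        # Generic polygenic score: weighted sum of per-variant allele dosages.
        # Weights and genotypes here are random placeholders for illustration.
        def polygenic_score(dosages, weights):
            """dosages: (individuals, variants) allele counts in {0, 1, 2};
            weights: per-variant effect sizes on the trait (e.g. LDL-C)."""
            return dosages @ weights

        rng = np.random.default_rng(0)
        dosages = rng.integers(0, 3, size=(1000, 500)).astype(float)
        weights = rng.normal(0.0, 0.01, size=500)
        scores = polygenic_score(dosages, weights)
        high_score = scores >= np.percentile(scores, 95)   # top 5th percentile
        print(high_score.mean())                           # ~0.05 by construction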

    Evaluation of Single-Chip, Real-Time Tomographic Data Processing on FPGA - SoC Devices

    A novel approach to tomographic data processing has been developed and evaluated using the Jagiellonian PET (J-PET) scanner as an example. We propose a system in which there is no need for a powerful processing facility, local to the scanner, capable of reconstructing images on the fly. Instead, we introduce a Field Programmable Gate Array (FPGA) System-on-Chip (SoC) platform connected directly to data streams coming from the scanner, which can perform event building, filtering, coincidence search and Region-Of-Response (ROR) reconstruction in the programmable logic, and visualization on the integrated processors. The platform significantly reduces data volume by converting raw data to a list-mode representation, while generating visualization on the fly. Comment: IEEE Transactions on Medical Imaging, 17 May 201
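
    To make the coincidence-search step concrete, the sketch below pairs detector hits whose timestamps fall within a coincidence window into list-mode events. It is a plain software illustration of the logic; the hit format and the 3 ns window are assumptions, and the actual platform performs this step in FPGA programmable logic.

        # Pair hits from different detectors whose timestamps lie within an
        # assumed coincidence window, emitting list-mode coincidence events.
        COINCIDENCE_WINDOW_NS = 3.0

        def find_coincidences(hits):
            """hits: list of (timestamp_ns, detector_id), sorted by timestamp."""
            events = []
            for (t1, d1), (t2, d2) in zip(hits, hits[1:]):
                if d1 != d2 and (t2 - t1) <= COINCIDENCE_WINDOW_NS:
                    events.append((t1, d1, t2, d2))   # one list-mode event
            return events

        hits = [(0.0, 1), (1.2, 7), (50.0, 3), (120.0, 2), (121.5, 9)]
        print(find_coincidences(hits))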

    Complex low volume electronics simulation tool to improve yield and reliability

    Assembly of Printed Circuit Boards (PCBs) in low volumes and a high mix requires a level of manual intervention during product manufacture, which leads to poor first-time yield and increased production costs. Failures at the component level and failures that stem from non-component causes (i.e. system level), such as defects in design and manufacturing, can account for this poor yield. These factors have not been incorporated in prediction models because system-failure causes are not driven by well-characterised deterministic processes. This paper presents a simulation and analysis support tool, under development, that is based on a suite of interacting modular components with well-defined functionalities and interfaces. The CLOVES (Complex Low Volume Electronics Simulation) tool enables the characterisation and dynamic simulation of complete design, manufacturing and business processes (throughout the entire product life cycle) in terms of their propensity to create defects that could cause product failure. Details of this system and of how it is being developed to fulfil changing business needs are also presented. Using historical data and knowledge of previous printed circuit assembly (PCA) design specifications and manufacturing experience, defect and yield results can be effectively stored and re-applied to future problem solving. For example, past PCA design specifications can be used at the design stage to amend designs or define process options that optimise product yield and service reliability.
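
    As a rough illustration of the kind of yield estimate such a tool produces, the sketch below combines assumed per-process-step defect rates with a simple Poisson yield model; the step names, rates, and model choice are placeholders rather than CLOVES outputs.

        import math

        # First-pass yield under a simple Poisson model: the probability that a
        # board picks up zero defects across all process steps. Rates are assumed.
        def first_pass_yield(defects_per_board_by_step):
            total_defects = sum(defects_per_board_by_step.values())
            return math.exp(-total_defects)

        steps = {"paste_print": 0.02, "placement": 0.05, "reflow": 0.01, "manual_rework": 0.08}
        print(f"predicted first-pass yield: {first_pass_yield(steps):.1%}")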