Technology 2002: The Third National Technology Transfer Conference and Exposition, volume 2
Proceedings from symposia of the Technology 2002 Conference and Exposition, held December 1-3, 1992, in Baltimore, MD. Volume 2 features 60 papers presented during 30 concurrent sessions.
Theory and application of vector space similarity measures in computer assisted conceptual design
A number of computational tools now exist to aid in developing conceptual solutions from a functional description of a design problem. A key limitation of these tools is the way results are organized for presentation to the user: in general, results arrive as an undifferentiated mass of potential solutions. Analysis using a novel concept clustering tool shows that concept generator output consists of permutations of a small set of solution archetypes, which provides an initial solution to organizing and presenting the results. More efficient solutions are sought by adopting a generate-evaluate-guide framework from the computational design synthesis literature. Specifically, the concept generation approach is altered so that each generated solution maximizes the variety it adds to the set of solutions. To achieve this, suitable similarity measures must first be developed.
Current techniques for similarity assessment in the design literature tend to be ad hoc and highly specialized to particular tasks. Prior work from the field of information retrieval is applied and extended to create a generalized approach to similarity assessment for vector space design data. These techniques are validated against an existing design by analogy methodology. A new tool for locating functional analogies within a database of existing products is developed as a result.
Improved similarity measures are combined with the proposed computational synthesis framework from the literature to modify an existing concept generation tool. The resulting tool efficiently locates the few novel solutions in the set of possible results, a key step in the continued evolution of this class of computational design tools.
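The variety-maximizing idea above can be sketched in a few lines. This is an illustrative toy, not the thesis's actual algorithm: cosine similarity stands in for whatever vector-space measure is developed, and each new concept is the one farthest (least similar) from everything already selected.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two concept vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def select_varied(candidates, k):
    """Greedily pick k concepts, each maximizing the variety
    (1 - max similarity) it adds to the concepts already selected."""
    selected = [candidates[0]]
    while len(selected) < k:
        best = max(
            (c for c in candidates if c not in selected),
            key=lambda c: 1 - max(cosine_similarity(c, s) for s in selected),
        )
        selected.append(best)
    return selected

# Four candidate concepts; two are near-duplicates of the same archetype
concepts = [(1, 0, 0), (0.9, 0.1, 0), (0, 1, 0), (0, 0, 1)]
print(select_varied(concepts, 3))  # [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
```

The near-duplicate `(0.9, 0.1, 0)` is skipped because it adds almost no variety, which is exactly the behavior sought when filtering permutations of a few solution archetypes.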
Distributed Methods for Estimation and Fault Diagnosis: The Case of Large-Scale Networked Systems
2011/2012

This thesis deals with the monitoring of modern complex systems. The motivation is the renewed emphasis on monitoring and fault-tolerant systems: reliability has become a key requirement in the design of technical systems. While fault diagnosis architectures and estimation methods have been extensively studied for centralized systems, interest in distributed, networked, large-scale and complex systems, such as Cyber-Physical Systems and Systems-of-Systems, has grown only in recent years. The increased complexity of modern systems, where the relations among components, with the external world and with the human factor are ever more important, implies the need for novel tools able to consider all the different aspects and levels constituting these systems.
The system being monitored is modeled as the interconnection of several subsystems, obtained through a divide et impera decomposition in which overlaps are allowed. Each local diagnostic decision is made on the basis of the local subsystem's dynamic model and of an adaptive approximation of the uncertain interconnection with neighboring subsystems.
The goal is to integrate all aspects of the monitoring process in a comprehensive architecture, taking into account the physical environment, the sensor layer, the diagnoser layer and the communication networks. In particular, specifically designed methods address the issues that emerge when dealing with communication networks and distributed systems.
The introduction of the sensor layer, composed of a set of sensor networks, decouples the physical topology from the sensing/computation topology, improving the scalability and reliability of the diagnosis architecture. For the measurement acquisition task, we propose a distributed estimation method for sensor networks that filters measurements so that both the mean and the variance of the estimation error are minimized through a Pareto optimization problem. Moreover, with realistic applications in mind, we consider multi-rate systems and unsynchronized measurements: a re-synchronization method is proposed to manage multi-rate sampling and to compensate for delays in the communication network between sensors and diagnosers.
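The re-synchronization step can be illustrated with a minimal sketch. The function name and the use of plain linear interpolation are illustrative assumptions, not the thesis's method: the point is only that time-stamped samples from sensors running at different rates can be mapped onto a common diagnoser time.

```python
def resynchronize(samples, t_query):
    """Linearly interpolate time-stamped samples [(t, y), ...] onto a
    common query time, aligning asynchronous, multi-rate sensors."""
    samples = sorted(samples)
    for (t0, y0), (t1, y1) in zip(samples, samples[1:]):
        if t0 <= t_query <= t1:
            w = (t_query - t0) / (t1 - t0)
            return (1 - w) * y0 + w * y1
    raise ValueError("query time outside measurement window")

# Two sensors sampling at different rates, read at a common diagnoser time
fast = [(0.0, 1.0), (0.1, 1.2), (0.2, 1.4)]
slow = [(0.0, 5.0), (0.25, 6.0)]
print(resynchronize(fast, 0.15), resynchronize(slow, 0.15))  # ~1.3 and ~5.6
```

In practice the thesis must also account for network delays between sensors and diagnosers, which a pure interpolation scheme like this one ignores.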
Stochastic delays and packet dropouts are unavoidable when distributed, large-scale or networked systems communicate over a network. We therefore propose a distributed delay compensation strategy for the communication network between diagnosers, based on the use of time stamps and buffers and on the definition of a time-varying consensus matrix. The goal of the novel time-varying matrix is twofold: it manages communication delays, packet dropouts and interrupted links, and it improves detectability and isolability by allowing less conservative thresholds.
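The role of the time-varying consensus weights can be sketched as follows. This is a toy illustration under stated assumptions (uniform weights, a single scalar estimate per diagnoser); the thesis's actual matrix design is more elaborate:

```python
def consensus_update(x_local, neighbor_msgs, t_now, max_delay):
    """One consensus step with a time-varying weight row: neighbor
    estimates whose time stamps are too old (delayed or dropped
    packets) get zero weight; remaining weights are renormalized."""
    fresh = [x for (t_stamp, x) in neighbor_msgs if t_now - t_stamp <= max_delay]
    values = [x_local] + fresh
    w = 1.0 / len(values)          # uniform weights over live links only
    return sum(w * v for v in values)

# Local estimate 2.0; one fresh neighbor message, one stale message
# whose outdated value (100.0) is excluded from the average
msgs = [(9.8, 4.0), (5.0, 100.0)]
print(consensus_update(2.0, msgs, t_now=10.0, max_delay=1.0))  # 3.0
```

Because the weight row changes at every step with the set of live links, the effective consensus matrix is time-varying, which is what permits the less conservative detection thresholds mentioned above.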
The distributed fault detection and isolation schemes are studied and analytical results regarding fault detectability, isolability and estimator convergence are derived. Simulation results show the effectiveness of the proposed architecture.
For the sake of completeness, the monitoring architecture is studied and adapted to different frameworks: the fault detection and isolation methodology is extended to continuous-time systems, and the case where the state is only partially measurable is considered for both discrete-time and continuous-time systems.

XXV Ciclo
Semantic Networks for Hybrid Processes.
Simulation models are often used in parallel with a physical system to facilitate control, diagnosis and monitoring. Model-based methods for control, diagnosis and monitoring form the basis for the popular sobriquets 'intelligent', 'smart' and 'cyber-physical'.
We refer to a configuration where a model and a physical system are run in parallel as a "hybrid process". Discrepancies between the model and the process may be caused by a fault in the process or an error in the model. In this work we focus on correcting modeling errors and provide methods to correct or update the model when a discrepancy is observed between a model and a process operating in parallel. We then show that some of the methods developed for model adaptation and diagnosis can be used for control systems design.
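The core loop of a hybrid process can be sketched in a few lines: run the model alongside logged process measurements and flag the first step where the residual exceeds a threshold. This is a minimal residual-threshold illustration with hypothetical names, not the thesis's actual discrepancy test:

```python
def monitor(process_outputs, model, threshold):
    """Run a model in parallel with process measurements and flag the
    first time step where the residual exceeds the threshold."""
    for k, y_meas in enumerate(process_outputs):
        residual = y_meas - model(k)
        if abs(residual) > threshold:
            return k, residual        # step index and signed residual
    return None                       # no discrepancy detected

def nominal(k):
    return 0.5 * k                    # toy nominal model: y = 0.5 k

process = [0.0, 0.5, 1.0, 2.4, 2.0]   # measurement at step 3 deviates
print(monitor(process, nominal, threshold=0.2))  # flags step 3
```

Deciding whether such a residual indicates a process fault or a model error, and then updating the model accordingly, is precisely the problem the following contributions address.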
There are five main contributions.
The first contribution is an analysis of the practical considerations and limitations of a networked implementation of a hybrid process. The analysis considers both the delay and jitter in a packet-switching network and the limited accuracy of the clocks used to synchronize the model and process.
The second contribution is a semantic representation of hybrid processes which enables improvements to the accuracy and scope of algorithms used to update the model. We demonstrate how model uncertainty can be balanced against signal uncertainty and how the structure of interconnections between model components can be automatically reconfigured if needed.
The third contribution is a diagnostic approach to isolate the model components responsible for a discrepancy between model and process, for a structure-preserving realization of a system of ODEs.
The fourth contribution is an extension of the diagnostic strategy to include larger graphs with cycles, model uncertainty and measurement noise. The method uses graph theoretic tools to simplify the graph and make the problem more tractable and robust to noise.
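The structural idea behind graph-based isolation can be sketched on a toy component dependency graph. This is an illustrative sketch, not the thesis's algorithm: a component is suspect if it lies upstream of every discrepant output but not upstream of any output that still matches the model.

```python
def ancestors(graph, node):
    """All components upstream of a node in a dependency graph
    (dict parent -> list of children; edges point cause -> effect)."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        for parent, children in graph.items():
            if n in children and parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def isolate(graph, bad_outputs, good_outputs):
    """Suspects: upstream of every discrepant output, excluding
    anything upstream of an output consistent with the model."""
    suspects = set.intersection(*(ancestors(graph, o) | {o} for o in bad_outputs))
    for o in good_outputs:
        suspects -= ancestors(graph, o) | {o}
    return suspects

# a -> b -> d (discrepant), a -> c (consistent): a is exonerated by c,
# so the fault is isolated to b or d
graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
print(isolate(graph, bad_outputs=["d"], good_outputs=["c"]))  # {'b', 'd'}
```

On larger graphs with cycles, uncertainty and noise, this naive set intersection breaks down, which motivates the graph-simplification tools described above.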
The fifth contribution is a simulation of a distributed control system to illustrate our contributions. Using a coordinated network of electric vehicle charging stations as an example, a consensus-based decentralized charging policy is implemented using semantic modeling and declarative descriptions of the interconnection network.

PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99903/1/danand_1.pd
Laboratory Directed Research and Development FY2010 Annual Report
A premier applied-science laboratory, Lawrence Livermore National Laboratory (LLNL) has at its core a primary national security mission - to ensure the safety, security, and reliability of the nation's nuclear weapons stockpile without nuclear testing, and to prevent and counter the spread and use of weapons of mass destruction: nuclear, chemical, and biological. The Laboratory uses the scientific and engineering expertise and facilities developed for its primary mission to pursue advanced technologies to meet other important national security needs - homeland defense, military operations, and missile defense, for example - that evolve in response to emerging threats. For broader national needs, LLNL executes programs in energy security, climate change and long-term energy needs, environmental assessment and management, bioscience and technology to improve human health, and breakthroughs in fundamental science and technology. With this multidisciplinary expertise, the Laboratory serves as a science and technology resource to the U.S. government and as a partner with industry and academia. This annual report discusses the following topics: (1) Advanced Sensors and Instrumentation; (2) Biological Sciences; (3) Chemistry; (4) Earth and Space Sciences; (5) Energy Supply and Use; (6) Engineering and Manufacturing Processes; (7) Materials Science and Technology; (8) Mathematics and Computing Science; (9) Nuclear Science and Engineering; and (10) Physics.
Structured representation learning from complex data
This thesis advances several theoretical and practical aspects of the recently introduced restricted Boltzmann machine, a powerful probabilistic and generative framework for modelling data and learning representations. The contributions of this study follow a systematic, common theme: learning structured representations from complex data.
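For readers unfamiliar with the model, the conditional sampling at the heart of a restricted Boltzmann machine can be sketched directly from its standard definition. The weights and biases below are arbitrary toy values; this shows only the textbook conditional p(h_j = 1 | v), not any contribution of the thesis.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v, W, c):
    """Sample the hidden layer of an RBM given a visible vector v:
    p(h_j = 1 | v) = sigmoid(c_j + sum_i v_i * W[i][j]).
    Returns the binary sample and the activation probabilities."""
    probs = [sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(len(v))))
             for j in range(len(c))]
    return [1 if random.random() < p else 0 for p in probs], probs

random.seed(0)
W = [[2.0, -2.0], [2.0, -2.0]]   # 2 visible x 2 hidden weights (toy values)
c = [-1.0, 1.0]                  # hidden biases (toy values)
h, probs = sample_hidden([1, 1], W, c)
print(probs)  # first hidden unit near 1 (pre-activation +3), second near 0 (-3)
```

Because there are no hidden-to-hidden connections, the hidden units are conditionally independent given v, which is what makes this per-unit sampling (and the symmetric visible-given-hidden step) tractable.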