A generative model for sparse, evolving digraphs
Generating graphs that are similar to real ones is an open problem, not least because the notion of similarity itself is elusive and hard to formalize. In this paper, we focus on sparse digraphs and propose SDG, an algorithm that aims at generating graphs similar to real ones. Since real graphs evolve, and this evolution is important to study in order to understand the underlying dynamical system, we also tackle the problem of generating series of graphs. We propose SEDGE, an extension of SDG meant to generate series of graphs similar to a real series. We consider graphs that represent software programs and show experimentally that our approach outperforms existing approaches. Experiments demonstrate the performance of both algorithms.
From multiscale modeling to metamodeling of geomechanics problems
In numerical simulations of geomechanics problems, a grand challenge consists of overcoming the difficulties in making accurate and robust predictions by revealing the true mechanisms in particle interactions, fluid flow inside pore spaces, and the hydromechanical coupling effect between the solid and fluid constituents, from the microscale to the mesoscale and the macroscale. While simulation tools incorporating subscale physics can provide detailed insights and accurate material properties to macroscale simulations via computational homogenization, these numerical simulations are often too computationally demanding to be used directly across multiple scales. Recent breakthroughs in Artificial Intelligence (AI) via machine learning have great potential to overcome these barriers, as evidenced by their success in many applications such as image recognition, natural language processing, and strategy exploration in games. AI can achieve super-human performance levels in a large number of applications and accomplish tasks that were thought to be infeasible given the limitations of humans and previous computer algorithms. Yet machine learning approaches can also suffer from overfitting, lack of interpretability, and lack of reliability. Thus the application of machine learning to the generation of accurate and reliable surrogate constitutive models for multiscale, multiphysics geomaterials is not trivial. For this purpose, we propose to establish an integrated modeling process for the automatic design, training, validation, and falsification of constitutive models, or "metamodeling". This dissertation focuses on our efforts in laying down, step by step, the necessary theoretical and technical foundations for the multiscale metamodeling framework.
The first step is to develop multiscale hydromechanical homogenization frameworks for both bulk granular materials and granular interfaces, with their behaviors homogenized from subscale microstructural simulations. For efficient simulations of field-scale geomechanics problems across more than two scales, we develop a hybrid data-driven method designed to capture the multiscale hydro-mechanical coupling effect of porous media with pores of various sizes. By using sub-scale simulations to generate a database for training material models, an offline homogenization procedure replaces the up-scaling procedure and generates path-dependent cohesive laws for localized physical discontinuities at both the grain and specimen scales.
To enable AI to take over the trial-and-error tasks in the constitutive modeling process, we introduce a novel "metamodeling" framework that employs both graph theory and deep reinforcement learning (DRL) to generate accurate, physics-compatible and interpretable surrogate machine learning models. The process of writing constitutive models is simplified into a sequence of forming graph edges with the goal of maximizing the model score (a function of accuracy, robustness and forward prediction quality). By using neural networks to estimate policies and state values, the computer agent is able to efficiently self-improve the constitutive models generated through self-play.
To overcome the obstacle of limited information in geomechanics, we improve the efficiency of experimental data utilization with a multi-agent cooperative metamodeling framework that provides guidance on database generation and constitutive modeling at the same time. The modeler agent in the framework focuses on evaluating all modeling options (from domain experts' knowledge or machine learning) in a directed multigraph of elasto-plasticity theory, and on finding the optimal path that links the source of the directed graph (e.g., strain history) to the target (e.g., stress). Meanwhile, the data agent focuses on collecting data from real or virtual experiments; it interacts with the modeler agent sequentially and generates the database for model calibration to optimize prediction accuracy. Finally, we design a non-cooperative metamodeling framework that focuses on automatically developing strategies that simultaneously generate experimental data to calibrate model parameters and probe the weaknesses of a known constitutive model, until the strengths and weaknesses of the constitutive law over its application range can be identified through competition. These tasks are enabled by a zero-sum reward system of the metamodeling game and robust adversarial reinforcement learning techniques.
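The path-finding idea behind the modeler agent can be illustrated with a minimal sketch: modeling options form edges of a directed graph leading from a source quantity (strain history) to a target (stress), and the agent searches for the highest-scoring chain of options. The graph, edge labels, and scores below are purely hypothetical, and the exhaustive search stands in for the reinforcement-learning policy used in the actual framework.

```python
# Hypothetical directed graph of modeling options: each edge carries an
# (assumed) quality score; the goal is the highest-scoring path from the
# source quantity to the target quantity.
options = {
    "strain": [("elastic_strain", "additive split", 0.9),
               ("stress", "black-box NN", 0.6)],
    "elastic_strain": [("stress", "hyperelastic law", 0.8)],
}

def best_path(graph, source, target):
    """Exhaustive search for the path maximizing the product of edge
    scores (a stand-in for the DRL policy search of the framework)."""
    best_score, best_route = 0.0, None
    stack = [(source, [source], 1.0)]
    while stack:
        node, route, score = stack.pop()
        if node == target:
            if score > best_score:
                best_score, best_route = score, route
            continue
        for nxt, _label, s in graph.get(node, []):
            if nxt not in route:          # keep paths simple (no cycles)
                stack.append((nxt, route + [nxt], score * s))
    return best_score, best_route

score, route = best_path(options, "strain", "stress")
```

Here the two-step route through the elastic strain (score 0.9 x 0.8 = 0.72) beats the direct black-box edge (0.6), mirroring how a physics-informed path can outscore a single opaque model.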
Dagstuhl Reports: Volume 1, Issue 2, February 2011
Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young
Fault propagation timing analysis to aid in the selection of sensors for health management systems
Sensor data is processed to assess the performance and health of complex systems. Proper sensor selection, placement, and implementation are critical to building an effective health management system. For complex systems in which timely assessment of health is desired to avoid the expensive consequences of failure, sensor placement is vital. The ability to identify a critical failure early depends entirely on sensor location within the fault propagation path. A strategy for assessing a sensor suite with respect to timely critical failure detection is presented in this thesis. To illustrate the strategy, Fault Propagation Timing Analysis (FPTA) is performed on the Rocketdyne RS-68 rocket engine. --Abstract, page iii
Fault propagation, detection and analysis in process systems
Process systems are often complicated and liable to experience faults and their effects. Faults can adversely affect the safety of the plant, its environmental impact and its economic operation. As such, fault diagnosis in process systems is an active area of research and development in both academia and industry.
The work reported in this thesis contributes to fault diagnosis by exploring the modelling and analysis of fault propagation and detection in process systems. This is done by posing and answering three research questions. What are the necessary ingredients of a fault diagnosis model? What information should a fault diagnosis model yield? Finally, what types of model are appropriate to fault diagnosis?
To answer these questions, the underlying assumption of the research is that the behaviour of a process system arises from its causal structure. On this basis, the research presented in this thesis develops a two-level approach to fault diagnosis based on detailed process information, together with modelling and analysis techniques for representing causality.
In the first instance, a qualitative approach called the level 1 fusion is developed. The level 1 fusion models the detailed causality of the system using digraphs and constitutes a causal map of the process. Such causal maps can be searched to discover and analyse fault propagation paths through the process.
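The kind of search a causal map supports can be sketched in a few lines: given a digraph as adjacency lists, enumerate the simple paths from a fault origin to an observed symptom. The plant variables below are hypothetical placeholders, not taken from the thesis.

```python
def propagation_paths(digraph, fault, symptom):
    """Enumerate all simple fault propagation paths from a fault origin
    to an observed symptom in a causal digraph (adjacency-list form)."""
    paths, stack = [], [(fault, [fault])]
    while stack:
        node, path = stack.pop()
        if node == symptom:
            paths.append(path)
            continue
        for nxt in digraph.get(node, []):
            if nxt not in path:           # simple paths only
                stack.append((nxt, path + [nxt]))
    return paths

# Hypothetical causal map: a pump fault propagates to low flow, then to
# either low level or high temperature, both of which trigger an alarm.
plant = {
    "pump_fault": ["low_flow"],
    "low_flow": ["low_level", "high_temp"],
    "low_level": ["alarm"],
    "high_temp": ["alarm"],
}
paths = propagation_paths(plant, "pump_fault", "alarm")
```

For this toy map the search finds two propagation paths, one through each intermediate deviation; on a real level 1 fusion the same traversal yields the candidate routes a fault can take through the plant.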
By directly building on the level 1 fusion, a quantitative level 2 fusion is developed which uses a type of digraph called a Bayesian network. By associating process variables with fault variables, and using conditional probability theory, it is shown how measured effects can be used to calculate and rank the probability of candidate causes.
The novel contributions are the development of a systematic approach to fault diagnosis based on modelling the chemistry, physics, and architecture of the process. It is also shown how the control and instrumentation system constrains the causality of the process. By demonstrating how digraph models can be reversed, it is shown how both cause-to-effect and effect-to-cause analysis can be carried out.
In answering the three research questions, this research shows that it is feasible to gain detailed insights into fault propagation by qualitatively modelling the physical causality of the process system. It is also shown that a qualitative fault diagnosis model can be used as the basis for a quantitative fault diagnosis model.
On coding labeled trees
Trees are probably the most studied class of graphs in Computer Science. In this thesis we study bijective codes that represent labeled trees by means of strings of node labels. We contribute to the understanding of their algorithmic tractability, their properties, and their applications.
The thesis is divided into two parts. In the first part we focus on two types of tree codes, namely Prüfer-like codes and transformation codes. We study optimal encoding and decoding algorithms, both in a sequential and in a parallel setting. We propose a unified approach that works for all Prüfer-like codes, and a more general scheme, based on the transformation of a tree into a functional digraph, suitable for all bijective codes. Our results in this area close a variety of open problems.
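As a concrete reference point for readers unfamiliar with Prüfer-like codes, the classical Prüfer bijection between labeled trees on n nodes and sequences of length n-2 can be sketched as follows. This is the straightforward quadratic version, not the optimal algorithms studied in the thesis.

```python
def prufer_encode(n, edges):
    """Encode a labeled tree on nodes 1..n (given as an edge list) as
    its Pruefer sequence: repeatedly remove the smallest leaf and record
    its neighbour, until two nodes remain."""
    adj = {v: set() for v in range(1, n + 1)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seq = []
    for _ in range(n - 2):
        leaf = min(v for v in adj if len(adj[v]) == 1)
        parent = adj[leaf].pop()
        adj[parent].discard(leaf)
        del adj[leaf]
        seq.append(parent)
    return seq

def prufer_decode(seq):
    """Rebuild the tree from a Pruefer sequence (the inverse mapping):
    a node's degree is one plus its number of occurrences in seq."""
    n = len(seq) + 2
    degree = {v: 1 for v in range(1, n + 1)}
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        leaf = min(u for u in degree if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1                 # leaf is removed from the tree
        degree[v] -= 1
    last = [u for u in degree if degree[u] == 1]
    edges.append((last[0], last[1]))      # join the final two nodes
    return edges

seq = prufer_encode(4, [(1, 2), (2, 3), (3, 4)])   # the path 1-2-3-4
tree = prufer_decode(seq)                          # recovers the edges
```

Encoding the path 1-2-3-4 yields the sequence [2, 3], and decoding recovers the same edge set, illustrating the bijection between trees and label strings that the thesis generalizes.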
We also consider possible applications of tree encodings, discussing how to exploit these codes in Genetic Algorithms and in the generation of random trees. Moreover, we introduce a modified version of a known code that, in Genetic Algorithms, outperforms all the other known codes.
In the second part of the thesis we focus on two possible generalizations of our work. We first take into account the classes of k-trees and k-arch graphs (both superclasses of trees): we study bijective codes for these classes of graphs and their algorithmic feasibility. Then, we shift our attention to Informative Labeling Schemes. In this context labels are no longer considered simple unique node identifiers; rather, they convey information useful for efficient computations on the tree. We exploit this idea to design a concurrent data structure for the lowest common ancestor problem on dynamic trees.
We also present an experimental comparison between our labeling scheme and the one proposed by Peleg for static trees.
Testability Analysis and Improvements of Register-Transfer Level Digital Circuits
The paper presents a novel testability analysis method applicable to register-transfer level digital circuits. It is shown that if each module stored in a design library is equipped with both design-related and test-related information, then more accurate testability results can be achieved. A mathematical model based on the virtual port concept is utilized to describe this information and the proposed testability analysis method. To be effective, the method is based on the idea of searching two special digraphs developed for this purpose. Experimental results obtained by the method are presented and compared with results of existing methods.
Dynamic Modeling, Sensor Placement Design, and Fault Diagnosis of Nuclear Desalination Systems
Fault diagnosis of sensors, devices, and equipment is an important topic in the nuclear industry for effective and continuous operation of nuclear power plants. All fault diagnostic approaches depend critically on the sensors that measure important process variables. Whenever a process encounters a fault, the effect of the fault propagates to some or all of the process variables. The ability of the sensor network to detect and isolate failure modes and anomalous conditions is crucial for the effectiveness of a fault detection and isolation (FDI) system. However, the emphasis of most fault diagnostic approaches found in the literature is primarily on the procedures for performing FDI using a given set of sensors. Little attention has been given to actual sensor allocation for achieving efficient FDI performance. This dissertation presents a graph-based approach that serves as a solution for the optimization of sensor placement to ensure the observability of faults, as well as fault resolution to the maximum possible extent. This would potentially facilitate an automated sensor allocation procedure. Principal component analysis (PCA), a multivariate data-driven technique, is used to capture the relationships in the data and to fit a hyper-plane to the data. The fault directions for different fault scenarios are obtained from the prediction errors, and fault isolation is then accomplished using new projections on these fault directions. The effectiveness of the use of an optimal sensor set versus a reduced set for fault detection and isolation is demonstrated using this technique.
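The PCA step can be sketched as follows: fit a hyper-plane to normal operating data via the SVD, take the prediction error (residual) of a new sample, and compare it against a stored fault direction. The data, sensor count, and fault signature below are synthetic assumptions for illustration only, not the dissertation's plant model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normal operating data: 200 samples of 4 correlated
# sensors, constructed to lie on a 2-D hyper-plane.
t = rng.normal(size=(200, 2))
X = t @ rng.normal(size=(2, 4))
X -= X.mean(axis=0)

# Fit the hyper-plane with PCA via SVD; keep the top 2 components.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:2].T                              # loadings, shape (4, 2)

def residual(x):
    """Prediction error: the part of x the PCA model cannot explain."""
    return x - P @ (P.T @ x)

# Stored fault direction: residual signature of a bias on sensor 0
# (assumed known from past faulty data), normalized to unit length.
fault_dir = residual(np.array([1.0, 0.0, 0.0, 0.0]))
fault_dir /= np.linalg.norm(fault_dir)

# A new faulty sample: normal behaviour plus a bias on sensor 0.
x_fault = X[0] + 5.0 * np.array([1.0, 0.0, 0.0, 0.0])
r = residual(x_fault)
match = abs(r @ fault_dir) / np.linalg.norm(r)   # cosine similarity
```

Normal samples produce a near-zero residual, while the faulty sample's residual aligns almost perfectly with the stored fault direction, which is the projection-based isolation idea the abstract describes.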
Among a variety of desalination technologies, multi-stage flash (MSF) processes contribute substantially to the world's desalination capacity. In this dissertation, both steady-state and dynamic simulation models of an MSF desalination plant are developed. The dynamic MSF model is coupled with a previously developed International Reactor Innovative and Secure (IRIS) model in the SIMULINK environment. The developed sensor placement design and fault diagnostic methods are illustrated with application to the coupled nuclear desalination system. The results demonstrate the effectiveness of the newly developed integrated approach to performance monitoring and fault diagnosis with optimized sensor placement for large industrial systems.
Robust Observation and Control of Complex Networks
The problem of understanding when the individual actions of interacting agents give rise to a coordinated collective behavior has received considerable attention in many research fields. Especially in control engineering, distributed applications in cooperative environments are achieving resounding success, due to the large number of relevant applications, such as formation control, attitude synchronization tasks and cooperative applications in large-scale systems.
Although these problems have been extensively studied in the literature, most classic approaches consider the unrealistic scenario in which networks always consist of identical, linear, time-invariant entities. It is clear that this assumption strongly approximates the effective behavior of a network: in fact, agents can be subject to parameter uncertainties, unmodeled dynamics, or simply characterized by their own nonlinear dynamics.
Therefore, motivated by those practical problems, the present thesis proposes various approaches for dealing with the problem of observation and control in both the framework of multi-agent systems and that of complex interconnected systems. The main contributions of this thesis consist in the development of several algorithms based on concepts of discontinuous sliding-mode control. These techniques can be employed for solving, in finite time, problems of robust state estimation and consensus-based synchronization in networks of heterogeneous nonlinear systems subject to unknown but bounded disturbances and sudden topological changes.
Both directed and undirected topologies have been taken into account. Also worth mentioning is the extension of the consensus problem to networks of agents governed by a class of parabolic partial differential equations, for which, for the first time, a boundary-based robust local interaction protocol has been presented.
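For contrast with the robust sliding-mode protocols the thesis develops, the baseline linear consensus dynamics on an undirected graph can be sketched as a textbook Laplacian iteration. The four-node path topology, initial states, and step size below are hypothetical choices for illustration.

```python
import numpy as np

# Adjacency matrix of an undirected 4-node path graph 1-2-3-4.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

x = np.array([0.0, 1.0, 2.0, 7.0])        # initial agent states
eps = 0.25                                # step size below 1 / max degree
for _ in range(200):
    x = x - eps * (L @ x)                 # each agent moves toward its neighbours
```

For identical integrator agents on a connected undirected graph, the states converge to the average of the initial conditions (here 2.5). The thesis addresses the far harder setting of heterogeneous nonlinear agents under bounded disturbances and switching topologies, where this simple protocol fails.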