1,331 research outputs found

    Machine learning and its applications in reliability analysis systems

    In this thesis, we explore some aspects of Machine Learning (ML) and its application to Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss possible applications of ML in improving RA performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. For symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs are discussed. Based on the results of our survey, we suggest that the best current design of RAs embeds a model-based RA, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, further improvements can still be made through applications of Machine Learning. By implanting a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.
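    The FMEA risk-analysis function mentioned above can be illustrated with a small sketch. The failure modes and scores below are invented, and the Risk Priority Number (RPN = severity × occurrence × detection) is the conventional FMEA ranking device, not necessarily the scheme MORA itself uses:

```python
# Minimal FMEA-style risk ranking: each failure mode gets severity,
# occurrence and detection scores (1-10); RPN = S * O * D.
# Failure modes and scores are illustrative, not from the thesis.

def risk_priority(modes):
    """Return failure modes sorted by descending Risk Priority Number."""
    ranked = [(name, s * o * d) for name, (s, o, d) in modes.items()]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

failure_modes = {
    "pump seal leak":     (7, 4, 3),   # RPN 84
    "sensor drift":       (4, 6, 7),   # RPN 168
    "valve stuck closed": (9, 2, 2),   # RPN 36
}

for name, rpn in risk_priority(failure_modes):
    print(f"{name}: RPN={rpn}")
```

A learning element of the kind the thesis proposes would, among other things, revise such scores as new failure data accumulates.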

    LGEM+: a first-order logic framework for automated improvement of metabolic network models through abduction

    Scientific discovery in biology is difficult due to the complexity of the systems involved and the expense of obtaining high-quality experimental data. Automated techniques are a promising way to make scientific discoveries at the scale and pace required to model large biological systems. A key problem for 21st-century biology is to build a computational model of the eukaryotic cell. The yeast Saccharomyces cerevisiae is the best understood eukaryote, and genome-scale metabolic models (GEMs) are rich sources of background knowledge that we can use as a basis for automated inference and investigation. We present LGEM+, a system for automated abductive improvement of GEMs consisting of: a compartmentalised first-order logic framework for describing biochemical pathways (using curated GEMs as the expert knowledge source); and a two-stage hypothesis abduction procedure. We demonstrate that deductive inference on logical theories created using LGEM+, using the automated theorem prover iProver, can predict growth/no-growth of S. cerevisiae strains in minimal media. LGEM+ proposed 2094 unique candidate hypotheses for model improvement. We assess the value of the generated hypotheses using two criteria: (a) genome-wide single-gene essentiality prediction, and (b) constraint of flux-balance analysis (FBA) simulations. For (b) we developed an algorithm to integrate FBA with the logic model. We rank and filter the hypotheses using these assessments. We intend to test these hypotheses using the robot scientist Genesis, which is based around chemostat cultivation and high-throughput metabolomics. Comment: 15 pages, one figure, two tables, two algorithms.
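    The growth/no-growth prediction described above can be caricatured in a few lines: a hedged sketch of deductive inference over a toy metabolic model, standing in for the first-order reasoning LGEM+ delegates to iProver. The reactions, gene names and metabolites are invented:

```python
# Toy deductive growth prediction over a metabolic model.
# Each reaction is (required genes, substrates, products). A strain grows
# if, starting from the medium, iterated application of reactions whose
# genes are intact produces every biomass precursor. This forward-chaining
# loop is a simplified stand-in for theorem proving over a curated GEM;
# the model below is invented for illustration.

REACTIONS = [
    ({"G1"}, {"glucose"}, {"g6p"}),
    ({"G2"}, {"g6p"}, {"pyruvate"}),
    ({"G3"}, {"pyruvate", "nh4"}, {"alanine"}),
]
BIOMASS = {"pyruvate", "alanine"}

def grows(medium, knockouts=frozenset()):
    metabolites = set(medium)
    changed = True
    while changed:
        changed = False
        for genes, subs, prods in REACTIONS:
            if genes & knockouts:
                continue                      # reaction lost in this strain
            if subs <= metabolites and not prods <= metabolites:
                metabolites |= prods
                changed = True
    return BIOMASS <= metabolites

print(grows({"glucose", "nh4"}))                    # True: wild type grows
print(grows({"glucose", "nh4"}, knockouts={"G2"}))  # False: essential gene lost
```

A knockout that severs the only route to a biomass precursor yields a no-growth prediction; abduction then amounts to proposing model edits that flip wrong predictions.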

    A Bayesian Abduction Model For Sensemaking

    This research develops a Bayesian Abduction Model for Sensemaking Support (BAMSS) for information fusion in sensemaking tasks. Two methods are investigated. The first is classical Bayesian information fusion with belief updating (using a Bayesian clustering algorithm) and abductive inference. The second method uses a Genetic Algorithm (BAMSS-GA) to search for the k-best most probable explanations (MPEs) in the network. Using various data from the recent Iraq and Afghanistan conflicts, experimental simulations were conducted to compare the methods using posterior probability values, which can give insightful information for prospective sensemaking. The inference results demonstrate the utility of BAMSS as a computational model for sensemaking. The major results obtained are: (1) the inference results from BAMSS-GA gave average posterior probabilities that were 10^3 better than those produced by BAMSS; (2) BAMSS-GA gave more consistent posterior probabilities as measured by variances; and (3) BAMSS was able to give an MPE while BAMSS-GA was able to identify the optimal values for k-MPEs. In the experiments, out of 20 MPEs generated by BAMSS, BAMSS-GA was able to identify 7 plausible network solutions, requiring less information for sensemaking and reducing the inference search space by 7/20 (35%). The results reveal that a GA can be used successfully in Bayesian information fusion as a search technique to identify the significant posterior probabilities useful for sensemaking. BAMSS-GA was also more robust in overcoming the problem of bounded search that constrains Bayesian clustering and the inference state space in BAMSS.
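    The BAMSS-GA idea of searching assignment space for probable explanations can be sketched as follows. The three-variable network, CPTs and GA settings are invented for illustration (BAMSS-GA operates on much larger conflict-domain networks); the fitness of a candidate explanation is simply its joint probability with the evidence:

```python
import random

# GA search for the most probable explanation (MPE) in a toy Bayesian
# network A -> B -> D, A -> C, with evidence D = 1.

random.seed(0)

def joint(a, b, c):
    """P(A=a, B=b, C=c, D=1) from the toy CPTs."""
    p_a = 0.3 if a else 0.7
    p_b = (0.8 if b else 0.2) if a else (0.2 if b else 0.8)
    p_c = (0.6 if c else 0.4) if a else (0.1 if c else 0.9)
    p_d = 0.9 if b else 0.2                 # evidence D = 1
    return p_a * p_b * p_c * p_d

def ga_mpe(pop_size=20, gens=30, mut=0.1):
    pop = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(pop_size)]
    best = max(pop, key=lambda g: joint(*g))          # elitist best-ever
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 3), key=lambda g: joint(*g))  # tournament
            p2 = max(random.sample(pop, 3), key=lambda g: joint(*g))
            cut = random.randint(1, 2)                                # crossover
            child = list(p1[:cut] + p2[cut:])
            for i in range(3):                                        # mutation
                if random.random() < mut:
                    child[i] ^= 1
            nxt.append(tuple(child))
        pop = nxt
        best = max(pop + [best], key=lambda g: joint(*g))
    return best

best = ga_mpe()
print(best, round(joint(*best), 4))
```

Keeping the best-ever individual (elitism) is what lets the GA report an MPE even if a late mutation disturbs the final population; ranking the distinct high-fitness individuals gives a k-best list.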

    An integrative top-down and bottom-up qualitative model construction framework for exploration of biochemical systems

    The authors would like to thank the support of this research by the CRISP project (Combinatorial Responses In Stress Pathways) funded by the BBSRC (BB/F00513X/1) under the Systems Approaches to Biological Research (SABR) Initiative. Peer reviewed. Publisher PDF.

    The essence of things. Is there a methodological specificity in sociological knowledge?

    Scientific reasoning – presumed unique, perfect, objective – still rests its foundations solidly on the evident success (theoretical and practical) obtained over the centuries starting from Galilean intuition. Over time, the granitic belief that scientific success can depend exclusively on a single, simple principle of method has been slightly undermined; the idea still has a solid scientific basis, but demands for rethinking and for eclecticism in the methodological approach are becoming important. It therefore seems compelling to note the difference between the "inanimate" sciences – whose level of complexity has gradually grown, since the more a system of knowledge grows, the greater the dark areas still to be revealed – and the "animate" sciences, whose objects of interest are people, sensibility, ethics, behaviors and social actions.

    Models in Search of Targets: Exploratory Modelling and the Case of Turing Patterns

    Traditional frameworks for evaluating scientific models have tended to downplay their exploratory function; instead they emphasize how models are inherently intended for specific phenomena and are to be judged by their ability to predict, reproduce, or explain empirical observations. By contrast, this paper argues that exploration should stand alongside explanation, prediction, and representation as a core function of scientific models. Thus, models often serve as starting points for future inquiry, as proofs of principle, as sources of potential explanations, and as tools for reassessing the suitability of the target system (and sometimes of whole research agendas). This is illustrated by a case study of the varied career of reaction-diffusion models in the study of biological pattern formation, which was initiated by Alan Turing in a classic 1952 paper. Initially regarded as mathematically elegant but biologically irrelevant demonstrations of how, in principle, spontaneous pattern formation could occur in an organism, such Turing models have only recently rebounded, thanks to advances in experimental techniques and computational methods. The long-delayed vindication of Turing's initial model, it is argued, is best explained by recognizing it as an exploratory tool (rather than as a purported representation of an actual target system).
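    Turing's "in principle" demonstration is, at heart, a linear stability calculation: a homogeneous steady state that is stable without diffusion becomes unstable to spatial modes once the two species diffuse at sufficiently different rates. A sketch with an illustrative Jacobian (not Turing's original parameters):

```python
import math

# Dispersion relation for a two-species reaction-diffusion system
#   u_t = f(u,v) + Du u_xx,   v_t = g(u,v) + Dv v_xx.
# Linearised about a homogeneous steady state with Jacobian
# J = [[A11, A12], [A21, A22]], a spatial mode of wavenumber k grows at
# Re(lambda_max) of J - diag(Du k^2, Dv k^2). Values are illustrative.

A11, A12, A21, A22 = 1.0, -1.0, 3.0, -2.0   # stable without diffusion
DU, DV = 0.05, 1.0                           # activator diffuses slowly

def growth_rate(k):
    m11 = A11 - DU * k * k
    m22 = A22 - DV * k * k
    tr, det = m11 + m22, m11 * m22 - A12 * A21
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        return (tr + math.sqrt(disc)) / 2.0
    return tr / 2.0                          # complex pair: real part

rates = {k / 10: growth_rate(k / 10) for k in range(0, 81)}
print("k=0 stable:", growth_rate(0.0) < 0.0)        # True: no instability without space
print("some k grows:", max(rates.values()) > 0.0)   # True: diffusion-driven instability
```

The band of wavenumbers with positive growth rate sets the wavelength of the emerging pattern, which is exactly the "proof of principle" role the paper attributes to Turing's model.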

    A QoS based Routing Approach using Genetic Algorithms for Bandwidth Maximization in Network

    This paper addresses the path selection problem from a known source to the destination in dense networks. The proposed solution for route discovery uses a genetic algorithm approach for a QoS-based network. Multi-point crossover and mutation help in determining the optimal path and an alternate path when required. The input to the genetic algorithm is a learnt module, part of the cognitive router, that takes care of four QoS parameters. Here the set of nodes selected for routing is determined by delay, jitter and loss. On this graded surface of selected nodes, the bandwidth parameter is considered for path selection. The aim of the approach is to occupy the maximum bandwidth along the forward channels and minimize the route length. The population size is taken as the fixed set of nodes participating in the network scenario, limited to a known topology size. The simulated results show that by using the genetic algorithm (GA) approach the probability of convergence to the shortest path is higher. Comment: 13 pages, 3 figures, 5 tables. arXiv admin note: substantial text overlap with arXiv:1001.3920, arXiv:1408.1358; also substantial text overlap with http://cs-wwwarchiv.cs.unibas.ch/personen/sifalakis_manos/research/ms_setn2004.pdf and other sources without attribution.
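    The GA route-selection scheme can be sketched as follows, under assumptions the abstract does not spell out: fitness rewards the bottleneck (minimum-edge) bandwidth and penalises hop count, crossover splices two parent paths at a shared intermediate node, and mutation replaces a path with a fresh random one. The topology and bandwidths are invented:

```python
import random

# GA path selection maximising bottleneck bandwidth with a hop penalty.
# Topology and bandwidths below are invented for illustration.

random.seed(1)
BW = {("A", "B"): 10, ("B", "D"): 10, ("A", "C"): 5, ("C", "D"): 20, ("B", "C"): 8}
BW.update({(b, a): w for (a, b), w in list(BW.items())})   # undirected
NBRS = {}
for a, b in BW:
    NBRS.setdefault(a, set()).add(b)

def random_path(src, dst):
    while True:
        path, node = [src], src
        while node != dst:
            choices = [n for n in NBRS[node] if n not in path]
            if not choices:
                break                        # dead end: retry
            node = random.choice(choices)
            path.append(node)
        if node == dst:
            return path

def fitness(path, hop_penalty=0.5):
    bottleneck = min(BW[(a, b)] for a, b in zip(path, path[1:]))
    return bottleneck - hop_penalty * (len(path) - 1)

def crossover(p1, p2):
    shared = [n for n in p1[1:-1] if n in p2[1:-1]]
    if not shared:
        return p1
    n = random.choice(shared)
    child = p1[:p1.index(n)] + p2[p2.index(n):]
    return child if len(set(child)) == len(child) else p1  # reject loops

def ga_route(src, dst, pop_size=10, gens=15):
    pop = [random_path(src, dst) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        pop = [crossover(max(random.sample(pop, 2), key=fitness),
                         max(random.sample(pop, 2), key=fitness))
               for _ in range(pop_size)]
        pop = [random_path(src, dst) if random.random() < 0.2 else p
               for p in pop]                               # mutation
        best = max(pop + [best], key=fitness)
    return best

best_route = ga_route("A", "D")
print(best_route)
```

Here A-B-D wins: its bottleneck of 10 beats the wider but longer detours once the hop penalty is applied, which mirrors the paper's goal of maximising bandwidth while minimising route length.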

    Data-driven & Theory-driven Science : Artificial Realities and Applications to Savings Groups

    Paper I and Paper II are not yet published; they are excluded from the dissertation until they are published. The scientific process is neither unique nor nomic. Two processes of scientific inquiry are theory-driven and data-driven science. This dissertation analyzes savings groups using theory-driven and data-driven methods. Simulated realities, based on data-driven theory, are used to understand the emerging dynamics of savings groups. Savings groups are grassroots, community-based organizations composed of 15 to 30 members. These organizations, usually supported by international development agencies, hold weekly meetings during a cycle of operations that typically lasts a year. In the groups, savings are kept in two funds: a fund for loans and a social welfare fund that covers life-cycle events. The findings of Papers A to D in this dissertation provide new large-sample evidence about savings groups, their dynamics, and the factors affecting their financial performance. In practice, the results of Papers A to D shed light on the best policies to promote sustainable development with informal finance in a cost-effective way. A theory-driven approach indicates that the social fund in savings groups stimulates loan allocation among risk-sharing members while implicitly covering idiosyncratic risks (Paper A). A data-driven approach based on Bayesian data mining reveals that the macroeconomic environment and the facilitation model of development agencies strongly influence the profit-generating capacity of savings groups (Paper B). Machine-learning methods further show that business training is not the most frequent program implemented by development agencies, but it is in fact the most powerful intervention to encourage profits, particularly when a development agency stops working with a group and leaves a community (Paper C). Finally, the simulation of a village with artificial agents indicates that the businesses of savings groups can earn higher profits owing to the consolidation of social capital and the competitive advantage created through a process of homophily (Paper D). Metatheoretically, the theory-driven and data-driven approaches of this dissertation, and the complementarity between them, contribute to the epistemology of data-intensive science. The dissertation concludes that the gestaltic and quasi-teleological explanations of the data-driven approach help in the formulation of theories through inductive and abductive reasoning. Published version.
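    The two-fund mechanism described above can be made concrete with a toy one-cycle ledger; all figures are invented and the accounting is deliberately simplified (equal savers, a single loan):

```python
# Toy one-cycle ledger for a savings group: members save weekly into a
# loan fund and a social fund; loans repay with interest into the loan
# fund; at share-out the loan fund is split among the members.
# All figures are invented for illustration.

WEEKS, MEMBERS = 52, 20
SAVE, SOCIAL = 2.0, 0.5          # weekly contributions per member
LOAN, RATE = 100.0, 0.10         # one illustrative loan at 10% interest

loan_fund = WEEKS * MEMBERS * SAVE
social_fund = WEEKS * MEMBERS * SOCIAL
loan_fund += LOAN * RATE                   # interest earned during the cycle

share_out = loan_fund / MEMBERS            # equal savers share equally
print(f"loan fund: {loan_fund:.2f}, social fund: {social_fund:.2f}")
print(f"share-out per member: {share_out:.2f}")
```

Interest on internal loans is what makes the share-out exceed the sum of deposits, which is the profit-generating capacity the dissertation's Papers B and C study.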

    A framework for modelling Molecular Interaction Maps

    Metabolic networks, formed by a series of metabolic pathways, are made of intracellular and extracellular reactions that determine the biochemical properties of a cell, and of a set of interactions that guide and regulate the activity of these reactions. Most of these pathways form an intricate and complex network of chain reactions, and can be represented in a human-readable form using graphs which describe the cell cycle checkpoint pathways. This paper proposes a method to represent Molecular Interaction Maps (graphical representations of complex metabolic networks) in Linear Temporal Logic. The logical representation of such networks allows one to reason about them, in order to check, for instance, whether a graph satisfies a given property φ, as well as to find out which initial conditions would guarantee φ, or else how the graph can be updated in order to satisfy φ. Both the translation and resolution methods have been implemented in a tool capable of addressing such questions thanks to a reduction to propositional logic which allows exploiting classical SAT solvers. Comment: 31 pages, 12 figures.
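    A drastically simplified propositional stand-in for this kind of reasoning: species become Boolean variables, interactions become synchronous update rules, and the LTL-style property "F φ" (φ eventually holds) is checked by walking the deterministic state sequence. The actual tool reduces full LTL to SAT; the rules here are invented:

```python
# Species: (signal, kinase, substrate_p, inhibitor), all Boolean.
# Synchronous update rules standing in for a tiny interaction map.

def step(s):
    signal, kinase, sub_p, inhib = s
    return (signal,                  # external input held fixed
            signal,                  # kinase activated by signal
            kinase and not inhib,    # substrate phosphorylated unless inhibited
            inhib)                   # inhibitor held fixed

def eventually(phi, state, max_steps=10):
    """Check the LTL-style property 'F phi' along the deterministic run."""
    seen = set()
    while state not in seen and len(seen) <= max_steps:
        if phi(state):
            return True
        seen.add(state)
        state = step(state)
    return phi(state)                # run has looped: no new states ahead

# Is the substrate eventually phosphorylated, with and without inhibitor?
print(eventually(lambda s: s[2], (True, False, False, False)))  # True
print(eventually(lambda s: s[2], (True, False, False, True)))   # False
```

Asking which initial conditions guarantee φ, or how to edit the rules so that φ holds, are the abductive questions the paper answers by handing the propositional encoding to a SAT solver.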

    Towards autonomous diagnostic systems with medical imaging

    Democratizing access to high quality healthcare has highlighted the need for autonomous diagnostic systems that a non-expert can use. Remote communities, first responders and even deep space explorers will come to rely on medical imaging systems that will provide them with Point of Care diagnostic capabilities. This thesis introduces the building blocks that would enable the creation of such a system. Firstly, we present a case study in order to further motivate the need and requirements of autonomous diagnostic systems. This case study primarily concerns deep space exploration where astronauts cannot rely on communication with earth-bound doctors to help them through diagnosis, nor can they make the trip back to earth for treatment. Requirements and possible solutions about the major challenges faced with such an application are discussed. Moreover, this work describes how a system can explore its perceived environment by developing a Multi Agent Reinforcement Learning method that allows for implicit communication between the agents. Under this regime agents can share the knowledge that benefits them all in achieving their individual tasks. Furthermore, we explore how systems can understand the 3D properties of 2D depicted objects in a probabilistic way. In Part II, this work explores how to reason about the extracted information in a causally enabled manner. A critical view on the applications of causality in medical imaging, and its potential uses is provided. It is then narrowed down to estimating possible future outcomes and reasoning about counterfactual outcomes by embedding data on a pseudo-Riemannian manifold and constraining the latent space by using the relativistic concept of light cones. By formalizing an approach to estimating counterfactuals, a computationally lighter alternative to the abduction-action-prediction paradigm is presented through the introduction of Deep Twin Networks. 
Appropriate partial identifiability constraints for categorical variables are derived and the method is applied in a series of medical tasks involving structured data, images and videos. All methods are evaluated in a wide array of synthetic and real-life tasks that showcase their abilities, often achieving or matching state-of-the-art performance while requiring a fraction of the computational cost. Open Access.
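    The twin-network construction for counterfactuals can be shown on a one-equation structural causal model: the factual and counterfactual copies share the exogenous noise, and the intervention is applied only in the twin. The SCM is invented for illustration; Deep Twin Networks replace these closed-form equations with learned neural networks:

```python
# Twin-network counterfactual on a toy structural causal model:
#   X = U_x,   Y = 2*X + U_y
# Both branches share the exogenous noise; do(X = x_star) applies
# only in the twin copy. Model and numbers are illustrative.

def twin_counterfactual(x_obs, y_obs, x_star):
    # Abduction is shared between the branches: recover U_y from the
    # factual world, since Y = 2*X + U_y  =>  U_y = y_obs - 2*x_obs.
    u_y = y_obs - 2.0 * x_obs
    # Twin branch: same noise, intervened X.
    return 2.0 * x_star + u_y

# Observed (X=1, Y=2.5); what would Y have been under do(X=3)?
print(twin_counterfactual(1.0, 2.5, 3.0))   # 6.5
```

Because the shared-noise wiring is built into the network once, a single forward pass answers the counterfactual query, which is why the twin construction is computationally lighter than running abduction, action and prediction as three separate steps.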