
    Reliability Abstracts and Technical Reviews January - December 1970

    Reliability Abstracts and Technical Reviews is an abstract and critical analysis service covering published and report literature on reliability. The service is designed to provide information on the theory and practice of reliability as applied to aerospace, and an objective appraisal of the quality, significance, and applicability of the literature abstracted.

    Probabilistic analysis of the human transcriptome with side information

    Understanding the functional organization of genetic information is a major challenge in modern biology. Following the initial publication of the human genome sequence in 2001, advances in high-throughput measurement technologies and efficient sharing of research material through community databases have opened up new perspectives on the study of living organisms and the structure of life. In this thesis, novel computational strategies have been developed to investigate a key functional layer of genetic information, the human transcriptome, which regulates the function of living cells through protein synthesis. The key contributions of the thesis are general exploratory tools for high-throughput data analysis that have provided new insights into cell-biological networks, cancer mechanisms and other aspects of genome function. A central challenge in functional genomics is that high-dimensional genomic observations are associated with high levels of complex and largely unknown sources of variation. By combining statistical evidence across multiple measurement sources and the wealth of background information in genomic data repositories, it has been possible to resolve some of the uncertainties associated with individual observations and to identify functional mechanisms that could not be detected based on individual measurement sources. Statistical learning and probabilistic models provide a natural framework for such modeling tasks. Open source implementations of the key methodological contributions have been released to facilitate further adoption of the developed methods by the research community. Doctoral thesis, 103 pages, 11 figures.
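
    The core idea of combining statistical evidence across measurement sources can be illustrated with a minimal conjugate-Gaussian sketch (not the thesis's actual models; the platform names, replicate counts and noise levels below are invented for illustration): each source observes the same latent expression level with its own noise, and sources are folded into a posterior by precision weighting.

```python
import numpy as np

rng = np.random.default_rng(1)

true_expr = 2.0
sources = {                 # hypothetical platforms: (n_replicates, noise_sd)
    "array_A": (5, 1.0),
    "array_B": (3, 0.5),
    "rnaseq_C": (2, 0.2),
}

post_mean, post_prec = 0.0, 1.0   # standard-normal prior on the latent level
for name, (n, sd) in sources.items():
    obs = true_expr + sd * rng.standard_normal(n)   # simulated measurements
    prec = n / sd**2                                # precision this source contributes
    post_mean = (post_prec * post_mean + prec * obs.mean()) / (post_prec + prec)
    post_prec += prec

# Noisier platforms are automatically down-weighted relative to precise ones.
print(f"posterior mean {post_mean:.3f}, sd {post_prec**-0.5:.3f}")
```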

    Computational and Near-Optimal Trade-Offs in Renewable Electricity System Modelling

    In the decades to come, the European electricity system must undergo an unprecedented transformation to avert the devastating impacts of climate change. To devise various possibilities for achieving a sustainable yet cost-efficient system, in the thesis at hand, we solve large optimisation problems that coordinate the siting of generation, storage and transmission capacities. In doing so, it is critical to capture the weather-dependent variability of wind and solar power as well as transmission bottlenecks. In addition to modelling at high spatial and temporal resolution, this requires a detailed representation of the electricity grid. However, since the resulting computational challenges limit what can be investigated, compromises on model accuracy must be made, and methods from informatics become increasingly relevant to formulate models efficiently and to compute many scenarios. The first part of the thesis is concerned with justifying such trade-offs between model detail and solving times. The main research question is how to circumvent some of the challenging non-convexities introduced by transmission network representations in joint capacity expansion models while still capturing the core grid physics. We first examine tractable linear approximations of power flow and transmission losses. Subsequently, we develop an efficient reformulation of the discrete transmission expansion planning (TEP) problem based on a cycle decomposition of the network graph, which conveniently also accommodates grid synchronisation options. Because discrete investment decisions aggravate the problem's complexity, we also cover simplifying heuristics that make use of sequential linear programming (SLP) and retrospective discretisation techniques. In the second half, we investigate other trade-offs, namely between least-cost and near-optimal solutions. We systematically explore broad ranges of technologically diverse system configurations that are viable without compromising the system's overall cost-effectiveness. For example, we present solutions that avoid installing onshore wind turbines, bypass new overhead transmission lines, or feature a more regionally balanced distribution of generation capacities. Such alternative designs may be more widely socially accepted, and, thus, knowing about these degrees of freedom is highly policy-relevant. The method we employ to span the space of near-optimal solutions is related to modelling-to-generate-alternatives, a variant of multi-objective optimisation. The robustness of our results is further strengthened by considering technology cost uncertainties. To efficiently sweep the cost parameter space, we leverage multi-fidelity surrogate modelling techniques using sparse polynomial chaos expansion in combination with low-discrepancy sampling and extensive parallelisation on high-performance computing infrastructure.
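
    The near-optimal ("modelling-to-generate-alternatives") idea can be sketched on a toy two-technology LP, assuming SciPy is available; the technologies, costs and 5% cost slack below are illustrative stand-ins for the full capacity-expansion model. One first solves for least cost, then re-optimises an alternative objective under a total-cost budget.

```python
import numpy as np
from scipy.optimize import linprog

# Toy capacity-expansion LP: meet 100 units of demand from "wind" (cheap)
# and "solar" (pricier). Costs and demand are invented for illustration.
cost = np.array([1.0, 1.2])
A_ub = np.array([[-1.0, -1.0]])      # wind + solar >= 100, as -x1 - x2 <= -100
b_ub = np.array([-100.0])

opt = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
c_star = opt.fun                     # least-cost solution builds only wind

# Near-optimal exploration: allow 5% cost slack, then minimise wind capacity
# (analogous to the "avoid onshore wind" designs discussed in the abstract).
eps = 0.05
A_mga = np.vstack([A_ub, cost])      # append a total-cost budget row
b_mga = np.append(b_ub, (1 + eps) * c_star)
alt = linprog([1.0, 0.0], A_ub=A_mga, b_ub=b_mga, bounds=[(0, None)] * 2)

print("least cost:", opt.x, "at cost", c_star)
print("min-wind design within 5% slack:", alt.x, "at cost", cost @ alt.x)
```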

    Approximate Newton Methods for Policy Search in Markov Decision Processes

    Approximate Newton methods are standard optimization tools which aim to maintain the benefits of Newton's method, such as a fast rate of convergence, while alleviating its drawbacks, such as computationally expensive calculation or estimation of the inverse Hessian. In this work we investigate approximate Newton methods for policy optimization in Markov decision processes (MDPs). We first analyse the structure of the Hessian of the total expected reward, which is a standard objective function for MDPs. We show that, like the gradient, the Hessian exhibits useful structure in the context of MDPs, and we use this analysis to motivate two Gauss-Newton methods for MDPs. Like the Gauss-Newton method for non-linear least squares, these methods drop certain terms in the Hessian. The approximate Hessians possess desirable properties, such as negative definiteness, and we demonstrate several important performance guarantees, including guaranteed ascent directions, invariance to affine transformation of the parameter space, and convergence guarantees. Finally, we provide a unifying perspective on key policy search algorithms, demonstrating that our second Gauss-Newton algorithm is closely related to both the EM algorithm and natural gradient ascent applied to MDPs, but performs significantly better in practice on a range of challenging domains.
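
    As a hedged illustration of the approximate Newton idea (not the thesis's exact Gauss-Newton construction), the sketch below runs a damped natural-gradient update on a two-armed bandit, using the Fisher information as the Hessian surrogate; the abstract notes that natural gradient ascent is closely related to the second Gauss-Newton method. The bandit, rewards and damping constant are invented for illustration.

```python
import numpy as np

# Two-armed bandit with softmax policy pi(a | theta) and fixed rewards r[a].
r = np.array([1.0, 2.0])

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def grad_J(theta):
    """Exact gradient of the expected reward J(theta) = sum_a pi(a) r[a]."""
    p = softmax(theta)
    return (np.diag(p) - np.outer(p, p)) @ r   # softmax Jacobian times r

def fisher(theta):
    """Fisher information E[grad log pi grad log pi^T]: a positive
    semi-definite curvature surrogate, as in natural gradient ascent."""
    p = softmax(theta)
    F = np.zeros((2, 2))
    for a in range(2):
        score = np.eye(2)[a] - p               # grad_theta log pi(a | theta)
        F += p[a] * np.outer(score, score)
    return F

theta = np.zeros(2)
for _ in range(50):
    # Damped approximate Newton step: solve (F + eps I) d = grad, ascend along d.
    d = np.linalg.solve(fisher(theta) + 1e-3 * np.eye(2), grad_J(theta))
    theta += d

print(softmax(theta))   # policy concentrates on the higher-reward arm
```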

    Learning Bayesian network equivalence classes using ant colony optimisation

    Bayesian networks have become an indispensable tool in the modelling of uncertain knowledge. Conceptually, they consist of two parts: a directed acyclic graph called the structure, and conditional probability distributions attached to each node, known as the parameters. As a result of their expressiveness, understandability and rigorous mathematical basis, Bayesian networks have become one of the first methods investigated when faced with an uncertain problem domain. However, a recurring problem persists in specifying a Bayesian network. Both the structure and parameters can be difficult for experts to conceive, especially if their knowledge is tacit. To counteract these problems, research has been ongoing on learning both the structure and parameters of Bayesian networks from data. Whilst there are simple methods for learning the parameters, learning the structure has proved harder. Part of this stems from the NP-hardness of the problem and the super-exponential space of possible structures. To help solve this task, this thesis seeks to employ a relatively new technique that has had much success in tackling NP-hard problems. This technique is called ant colony optimisation. Ant colony optimisation is a metaheuristic based on the behaviour of ants acting together in a colony. It uses the stochastic activity of artificial ants to find good solutions to combinatorial optimisation problems. In the current work, this method is applied to the problem of searching through the space of equivalence classes of Bayesian networks, in order to find a good match against a set of data. The system uses operators that evaluate potential modifications to a current state. Each of the modifications is scored and the results used to inform the search. In order to facilitate these steps, other techniques are also devised to speed up the learning process. The techniques are tested by sampling data from gold standard networks and learning structures from this sampled data. These structures are analysed using various goodness-of-fit measures to see how well the algorithms perform. The measures include structural similarity metrics and Bayesian scoring metrics. The results are compared in depth against systems that also use ant colony optimisation and other methods, including evolutionary programming and greedy heuristics. Also, comparisons are made to well-known state-of-the-art algorithms, and a study is performed on a real-life data set. The results show favourable performance compared to the other methods and on modelling the real-life data.
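
    A generic ant colony optimisation skeleton can convey the search mechanism: pheromone trails bias the stochastic construction of solutions and are reinforced in proportion to solution quality. The sketch below uses a toy travelling-salesman instance rather than the space of Bayesian-network equivalence classes, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-city TSP standing in for the structure-search space; in the thesis
# the solution components would instead be scored modifications to a
# Bayesian-network equivalence class.
dist = np.array([[0, 2, 9, 10],
                 [2, 0, 6, 4],
                 [9, 6, 0, 3],
                 [10, 4, 3, 0]], float)
n = len(dist)
tau = np.ones((n, n))              # pheromone trails
alpha, beta, rho = 1.0, 2.0, 0.1   # pheromone weight, heuristic weight, evaporation

def build_tour():
    """One ant constructs a tour, biased by pheromone and a greedy heuristic."""
    tour = [0]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        w = tau[i, choices] ** alpha * (1.0 / dist[i, choices]) ** beta
        tour.append(rng.choice(choices, p=w / w.sum()))
    return tour

def length(tour):
    return sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))

best, best_len = None, np.inf
for _ in range(100):
    tours = [build_tour() for _ in range(10)]   # one tour per ant
    tau *= 1 - rho                              # pheromone evaporation
    for t in tours:
        L = length(t)
        if L < best_len:
            best, best_len = t, L
        for k in range(n):                      # deposit proportional to quality
            i, j = t[k], t[(k + 1) % n]
            tau[i, j] += 1.0 / L
            tau[j, i] += 1.0 / L

print(best, best_len)
```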

    Contextual Analysis of Large-Scale Biomedical Associations for the Elucidation and Prioritization of Genes and their Roles in Complex Disease

    Vast amounts of biomedical associations are easily accessible in public resources, spanning gene-disease associations, tissue-specific gene expression, gene function and pathway annotations, and many other data types. Despite this mass of data, information most relevant to the study of a particular disease remains loosely coupled and difficult to incorporate into ongoing research. Current public databases are difficult to navigate and do not interoperate well due to the plethora of interfaces and varying biomedical concept identifiers used. Because no coherent display of data within a specific problem domain is available, finding the latent relationships associated with a disease of interest is impractical. This research describes a method for extracting the contextual relationships embedded within associations relevant to a disease of interest. After applying the method to a small test data set, a large-scale integrated association network is constructed for application of a network propagation technique that helps uncover more distant latent relationships. Together these methods are adept at uncovering highly relevant relationships without any a priori knowledge of the disease of interest. The combined contextual search and relevance methods power a tool which makes pertinent biomedical associations easier to find, easier to assimilate into ongoing work, and more prominent than in currently available databases. Increasing the accessibility of current information is an important component of understanding high-throughput experimental results and surviving the data deluge.
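
    The network propagation step can be illustrated with a minimal random-walk-with-restart sketch on a toy association network; the adjacency matrix, seed node and restart probability below are invented for illustration. Relevance flows outward from known disease genes, so indirectly connected nodes also accumulate score.

```python
import numpy as np

# Toy undirected association network standing in for the integrated network.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
W = A / A.sum(axis=0)                      # column-normalised transition matrix
seed = np.array([1, 0, 0, 0, 0], float)    # prior relevance, e.g. a known disease gene
restart = 0.3                              # probability of jumping back to the seed

p = seed.copy()
for _ in range(100):                       # iterate to the stationary distribution
    p_next = (1 - restart) * W @ p + restart * seed
    if np.abs(p_next - p).sum() < 1e-9:
        break
    p = p_next

print(np.argsort(-p))                      # nodes ranked by propagated relevance
```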

    Design of advanced primitives for secure multiparty computation : special shuffles and integer comparison

    In modern cryptography, the problem of secure multiparty computation is about the cooperation between mutually distrusting parties computing a given function. Each party holds some private information that should remain secret as much as possible throughout the computation. A large body of research initiated in the early 1980s has shown that any computable function can be evaluated using secure multiparty computation. Though these feasibility results are general, their applicability in practical situations is rather unsatisfactory. This thesis concerns the study of two particular cryptographic primitives, with a focus on efficiency. The first primitive studied is a generalization of verifiable shuffles of homomorphic encryptions, where the shuffler is only allowed to apply a permutation from a restricted set of permutations. In this thesis, we consider shuffles using permutations from a k-fragile set, meaning that any k input-output correspondences uniquely identify a permutation within the set. We provide verifiable shuffles restricted to the set of all rotations (1-fragile), affine transformations (2-fragile), and Möbius transformations (3-fragile). Applications of these special shuffles include fragile mixing, electronic elections, secure function evaluation using scrambled circuits, and secure integer comparison. Two approaches for verifiable rotations are presented. On the one hand, we use properties of the Discrete Fourier Transform (DFT) to express in a compact way that a rotation is applied in a shuffle. The solution is efficient, but imposes some mild restrictions on the parameters to allow DFT to work. On the other hand, we present a general solution that does not impose any parameter constraint and works on any homomorphic cryptosystem. These protocols for rotations are used to build efficient shuffling protocols for affine and Möbius transformations. The second primitive is secure integer comparison. In a general scenario, parties are given homomorphic encryptions of the bits of two integers and, after running a protocol, an encryption of a bit is produced, telling the result of the greater-than comparison of the two integers. This is a useful building block for higher-level protocols such as electronic voting, biometric authentication or electronic auctions. A study of the relationship of other problems to integer comparison is given as well. We present two types of solutions for integer comparison. Firstly, we consider an arithmetic circuit yielding secure protocols within the framework for multiparty computation based on threshold homomorphic cryptosystems. Our circuit achieves a good balance between round and computational complexities when compared to similar solutions in the literature. The second type of solution uses an intricate approach where different building blocks are used. A full analysis is made for the two-party case, where the efficiency of the resulting protocols compares favorably to other solutions and approaches.
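
    The arithmetic-circuit view of greater-than comparison can be sketched on plaintext bits; in the actual protocol, each addition and multiplication below would be evaluated over threshold homomorphic encryptions of the bits rather than in the clear. The bit-decomposition helper and circuit shape are illustrative, not the thesis's exact construction.

```python
def bits(v, n):
    """MSB-first decomposition of v into n bits (illustrative helper)."""
    return [(v >> (n - 1 - k)) & 1 for k in range(n)]

def gt(x_bits, y_bits):
    """Greater-than as an arithmetic circuit over bits: only +, - and *,
    which a homomorphic cryptosystem with multiplication gates can evaluate."""
    res = 0
    for x, y in zip(reversed(x_bits), reversed(y_bits)):  # LSB upward
        c = x * (1 - y)               # 1 iff x_i = 1 and y_i = 0
        e = 1 - x - y + 2 * x * y     # 1 iff x_i == y_i (equality as a polynomial)
        res = c + e * res             # the most significant differing bit decides
    return res

# Exhaustive check on 3-bit integers.
for x in range(8):
    for y in range(8):
        assert gt(bits(x, 3), bits(y, 3)) == int(x > y)
print("greater-than circuit verified on all 3-bit pairs")
```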