295 research outputs found

    Optimization of Critical Infrastructure with Fluids

    Full text link
    Many of the world's most critical infrastructure systems control the motion of fluids. Despite their importance, the design, operation, and restoration of these infrastructures are sometimes carried out suboptimally. One reason for this is the intractability of optimization problems involving fluids, which are often constrained by partial differential equations or nonconvex physics. To address these challenges, this dissertation focuses on developing new mathematical programming and algorithmic techniques for optimization problems involving difficult nonlinear constraints that model a fluid's behavior. These new contributions bring many important problems within the realm of tractability. The first focus of this dissertation is on surface water systems. Specifically, we introduce the Optimal Flood Mitigation Problem, which optimizes the positioning of structural measures to protect critical assets with respect to a predefined flood scenario. Two solution approaches are then developed. The first leverages mathematical programming but does not tractably scale to realistic scenarios. The second uses a physics-inspired metaheuristic, which is found to compute good-quality solutions for realistic scenarios. The second focus is on potable water distribution systems. Two foundational problems are considered. The first is the optimal water network design problem, for which we derive a novel convex reformulation, then develop an algorithm found to be more effective than the current state of the art on select instances. The second is the optimal pump scheduling (or Optimal Water Flow) problem, for which we develop a mathematical programming relaxation and various algorithmic techniques to improve convergence. The final focus is on natural gas pipeline systems. Two novel problems are considered. The first is the Maximal Load Delivery (MLD) problem for gas pipelines, which aims at finding a feasible steady-state operating point that maximizes load delivery for a severely damaged gas network. The second is the joint gas-power MLD problem, which couples damaged gas and power networks at gas-fired generators. In both problems, convex relaxations of nonconvex dynamical constraints are developed to increase tractability.
    PhD, Industrial & Operations Engineering
    University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/169849/1/tasseff_1.pd
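
    The convex relaxations referenced in the abstract above target nonconvex constraints such as products of bounded variables. As a minimal illustration of the general technique (a sketch, not the dissertation's actual formulation), the Python snippet below checks membership in the McCormick envelope, a standard convex relaxation of a bilinear term w = x*y over box-bounded variables; the function name and numeric bounds are hypothetical.

    # McCormick envelope of w = x*y for x in [xl, xu], y in [yl, yu]:
    # two linear under-estimators and two linear over-estimators of w.
    def in_mccormick_envelope(x, y, w, xl, xu, yl, yu, tol=1e-9):
        lo1 = xl * y + x * yl - xl * yl   # under-estimator anchored at (xl, yl)
        lo2 = xu * y + x * yu - xu * yu   # under-estimator anchored at (xu, yu)
        hi1 = xu * y + x * yl - xu * yl   # over-estimator anchored at (xu, yl)
        hi2 = xl * y + x * yu - xl * yu   # over-estimator anchored at (xl, yu)
        return max(lo1, lo2) - tol <= w <= min(hi1, hi2) + tol

    # The true product w = x*y always satisfies the envelope ...
    assert in_mccormick_envelope(0.5, 2.0, 1.0, xl=0.0, xu=1.0, yl=1.0, yu=3.0)
    # ... but the relaxation also admits infeasible points such as w = 1.2,
    # which is why relaxation-based algorithms must tighten bounds or branch.
    assert in_mccormick_envelope(0.5, 2.0, 1.2, xl=0.0, xu=1.0, yl=1.0, yu=3.0)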

    Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners (Second Edition)

    Get PDF
    Probabilistic Risk Assessment (PRA) is a comprehensive, structured, and logical analysis method aimed at identifying and assessing risks in complex technological systems for the purpose of cost-effectively improving their safety and performance. NASA's objective is to better understand and effectively manage risk, and thus more effectively ensure mission and programmatic success and achieve and maintain high safety standards. NASA intends to use risk assessment in its programs and projects to support optimal management decision making for the improvement of safety and program performance. In addition to using quantitative/probabilistic risk assessment to improve safety and enhance the safety decision process, NASA has incorporated quantitative risk assessment into its system safety assessment process, which until now has relied primarily on a qualitative representation of risk. NASA has also recently adopted the Risk-Informed Decision Making (RIDM) process [1-1] as a valuable supplement to existing deterministic and experience-based engineering methods and tools. Over the years, NASA has been a leader in most of the technologies it has employed in its programs, and one would think that PRA should be no exception. In fact, it would be natural for NASA to be a leader in PRA because, as a technology pioneer, NASA uses risk assessment and management implicitly or explicitly on a daily basis. NASA has probabilistic safety requirements (thresholds and goals) for crew transportation system missions to the International Space Station (ISS) [1-2], and it intends to have probabilistic requirements for any new human spaceflight transportation system acquisition. Methods to perform risk and reliability assessment originated in U.S. aerospace and missile programs in the early 1960s; fault tree analysis (FTA) is an example. It would have been a reasonable extrapolation to expect that NASA would also become the world leader in the application of PRA. That was, however, not to happen. Early in the Apollo program, estimates of the probability of a successful round-trip human mission to the Moon yielded disappointingly low (and suspect) values, and NASA was discouraged from performing further quantitative risk analyses until some two decades later, when the methods were more refined, rigorous, and repeatable. Instead, NASA decided to rely primarily on the Hazard Analysis (HA) and Failure Modes and Effects Analysis (FMEA) methods for system safety assessment.
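
    As background for the fault tree analysis (FTA) method mentioned above, the snippet below evaluates a toy fault tree under the standard independence assumption: AND gates multiply input probabilities, while OR gates combine them as 1 - prod(1 - p). The tree structure and failure probabilities are hypothetical, not taken from the guide.

    from math import prod

    def and_gate(ps):
        # All inputs must fail (independent basic events)
        return prod(ps)

    def or_gate(ps):
        # At least one input fails (independent basic events)
        return 1.0 - prod(1.0 - p for p in ps)

    # Hypothetical top event: (pump A fails AND pump B fails) OR valve sticks
    p_redundant_pumps = and_gate([1e-3, 1e-3])
    p_top = or_gate([p_redundant_pumps, 5e-5])
    print(f"P(top event) = {p_top:.2e}")  # ~5.10e-05, dominated by the valve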

    Investigating human-perceptual properties of "shapes" using 3D shapes and 2D fonts

    Get PDF
    Shapes are generally used to convey meaning. They are used in video games, films, and other multimedia, in diverse ways. 3D shapes may be destined for virtual scenes or represent objects to be constructed in the real world. Fonts add character to an otherwise plain block of text, allowing the writer to make important points more visually prominent or distinct from other text, and they can indicate the structure of a document at a glance. Rather than studying shapes through traditional geometric shape descriptors, we provide alternative methods to describe and analyse shapes through the lens of human perception. This is done via the concepts of Schelling Points and Image Specificity. Schelling Points are the choices people make when they aim to match what they expect others to choose but cannot communicate with others to determine an answer. We study whole-mesh selections in this setting, where Schelling Meshes are the most frequently selected shapes. The key idea behind Image Specificity is that different images evoke different descriptions, but ‘specific’ images yield more consistent descriptions than others. We apply Specificity to 2D fonts. We show that each concept can be learned and predicted for fonts and 3D shapes, respectively, using a depth image-based convolutional neural network. Results are shown for a range of fonts and 3D shapes, and we demonstrate that font Specificity and the Schelling Meshes concept are useful for visualisation, clustering, and search applications. Overall, we find that each concept represents similarities between their respective type of shape, even when there are discontinuities between the shape geometries themselves. The ‘context’ of these similarities lies in some kind of abstract or subjective meaning which is consistent among different people.
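
    To make the Specificity idea concrete, one common way to score how consistent a set of descriptions is (a sketch of the idea only; the thesis's exact metric and learning pipeline may differ) is the mean pairwise cosine similarity between bag-of-words vectors of the human descriptions:

    from collections import Counter
    from itertools import combinations
    from math import sqrt

    def cosine(a, b):
        # Cosine similarity between two bag-of-words Counters
        dot = sum(a[t] * b[t] for t in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def specificity(descriptions):
        # Mean similarity over all pairs of descriptions
        vecs = [Counter(d.lower().split()) for d in descriptions]
        pairs = list(combinations(vecs, 2))
        return sum(cosine(a, b) for a, b in pairs) / len(pairs)

    # Consistent descriptions score higher (hypothetical font descriptions)
    print(specificity(["a bold serif font", "a heavy serif font"]))   # 0.75
    print(specificity(["a bold serif font", "playful and rounded"]))  # 0.0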

    Essays In Causal Inference: Addressing Bias In Observational And Randomized Studies Through Analysis And Design

    Get PDF
    In observational studies, identifying assumptions may fail, often quietly and without notice, leading to biased causal estimates. Although less of a concern in randomized trials where treatment is assigned at random, bias may still enter the equation through other means. This dissertation has three parts, each developing new methods to address a particular pattern or source of bias in the setting being studied. In the first part, we extend the conventional sensitivity analysis methods for observational studies to better address patterns of heterogeneous confounding in matched-pair designs. We illustrate our method with two sibling studies on the impact of schooling on earnings, where the presence of unmeasured, heterogeneous ability bias is of material concern. The second part develops a modified difference-in-differences design for comparative interrupted time series studies. The method permits partial identification of causal effects when the parallel trends assumption is violated by an interaction between group and history. The method is applied to a study of the repeal of Missouri's permit-to-purchase handgun law and its effect on firearm homicide rates. In the final part, we present a study design to identify vaccine efficacy in randomized controlled trials when there is no gold-standard case definition. Our approach augments a two-arm randomized trial with natural variation of a genetic trait to produce a factorial experiment. The method is motivated by the inexact case definition of clinical malaria.
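
    For reference, the canonical difference-in-differences point estimate that the modified design builds on (and whose parallel-trends assumption it relaxes) can be sketched in a few lines; the numbers below are hypothetical, not from the Missouri study.

    def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
        # Change in the treated group minus change in the control group
        return (y_treat_post - y_treat_pre) - (y_ctrl_post - y_ctrl_pre)

    # Hypothetical firearm-homicide rates per 100,000, pre/post repeal
    effect = did_estimate(y_treat_pre=5.0, y_treat_post=6.1,
                          y_ctrl_pre=4.8, y_ctrl_post=5.0)
    print(f"DiD estimate: {effect:+.2f} per 100,000")  # +0.90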

    Vol. 16, No. 1 (Full Issue)

    Get PDF

    An attention model and its application in man-made scene interpretation

    No full text
    The ultimate aim of research into computer vision is to design a system which interprets its surrounding environment in a way similar to how humans do effortlessly. However, the state of technology is far from achieving such a goal. In this thesis, different components of a computer vision system designed for the task of interpreting man-made scenes, in particular images of buildings, are described. The flow of information in the proposed system is bottom-up, i.e., the image is first segmented into its meaningful components and subsequently the regions are labelled using a contextual classifier. Starting from simple observations concerning the human vision system and the Gestalt laws of human perception, like the law of “good (simple) shape” and “perceptual grouping”, a blob detector is developed that identifies components in a 2D image. These components are convex regions of interest, with interest being defined as significant gradient magnitude content. An eye-tracking experiment is conducted, which shows that the regions identified by the blob detector correlate significantly with the regions which drive the attention of viewers. Having identified these blobs, it is postulated that a blob represents an object, linguistically identified with its own semantic name. In other words, a blob may contain a window, a door, or a chimney in a building. These regions are used to identify and segment higher-order structures in a building, like facades and window arrays, and also environmental regions like sky and ground. Because of inconsistency in the unary features of buildings, a contextual learning algorithm is used to classify the segmented regions. A model which learns spatial and topological relationships between different objects from a set of hand-labelled data is used. This model utilises this information in an MRF to achieve consistent labellings of new scenes.
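
    A heavily simplified stand-in for the blob detector described above (the thesis's convexity and perceptual-grouping steps are not reproduced here) can be sketched with NumPy and SciPy: compute the gradient magnitude, threshold it, and label connected regions of significant gradient content.

    import numpy as np
    from scipy import ndimage

    def detect_blobs(image, thresh_quantile=0.90):
        gx = ndimage.sobel(image.astype(float), axis=1)
        gy = ndimage.sobel(image.astype(float), axis=0)
        magnitude = np.hypot(gx, gy)                  # gradient magnitude
        mask = magnitude > np.quantile(magnitude, thresh_quantile)
        labels, n = ndimage.label(mask)               # connected components
        return ndimage.find_objects(labels), n        # bounding slices per region

    # Toy image: a bright square on a dark background yields edge regions
    img = np.zeros((64, 64))
    img[20:40, 20:40] = 1.0
    slices, n = detect_blobs(img)
    print(f"{n} candidate region(s)")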

    Use of Pattern Classification Algorithms to Interpret Passive and Active Data Streams from a Walking-Speed Robotic Sensor Platform

    Get PDF
    In order to perform useful tasks for us, robots must have the ability to notice, recognize, and respond to objects and events in their environment. This requires the acquisition and synthesis of information from a variety of sensors. Here we investigate the performance of a number of sensor modalities in an unstructured outdoor environment, including the Microsoft Kinect, a thermal infrared camera, and a coffee-can radar. Special attention is given to acoustic echolocation measurements of approaching vehicles, where an acoustic parametric array propagates an audible signal to the oncoming target and the Kinect microphone array records the reflected backscattered signal. Although useful information about the target is hidden inside the noisy time domain measurements, the Dynamic Wavelet Fingerprint process (DWFP) is used to create a time-frequency representation of the data. A small-dimensional feature vector is created for each measurement using an intelligent feature selection process for use in statistical pattern classification routines. Using our experimentally measured data from real vehicles at 50 m, this process is able to correctly classify vehicles into one of five classes with 94% accuracy. Fully three-dimensional simulations allow us to study the nonlinear beam propagation and interaction with real-world targets to improve classification results.
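
    The pipeline described above can be caricatured in a few lines: build a time-frequency representation, reduce it to a small feature vector, and train a statistical classifier. In the sketch below, a plain spectrogram stands in for the DWFP and synthetic noisy tones stand in for the measured vehicle echoes, so it illustrates the shape of the pipeline rather than the actual method.

    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def features(signal, fs=48_000):
        # Time-frequency representation reduced to a small feature vector
        f, t, Sxx = spectrogram(signal, fs=fs, nperseg=256)
        return Sxx.mean(axis=1)[:16]   # mean power in the lowest 16 bands

    def make_signal(f0, fs=48_000, n=4096):
        # Noisy tone as a stand-in for a backscattered echo
        t = np.arange(n) / fs
        return np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(n)

    X = np.array([features(make_signal(f0))
                  for f0 in [1_000] * 20 + [2_000] * 20])
    y = np.array([0] * 20 + [1] * 20)
    clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))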

    k-Means

    Get PDF