    Probabilistic Flow Regime Map Modeling of Two-Phase Flow

    The purpose of this investigation is to develop models for two-phase heat transfer, void fraction, and pressure drop, three key design parameters, in single, smooth, horizontal tubes using a common probabilistic two-phase flow regime basis. Probabilistic two-phase flow maps are experimentally developed for R134a at 25 °C, 35 °C, and 50 °C and R410A at 25 °C, at mass fluxes from 100 to 600 kg/m²·s and qualities from 0 to 1, in 8.00 mm, 5.43 mm, 3.90 mm, and 1.74 mm I.D. horizontal, smooth, adiabatic tubes, in order to extend probabilistic two-phase flow map modeling to single tubes. An automated flow visualization technique, utilizing image recognition software and a new optical method, is developed to classify the flow regimes present in approximately one million captured images. The probabilistic two-phase flow maps are represented as continuous functions and generalized based on physical parameters. Condensation heat transfer, void fraction, and pressure drop models are developed for single tubes using the generalized flow regime map. The condensation heat transfer model is compared to experimentally obtained condensation data for R134a at 25 °C in an 8.915 mm diameter smooth copper tube at mass fluxes from 100 to 300 kg/m²·s over the full quality range. The condensation heat transfer, void fraction, and pressure drop models are also compared to data from the literature for a wide range of tube sizes, refrigerants, and flow conditions.
    Air Conditioning and Refrigeration Project 18
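
    The core idea the abstract describes, blending regime-specific models by continuous flow-regime probabilities, can be illustrated with a minimal Python sketch. The logistic probability curve, its parameters (k, x_t), and the two placeholder heat transfer correlations below are assumptions chosen purely for illustration; the paper fits its regime probability functions to roughly one million classified flow images.

        import numpy as np

        def regime_probabilities(x, k=10.0, x_t=0.3):
            # Hypothetical continuous regime-probability curves versus vapor quality x:
            # a logistic split between an intermittent and an annular regime.
            p_annular = 1.0 / (1.0 + np.exp(-k * (x - x_t)))
            return {"intermittent": 1.0 - p_annular, "annular": p_annular}

        def condensation_h(x, h_intermittent, h_annular):
            # Probability-weighted heat transfer coefficient: regime-specific models
            # are blended by the regime probabilities at each quality.
            p = regime_probabilities(x)
            return p["intermittent"] * h_intermittent(x) + p["annular"] * h_annular(x)

        # Placeholder regime-specific correlations (not from the paper).
        x = np.linspace(0.05, 0.95, 10)
        print(condensation_h(x, lambda x: 2000.0 + 500.0 * x, lambda x: 1500.0 + 6000.0 * x))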

    Deep Directional Statistics: Pose Estimation with Uncertainty Quantification

    Modern deep learning systems successfully solve many perception tasks, such as object pose estimation, when the input image is of high quality. However, in challenging imaging conditions, such as low-resolution images or images corrupted by imaging artifacts, current systems degrade considerably in accuracy. While some loss in performance is unavoidable, we would like our models to quantify their uncertainty in order to achieve robustness against images of varying quality. Probabilistic deep learning models combine the expressive power of deep learning with uncertainty quantification. In this paper, we propose a novel probabilistic deep learning model for the task of angular regression. Our model uses von Mises distributions to predict a distribution over the object pose angle. Because a single von Mises distribution makes strong assumptions about the shape of the distribution, we extend the basic model to predict a mixture of von Mises distributions. We show how to learn a mixture model with both a finite and an infinite number of mixture components. Our model allows for likelihood-based training and efficient inference at test time. We demonstrate on a number of challenging pose estimation datasets that our model produces calibrated probability predictions and competitive or superior point estimates compared to the current state of the art.
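
    The likelihood-based objective the abstract refers to, a mixture of von Mises distributions over the pose angle, can be sketched as follows. The two-component setup, the example angles, and the parameter values are illustrative assumptions; in the paper a network head predicts the mixture weights, means, and concentrations per image.

        import numpy as np

        def von_mises_pdf(theta, mu, kappa):
            # Density of a von Mises distribution over angles (radians).
            return np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * np.i0(kappa))

        def mixture_nll(theta, weights, mus, kappas):
            # Negative log-likelihood of observed angles under a von Mises mixture;
            # weights must sum to 1.
            comps = [w * von_mises_pdf(theta, m, k)
                     for w, m, k in zip(weights, mus, kappas)]
            return -np.log(np.sum(comps, axis=0)).mean()

        # Example: two-component mixture evaluated on a few pose angles.
        angles = np.array([0.1, 1.5, 3.0])
        print(mixture_nll(angles, weights=[0.7, 0.3], mus=[0.0, np.pi], kappas=[4.0, 2.0]))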

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end on tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
    Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar

    Approximate Bayesian Image Interpretation using Generative Probabilistic Graphics Programs

    The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to implement directly. Instead, most vision tasks are approached via complex bottom-up processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics programs consist of a stochastic scene generator, a renderer based on graphics software, a stochastic likelihood model linking the renderer's output and the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood model. Representations and algorithms from computer graphics, originally designed to produce high-quality images, are instead used as the deterministic backbone for highly approximate and stochastic generative models. This formulation combines probabilistic programming, computer graphics, and approximate Bayesian computation, and depends only on general-purpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured alphanumeric characters, and inferring 3D road models from vehicle-mounted camera images. Each of the probabilistic graphics programs we present relies on under 20 lines of probabilistic code and supports accurate, approximately Bayesian inferences about ambiguous real-world images.
    Comment: The first two authors contributed equally to this work.
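
    The four-part architecture described above (a stochastic scene generator, a renderer, a stochastic likelihood model, and latent fidelity/tolerance variables, inverted by general-purpose inference) can be sketched in miniature. The toy 1-D "renderer", the Gaussian pixel likelihood, and the random-walk Metropolis-Hastings loop below are stand-in assumptions; the actual programs invert a graphics-software renderer over real images.

        import numpy as np

        rng = np.random.default_rng(0)

        def render(scene):
            # Stand-in "renderer": a 1-D bar whose position, width, and intensity
            # come from the latent scene description.
            img = np.zeros(32)
            lo = int(np.clip(scene["pos"], 0, 31))
            hi = int(np.clip(scene["pos"] + scene["width"], lo, 32))
            img[lo:hi] = scene["intensity"]
            return img

        def log_likelihood(rendered, observed, blur):
            # Stochastic likelihood: Gaussian pixel noise whose scale ("blur") plays
            # the role of a latent tolerance variable.
            return -0.5 * np.sum((rendered - observed) ** 2) / blur ** 2 - observed.size * np.log(blur)

        def propose(scene):
            # Symmetric random-walk proposal over one latent scene variable.
            new = dict(scene)
            key = rng.choice(list(scene))
            new[key] = new[key] + rng.normal(scale=1.0)
            return new

        # Metropolis-Hastings over the scene given a noisy observed image.
        observed = render({"pos": 10.0, "width": 8.0, "intensity": 1.0}) + rng.normal(scale=0.1, size=32)
        scene, blur = {"pos": 16.0, "width": 4.0, "intensity": 0.5}, 0.3
        for _ in range(5000):
            cand = propose(scene)
            if np.log(rng.uniform()) < (log_likelihood(render(cand), observed, blur)
                                        - log_likelihood(render(scene), observed, blur)):
                scene = cand
        print(scene)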