    Computational Techniques for Stochastic Reachability

    As automated control systems grow in prevalence and complexity, there is an increasing demand for verification and controller synthesis methods to ensure that these systems perform safely and to desired specifications. In addition, such systems often exhibit uncertain or stochastic behavior (such as wind affecting the motion of an aircraft), making probabilistic verification desirable. Stochastic reachability analysis provides a formal means of generating the set of initial states that meet a given objective (such as safety or reachability) with a desired probability; this set is known as the reachable or safe set, depending on the objective. However, reachability analysis is limited in the scope and size of the systems it can address. First, generating stochastic reachable or viable sets is computationally intensive, and most existing methods rely on an optimal control formulation that requires solving a dynamic program, which scales exponentially in the dimension of the state space. Second, almost no results exist for extending stochastic reachability analysis to systems with incomplete information, in which the controller does not have access to the full state of the system. This thesis addresses both of these limitations and introduces novel computational methods for generating stochastic reachable sets for both perfectly and partially observable systems.

    We initially consider a linear system with additive Gaussian noise and introduce two methods for computing stochastic reachable sets that do not require dynamic programming. The first method uses a particle approximation to formulate a deterministic mixed integer linear program that produces an estimate of the reachability probabilities. The second method uses a convex chance-constrained optimization problem to generate an under-approximation of the reachable set. Using these methods, we are able to generate stochastic reachable sets for a four-dimensional spacecraft docking example in far less time than a dynamic program would require.

    We then focus on discrete-time stochastic hybrid systems, which provide a flexible modeling framework for systems that exhibit mode-dependent behavior and whose state space has both discrete and continuous components. We incorporate a stochastic observation process into the hybrid system model and derive both theoretical and computational results for generating stochastic reachable sets subject to an observation process. The derivation of an information state allows us to recast the problem as one of perfect information, and we prove that solving a dynamic program over the information state is equivalent to solving the original problem. We then demonstrate that the dynamic program for the reachability problem on a partially observable stochastic hybrid system shares the same properties as that for a partially observable Markov decision process (POMDP) with an additive cost function, so we can exploit approximation strategies designed for POMDPs to solve the reachability problem. To do so, however, we first generate approximate representations of the information state and value function as either vectors or Gaussian mixtures, through a finite state approximation to the hybrid system or using a Gaussian mixture approximation to an indicator function defined over a convex region. For a system with linear dynamics and Gaussian measurement noise, we show that the system exhibits special properties that make an approximation of the information state unnecessary, enabling far more efficient computation of the reachable set. In all cases we provide convergence results and numerical examples.
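    The particle approximation admits a compact illustration. The following is a minimal sketch of the idea only, not the thesis's method: it uses Monte Carlo particles to estimate the probability that a linear system with additive Gaussian noise stays inside a safe set over a finite horizon, for one fixed input sequence (the thesis instead optimizes over inputs via a mixed integer linear program). The dynamics, safe-set bounds, noise level, and horizon below are illustrative assumptions.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])       # double-integrator-like dynamics (assumed)
B = np.array([0.005, 0.1])       # input vector (assumed)
sigma = 0.02                     # additive Gaussian noise std dev (assumed)
T = 20                           # horizon length
N = 5000                         # number of particles

def safe(x):
    # Safe set: a box on position and velocity (illustrative choice).
    return (np.abs(x[:, 0]) <= 1.0) & (np.abs(x[:, 1]) <= 0.5)

def safety_probability(x0, u_seq, rng=np.random.default_rng(0)):
    # Each particle is one sampled disturbance trajectory; the estimate
    # is the fraction of particles whose trajectories remain safe.
    x = np.tile(x0, (N, 1))
    alive = np.ones(N, dtype=bool)
    for t in range(T):
        w = rng.normal(0.0, sigma, size=(N, 2))
        x = x @ A.T + u_seq[t] * B + w
        alive &= safe(x)
    return alive.mean()

# Zero input, one candidate initial state.
print(safety_probability(np.array([0.2, 0.0]), np.zeros(T)))
```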

    Real-Time Path Planning for Automating Optical Tweezers based Particle Transport Operations

    Optical tweezers (OT) have been developed to successfully trap, orient, and transport micro- and nano-scale components of many different sizes and shapes in a fluid medium. They can be viewed as robots made out of light. Components can be released from optical traps simply by switching off the laser beams. By utilizing the principle of time sharing or holograms, multiple optical traps can perform several operations in parallel. These characteristics make optical tweezers a very promising technology for creating directed micro- and nano-scale assemblies. In the infra-red regime, they are also useful in a large number of biological applications. This dissertation explores the problem of real-time path planning for autonomous OT-based transport operations. Such operations pose interesting challenges, as the environment is uncertain and dynamic due to the random Brownian motion of the particles and noise in the imaging-based measurements. Silica microspheres with diameters between 1 and 20 µm are selected as model components. Offline simulations are performed to gather trapping-probability data, which serves as a measure of trap strength and reliability as a function of the particle's position relative to the trap focus and of the trap velocity. Simplified models are generated using Gaussian radial basis functions to represent the data in a compact form. These metamodels can be queried at run time to obtain estimated probability values accurately and efficiently. The trapping-probability models are then utilized in a stochastic dynamic programming framework to compute optimal trap locations and velocities that minimize the total expected transport time, incorporating collision avoidance and recovery steps. A discrete version of an approximate partially observable Markov decision process algorithm, called the QMDP_NLTDV algorithm, is developed. Real-time performance is ensured by pruning the search space, and convergence rates are enhanced by introducing a non-linear value function. The algorithm is validated using both a simulator and a physical holographic tweezer set-up. Successful runs show that the automated planner is flexible, works well in reasonably crowded scenes, and is capable of transporting a specific particle to a given goal location while avoiding collisions, either by circumventing or by trapping other freely diffusing particles. This technique for transporting individual particles is utilized within a decoupled and prioritized approach to move multiple particles simultaneously. An iterative version of a bipartite graph matching algorithm is also used to assign goal locations to target objects optimally. As in the case of single-particle transport, simulations and some physical experiments are performed to validate the multi-particle planning approach.
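    The radial-basis-function metamodel step can be sketched compactly. The toy below uses synthetic stand-in data rather than the dissertation's offline trapping simulations: it fits scattered trapping-probability samples as a function of particle offset and trap speed so the model can be queried cheaply at run time. The kernel width, sample ranges, and target function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Training inputs: (radial offset in um, trap speed in um/s); ranges assumed.
X = rng.uniform([0.0, 0.0], [2.0, 10.0], size=(40, 2))
# Synthetic stand-in for simulated trapping probabilities.
y = np.exp(-X[:, 0] ** 2) * np.exp(-0.05 * X[:, 1])

eps = 1.0  # Gaussian kernel width (a tuning parameter)

def phi(A, B):
    # Gaussian RBF kernel matrix between two point sets.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

# Interpolation weights; the jitter keeps the system well conditioned.
w = np.linalg.solve(phi(X, X) + 1e-8 * np.eye(len(X)), y)

def trap_probability(offset, speed):
    # Cheap run-time query; clip to keep the output a valid probability.
    q = np.array([[offset, speed]])
    val = (phi(q, X) @ w)[0]
    return float(np.clip(val, 0.0, 1.0))

print(trap_probability(0.5, 2.0))
```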

    LSST Science Book, Version 2.0

    A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with a field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen-second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r~27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects at both low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring the properties of supernovae to z~1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy.
    Comment: 596 pages. Also available at full resolution at http://www.lsst.org/lsst/sciboo
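    The quoted parameters imply a few simple derived quantities; the back-of-envelope sketch below computes them directly from the numbers in the abstract and assumes nothing beyond them.

```python
import math

effective_aperture_m = 6.7     # effective aperture quoted above
fov_deg2 = 9.6                 # camera field of view
visits_per_pointing = 2000
exposure_s = 15.0

area_m2 = math.pi * (effective_aperture_m / 2) ** 2
etendue = area_m2 * fov_deg2                       # m^2 deg^2
total_exposure_h = visits_per_pointing * exposure_s / 3600.0

print(f"collecting area ~ {area_m2:.1f} m^2")                  # ~35.3 m^2
print(f"etendue ~ {etendue:.0f} m^2 deg^2")                    # ~338 m^2 deg^2
print(f"integration per pointing ~ {total_exposure_h:.1f} h")  # ~8.3 h
```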

    Bayesian large-scale structure inference and cosmic web analysis

    Surveys of the cosmic large-scale structure offer opportunities for building and testing cosmological theories about the origin and evolution of the Universe. This endeavor requires appropriate data assimilation tools for establishing contact between survey catalogs and models of structure formation. In this thesis, we present an innovative statistical approach for the ab initio simultaneous analysis of the formation history and morphology of the cosmic web: the BORG algorithm infers the primordial density fluctuations and produces physical reconstructions of the dark matter distribution that underlies observed galaxies, by assimilating the survey data into a cosmological structure formation model. The method, based on Bayesian probability theory, provides accurate means of uncertainty quantification. We demonstrate the application of BORG to Sloan Digital Sky Survey data and describe the primordial and late-time large-scale structure in the observed volume. We show how the approach has led to the first quantitative inference of the cosmological initial conditions and of the formation history of the observed structures. We then use these results for several cosmographic projects aimed at analyzing and classifying the large-scale structure. In particular, we build an enhanced catalog of cosmic voids probed at the level of the dark matter distribution, deeper than is possible with the galaxies alone. We present detailed probabilistic maps of the dynamic cosmic web and offer a general solution to the problem of classifying structures in the presence of uncertainty. The results described in this thesis constitute an accurate chrono-cosmography of the inhomogeneous cosmic structure.
    Comment: 237 pages, 63 figures, 14 tables. PhD thesis, Institut d'Astrophysique de Paris, September 2015 (advisor: B. Wandelt). Contains the papers arXiv:1305.4642, arXiv:1409.6308, arXiv:1410.0355, arXiv:1502.02690, arXiv:1503.00730, arXiv:1507.08664 and draws from arXiv:1403.1260. A full version including high-resolution figures is available from the author's website.
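    The data-assimilation structure described here (a prior over fields, a survey window, noisy observations, and a posterior with quantified uncertainty) can be illustrated with a deliberately simplified linear-Gaussian stand-in. BORG itself samples a nonlinear structure formation model with MCMC; the Wiener-filter toy below, with an assumed grid size, correlation length, coverage fraction, and noise level, shows only the prior-plus-data-to-posterior pattern.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64                          # 1-D toy grid (assumed size)
x = np.arange(n)
# Gaussian prior covariance with an assumed correlation length of 5 pixels.
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2)

truth = rng.multivariate_normal(np.zeros(n), S + 1e-8 * np.eye(n))
mask = rng.random(n) < 0.6      # survey window: ~60% coverage (assumed)
noise_var = 0.05                # observation noise variance (assumed)
d = truth[mask] + rng.normal(0.0, np.sqrt(noise_var), mask.sum())

# Wiener filter: posterior mean m = S H^T (H S H^T + N)^{-1} d, where H
# selects the observed pixels; the posterior covariance carries the
# uncertainty quantification a Bayesian treatment provides.
H = np.eye(n)[mask]
K = S @ H.T @ np.linalg.inv(H @ S @ H.T + noise_var * np.eye(mask.sum()))
posterior_mean = K @ d
posterior_cov = S - K @ H @ S

print("rms reconstruction error:",
      np.sqrt(np.mean((posterior_mean - truth) ** 2)))
print("mean posterior std:",
      np.sqrt(np.clip(np.diag(posterior_cov), 0.0, None)).mean())
```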

    Applied Mathematics and Computational Physics

    As faster and more efficient numerical algorithms become available, the understanding of the physics and the mathematical foundations behind these new methods will play an increasingly important role. This Special Issue provides a platform for researchers from both academia and industry to present novel computational methods with engineering and physics applications.

    Molecular Line Transfer Calculations in Star Forming Regions

    This thesis describes the development, benchmarking, and application of a non-LTE, co-moving-frame Monte Carlo molecular line radiative transfer module for TORUS. Careful attention has been paid to the convergence, acceleration, and optimisation of the code. I present the results of applying the code to various benchmarking scenarios, including a collapsing cloud, a circumstellar disc, and a very optically thick cloud of interstellar water. Benchmarking is an essential step in verifying the accuracy and efficiency of the code, which is vital if it is to be used to analyse real data. In all cases, the code was able to accurately reproduce the expected analytical solution or, in the absence of such a solution, produced results commensurate with those of other codes. To facilitate the motivating radiative transfer calculations performed in this thesis on a star-forming cluster simulated using smoothed particle hydrodynamics (SPH), it was first necessary to devise and test an algorithm that efficiently maps an irregular distribution of SPH particles onto a regular adaptive mesh. Whilst the algorithm was designed with this in mind, it has also been used to study the effects of radiative feedback in circumstellar discs, as well as to create a synthetic survey of a simulated galaxy. Bate et al.'s particle representation was resampled onto an adaptive mesh, enabling me to use TORUS to obtain non-LTE level populations of multiple molecular species throughout the cluster and to create velocity-resolved datacubes by calculating the emergent intensity using raytracing. I compared line profiles of cores traced by N2H+ (1-0) to probes of low-density gas (13CO and C18O (1-0)) surrounding the cores along the line of sight. The relative differences of the line-centre velocities were found to be small compared to the velocity dispersion, matching recent observations. The conclusion is that one cannot reject competitive accretion as a viable theory of star formation on the basis of observed velocity profiles.
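    The SPH-to-adaptive-mesh step can be sketched in miniature. The toy below is not the thesis's algorithm: it recursively refines a cell until it holds few enough particles and then deposits a mass-weighted density, whereas a real scheme (including the one developed here) smooths each particle with its SPH kernel. The particle distribution, refinement threshold, and depth limit are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Made-up clustered particle distribution in the unit cube.
pos = rng.normal(0.5, 0.12, size=(2000, 3)) % 1.0
mass = np.full(len(pos), 1.0 / len(pos))

leaves = []  # (corner, size, density) of each leaf cell

def refine(lo, size, idx, max_per_cell=32, depth=8):
    # Split a cubic cell into 8 children until it holds few enough
    # particles (or the depth limit is hit), then deposit a density.
    if len(idx) <= max_per_cell or depth == 0:
        leaves.append((lo, size, mass[idx].sum() / size ** 3))
        return
    half = size / 2.0
    for corner in np.ndindex(2, 2, 2):
        clo = lo + half * np.array(corner)
        inside = np.all((pos[idx] >= clo) & (pos[idx] < clo + half), axis=1)
        refine(clo, half, idx[inside], max_per_cell, depth - 1)

refine(np.zeros(3), 1.0, np.arange(len(pos)))
print(len(leaves), "leaf cells; finest cell size",
      min(s for _, s, _ in leaves))
```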