
    Automated single-slide staining device

    A simple apparatus and method are disclosed for making individual Gram stains on bacteria-inoculated slides, to assist in classifying bacteria in the laboratory as Gram-positive or Gram-negative. The apparatus holds a single inoculated slide in a stationary position and then automatically and sequentially floods the slide with increments of a primary stain, a mordant, a decolorizer, a counterstain, and a wash solution, without the lab technician touching the slide and with minimal danger of contamination from other slides.
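
    The staining sequence described is essentially a timed protocol, which the Python sketch below makes concrete. The reagent names (crystal violet, iodine, alcohol, safranin) and dwell times are common-practice Gram-stain values assumed for illustration, and flood_slide is a hypothetical stand-in for the device's dispensing hardware; none of these specifics come from the patent itself.

        import time

        # Typical Gram stain sequence: (reagent, dwell time in seconds).
        # Reagents and timings are common-practice assumptions, not the patent's.
        GRAM_STAIN_STEPS = [
            ("crystal violet (primary stain)", 60),
            ("iodine (mordant)", 60),
            ("alcohol (decolorizer)", 10),
            ("safranin (counterstain)", 45),
            ("water (wash)", 5),
        ]

        def flood_slide(reagent):
            """Hypothetical hardware call that floods the stationary slide.

            A real device would open a valve or drive a pump; printing
            stands in for that actuation so the sketch runs anywhere.
            """
            print(f"flooding slide with {reagent}")

        def run_gram_stain(steps=GRAM_STAIN_STEPS):
            """Apply each reagent in sequence, waiting out its dwell time."""
            for reagent, dwell_seconds in steps:
                flood_slide(reagent)
                time.sleep(dwell_seconds)
            print("staining complete; operator never touched the slide")

        run_gram_stain()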

    Validating Sample Average Approximation Solutions with Negatively Dependent Batches

    Sample-average approximations (SAA) are a practical means of finding approximate solutions of stochastic programming problems involving an extremely large (or infinite) number of scenarios. SAA can also be used to estimate a lower bound on the optimal objective value of the true problem which, when coupled with an upper bound, provides a confidence interval for the true optimal objective value and valuable information about the quality of the approximate solutions. Specifically, the lower bound can be estimated by solving multiple SAA problems (each obtained using a particular sampling method) and averaging the resulting objective values. State-of-the-art methods for lower-bound estimation generate the batches of scenarios for the SAA problems independently. In this paper, we describe sampling methods that produce negatively dependent batches, thus reducing the variance of the sample-averaged lower-bound estimator and increasing its usefulness in defining a confidence interval for the optimal objective value. We provide conditions under which the new sampling methods can reduce the variance of the lower-bound estimator, and present computational results verifying that our scheme can reduce the variance significantly compared with the traditional Latin hypercube approach.
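
    As a concrete illustration of the batching idea, the sketch below estimates the SAA lower bound for a toy newsvendor problem, min_x { c*x + q*E[max(D - x, 0)] }, whose SAA minimizer is a sample quantile and so is solvable in closed form. Antithetic pairing of uniforms is used as one simple way to produce negatively dependent batches; it is not the paper's construction, and the problem, parameters, and batch sizes are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        c, q = 1.0, 4.0  # order cost and shortage penalty, c < q

        def saa_optimal_value(demand):
            """Solve one SAA newsvendor: min_x c*x + (q/n)*sum(max(d_i - x, 0)).

            The minimizer is (approximately) the (1 - c/q) sample quantile,
            so each SAA instance has a closed-form solution.
            """
            x = np.quantile(demand, 1.0 - c / q)
            return c * x + q * np.mean(np.maximum(demand - x, 0.0))

        def lower_bound(batches):
            """Average the per-batch SAA optima: the lower-bound estimator."""
            return np.mean([saa_optimal_value(b) for b in batches])

        inverse_cdf = lambda u: -np.log1p(-u)  # exponential(1) demand
        n_batches, batch_size, n_reps = 30, 50, 400

        indep, antith = [], []
        for _ in range(n_reps):
            # Independent batches: fresh uniforms for every batch.
            u = rng.random((n_batches, batch_size))
            indep.append(lower_bound(inverse_cdf(u)))

            # Negatively dependent batches via antithetic pairing: each
            # batch of uniforms u is reused as 1 - u in a partner batch.
            half = rng.random((n_batches // 2, batch_size))
            u = np.vstack([half, 1.0 - half])
            antith.append(lower_bound(inverse_cdf(u)))

        print("estimator variance, independent batches:", np.var(indep))
        print("estimator variance, antithetic batches :", np.var(antith))

    Because the SAA optimal value here is monotone in each demand draw, the antithetically paired batch values are negatively correlated, which is the mechanism by which negatively dependent batches shrink the variance of the averaged lower bound.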

    Incremental Sparse GP Regression for Continuous-time Trajectory Estimation & Mapping

    Recent work on simultaneous trajectory estimation and mapping (STEAM) for mobile robots has found success by representing the trajectory as a Gaussian process. Gaussian processes can represent a continuous-time trajectory, elegantly handle asynchronous and sparse measurements, and allow the robot to query the trajectory to recover its estimated position at any time of interest. A major drawback of this approach is that STEAM is formulated as a batch estimation problem. In this paper we provide the critical extensions necessary to transform the existing batch algorithm into an extremely efficient incremental algorithm. In particular, we are able to vastly speed up the solution time through efficient variable reordering and incremental sparse updates, which we believe will greatly increase the practicality of Gaussian process methods for robot mapping and localization. Finally, we demonstrate the approach and its advantages on both synthetic and real datasets.
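
    The query mechanism can be illustrated with plain GP regression over time. The sketch below fits a one-dimensional GP to sparse, asynchronous position measurements and evaluates the posterior mean at arbitrary query times. The squared-exponential kernel is a generic stand-in assumed for illustration; the paper's method instead builds a structured, sparse GP prior that makes incremental updates efficient, which this sketch does not attempt.

        import numpy as np

        def rbf_kernel(t1, t2, length=1.0, sigma=1.0):
            """Squared-exponential kernel over time (a generic choice)."""
            d = t1[:, None] - t2[None, :]
            return sigma**2 * np.exp(-0.5 * (d / length) ** 2)

        # Sparse, asynchronous 1-D position measurements along a trajectory.
        rng = np.random.default_rng(1)
        t_meas = np.array([0.0, 0.7, 1.1, 2.9, 4.0])
        y_meas = np.sin(t_meas) + 0.05 * rng.normal(size=t_meas.size)
        noise_var = 0.05**2

        # GP posterior mean at arbitrary times: the continuous-time
        # trajectory can be queried anywhere, not just at measurements.
        t_query = np.linspace(0.0, 4.0, 9)
        K = rbf_kernel(t_meas, t_meas) + noise_var * np.eye(t_meas.size)
        K_star = rbf_kernel(t_query, t_meas)
        mean = K_star @ np.linalg.solve(K, y_meas)

        for t, m in zip(t_query, mean):
            print(f"t = {t:4.2f}  estimated position = {m:+.3f}")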

    Fast Ensemble Smoothing

    Smoothing is essential to many oceanographic, meteorological, and hydrological applications. The fixed-interval smoothing problem updates all desired states within a time interval using all available observations, while the fixed-lag smoothing problem updates only a fixed number of states prior to the current observation. Fixed-lag smoothing is generally thought to be computationally faster than fixed-interval smoothing and can be an appropriate approximation for long-interval smoothing problems. In this paper, we take an ensemble-based approach to fixed-interval and fixed-lag smoothing and synthesize two algorithms. The first produces a linear-time solution to the fixed-interval smoothing problem with a fixed constant factor, and the second produces a fixed-lag solution whose cost is independent of the lag length. Identical-twin experiments conducted with the Lorenz-95 model show that for lag lengths approximately equal to the error doubling time, or for long intervals, the proposed methods can provide significant computational savings. These results suggest that ensemble methods yield both fixed-interval and fixed-lag smoothing solutions at little additional cost over filtering and model propagation: in practical ensemble applications the additional increment is a small fraction of either the filtering or the model propagation cost. We also show that fixed-interval smoothing can perform as fast as fixed-lag smoothing and may be advantageous when memory is not an issue.
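
    To make the fixed-lag structure concrete, here is a minimal perturbed-observation ensemble Kalman smoother on a scalar random-walk model: each new observation corrects every state in the lag window through ensemble cross-covariances. The model, lag, and ensemble size are illustrative assumptions, and this baseline does O(lag) work per observation; it is the naive scheme whose lag dependence such algorithms aim to remove, not the paper's method itself.

        import numpy as np

        rng = np.random.default_rng(2)

        def smoother_update(window, obs, obs_var):
            """Perturbed-observation EnKF smoother step (scalar state).

            The current observation updates every ensemble in the lag
            window via its cross-covariance with the current state.
            """
            cur = window[-1]                       # prior current-state ensemble
            innov_var = cur.var(ddof=1) + obs_var
            perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=cur.size)
            return [
                ens + (np.cov(ens, cur, ddof=1)[0, 1] / innov_var) * (perturbed - cur)
                for ens in window
            ]

        # Toy demo: random-walk truth, noisy direct observations, lag 3.
        n_ens, lag, n_steps, obs_var = 200, 3, 15, 0.1
        truth = 0.0
        window = [rng.normal(0.0, 1.0, n_ens)]
        for _ in range(n_steps):
            truth += rng.normal(0.0, 0.3)
            current = window[-1] + rng.normal(0.0, 0.3, n_ens)  # model propagation
            window = (window + [current])[-(lag + 1):]          # keep lag+1 states
            obs = truth + rng.normal(0.0, np.sqrt(obs_var))
            window = smoother_update(window, obs, obs_var)

        print("truth:", round(truth, 3),
              " smoothed estimate:", round(float(window[-1].mean()), 3))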