
    Lookahead Strategies for Sequential Monte Carlo

    Based on the principles of importance sampling and resampling, sequential Monte Carlo (SMC) encompasses a large set of powerful techniques for dealing with complex stochastic dynamic systems. Many of these systems possess strong memory, with which future information can help sharpen the inference about the current state. By providing theoretical justification for several existing algorithms and introducing several new ones, we study systematically how to construct efficient SMC algorithms that take advantage of "future" information without incurring a substantially higher computational burden. The main idea is to allow lookahead in the Monte Carlo process so that future information can be used in weighting and generating Monte Carlo samples, or in resampling from samples of the current state.
    Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/12-STS401
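
    The paper's central device, using future information to weight and resample particles, is easiest to see in the one-step case. Below is a minimal NumPy sketch of the auxiliary particle filter, a classical one-step-lookahead SMC scheme; the toy AR(1) state-space model, its parameters, and the mean predictor are illustrative assumptions here, not the paper's algorithms.

        import numpy as np

        rng = np.random.default_rng(0)
        PHI, SIG_X, SIG_Y = 0.9, 1.0, 1.0        # assumed toy AR(1) state-space model

        def loglik(y, x):
            # log p(y_t | x_t) for y_t = x_t + N(0, SIG_Y^2), up to a constant
            return -0.5 * ((y - x) / SIG_Y) ** 2

        def auxiliary_particle_filter(ys, n=1000):
            # One-step-lookahead SMC: resample using a prediction of the next
            # observation's likelihood, then correct the weights after propagation.
            x = rng.normal(size=n)               # initial particles
            logw = np.zeros(n)
            means = []
            for y in ys:
                mu = PHI * x                     # lookahead predictor E[x_t | x_{t-1}]
                first = logw + loglik(y, mu)     # first-stage (lookahead) weights
                p = np.exp(first - first.max())
                p /= p.sum()
                idx = rng.choice(n, size=n, p=p) # lookahead-informed resampling
                x = PHI * x[idx] + SIG_X * rng.normal(size=n)  # propagate survivors
                logw = loglik(y, x) - loglik(y, mu[idx])       # second-stage correction
                w = np.exp(logw - logw.max())
                w /= w.sum()
                means.append(float(np.sum(w * x)))             # filtered mean estimate
            return np.array(means)

    The first-stage weights bias resampling toward particles whose successors will explain the upcoming observation; the second-stage correction keeps the estimator consistent.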

    Segmentation-Aware Convolutional Networks Using Local Attention Masks

    We introduce an approach to integrating segmentation information within a convolutional neural network (CNN). This counteracts the tendency of CNNs to smooth information across regions and increases their spatial precision. To obtain segmentation information, we set up a CNN to provide an embedding space in which region co-membership can be estimated from Euclidean distance. We use these embeddings to compute a local attention mask relative to every neuron position. We incorporate such masks in CNNs and replace the convolution operation with a "segmentation-aware" variant that allows a neuron to selectively attend to inputs coming from its own region. We call the resulting network a segmentation-aware CNN because it adapts its filters at each image point according to local segmentation cues. We demonstrate the merit of our method on two widely different dense prediction tasks that involve classification (semantic segmentation) and regression (optical flow). Our results show that in semantic segmentation we can match the performance of DenseCRFs while being faster and simpler, and in optical flow we obtain clearly sharper responses than networks that do not use local attention masks. In both cases, segmentation-aware convolution yields systematic improvements over strong baselines. Source code for this work is available online at http://cs.cmu.edu/~aharley/segaware
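
    To make the masked convolution concrete, here is a minimal single-filter NumPy sketch of the idea the abstract describes: each neighbourhood tap is down-weighted by its embedding distance from the centre pixel before the filter is applied, with normalization by the mask mass. The exponential mask, the alpha sharpness parameter, and the normalization are illustrative assumptions; the released implementation is convolutional and trained end-to-end.

        import numpy as np

        def segmentation_aware_conv(feat, emb, weights, k=3, alpha=1.0):
            # feat:    (H, W, C_in)  input feature map
            # emb:     (H, W, D)     per-pixel embeddings (region co-membership ~ distance)
            # weights: (k, k, C_in)  a single convolution filter
            H, W, _ = feat.shape
            r = k // 2
            out = np.zeros((H, W))
            fp = np.pad(feat, ((r, r), (r, r), (0, 0)))            # zero-pad features
            ep = np.pad(emb, ((r, r), (r, r), (0, 0)), mode="edge")
            for i in range(H):
                for j in range(W):
                    patch = fp[i:i + k, j:j + k, :]
                    epatch = ep[i:i + k, j:j + k, :]
                    # local attention mask: high where neighbours likely share
                    # the centre pixel's region, per embedding distance
                    dist = np.linalg.norm(epatch - emb[i, j], axis=-1)
                    mask = np.exp(-alpha * dist)
                    masked = patch * mask[..., None]
                    out[i, j] = np.sum(masked * weights) / (mask.sum() + 1e-8)
            return out

        # toy usage: 16x16 map, 8 input channels, 4-D embeddings, one 3x3 filter
        rng = np.random.default_rng(0)
        out = segmentation_aware_conv(rng.normal(size=(16, 16, 8)),
                                      rng.normal(size=(16, 16, 4)),
                                      rng.normal(size=(3, 3, 8)))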

    Algorithms and Architecture for Real-time Recommendations at News UK

    Recommendation systems are recognised as being hugely important in industry, and the area is now well understood. At News UK, there is a requirement to quickly generate recommendations for users on news items as they are published. However, little has been published about systems that can generate recommendations in response to changes in recommendable items and user behaviour within a very short space of time. In this paper we describe a new algorithm for updating collaborative filtering models incrementally, and demonstrate its effectiveness on clickstream data from The Times. We also describe the architecture that allows recommendations to be generated on the fly, and how we have made each component scalable. The system is currently being used in production at News UK.
    Comment: Accepted for presentation at AI-2017, the Thirty-seventh SGAI International Conference on Artificial Intelligence, Cambridge, England, 12-14 December 2017
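
    The paper's specific update rule is not reproduced here, but the general pattern of incremental collaborative filtering can be sketched: fold each new (user, item) interaction into a factorization model with a single SGD step, so recommendations reflect a click moments after it arrives rather than after the next batch retrain. All class and parameter names below are hypothetical.

        import numpy as np
        from collections import defaultdict

        class IncrementalMF:
            # Minimal sketch of incrementally updated matrix factorization on
            # implicit feedback; not the paper's exact algorithm.
            def __init__(self, dim=32, lr=0.05, reg=0.01, seed=0):
                self.rng = np.random.default_rng(seed)
                self.dim, self.lr, self.reg = dim, lr, reg
                self.users = defaultdict(self._new_vec)
                self.items = defaultdict(self._new_vec)

            def _new_vec(self):
                return self.rng.normal(scale=0.1, size=self.dim)

            def update(self, user, item, clicked=1.0):
                # one SGD step on a single clickstream event
                u, v = self.users[user], self.items[item]
                err = clicked - u @ v
                self.users[user] = u + self.lr * (err * v - self.reg * u)
                self.items[item] = v + self.lr * (err * u - self.reg * v)

            def recommend(self, user, candidates, k=5):
                u = self.users[user]
                return sorted(candidates, key=lambda i: -(u @ self.items[i]))[:k]

    Streaming events through update keeps the model current, and recommend can score freshly published items as soon as they appear in the candidate set.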

    Concurrent filtering and smoothing: A parallel architecture for real-time navigation and full smoothing

    We present a parallelized navigation architecture that is capable of running in real time and incorporating long-term loop-closure constraints while producing the optimal Bayesian solution. This architecture splits the inference problem into a low-latency update that incorporates new measurements using just the most recent states (the filter), and a high-latency update that is capable of closing long loops and smooths using all past states (the smoother). The architecture employs the probabilistic graphical model of factor graphs, which allows the low-latency and high-latency inference to be viewed as sub-operations of a single optimization performed within a single graphical model. A specific factorization of the full joint density is employed that allows the different inference operations to be performed asynchronously while still recovering the optimal solution produced by a full batch optimization. Due to the real-time, asynchronous nature of this algorithm, updates to the state estimates from the high-latency smoother will naturally be delayed until the smoother calculations have completed. The architecture has been tested in a simulated aerial environment and on real data collected from an autonomous ground vehicle. In all cases, the concurrent architecture is shown to recover the full batch solution, even while updated state estimates are produced in real time.
    United States. Air Force Research Laboratory. All Source Positioning and Navigation (ASPN) Program (Contract FA8650-11-C-7137)
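
    A structural sketch of the concurrency pattern may help: the filter runs at sensor rate against the latest smoothed prior, while the smoother periodically absorbs a snapshot of the accumulated measurements and folds its (delayed) result back in. The filter_step and smooth_all callables are assumed interfaces; the paper's factor-graph factorization, which is what guarantees recovery of the full batch solution, is not reproduced here.

        import threading

        class ConcurrentFilterSmoother:
            def __init__(self, filter_step, smooth_all):
                self.filter_step = filter_step   # fast update over recent states
                self.smooth_all = smooth_all     # slow batch smoothing, all states
                self.recent = []                 # measurements since the last sync
                self.smoothed_prior = None       # latest (delayed) smoother result
                self.estimate = None
                self.lock = threading.Lock()

            def on_measurement(self, z):
                # low-latency path: runs at sensor rate, never blocks on the smoother
                with self.lock:
                    self.recent.append(z)
                    self.estimate = self.filter_step(self.smoothed_prior, self.recent)
                    return self.estimate

            def smoother_iteration(self):
                # high-latency path: run repeatedly in a background thread
                with self.lock:
                    batch = list(self.recent)    # snapshot at the sync point
                    prior = self.smoothed_prior
                result = self.smooth_all(prior, batch)   # slow; outside the lock
                with self.lock:
                    self.smoothed_prior = result         # fold back, with delay
                    self.recent = self.recent[len(batch):]  # keep unsynced tail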

    Inverse Modeling for MEG/EEG data

    We provide an overview of the state of the art in mathematical methods used to reconstruct brain activity from neurophysiological data. After a brief introduction to the mathematics of the forward problem, we discuss standard and recently proposed regularization methods, as well as Monte Carlo techniques for Bayesian inference. We classify the inverse methods based on the underlying source model, and discuss their advantages and disadvantages. Finally, we describe an application to the pre-surgical evaluation of epileptic patients.
    Comment: 15 pages, 1 figure
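
    Among the standard regularization methods such a survey covers, the Tikhonov-regularized minimum-norm estimate is the canonical example and has a simple closed form. A minimal NumPy sketch, with an assumed regularization value and toy dimensions:

        import numpy as np

        def minimum_norm_estimate(L, y, lam=0.1):
            # Solves  min_x ||y - L x||^2 + lam ||x||^2  for source amplitudes x,
            # given lead-field matrix L (n_sensors x n_sources) and measurements y.
            # Closed form: x = L^T (L L^T + lam I)^{-1} y, cheap when sources >> sensors.
            n_sensors = L.shape[0]
            G = L @ L.T + lam * np.eye(n_sensors)
            return L.T @ np.linalg.solve(G, y)

        # toy usage: 64 sensors, 5000 candidate sources, one active source
        rng = np.random.default_rng(0)
        L = rng.normal(size=(64, 5000))
        x_true = np.zeros(5000)
        x_true[123] = 1.0
        y = L @ x_true + 0.01 * rng.normal(size=64)
        x_hat = minimum_norm_estimate(L, y)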

    GSFC Ada programming guidelines

    A significant Ada effort has been under way at Goddard for the last two years. To ease the center's transition toward Ada (notably for future space station projects), a cooperative effort of half a dozen companies and NASA personnel was started in 1985 to produce programming standards and guidelines for the Ada language. The great richness of the Ada language and programmers' need for good style examples make Ada programming guidelines an important tool for smoothing the Ada transition. Because of the natural divergence of technical opinions, the great diversity of the government and private organizations involved, and the novelty of Ada technology, creating an Ada programming guidelines document is a difficult and time-consuming task. It is also a vital one. Steps must now be taken to ensure that the guide is refined in an organized but timely manner to reflect the growing level of expertise of the Ada community.