16 research outputs found

    Real-time people tracking in a camera network

    Get PDF
    Visual tracking is fundamental to the recognition and analysis of human behaviour. In this thesis we present an approach to track several subjects using multiple cameras in real time. The tracking framework employs a numerical Bayesian estimator, also known as a particle filter, which has been developed for parallel implementation on a Graphics Processing Unit (GPU). In order to integrate multiple cameras into a single tracking unit we represent the human body by a parametric ellipsoid in a 3D world. The elliptical boundary can be projected rapidly, several hundred times per subject per frame, onto any image for comparison with the image data within a likelihood model. By adding variables that encode visibility and persistence to the state vector, we tackle the problems of distraction and short-period occlusion. However, subjects may also disappear for longer periods due to blind spots between cameras' fields of view. To recognise a desired subject after such a long period, we add coloured texture to the ellipsoid surface, which is learnt and retained during the tracking process. This texture signature improves the recall rate from 60% to 70-80% when compared to state-only data association. Compared to a standard Central Processing Unit (CPU) implementation, there is a significant speed-up ratio
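    The particle filter the abstract parallelizes can be illustrated with a minimal, CPU-only bootstrap filter. The 1-D random-walk motion model, Gaussian likelihood, and all parameter values below are illustrative assumptions, not the thesis's ellipsoid-projection likelihood:

```python
import math
import random

def particle_filter_step(particles, weights, observation, sigma=1.0):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle through a random-walk motion model.
    particles = [p + random.gauss(0.0, 0.5) for p in particles]
    # Weight: Gaussian likelihood of the observation given each particle.
    weights = [math.exp(-0.5 * ((p - observation) / sigma) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new, equally weighted set in proportion to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights

random.seed(0)
parts = [random.uniform(-5.0, 5.0) for _ in range(500)]
wts = [1.0 / 500] * 500
for obs in [0.0, 0.2, 0.4]:  # a target drifting slowly to the right
    parts, wts = particle_filter_step(parts, wts, obs)
estimate = sum(p * w for p, w in zip(parts, wts))
print(round(estimate, 1))  # state estimate, near the recent observations
```

    In the thesis's setting, the weighting step would instead project the ellipsoid into each camera view and score it against the image data, and each particle would be handled by a GPU thread.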

    Ray Tracing Gems

    Get PDF
    This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
    What you'll learn:
    - The latest ray tracing techniques for developing real-time applications in multiple domains
    - Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
    - How to implement high-performance graphics for interactive visualizations, games, simulations, and more
    Who this book is for:
    - Developers looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
    - Students looking to learn about best practices in these areas
    - Enthusiasts who want to understand and experiment with their new GPU

    Discovering Higher-order SNP Interactions in High-dimensional Genomic Data

    Get PDF
    In this thesis, a multifactor dimensionality reduction method based on associative classification is employed to identify higher-order SNP interactions, enhancing our understanding of the genetic architecture of complex diseases. The thesis further explores the application of deep learning techniques, providing new clues for interaction analysis. The performance of the deep learning method is maximized by unifying deep neural networks with a random forest, achieving reliable interaction detection in the presence of noise
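    To give a flavour of the kind of search involved, the sketch below scores every SNP pair with a simple MDR-style high-risk/low-risk rule: each multilocus genotype cell is labelled high-risk if cases outnumber controls there, and a pair is scored by the accuracy of the resulting rule. The toy data, scoring rule, and function names are illustrative assumptions, not the thesis's actual method:

```python
from collections import Counter
from itertools import combinations

def pair_score(genotypes, labels, i, j):
    """Accuracy of an MDR-style rule built on SNP pair (i, j).

    genotypes: per-sample lists of 0/1/2 minor-allele counts;
    labels: 1 = case, 0 = control.
    """
    cells = Counter()
    for g, y in zip(genotypes, labels):
        cells[(g[i], g[j], y)] += 1
    correct = 0
    for g, y in zip(genotypes, labels):
        cases = cells[(g[i], g[j], 1)]
        controls = cells[(g[i], g[j], 0)]
        predicted = 1 if cases > controls else 0  # high-risk cell -> predict case
        correct += (predicted == y)
    return correct / len(labels)

def best_pair(genotypes, labels, n_snps):
    """Exhaustively scan all second-order (pairwise) SNP interactions."""
    return max(combinations(range(n_snps), 2),
               key=lambda ij: pair_score(genotypes, labels, *ij))

# Toy data: SNPs 0 and 1 interact (XOR-like pattern); SNP 2 is noise.
genos = [[0, 0, 1], [0, 1, 0], [1, 0, 2], [1, 1, 1],
         [0, 0, 0], [0, 1, 2], [1, 0, 1], [1, 1, 0]]
labs = [0, 1, 1, 0, 0, 1, 1, 0]
print(best_pair(genos, labs, 3))  # → (0, 1)
```

    Higher-order interactions extend the same scan to triples and beyond, which is where the combinatorial cost motivates the dimensionality-reduction and deep-learning approaches the thesis develops.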

    Accelerating gravitational-wave inference with machine learning

    Get PDF
    The future for gravitational-wave astronomy is bright, with improvements to the existing ground-based interferometers of the LIGO-Virgo-KAGRA Collaboration (LVK) and new ground- and space-based interferometers planned for the near future. As a result, there will imminently be an abundance of data to analyse from these detectors, bringing the chance to probe new regimes. However, it will also bring new challenges, such as the sheer volume of data and the need for new analysis techniques. Leveraging this data hinges on our ability to determine the characteristics of the sources that produce the observed gravitational-wave signals, and Bayesian inference is the method of choice. The main algorithms used in these analyses are Markov Chain Monte Carlo and Nested Sampling. Each has its own advantages and disadvantages, but both are computationally expensive when applied to gravitational-wave inference, typically taking of order days to weeks for shorter signals and up to months for longer signals, such as those from binary neutron star mergers. Furthermore, the cost of these analyses increases as additional physics is included, such as higher-order modes, precession and eccentricity. These factors, combined with the previously mentioned increase in data, and therefore number of signals, pose a significant challenge. As such, there is a need for faster and more efficient algorithms for gravitational-wave inference. In this work, we present novel algorithms that serve as drop-in replacements for existing approaches but can accelerate inference by an order of magnitude. Our initial approach is to incorporate machine learning into an existing algorithm, namely nested sampling, with the aim of accelerating it whilst leaving the underlying algorithm unchanged.
To this end, we introduce nessai, a nested sampling algorithm that includes a novel method for sampling from the likelihood-constrained prior that leverages normalizing flows, a type of machine learning algorithm. Normalizing flows can approximate the distribution of live points during a nested sampling run, and allow for new points to be drawn from it. They are also flexible and can learn complex correlations, thus eliminating the need to use a random walk to propose new samples. We validate nessai for gravitational-wave inference by analysing a population of simulated binary black holes (BBHs) and demonstrate that it produces statistically consistent results. We also compare nessai to dynesty, the standard nested sampling algorithm used by the LVK, and find that, after some improvements, it is on average ∼ 6 times more efficient and enables inference in time scales of order 10 hours on a single core. We also highlight other advantages of nessai, such as the included diagnostics and simple parallelization of the likelihood evaluation. However, we also find that the rejection sampling step necessary to ensure new samples are distributed according to the prior can be a significant computational bottleneck. We then take the opposite approach and design a custom nested sampling algorithm tailored to normalizing flows, which we call i-nessai. This algorithm is based on importance nested sampling and incorporates elements from existing variants of nested sampling. In contrast to the standard algorithm, samples no longer have to be ordered by increasing likelihood nor distributed according to the prior, thus addressing the aforementioned bottleneck in nessai. Furthermore, the formulation of the evidence allows for it to be updated with batches of samples rather than one-by-one. 
The algorithm we design is centred around constructing a meta-proposal that approximates the posterior distribution, which is achieved by iteratively adding normalizing flows until a stopping criterion is met. We validate i-nessai on a range of toy test problems, which allows us to verify that the algorithm is consistent with both nessai and, when available, the analytic results. We then repeat a similar analysis to that performed previously and analyse a population of simulated BBH signals with i-nessai. The results show that i-nessai produces consistent results, but is up to 3 times more efficient than nessai and more than an order of magnitude more efficient (13 times) than dynesty. We also apply i-nessai to a binary neutron star (BNS) analysis and find that it can yield results in less than 30 minutes whilst only requiring O(10^6) likelihood evaluations. Having developed tools to accelerate parameter estimation, we then apply them to real data from LVK observing runs. We choose to analyse all 11 events from O1 and a small selection of events from O2 and O3, and find good agreement between our results and those published by the LVK. This demonstrates that nessai can be used to analyse real gravitational-wave data. However, it also highlights aspects that could be improved to further accelerate the algorithm, such as how the orbital phase and multimodal likelihood surfaces are handled. We also show how i-nessai can be applied to real data, but ultimately conclude that further work is required to determine if the settings used are robust. Finally, we consider nessai in the context of next-generation ground-based interferometers and highlight some of the challenges such analyses present. As a whole, the algorithms introduced in this work pave the way for faster gravitational-wave inference, offering speed-ups of up to an order of magnitude compared to existing approaches. 
Furthermore, they demonstrate how machine learning can be incorporated into existing analyses to accelerate them, which has the additional benefit of providing drop-in replacements for existing tools
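    The nested sampling scheme that nessai and i-nessai build on can be sketched in a few lines. The Gaussian toy likelihood, uniform prior, and all settings below are illustrative assumptions; the brute-force rejection step is exactly the piece a normalizing-flow proposal would replace:

```python
import math
import random

def log_likelihood(x):
    # Standard-normal likelihood; the prior is uniform on [-5, 5].
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

def nested_sampling(n_live=200, n_iter=1600, seed=1):
    """Classic nested sampling: shrink the prior volume by ~e^(-1/n_live)
    per iteration and accumulate the evidence Z = sum of L_i * dX_i."""
    rng = random.Random(seed)
    live = [rng.uniform(-5, 5) for _ in range(n_live)]
    log_z = -math.inf
    log_x = 0.0  # log of the prior volume still enclosed
    log_shrink = math.log(1 - math.exp(-1.0 / n_live))
    for _ in range(n_iter):
        worst = min(live, key=log_likelihood)
        threshold = log_likelihood(worst)
        # Weight of the discarded point: L_worst * (X_old - X_new).
        log_w = threshold + log_x + log_shrink
        hi, lo = max(log_z, log_w), min(log_z, log_w)
        log_z = hi + math.log1p(math.exp(lo - hi))  # log-sum-exp update
        log_x -= 1.0 / n_live
        # Replace the worst live point: rejection-sample from the
        # likelihood-constrained prior (the nessai bottleneck).
        while True:
            candidate = rng.uniform(-5, 5)
            if log_likelihood(candidate) > threshold:
                live[live.index(worst)] = candidate
                break
    return log_z

log_z_est = nested_sampling()
print(round(log_z_est, 2))  # close to log(1/10) ≈ -2.30, the analytic evidence
```

    As the threshold rises, the acceptance rate of the rejection loop collapses; training a normalizing flow on the live points and proposing from it is what keeps the cost manageable in high dimensions.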

    Notes on Randomized Algorithms

    Full text link
    Lecture notes for the Yale Computer Science course CPSC 469/569 Randomized Algorithms. Suitable for use as a supplementary text for an introductory graduate or advanced undergraduate course on randomized algorithms. Discusses tools from probability theory, including random variables and expectations, union bound arguments, concentration bounds, applications of martingales and Markov chains, and the Lovász Local Lemma. Algorithmic topics include analysis of classic randomized algorithms such as Quicksort and Hoare's FIND, randomized tree data structures, hashing, Markov chain Monte Carlo sampling, randomized approximate counting, derandomization, quantum computing, and some examples of randomized distributed algorithms
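    For a flavour of the material, one of the classics the notes analyse, Hoare's FIND (randomized quickselect), can be sketched as follows; the random pivot gives expected linear time:

```python
import random

def quickselect(items, k):
    """Hoare's FIND: return the k-th smallest element (0-indexed)
    in expected O(n) time by partitioning around a random pivot."""
    items = list(items)
    while True:
        pivot = random.choice(items)
        smaller = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        if k < len(smaller):
            items = smaller                      # answer is below the pivot
        elif k < len(smaller) + len(equal):
            return pivot                         # answer is the pivot itself
        else:
            k -= len(smaller) + len(equal)       # answer is above the pivot
            items = [x for x in items if x > pivot]

print(quickselect([9, 1, 8, 2, 7, 3], 2))  # → 3, the third smallest
```

    The union bound and concentration tools the notes cover are what turn this "expected O(n)" claim into high-probability guarantees.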

    Sixth Biennial Report : August 2001 - May 2003

    No full text

    Evolutionary genomics : statistical and computational methods

    Get PDF
    This open access book addresses the challenge of analyzing and understanding the evolutionary dynamics of complex biological systems at the genomic level, and elaborates on some promising strategies that would bring us closer to uncovering the vital relationships between genotype and phenotype. After a few educational primers, the book continues with sections on sequence homology and alignment, phylogenetic methods to study genome evolution, methodologies for evaluating selective pressures on genomic sequences as well as genomic evolution in light of protein domain architecture and transposable elements, population genomics and other omics, and discussions of current bottlenecks in handling and analyzing genomic data. Written for the highly successful Methods in Molecular Biology series, chapters include the kind of detail and expert implementation advice that lead to the best results. Authoritative and comprehensive, Evolutionary Genomics: Statistical and Computational Methods, Second Edition aims to serve both novices in biology with strong statistics and computational skills, and molecular biologists with a good grasp of standard mathematical concepts, in moving this important field of study forward

    Memory Coalescing Implementation of Metropolis Resampling on Graphics Processing Unit

    No full text
    Owing to the many cores in its architecture, the graphics processing unit (GPU) offers promise for parallel execution of the particle filter. A stage of the particle filter that is particularly challenging to parallelize is resampling. Parallel resampling algorithms exist in the literature, such as Metropolis resampling, which does not require a collective operation such as a cumulative sum over weights and does not suffer from numerical instability. However, with a large number of particles, Metropolis resampling becomes slow because of non-coalesced access to the GPU's global memory. In this article, we offer solutions to this problem. We introduce two implementation techniques, named Metropolis-C1 and Metropolis-C2, and compare them with the original Metropolis resampling on an NVIDIA Tesla K40 board. In the first scenario, where these two techniques achieve their fastest execution times, Metropolis-C1 is faster than the others but yields the worst results in quality, whereas Metropolis-C2 stays closer to the original Metropolis resampling in quality. In the second scenario, where all three algorithms yield similar quality, Metropolis-C1 and Metropolis-C2 become slower but are still faster than the original Metropolis resampling
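    A minimal CPU sketch of the Metropolis resampler the article starts from is shown below; each ancestor index is chosen by a short Metropolis chain over pairwise weight ratios, so no cumulative sum is needed. The coalesced-memory variants Metropolis-C1/C2 concern GPU memory layout and are not reproduced here; the function name, chain length, and test weights are illustrative:

```python
import random

def metropolis_resample(weights, n_steps=100, seed=0):
    """Metropolis resampling: choose an ancestor index for each particle
    using only pairwise weight ratios (no cumulative sum over weights).
    On a GPU each output index would be an independent thread; n_steps
    trades bias from short chains against run time."""
    rng = random.Random(seed)
    n = len(weights)
    ancestors = []
    for i in range(n):
        k = i  # each chain starts at its own index
        for _ in range(n_steps):
            j = rng.randrange(n)  # propose a uniformly random index
            # Accept with probability min(1, weights[j] / weights[k]).
            if rng.random() * weights[k] <= weights[j]:
                k = j
        ancestors.append(k)
    return ancestors

# One heavy particle holding half the total weight should win a large
# share of the ancestor slots; unnormalized weights are fine here.
w = [100.0] + [1.0] * 100
anc = metropolis_resample(w)
heavy_frac = sum(a == 0 for a in anc) / len(anc)
print(heavy_frac)
```

    On a GPU, the chains' uniformly random index proposals are what produce the scattered, non-coalesced global-memory reads of the weight array that Metropolis-C1 and Metropolis-C2 are designed to mitigate.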
