18 research outputs found

    A multigrid platform for real-time motion computation with discontinuity-preserving variational methods

    Variational methods are among the most accurate techniques for estimating optic flow. They yield dense flow fields and can be designed to preserve discontinuities, handle large displacements, and perform well under noise or varying illumination. However, such adaptations make the minimisation of the underlying energy functional very expensive computationally: typically, one or more large linear or nonlinear systems of equations must be solved to obtain the desired solution. Consequently, variational methods are considered too slow for real-time performance. In our paper we address this problem in two ways: (i) We present a numerical framework based on bidirectional multigrid methods for accelerating a broad class of variational optic flow methods with different constancy and smoothness assumptions; discontinuity-preserving regularisation strategies are the focus of our work. (ii) We show, using classical as well as more advanced variational techniques, that real-time performance is possible even for very complex, highly accurate optic flow models. Experiments show frame rates of up to 63 dense flow fields per second for real-world image sequences of size 160 × 120 on a standard PC. Compared to classical iterative methods, this constitutes a speedup of two to four orders of magnitude.
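
    The paper's optic-flow solver is not reproduced here, but the bidirectional (fine-to-coarse-to-fine) multigrid idea it builds on can be sketched on a model problem. The following is an illustrative V-cycle for a 1D Poisson equation with weighted-Jacobi smoothing; every name and parameter is our own choice, not taken from the paper.

```python
import numpy as np

def smooth(u, f, h, iters=3, omega=2.0 / 3.0):
    # Weighted-Jacobi relaxation for the discretised -u'' = f.
    for _ in range(iters):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full-weighting restriction to a grid of half the resolution.
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec, n):
    # Linear interpolation back to the fine grid.
    e = np.zeros(n)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    # One V-cycle: pre-smooth, coarse-grid correction of the
    # residual equation, prolongation, post-smooth.
    if len(u) <= 3:
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])  # exact coarsest solve
        return u
    u = smooth(u, f, h)
    rc = restrict(residual(u, f, h))
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    u += prolong(ec, len(u))
    return smooth(u, f, h)
```

    Smoothing damps the high-frequency error on the fine grid, and the coarse-grid correction removes the smooth error components that plain iterative methods reduce only very slowly; this combination is the source of the large speedups quoted above.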

    Variational optic flow computation in real-time

    Variational methods for optic flow computation have the reputation of producing good results at the expense of being too slow for real-time applications. We show that real-time variational computation of optic flow fields is possible when appropriate methods are combined with modern numerical techniques. We consider the CLG method, a recent variational technique that combines the quality of the dense flow fields of the Horn and Schunck approach with the noise robustness of the Lucas-Kanade method. For the linear system of equations resulting from the discretised Euler-Lagrange equations, we present different multigrid schemes in detail. We show that under realistic accuracy requirements they are up to 247 times more efficient than the widely used Gauß-Seidel algorithm. On a 3.06 GHz PC, we have computed 40 dense flow fields of size 200 × 200 pixels within a single second.
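
    The Gauß-Seidel baseline against which such multigrid schemes are measured can be sketched as a pointwise-coupled relaxation. This is a minimal illustration assuming the standard CLG Euler-Lagrange form alpha·laplace(u) = J11·u + J12·v + J13 (and analogously for v), with the structure-tensor entries J passed in as arrays; the function name and defaults are our own, not the paper's implementation.

```python
import numpy as np

def clg_gauss_seidel(J11, J12, J13, J22, J23, alpha, iters=200):
    # Pointwise-coupled Gauss-Seidel for the discretised CLG
    # Euler-Lagrange equations (sketch; unit grid spacing):
    #   alpha * laplace(u) = J11*u + J12*v + J13
    #   alpha * laplace(v) = J12*u + J22*v + J23
    h, w = J11.shape
    u = np.zeros((h, w))
    v = np.zeros((h, w))
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                nb_u = nb_v = 0.0
                n = 0  # number of valid neighbours (boundary-aware)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        nb_u += u[ii, jj]
                        nb_v += v[ii, jj]
                        n += 1
                # Exact solve of the 2x2 system coupling u_ij and v_ij.
                a = alpha * n + J11[i, j]
                b = J12[i, j]
                c = alpha * n + J22[i, j]
                r1 = alpha * nb_u - J13[i, j]
                r2 = alpha * nb_v - J23[i, j]
                det = a * c - b * b
                u[i, j] = (c * r1 - b * r2) / det
                v[i, j] = (a * r2 - b * r1) / det
    return u, v
```

    Solving the 2×2 block per pixel exactly keeps the coupling between the two flow components stable; the slow part is the global propagation of information across the grid, which is exactly what multigrid accelerates.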

    High performance cluster computing with 3-D nonlinear diffusion filters

    This paper deals with parallelisation and implementation aspects of PDE-based image processing models for large cluster environments with distributed memory. As an example we focus on nonlinear diffusion filtering, which we discretise by means of an additive operator splitting (AOS). We start by decomposing the algorithm into small modules that are parallelised separately. For this purpose, image partitioning strategies are discussed and their impact on the communication pattern and volume is analysed. Based on the results, we develop an algorithmic implementation with excellent scaling properties on massively connected, low-latency networks. Test runs on a high-end Myrinet cluster yield almost linear speedup factors of up to 209 for 256 processors. This results in typical denoising times of 0.5 seconds for five iterations on a 256 × 256 × 128 data cube.
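
    The AOS scheme mentioned above splits the diffusion operator into one tridiagonal system per grid line and axis, each solvable in linear time with the Thomas algorithm; this line-wise structure is also what makes the domain decomposition parallelise well. Below is a minimal 2D sketch (the paper works in 3D; the function names, the Perona-Malik diffusivity choice, and all parameters are our illustration, not the paper's code).

```python
import numpy as np

def thomas(a, b, c, d):
    # Thomas algorithm for a tridiagonal system:
    # a = sub-diagonal, b = diagonal, c = super-diagonal, d = rhs.
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def diffusivity(u, lam=4.0):
    # Perona-Malik diffusivity on the gradient magnitude.
    gx, gy = np.gradient(u)
    return 1.0 / (1.0 + (gx ** 2 + gy ** 2) / lam ** 2)

def aos_step(u, tau=5.0):
    # One AOS step: average of the per-axis implicit 1D solves,
    #   u_new = (1/2) * sum_axes (I - 2*tau*A_axis(u))^{-1} u.
    g = diffusivity(u)
    out = np.zeros_like(u)
    for axis in (0, 1):
        uu = u if axis == 0 else u.T
        gg = g if axis == 0 else g.T
        res = np.zeros_like(uu)
        for j in range(uu.shape[1]):
            col = uu[:, j]
            gc = gg[:, j]
            n = len(col)
            gh = 0.5 * (gc[:-1] + gc[1:])  # diffusivity at half points
            a = np.zeros(n)
            b = np.ones(n)
            c = np.zeros(n)
            a[1:] = -2.0 * tau * gh
            c[:-1] = -2.0 * tau * gh
            b[1:] += 2.0 * tau * gh      # reflecting boundaries: rows
            b[:-1] += 2.0 * tau * gh     # sum to one by construction
            res[:, j] = thomas(a, b, c, col)
        out += 0.5 * (res if axis == 0 else res.T)
    return out
```

    Because each tridiagonal row sums to one, the step preserves the average grey value and satisfies a maximum-minimum principle, independently of the (possibly large) time step tau; that unconditional stability is why AOS is attractive for large 3D volumes.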

    A source of entangled photons based on a cavity-enhanced and strain-tuned GaAs quantum dot

    A quantum-light source that delivers photons with high brightness and a high degree of entanglement is fundamental for the development of efficient entanglement-based quantum-key-distribution systems. Among all possible candidates, epitaxial quantum dots are currently emerging as one of the brightest sources of highly entangled photons. However, optimising both brightness and entanglement currently requires different technologies that are difficult to combine in a scalable manner. In this work, we overcome this challenge by developing a novel device consisting of a quantum dot embedded in a circular Bragg resonator that is, in turn, integrated onto a micromachined piezoelectric actuator. The resonator engineers the light-matter interaction to enable extraction efficiencies of up to 0.69(4). Simultaneously, the actuator applies strain fields that tune the quantum dot for the generation of entangled photons with fidelities of up to 0.96(1). This hybrid technology has the potential to overcome the key-rate limitations that plague current approaches to entanglement-based quantum key distribution and entanglement-based quantum networks.

    Deterministic Fabrication of Inverted Nanocones around GaAs Quantum Dots

    To enhance the performance of quantum dots used as future quantum emitters in quantum computing or quantum cryptography, the creation of nanostructures has become indispensable. Moreover, many quantum dot fabrication techniques that yield photons of outstanding optical quality offer no position control. This requires the deterministic positioning of the photonic structure around preselected quantum emitters. In this thesis, an approach for quantum dot position mapping and the subsequent creation of inverted nanocones was developed. To this end, a photoluminescence setup operating with light-emitting diodes was built, an image processing script was created, and existing nanostructure processing steps were modified to enhance processing reproducibility. Finally, the fabricated structures were examined through photoluminescence measurements. As a result, some structures exhibited remarkable intensity enhancement, although both the positioning and the fabrication techniques still leave room for improvement. Submitted by Christoph Kohlberger. Universität Linz, Masterarbeit, 2018 (VLID: 278693).

    Nonlinear Shape Statistics in Mumford-Shah Based Segmentation

    We present a variational integration of nonlinear shape statistics into a Mumford-Shah based segmentation process. The nonlinear statistics are derived from a set of training silhouettes by a novel method of density estimation, which can be considered an extension of kernel PCA to a stochastic framework.

    Nonlinear Shape Statistics via Kernel Spaces

    We present a novel approach for representing shape knowledge in terms of example views of 3D objects. Typically, such data sets exhibit a highly nonlinear structure with distinct clusters in the shape vector space, preventing the usual encoding by linear principal component analysis (PCA). For this reason, we propose a nonlinear Mercer kernel PCA scheme that takes into account both the projection distance and the within-subspace distance in a high-dimensional feature space. The comparison of our approach with supervised mixture models indicates that the statistics of example views of distinct 3D objects can be learned and represented fairly well in a completely unsupervised way.
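
    A minimal sketch of the kernel-PCA quantities the last two abstracts build on: fitting kPCA with a Gaussian kernel, then evaluating for a new sample both its within-subspace energy and its squared projection distance to the kPCA subspace, entirely from kernel evaluations. All names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between row vectors.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kpca_fit(X, n_comp=2, sigma=1.0):
    # Kernel PCA: eigendecomposition of the centred Gram matrix.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    H = np.eye(n) - np.ones((n, n)) / n          # centring projector
    Kc = H @ K @ H
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_comp]           # keep largest eigenvalues
    w, V = w[idx], V[:, idx]
    A = V / np.sqrt(w)                           # normalised dual coefficients
    return X, K, A, w, sigma

def kpca_distances(model, y):
    # Decompose the feature-space position of phi(y) into the
    # squared projection distance to the kPCA subspace and the
    # within-subspace energy, using kernel values only.
    X, K, A, w, sigma = model
    k = gaussian_kernel(y[None, :], X, sigma)[0]
    kc = k - K.mean(0) - k.mean() + K.mean()     # centred test kernel row
    beta = A.T @ kc                              # coordinates in the subspace
    norm2 = 1.0 - 2.0 * k.mean() + K.mean()      # ||centred phi(y)||^2
    within = float(beta @ beta)
    return norm2 - within, within
```

    The projection distance measures how far a shape lies from the learned manifold of training silhouettes, while the within-subspace distance measures how atypical its position inside that manifold is; combining both is what distinguishes the scheme from plain linear PCA.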