20 research outputs found

    Multiple Measurement Vector Model for Sparsity-Based Vascular Ultrasound Imaging


    Learning Wavefront Coding for Extended Depth of Field Imaging

    Depth of field is an important property of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging ill-posed problem and has been extensively addressed in the literature. We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element (DOE) and achieve deblurring through a convolutional neural network. Thanks to the end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring through standard gradient descent methods. Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental in the convergence of the end-to-end network. We achieve superior EDoF imaging performance compared to the state of the art, demonstrating results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
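The joint design idea, where a differentiable image-formation model lets gradients flow back into the optical parameters, can be sketched with a deliberately simplified 1-D model. The Gaussian transfer function, the scalar parameters `theta` (optics) and `lam` (reconstruction), and the finite-difference gradients below are illustrative assumptions standing in for the paper's DOE model and autodiff training:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)        # ground-truth "scene" (toy 1-D signal)
f = np.fft.fftfreq(n)             # normalized spatial frequencies

def otf(theta):
    # Toy optical transfer function: a Gaussian blur whose width is
    # controlled by the optimizable optical parameter theta.
    return np.exp(-theta * (2 * np.pi * f) ** 2)

def loss(theta, lam):
    # Image formation followed by Wiener-style deblurring with a
    # learnable regularization weight lam; noise is omitted in this toy.
    h = otf(theta)
    y_hat = h * np.fft.fft(x)
    x_rec = np.real(np.fft.ifft(np.conj(h) * y_hat / (np.abs(h) ** 2 + lam)))
    return np.mean((x_rec - x) ** 2)

# Joint gradient descent on the optical and reconstruction parameters;
# finite differences stand in for automatic differentiation.
theta, lam, lr, eps = 2.0, 0.5, 0.1, 1e-5
for _ in range(200):
    g_t = (loss(theta + eps, lam) - loss(theta - eps, lam)) / (2 * eps)
    g_l = (loss(theta, lam + eps) - loss(theta, lam - eps)) / (2 * eps)
    theta = max(theta - lr * g_t, 1e-3)   # keep the optic physically valid
    lam = max(lam - lr * g_l, 1e-3)       # keep the deconvolution stable
```

Both parameters shrink together, illustrating how the optical design and the computational post-processing trade off against one another inside a single loss.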

    Unsupervised Scale-Invariant Multispectral Shape Matching

    Alignment between non-rigid stretchable structures is one of the hardest tasks in computer vision: on the one hand, the invariant properties are hard to define, and on the other hand, no labelled data exists for real datasets. We present an unsupervised neural network architecture based upon the spectrum of scale-invariant geometry. We build on top of the functional maps architecture, but show that learning local features, as done until now, is not enough once the isometric assumption breaks; this can be solved using scale-invariant geometry. Our method is agnostic to local-scale deformations and shows superior performance for matching shapes from different domains when compared to existing spectral state-of-the-art solutions.

    Robust and Efficient Inference of Scene and Object Motion in Multi-Camera Systems

    Multi-camera systems can overcome some of the fundamental limitations of single-camera systems. Having multiple viewpoints of a scene goes a long way toward limiting the influence of field of view, occlusion, blur and poor resolution of an individual camera. This dissertation addresses robust and efficient inference of scene and object motion in multi-camera and multi-sensor systems.

    The first part of the dissertation discusses the role of constraints introduced by projective imaging in robust inference of multi-camera/sensor-based object motion. We discuss the role of the homography and epipolar constraints for fusing object motion perceived by individual cameras. For planar scenes, the homography constraint provides a natural mechanism for data association. For scenes that are not planar, the epipolar constraint provides a weaker multi-view relationship. We use the epipolar constraint for tracking in multi-camera and multi-sensor networks. In particular, we show that the epipolar constraint reduces the dimensionality of the state space of the problem by introducing a "shared" state space for the joint tracking problem. This allows for robust tracking even when one of the sensors fails due to poor SNR or occlusion.

    The second part of the dissertation deals with challenges in the computational aspects of tracking algorithms that are common to such systems. Much of the inference in multi-camera and multi-sensor networks deals with complex non-linear models corrupted with non-Gaussian noise. Particle filters provide approximate Bayesian inference in such settings. We analyze the computational drawbacks of traditional particle filtering algorithms, and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm, and in particular concentrate on implementations that have minimum processing times.

    The last part of the dissertation deals with the efficient sensing paradigm of compressive sensing (CS) applied to signals in imaging, such as natural images and reflectance fields. We propose a hybrid signal model based on the assumption that most real-world signals exhibit subspace compressibility as well as sparse representations. We show that several real-world visual signals, such as images, reflectance fields and videos, are better approximated by this hybrid of two models. We derive optimal hybrid linear projections of the signal and show that theoretical guarantees and algorithms designed for CS can be easily extended to hybrid subspace-compressive sensing. Such methods reduce the amount of information sensed by a camera, and help in reducing the so-called data deluge problem in large multi-camera systems.
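The hybrid subspace-plus-sparse signal model can be illustrated with a small numerical sketch; the PCA subspace and the hard-thresholded residual below are illustrative choices, not necessarily the dissertation's exact construction:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n_train = 100, 5, 200

# Training signals that live in a k-dimensional subspace.
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]
train = basis @ rng.standard_normal((k, n_train))

# Learn the subspace from data (PCA via the SVD).
U = np.linalg.svd(train, full_matrices=False)[0][:, :k]

def hybrid_approx(x, s=3):
    """Subspace component plus an s-sparse residual (hard thresholding)."""
    x_sub = U @ (U.T @ x)                 # projection onto the subspace
    r = x - x_sub
    keep = np.argsort(np.abs(r))[-s:]     # the s largest residual entries
    r_sparse = np.zeros_like(r)
    r_sparse[keep] = r[keep]
    return x_sub + r_sparse

# A signal with a subspace component plus three sparse "spikes".
x = U @ rng.standard_normal(k)
x[[7, 42, 90]] += 5.0

err_hybrid = np.linalg.norm(x - hybrid_approx(x))
err_subspace = np.linalg.norm(x - U @ (U.T @ x))
```

For signals of this kind the hybrid approximation error is strictly smaller than what the subspace model alone achieves, which is the intuition behind combining the two models before designing linear measurement projections.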

    Visibility recovery on images acquired in attenuating media. Application to underwater, fog, and mammographic imaging

    When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications, or simply decreasing the visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition on a bidimensional image of three-dimensional structures produces low-contrast images in which structures of interest suffer from diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by the atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images.

    To address all these challenges, in this dissertation we develop new methodologies that rely on: a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually pleasant and useful output image, with better contrast and increased visibility. In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature.
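The variational strategy of minimizing an image-dependent energy by an iterative scheme can be sketched with a generic Tikhonov-type energy; the fidelity and smoothness terms below are placeholders for the degradation-specific functionals developed in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((32, 32))       # stand-in for a degraded input image
lam = 0.2                        # smoothness weight (illustrative)

def grad_x(u):
    # Forward differences with replicated (Neumann) boundary.
    return np.diff(u, axis=1, append=u[:, -1:])

def grad_y(u):
    return np.diff(u, axis=0, append=u[-1:, :])

def energy(u):
    # E(u) = ||u - img||^2 + lam * ||grad u||^2: a generic fidelity +
    # smoothness functional, not the dissertation's specific energies.
    return np.sum((u - img) ** 2) + lam * np.sum(grad_x(u) ** 2
                                                 + grad_y(u) ** 2)

def laplacian(u):
    # Five-point Laplacian with Neumann boundary, matched to the
    # difference operators used in the smoothness term above.
    up = np.pad(u, 1, mode="edge")
    return (up[2:, 1:-1] + up[:-2, 1:-1]
            + up[1:-1, 2:] + up[1:-1, :-2] - 4.0 * u)

u = img.copy()
e0 = energy(u)
for _ in range(100):
    # Explicit gradient descent: grad E = 2(u - img) - 2*lam*Laplacian(u).
    u -= 0.1 * (2.0 * (u - img) - 2.0 * lam * laplacian(u))
e1 = energy(u)
```

Each step trades fidelity to the observation against smoothness of the output; the actual methods replace these two terms with integral terms derived from each physical degradation model.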

    Methods for Structure from Motion


    Bayesian approach to ionospheric imaging with Gaussian Markov random field priors

    The ionosphere is the layer of the atmosphere at roughly 60–1000 km altitude in which solar radiation and fast particles of solar origin have stripped electrons from gas atoms and molecules. The resulting ions and free electrons carry an electric charge that interacts with electric and magnetic fields, so the ionosphere plays a significant role in radio communication. It can enable long over-the-horizon transmissions by reflecting a transmitted electromagnetic signal back towards the ground, but it also affects higher-frequency signals that pass through it; in satellite positioning, for example, the ionospheric effect must at best be accounted for, and at worst it can prevent positioning altogether. The most visible and best-known ionospheric phenomenon is probably the aurora. A central quantity in ionospheric research is the number of free electrons per cubic metre. In practice, the electron density can be measured with radars, such as the EISCAT radar system operating in Norway, Finland and Sweden, and with rocket or satellite measurements. Such measurements can be very accurate, but they provide information only along the radar beam or in the vicinity of the instrument, so studying the ionosphere over a wider region with these methods is difficult and expensive. Existing positioning satellites and receiver networks, however, make it possible to measure the ionospheric electron density on a regional and even global scale as a by-product of their primary purpose. The temporal and spatial coverage of satellite measurements is good and constantly growing, but compared with accurate radar measurements the information provided by an individual measurement is considerably smaller. In this doctoral work, a software package was developed for three-dimensional imaging of the ionospheric electron density. The method is based on the theory of mathematical inverse problems and resembles the tomographic imaging methods used in medicine. Because of the incomplete information in satellite measurements, the work concentrates especially on how finding a solution can be aided with statistically expressed physical prior information. In particular, a new correlation-prior method based on Gaussian Markov random fields was applied. The method significantly reduces the memory needed in the computations, which shortens computation time and enables a higher imaging resolution.

    The ionosphere is the partly ionised layer of Earth's atmosphere caused by solar radiation and particle precipitation. The ionisation can start from 60 km and extend up to 1000 km altitude. Often the interest in the ionosphere is in the quantity and distribution of the free electrons. The electron density is related to the ionospheric refractive index, and thus sufficiently high densities affect electromagnetic waves propagating in the ionised medium. This is the reason HF radio signals can reflect from the ionosphere, allowing broadcasts over the horizon, but it is also an error source in satellite positioning systems. The ionospheric electron density can be studied e.g. with specific radars and satellite in situ measurements. These instruments can provide very precise observations, however, typically only in the vicinity of the instrument. To make observations on regional and global scales, due to the volume of the domain and the price of the aforementioned instruments, indirect satellite measurements and imaging methods are required.

    Mathematically, ionospheric imaging suffers from two main complications. First, due to the very sparse and limited measurement geometry between satellites and receivers, it is an ill-posed inverse problem: the measurements do not carry enough information to reconstruct the electron density, and additional information is required in some form. Second, to obtain sufficient resolution, the resulting numerical model can become computationally infeasible.

    In this thesis, the Bayesian statistical background for ionospheric imaging is presented. The Bayesian approach provides a natural way to account for different sources of information with their corresponding uncertainties and to update the estimated ionospheric state as new information becomes available. Most importantly, Gaussian Markov random field (GMRF) priors are introduced for the application of ionospheric imaging. The GMRF approach makes the Bayesian approach computationally feasible through sparse prior precision matrices. The Bayesian method is indeed practicable, and many of the widely used methods in ionospheric imaging revert back to the Bayesian approach. Unfortunately, the approach cannot escape the inherent lack of information provided by the measurement set-up and, similarly to other approaches, it is highly dependent on the additional subjective information required to solve the problem. It is shown here that the use of GMRF priors provides a genuine improvement for the task, as this subjective information can be understood and described probabilistically in a meaningful and physically interpretable way while keeping the computational costs low.
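The computational role of sparse prior precision matrices can be seen in a minimal linear-Gaussian sketch; the 1-D density profile, windowed "ray" measurements and second-difference GMRF precision below are simplified stand-ins for the thesis's tomographic set-up:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(3)
n, m, sigma, w = 200, 40, 0.05, 10

# Smooth toy "electron density" profile along one vertical column.
t = np.linspace(0.0, 1.0, n)
x_true = np.exp(-((t - 0.5) / 0.15) ** 2)

# Each measurement integrates a short window of w cells, loosely
# mimicking a line integral between a satellite and a receiver.
A = sp.lil_matrix((m, n))
for i in range(m):
    a = rng.integers(0, n - w)
    A[i, a:a + w] = 1.0 / w
A = A.tocsr()
y = A @ x_true + sigma * rng.standard_normal(m)

# GMRF prior encoded by a sparse precision matrix Q: second differences
# favour smooth profiles; the small ridge keeps Q positive definite.
D = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
Q = 1e2 * (D.T @ D) + 1e-4 * sp.eye(n)

# The MAP estimate solves a single sparse linear system:
#   (A^T A / sigma^2 + Q) x = A^T y / sigma^2
lhs = (A.T @ A) / sigma**2 + Q
x_map = spsolve(sp.csc_matrix(lhs), (A.T @ y) / sigma**2)
```

Because both the measurement operator and the prior precision are sparse, the posterior system matrix stays sparse, and memory and solve time scale far better than with a dense prior covariance; this is the computational point made in the thesis.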

    Applications in Monocular Computer Vision using Geometry and Learning : Map Merging, 3D Reconstruction and Detection of Geometric Primitives

    As the dream of autonomous vehicles moving around in our world comes closer, the problem of robust localization and mapping is essential to solve. In this inherently structured and geometric problem we also want the agents to learn from experience in a data-driven fashion. How modern neural network models can be combined with Structure from Motion (SfM) is an interesting research question, and this thesis studies some related problems in 3D reconstruction, feature detection, SfM and map merging.

    In Paper I we study how a Bayesian Neural Network (BNN) performs in Semantic Scene Completion, where the task is to predict a semantic 3D voxel grid for the field of view of a single RGBD image. We propose an extended task and evaluate the benefits of the BNN when encountering new classes at inference time. It is shown that the BNN outperforms the deterministic baseline.

    Papers II-III are about detection of points, lines and planes defining a Room Layout in an RGB image. Due to the repeated textures and homogeneous colours of indoor surfaces it is not ideal to use only point features for Structure from Motion. The idea is to complement the point features by detecting a Wireframe, a connected set of line segments, which marks the intersections of planes in the Room Layout. Paper II concerns a task for detecting a Semantic Room Wireframe and implements a neural network model utilizing a Graph Convolutional Network module. The experiments show that the method is more flexible than previous Room Layout Estimation methods and performs better than previous Wireframe Parsing methods. Paper III takes the task closer to Room Layout Estimation by detecting a connected set of semantic polygons in an RGB image. The end-to-end trainable model is a combination of a Wireframe Parsing model and a Heterogeneous Graph Neural Network. We show promising results by outperforming state-of-the-art models for Room Layout Estimation using synthetic Wireframe detections. However, the joint Wireframe and Polygon detector requires further research to compete with the state-of-the-art models.

    In Paper IV we propose minimal solvers for SfM with parallel cylinders. The problem may be reduced to estimating circles in 2D, and the paper contributes theory for the two-view relative motion and two-circle relative structure problems. Fast solvers are derived, and experiments show good performance both in simulation and on real data.

    Papers V-VII cover the task of map merging. That is, given a set of individually optimized point clouds with camera poses from an SfM pipeline, how can the solutions be effectively merged without completely re-solving the Structure from Motion problem? Papers V-VI introduce an effective method for merging and show its effectiveness through experiments on real and simulated data. Paper VII considers the matching problem for point clouds and proposes minimal solvers that allow for deformation of each point cloud. Experiments show that the method robustly matches point clouds with drift in the SfM solution.

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of the algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bézier curves which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position consists of the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
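A smooth simulated target path of the kind described can be generated from Bézier curves; the de Casteljau evaluation below is a standard technique, and the control points are arbitrary examples rather than the dissertation's data:

```python
import numpy as np

def bezier(control, t):
    """Evaluate a Bezier curve at parameter values t (de Casteljau)."""
    pts = np.asarray(control, dtype=float)
    t = np.atleast_1d(t)[:, None, None]          # broadcast over parameters
    p = np.broadcast_to(pts, (t.shape[0],) + pts.shape).copy()
    while p.shape[1] > 1:
        # Repeated linear interpolation between consecutive control points.
        p = (1.0 - t) * p[:, :-1] + t * p[:, 1:]
    return p[:, 0]

# Example cubic segment in 2-D; the control points are arbitrary.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
path = bezier(ctrl, np.linspace(0.0, 1.0, 50))
```

The curve always starts at the first control point and ends at the last, so several such segments joined end to end give a smooth, controllable path for the simulated target.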