
    CLEAR: Covariant LEAst-square Re-fitting with applications to image restoration

    In this paper, we propose a new framework to remove parts of the systematic errors affecting popular restoration algorithms, with a special focus on image processing tasks. Generalizing ideas that emerged for $\ell_1$ regularization, we develop an approach that re-fits the results of standard methods towards the input data. Total variation regularizations and non-local means are special cases of interest. We identify important covariant information that should be preserved by the re-fitting method, and emphasize the importance of preserving the Jacobian (w.r.t. the observed signal) of the original estimator. We then provide an approach that has a "twicing" flavor and allows re-fitting the restored signal by adding back a local affine transformation of the residual term. We illustrate the benefits of our method on numerical simulations for image restoration tasks.
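    As a rough illustration of the "twicing"-style re-fitting described above, the sketch below refits a generic restoration operator f by adding back a Jacobian-based transform of the residual, x_refit = f(y) + rho * J_f(y)(y - f(y)), with the Jacobian-vector product approximated by finite differences. The smoothing operator, the step rho, and the finite-difference scheme are illustrative assumptions, not the paper's CLEAR estimator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(y, sigma=2.0):
    """Stand-in restoration operator f(y); a Gaussian smoother for illustration."""
    return gaussian_filter(y, sigma)

def jacobian_vector_product(f, y, v, eps=1e-4):
    """Approximate J_f(y) v by central finite differences."""
    return (f(y + eps * v) - f(y - eps * v)) / (2 * eps)

def twicing_refit(f, y, rho=1.0):
    """Re-fit f(y) towards the data by adding back a Jacobian-based
    transform of the residual (rho is a hypothetical step size)."""
    x_hat = f(y)
    residual = y - x_hat
    return x_hat + rho * jacobian_vector_product(f, y, residual)

# toy usage: noisy step image
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
refit = twicing_refit(denoise, noisy)
```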

    AM-FM methods for image and video processing

    This dissertation is focused on the development of robust and efficient Amplitude-Modulation Frequency-Modulation (AM-FM) demodulation methods for image and video processing (there is currently a patent pending that covers the AM-FM methods and applications described in this dissertation). The motivation for this research lies in the wide number of image and video processing applications that can significantly benefit from it. A number of potential applications are developed in the dissertation. First, a new, robust and efficient formulation for instantaneous frequency (IF) estimation, a variable spacing, local quadratic phase method (VS-LQP), is presented. VS-LQP produces much more accurate results than current AM-FM methods. At significant noise levels (SNR < 30 dB), for single component images, the VS-LQP method produces better IF estimation results than methods using a multi-scale filterbank. At low noise levels (SNR > 50 dB), VS-LQP performs better when used in combination with a multi-scale filterbank. In all cases, VS-LQP outperforms the Quasi-Eigen Approximation algorithm by significant amounts (up to 20 dB). New least squares reconstructions using AM-FM components from the input signal (image or video) are also presented. Three different reconstruction approaches are developed: (i) using AM-FM harmonics, (ii) using AM-FM components extracted from different scales and (iii) using AM-FM harmonics with the output of a low-pass filter. The image reconstruction methods provide perceptually lossless results with image quality index values greater than 0.7 on average. The video reconstructions produced frame-by-frame image quality index values exceeding 0.7 using AM-FM components extracted from different scales. An application of the AM-FM method to retinal image analysis is also shown. This approach uses the instantaneous frequency magnitude and the instantaneous amplitude (IA) information to provide image features. The new AM-FM approach produced an ROC area of 0.984 in classifying Risk 0 versus Risk 1, 0.95 in classifying Risk 0 versus Risk 2, 0.973 in classifying Risk 0 versus Risk 3 and 0.95 in classifying Risk 0 versus all images with any sign of Diabetic Retinopathy. An extension of the 2D AM-FM demodulation methods to three dimensions is also presented. New AM-FM methods for motion estimation are developed. The new motion estimation method provides three motion estimation equations per channel filter (AM and IF motion equations, and a continuity equation). Applications of the method in motion tracking, trajectory estimation and continuous-scale video searching are demonstrated. For each application, we discuss the advantages of the AM-FM methods over current approaches.
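    The VS-LQP estimator and multi-scale filterbank above are specific to the dissertation. Purely as a minimal illustration of AM-FM demodulation itself, the sketch below recovers the instantaneous amplitude and frequency of a 1D signal from its analytic signal (Hilbert transform); it is not the VS-LQP method, and the chirp test signal is an assumption for the example.

```python
import numpy as np
from scipy.signal import hilbert

def am_fm_demodulate(x, fs=1.0):
    """Estimate instantaneous amplitude and frequency of a 1D signal
    via the analytic signal (Hilbert transform)."""
    analytic = hilbert(x)
    ia = np.abs(analytic)                               # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))               # instantaneous phase
    inst_freq = np.gradient(phase) * fs / (2 * np.pi)   # instantaneous frequency (Hz)
    return ia, inst_freq

# usage: a chirp with slowly varying amplitude
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = (1 + 0.5 * np.cos(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * (50 * t + 30 * t**2))
ia, inst_freq = am_fm_demodulate(x, fs)   # inst_freq is roughly 50 + 60*t Hz
```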

    Generalizable automated pixel-level structural segmentation of medical and biological data

    Over the years, the rapid expansion in imaging techniques and equipment has driven the demand for more automation in handling large medical and biological data sets. A wealth of approaches have been suggested as optimal solutions for their respective imaging types. These solutions span various image resolutions, modalities and contrast (staining) mechanisms. Few approaches generalise well across multiple image types, contrasts or resolutions. This thesis proposes an automated pixel-level framework that addresses 2D, 2D+t and 3D structural segmentation in a more generalizable manner, yet has enough adaptability to address a number of specific image modalities, spanning retinal funduscopy, sequential fluorescein angiography and two-photon microscopy. The pixel-level segmentation scheme involves: i) constructing a phase-invariant orientation field of the local spatial neighbourhood; ii) combining local feature maps with intensity-based measures in a structural patch context; iii) using a complex supervised learning process to interpret the combination of all the elements in the patch in order to reach a classification decision. This has the advantage of transferability from retinal blood vessels in 2D to neural structures in 3D. To process the temporal components in non-standard 2D+t retinal angiography sequences, we first introduce a co-registration procedure: at the pairwise level, we combine projective RANSAC with a quadratic homography transformation to map the coordinate systems between any two frames. At the joint level, we construct a hierarchical approach so that each individual frame is registered to the global reference intra- and inter-sequence. We then take a non-training approach that searches both the spatial neighbourhood of each pixel and the filter output across varying scales to locate and link microvascular centrelines to (sub-)pixel accuracy. In essence, this "link while extract" piece-wise segmentation approach combines the local phase-invariant orientation field information with additional local phase estimates to obtain a soft classification of the centreline (sub-)pixel locations. Unlike retinal segmentation problems where vasculature is the main focus, 3D neural segmentation requires additional flexibility, allowing a variety of structures of anatomical importance yet with different geometric properties to be differentiated both from the background and against other structures. Notably, cellular structures, such as Purkinje cells, neural dendrites and interneurons, all display certain elongation along their medial axes, yet each class has a characteristic shape captured by an orientation field that distinguishes it from other structures. To take this into consideration, we introduce a 5D orientation mapping to capture these orientation properties. This mapping is incorporated into the local feature map description prior to a learning machine. Extensive performance evaluations and validation of each of the techniques presented in this thesis are carried out. For retinal fundus images, we compute Receiver Operating Characteristic (ROC) curves on existing public databases (DRIVE & STARE) to assess and compare our algorithms with other benchmark methods. For 2D+t retinal angiography sequences, we compute error metrics ("Centreline Error") for our scheme and for other benchmark methods. For microscopic cortical data stacks, we present segmentation results on both surrogate data with known ground-truth and experimental rat cerebellar cortex two-photon microscopic tissue stacks.
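    As an illustration of the pairwise registration step only, the sketch below estimates a standard projective homography between two frames from ORB feature matches filtered by RANSAC, using OpenCV. The feature detector, the matching strategy and the plain projective model are assumptions for the example; the thesis itself uses a quadratic homography transformation, which is not reproduced here.

```python
import cv2
import numpy as np

def register_pair(frame_a, frame_b):
    """Estimate a projective homography mapping frame_a onto frame_b
    from ORB feature matches filtered by RANSAC (illustrative only;
    the thesis uses a quadratic homography model)."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    return H, inliers

# usage: warp frame_a (grayscale) into frame_b's coordinate system
# H, _ = register_pair(frame_a, frame_b)
# warped = cv2.warpPerspective(frame_a, H, frame_b.shape[::-1])
```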

    Four-dimensional cardiac imaging in living embryos via postacquisition synchronization of nongated slice sequences

    Being able to acquire, visualize, and analyze 3D time series (4D data) from living embryos makes it possible to understand complex dynamic movements at early stages of embryonic development. Despite recent technological breakthroughs in 2D dynamic imaging, confocal microscopes remain quite slow at capturing optical sections at successive depths. However, when the studied motion is periodic, such as that of a beating heart, a way to circumvent this problem is to acquire, successively, sets of 2D+time slice sequences at increasing depths over at least one time period and later rearrange them to recover a 3D+time sequence. In other imaging modalities at macroscopic scales, external gating signals, e.g., an electrocardiogram, have been used to achieve proper synchronization. Since gating signals are either unavailable or cumbersome to acquire in microscopic organisms, we have developed a procedure to reconstruct volumes based solely on the information contained in the image sequences. The central part of the algorithm is a least-squares minimization of an objective criterion that depends on the similarity between the data from neighboring depths. Owing to a wavelet-based multiresolution approach, our method is robust to common confocal microscopy artifacts. We validate the procedure on both simulated data and in vivo measurements from living zebrafish embryos.
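    A minimal sketch of the core synchronization idea, under simplifying assumptions: each depth is a 2D+time sequence covering one full period, the relative temporal shift between neighboring depths is the circular shift minimizing the sum of squared differences, and the shifts are chained back to the first depth. The wavelet-based multiresolution handling and the paper's global least-squares criterion are not reproduced here.

```python
import numpy as np

def best_circular_shift(seq_a, seq_b):
    """Find the circular temporal shift of seq_b (T, H, W) that best matches
    seq_a in the least-squares sense; assumes both cover one full period."""
    T = seq_a.shape[0]
    costs = [np.sum((seq_a - np.roll(seq_b, s, axis=0)) ** 2) for s in range(T)]
    return int(np.argmin(costs))

def synchronize_stack(slice_sequences):
    """Chain pairwise shifts so every depth is aligned to the first one.
    slice_sequences: list of (T, H, W) arrays, one per depth."""
    shifts = [0]
    for z in range(1, len(slice_sequences)):
        rel = best_circular_shift(slice_sequences[z - 1], slice_sequences[z])
        shifts.append((shifts[-1] + rel) % slice_sequences[z].shape[0])
    return [np.roll(seq, s, axis=0) for seq, s in zip(slice_sequences, shifts)]
```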

    Revisiting spatial vision: toward a unifying model

    We report contrast detection, contrast increment, contrast masking, orientation discrimination, and spatial frequency discrimination thresholds for spatially localized stimuli at 4° of eccentricity. Our stimulus geometry emphasizes interactions among overlapping visual filters and differs from that used in previous threshold measurements, which also admits interactions among distant filters. We quantitatively account for all measurements by simulating a small population of overlapping visual filters interacting through divisive inhibition. We depart from previous models of this kind in the parameters of divisive inhibition and in using a statistically efficient decision stage based on Fisher information. The success of this unified account suggests that, contrary to Bowne [Vision Res. 30, 449 (1990)], spatial vision thresholds reflect a single level of processing, perhaps as early as primary visual cortex.
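    The fitted parameters and the Fisher-information decision stage are not given in this abstract. Purely as a generic illustration of the model class, the sketch below evaluates divisive-inhibition (divisive-normalization) responses of the standard form r_i = E_i^p / (sigma^q + sum_j w_ij E_j^q); the exponents, pooling weights and sigma are illustrative values, not the paper's fits.

```python
import numpy as np

def divisive_inhibition(excitations, weights, p=2.4, q=2.0, sigma=0.1):
    """Generic divisive-normalization responses:
        r_i = E_i**p / (sigma**q + sum_j w_ij * E_j**q)
    excitations: (n_filters,) non-negative linear filter outputs E_i
    weights:     (n_filters, n_filters) inhibition pool weights w_ij
    (exponents and sigma are illustrative, not the paper's fitted values)."""
    E = np.asarray(excitations, dtype=float)
    pool = sigma ** q + weights @ (E ** q)
    return E ** p / pool

# usage: three overlapping filters responding to a test stimulus
E = np.array([0.8, 0.5, 0.1])
W = np.full((3, 3), 0.3) + 0.7 * np.eye(3)
r = divisive_inhibition(E, W)
```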

    Filtering Techniques for Low-Noise Previews of Interactive Stochastic Ray Tracing

    Progressive stochastic ray tracing is increasingly used in interactive applications, such as interactive design reviews and digital content creation. This dissertation aims to advance this development. First, two filtering techniques are presented that can generate fast and reliable previews of global illumination solutions. Second, a system architecture is presented that supports exchangeable rendering back-ends in distributed rendering systems.
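    The abstract does not describe the two filtering techniques themselves. Purely as a generic stand-in for what a low-noise preview of a progressive stochastic renderer can look like, the sketch below accumulates per-pixel samples into a running mean and smooths the preview with a Gaussian filter whose width shrinks as samples accumulate; this is an assumption for illustration, not one of the dissertation's techniques.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

class ProgressivePreview:
    """Accumulate stochastic ray-tracing samples per pixel and expose a
    smoothed preview; the filter width shrinks as more samples arrive."""
    def __init__(self, height, width):
        self.mean = np.zeros((height, width))
        self.count = 0

    def add_sample_image(self, sample):
        """sample: one noisy per-pixel radiance estimate of shape (H, W)."""
        self.count += 1
        self.mean += (sample - self.mean) / self.count   # running mean

    def preview(self, base_sigma=4.0):
        """Heavier smoothing early on, converging towards the raw mean."""
        sigma = base_sigma / np.sqrt(max(self.count, 1))
        return gaussian_filter(self.mean, sigma)
```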

    Visual scene recognition with biologically relevant generative models

    This research focuses on developing visual object categorization methodologies based on machine learning techniques and biologically inspired generative models of visual scene recognition. Modelling the statistical variability in visual patterns, in the space of features extracted from them by an appropriate low-level signal processing technique, is an important matter of investigation for both humans and machines. To study this problem, we have examined in detail two recent probabilistic models of vision: a simple multivariate Gaussian model as suggested by Karklin and Lewicki (2009) and a restricted Boltzmann machine (RBM) as proposed by Hinton (2002). Both models have been widely used for visual object classification and scene analysis tasks. This research highlights that these models on their own are not sufficient to perform the classification task, and suggests the Fisher kernel as a means of inducing discrimination into them for classification. Our empirical results on standard benchmark data sets reveal that the classification performance of these generative models can be boosted to near state-of-the-art performance by drawing a Fisher kernel from compact generative models, which computes the data labels in a fraction of the total computation time. We compare the proposed technique with other distance-based and kernel-based classifiers to show how computationally efficient the Fisher kernels are. To the best of our knowledge, a Fisher kernel has not been drawn from the RBM before, so the work presented in this thesis is novel in terms of its idea and its application to vision problems.
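    As a minimal illustration of the Fisher kernel construction itself (following the standard Jaakkola-Haussler recipe, not the thesis's Gaussian or RBM derivations), the sketch below computes Fisher score vectors for a diagonal-covariance Gaussian and a linear kernel between whitened scores. The diagonal approximation of the Fisher information and the usage note are assumptions for the example.

```python
import numpy as np

def fisher_scores_diag_gaussian(X, mu, var):
    """Fisher score vectors d/dtheta log p(x | mu, var) for a
    diagonal-covariance Gaussian, one row per sample in X."""
    d_mu = (X - mu) / var                                   # gradient w.r.t. means
    d_var = 0.5 * (((X - mu) ** 2) / var ** 2 - 1.0 / var)  # gradient w.r.t. variances
    return np.hstack([d_mu, d_var])

def fisher_kernel(X_train, X_test, mu, var):
    """Linear kernel between whitened Fisher scores; a crude diagonal
    approximation of the Fisher information, estimated from the training
    scores, is used for the whitening."""
    S_tr = fisher_scores_diag_gaussian(X_train, mu, var)
    S_te = fisher_scores_diag_gaussian(X_test, mu, var)
    scale = np.sqrt(np.mean(S_tr ** 2, axis=0) + 1e-12)
    return (S_te / scale) @ (S_tr / scale).T

# usage: fit the generative model (mu, var) on training features, then feed
# the kernel matrix to any kernel classifier (e.g. an SVM).
```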

    On Quantum Statistical Inference, I

    Recent developments in the mathematical foundations of quantum mechanics have brought the theory closer to that of classical probability and statistics. On the other hand, the unique character of quantum physics sets many of the questions addressed apart from those met classically in stochastics. Furthermore, concurrent advances in experimental techniques and in the theory of quantum computation have led to a strong interest in questions of quantum information, in particular in the sense of the amount of information about unknown parameters in given observational data or accessible through various possible types of measurements. This scenery is outlined (with an audience of statisticians and probabilists in mind).
    Comment: A shorter version containing some different material will appear (2003), with discussion, in J. Roy. Statist. Soc. B, and is archived as quant-ph/030719
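    Purely as a pointer to the kind of quantity surveyed here (the information about an unknown parameter accessible through quantum measurements), the sketch below numerically evaluates the quantum Fisher information of a one-parameter pure-state qubit family via QFI = 4(<dpsi|dpsi> - |<psi|dpsi>|^2); the chosen family and the check against the known value 1 are assumptions for the example, not material from the paper.

```python
import numpy as np

def qfi_pure_state(psi, dpsi):
    """Quantum Fisher information of a pure-state family |psi_theta>:
        QFI = 4 * ( <dpsi|dpsi> - |<psi|dpsi>|**2 )."""
    return 4 * (np.vdot(dpsi, dpsi) - abs(np.vdot(psi, dpsi)) ** 2).real

# family |psi_theta> = exp(-i * theta * sigma_z / 2) |+>
theta = 0.3
psi = np.array([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)]) / np.sqrt(2)
dpsi = np.array([-0.5j * np.exp(-1j * theta / 2),
                  0.5j * np.exp(1j * theta / 2)]) / np.sqrt(2)
print(qfi_pure_state(psi, dpsi))   # approximately 1.0 for this family
```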