5 research outputs found

    Dynamic Denoising of Tracking Sequences

    Get PDF
    DOI: 10.1109/TIP.2008.920795. In this paper, we describe an approach to simultaneously enhancing image sequences and tracking the objects of interest they depict. The enhancement part of the algorithm is based on Bayesian wavelet denoising, chosen for its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from reasonable assumptions on the properties of the image to be enhanced and from the images observed before the current scene. Using such priors forms the main contribution of the present paper: the proposal of dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that fuses information across successive image frames is Bayesian estimation, while the transfer of useful information between images is governed by a Kalman filter used for both prediction and estimation of the dynamics of the tracked objects. In this methodology, therefore, the processes of target tracking and image enhancement "collaborate" in an interlacing manner, rather than being applied separately. Dynamic denoising is demonstrated on several examples of SAR imagery. The results indicate a number of advantages of the proposed dynamic denoising over "static" approaches, in which the tracking images are enhanced independently of each other.
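    As a rough illustration of the kind of temporal Bayesian fusion this abstract describes (not the paper's actual estimator), the sketch below shrinks each noisy frame's wavelet coefficients toward a prediction formed from the previously denoised frame. An identity motion model stands in for the Kalman prediction step, and the noise and prior variances are hypothetical constants.

```python
import numpy as np
import pywt  # PyWavelets


def fuse(y_coef, prior_mean, noise_var, prior_var):
    """Posterior mean of a Gaussian coefficient observed in Gaussian noise."""
    w = prior_var / (prior_var + noise_var)
    return prior_mean + w * (y_coef - prior_mean)


def dynamic_denoise(frames, wavelet="db4", level=3,
                    sigma_noise=0.1, sigma_prior=0.05):
    """Sketch of dynamic denoising: shrink each frame toward a temporal prior."""
    denoised, prev = [], None
    for y in frames:
        if prev is None:
            prediction = np.zeros_like(y)
            tau2 = 1e6                 # huge prior variance: frame 0 passes through
        else:
            prediction = prev          # identity dynamics stand in for Kalman prediction
            tau2 = sigma_prior ** 2
        cy = pywt.wavedec2(y, wavelet, level=level)
        cp = pywt.wavedec2(prediction, wavelet, level=level)
        out = [fuse(cy[0], cp[0], sigma_noise ** 2, tau2)]
        for dy, dp in zip(cy[1:], cp[1:]):
            out.append(tuple(fuse(a, b, sigma_noise ** 2, tau2)
                             for a, b in zip(dy, dp)))
        prev = pywt.waverec2(out, wavelet)
        denoised.append(prev)
    return denoised
```

    A full implementation would replace the identity prediction with the Kalman-filtered motion of the tracked targets and estimate the noise and prior variances from the data.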

    Dynamic Tomography Reconstruction by Projection-Domain Separable Modeling

    Full text link
    In dynamic tomography, the object undergoes changes while projections are being acquired sequentially in time. The resulting inconsistent set of projections cannot be used directly to reconstruct an object corresponding to a single time instant. Instead, the objective is to reconstruct a spatio-temporal representation of the object, which can be displayed as a movie. We analyze conditions for a unique and stable solution of this ill-posed inverse problem, and present a recovery algorithm, validating it experimentally. We compare our approach to one based on the recently proposed GMLR variation on deep prior for video, demonstrating the advantages of the proposed approach.
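    To make the idea of a spatio-temporal representation concrete, here is a small, self-contained toy in which a 1D object evolving over time is assumed to have separable rank-K structure F = U Vᵀ and is recovered from time-sequential projections by alternating least squares. The dimensions, the random projection model, and the solver are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K, m = 16, 40, 2, 6           # pixels, time steps, model rank, rays per step

# Ground-truth spatio-temporal object with separable (rank-K) structure.
U_true = rng.standard_normal((N, K))
V_true = rng.standard_normal((T, K))
F_true = U_true @ V_true.T          # column t is the object at time step t

A = [rng.standard_normal((m, N)) for _ in range(T)]   # per-step projection model
p = [A[t] @ F_true[:, t] for t in range(T)]           # time-sequential measurements

# Alternating least squares on the separable model F = U V^T
# (a handful of iterations usually suffices on this small noiseless toy).
U = rng.standard_normal((N, K))
V = rng.standard_normal((T, K))
for _ in range(50):
    for t in range(T):                                 # temporal factors, U fixed
        V[t] = np.linalg.lstsq(A[t] @ U, p[t], rcond=None)[0]
    # Spatial factor, V fixed: stack p_t = (v_t^T kron A_t) vec(U) over all t.
    M = np.vstack([np.kron(V[t], A[t]) for t in range(T)])
    x = np.linalg.lstsq(M, np.concatenate(p), rcond=None)[0]
    U = x.reshape((N, K), order="F")                   # undo column-major vec

err = np.linalg.norm(U @ V.T - F_true) / np.linalg.norm(F_true)
print(f"relative reconstruction error: {err:.2e}")
```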

    High-Precision Inversion of Dynamic Radiography Using Hydrodynamic Features

    Full text link
    Radiography is often used to probe complex, evolving density fields in dynamic systems and, in so doing, gain insight into the underlying physics. This technique has been used in numerous fields including materials science, shock physics, inertial confinement fusion, and other national security applications. In many of these applications, however, complications resulting from noise, scatter, complex beam dynamics, etc. prevent the reconstruction of density from being accurate enough to identify the underlying physics with sufficient confidence. As such, density reconstruction from static/dynamic radiography has typically been limited to identifying discontinuous features such as cracks and voids in a number of these applications. In this work, we propose a fundamentally new approach to reconstructing density from a temporal sequence of radiographic images. Using only the robust features identifiable in radiographs, we combine them with the underlying hydrodynamic equations of motion through a machine learning approach, namely conditional generative adversarial networks (cGAN), to determine the density fields from a dynamic sequence of radiographs. Next, we seek to further enhance the hydrodynamic consistency of the ML-based density reconstruction through a process of parameter estimation and projection onto a hydrodynamic manifold. In this context, we note that the distance in parameter space between the test data and the hydrodynamic manifold defined by the training data both serves as a diagnostic of the robustness of the predictions and serves to augment the training database, with the expectation that the latter will further reduce future density reconstruction errors. Finally, we demonstrate the ability of this method to outperform a traditional radiographic reconstruction in capturing allowable hydrodynamic paths, even when relatively small amounts of scatter are present.
    Comment: Submitted to Optics Express
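    One concrete way to read the "distance from the hydrodynamic manifold" diagnostic is as an out-of-distribution check in a low-dimensional parameter space of the training simulations. The nearest-neighbor version below is a hypothetical stand-in for whatever metric the authors actually use; the parameter count and threshold are invented for illustration.

```python
import numpy as np


def nearest_training_distance(train_params, test_params):
    """Distance from each test point to its nearest neighbor in the training set."""
    diffs = test_params[:, None, :] - train_params[None, :, :]
    return np.linalg.norm(diffs, axis=-1).min(axis=1)


def loo_nn_distance(params):
    """Leave-one-out nearest-neighbor distances within the training set,
    used only to calibrate a (hypothetical) 'far from the manifold' threshold."""
    d = np.linalg.norm(params[:, None, :] - params[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)


# Toy usage with three made-up hydrodynamic parameters per simulation.
rng = np.random.default_rng(0)
train = rng.uniform(0.0, 1.0, size=(500, 3))
test = rng.uniform(0.0, 1.2, size=(20, 3))

threshold = np.percentile(loo_nn_distance(train), 95)
# Test cases far from the training manifold: treat their reconstructions with
# caution and add the corresponding simulations to the training database.
flagged = nearest_training_distance(train, test) > threshold
print(flagged)
```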

    Super-resolution from unregistered aliased images

    Get PDF
    Aliasing in images is often considered a nuisance. Artificial low-frequency patterns and jagged edges appear when an image is sampled at too low a frequency. However, aliasing also conveys useful information about the high-frequency content of the image, which is exploited in super-resolution applications. We use a set of input images of the same scene to extract such high-frequency information and create a higher-resolution, aliasing-free image. Typically, there is a small shift or more complex motion between the different images, such that they contain slightly different information about the scene. Super-resolution image reconstruction can be formulated as a multichannel sampling problem with unknown offsets. This results in a set of equations that are linear in the unknown signal coefficients but nonlinear in the offsets. This thesis concentrates on the computation of these offsets, as they are an essential prerequisite for an accurate high-resolution reconstruction. If a part of the image spectra is free of aliasing, the planar shift and rotation parameters can be computed using only this low-frequency information. In such a case, the images can be registered pairwise to a reference image. Such a method is not applicable if the images are undersampled by a factor of two or larger; then a larger number of images needs to be registered jointly. Two subspace methods are discussed for such highly aliased images. The first approach is based on a Fourier description of the aliased signals as a sum of overlapping parts of the spectrum and uses a rank condition to find the correct offsets. The second uses a more general expansion in an arbitrary Hilbert space to compute the signal offsets. The sampled signal is represented as a linear combination of sampled basis functions, and the offsets are computed by projecting the signal onto varying subspaces. Under certain conditions, in particular for bandlimited signals, the nonlinear super-resolution equations can be written as a set of polynomial equations. Using Buchberger's algorithm, the solution can then be computed as a Gröbner basis for the corresponding polynomial ideal. After a description of a standard algorithm, adaptations are made for use with noisy measurements. The techniques presented in this thesis are tested in simulations and practical experiments. The experiments are performed on sets of real images taken with a digital camera. The results show the validity of the algorithms: registration parameters are computed with subpixel precision, and aliasing is accurately removed from the resulting high-resolution image. This thesis is produced according to the concepts of reproducible research. All the results and examples used in this thesis are reproducible using the code and data available online.
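    As a minimal 1D analogue of the pairwise registration from alias-free low frequencies described above (all sizes, the bandwidth, and the chosen bins are made up for illustration), the sketch below recovers a subpixel offset between two undersampled signals by fitting the phase difference of their spectra over the bins that remain free of aliasing.

```python
import numpy as np

rng = np.random.default_rng(1)

# A signal bandlimited to 40 high-res bins is sampled at 1/8 of the high-res
# rate: bins 33..40 fold back onto bins 24..31, so low-res bins 1..23 stay
# free of aliasing and can be used for registration.
L, D, B = 512, 8, 40               # high-res length, decimation, bandwidth
delta_hr = 2.4                     # true shift in high-res samples (0.3 low-res px)

spec = np.zeros(L, dtype=complex)
spec[1:B + 1] = rng.standard_normal(B) + 1j * rng.standard_normal(B)
spec[-B:] = np.conj(spec[1:B + 1][::-1])     # conjugate symmetry -> real signal
s = np.fft.ifft(spec).real

k = np.fft.fftfreq(L) * L                    # integer frequency indices
s_shift = np.fft.ifft(spec * np.exp(-2j * np.pi * k * delta_hr / L)).real

x1, x2 = s[::D], s_shift[::D]                # two unregistered low-rate samplings
N = len(x1)

# Pairwise registration using only the alias-free low frequencies.
X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
bins = np.arange(1, 21)                      # bins known to be alias-free
phase = np.angle(X2[bins] * np.conj(X1[bins]))
slope = np.sum(bins * phase) / np.sum(bins * bins)   # LS line through the origin
shift_est = -slope * N / (2 * np.pi)         # estimated offset in low-res pixels

print(f"true offset {delta_hr / D:.3f} px, estimated {shift_est:.3f} px")
```

    The same phase-fitting idea extends to 2D shifts and, with a polar resampling of the spectrum, to planar rotations; the jointly registered, highly aliased case requires the subspace methods summarized in the abstract.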