
    Robustness of speckle imaging techniques applied to horizontal imaging scenarios

    Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while subject to further variations introduced by operation by novice users. Systems that meet these requirements, and are otherwise designed to be immune to the factors that cause variation in performance, are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with minimal computational complexity. Speckle imaging methods have recently been proposed as well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed using a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the Mean Squared Error (MSE) as the measure of reconstruction quality. In addition to the more general design parameters, the relative performance of the bispectrum and the Knox-Thompson phase recovery methods is also compared.
As an outcome of this work it can be concluded that speckle imaging techniques are robust to the variations in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase recovery method is shown to produce images in half the time required by the bispectrum, while the quality of images reconstructed using the two methods is found to be nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
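The Knox-Thompson method compared above recovers the object's Fourier phase from phase differences accumulated in an ensemble-averaged cross-spectrum. As a rough one-dimensional illustration of that idea (not the implementation evaluated in this work; `knox_thompson_1d` is a hypothetical helper name), the following sketch accumulates the cross-spectrum and average power spectrum over short-exposure frames, then integrates the phase differences recursively:

```python
import numpy as np

def knox_thompson_1d(frames):
    """Illustrative 1-D Knox-Thompson phase recovery.

    frames: (num_frames, n) array of short-exposure intensity signals.
    Returns a reconstruction combining the average Fourier amplitude with
    a phase integrated from cross-spectrum phase differences.
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[1]
    cross = np.zeros(n, dtype=complex)   # accumulates <F(u) F*(u+1)>
    power = np.zeros(n)                  # accumulates <|F(u)|^2>
    for f in frames:
        F = np.fft.fft(f)
        cross[:-1] += F[:-1] * np.conj(F[1:])
        power += np.abs(F) ** 2
    # angle(cross[u]) = phi(u) - phi(u+1), so integrate the negated angles.
    dphi = -np.angle(cross)
    phase = np.zeros(n)
    for u in range(1, n):
        phase[u] = phase[u - 1] + dphi[u - 1]
    amplitude = np.sqrt(power / len(frames))
    return np.real(np.fft.ifft(amplitude * np.exp(1j * phase)))
```

With identical, noise-free frames the accumulated phase differences are exact and the object is recovered directly; with turbulence-degraded frames the ensemble averaging suppresses the random phase perturbations.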

    Signal Processing for Synthetic Aperture Sonar Image Enhancement

    This thesis describes SAS processing algorithms, offering improvements in Fourier-based reconstruction, motion compensation, and autofocus. Fourier-based image reconstruction is reviewed, and improvements are shown to result from improved system modelling. A number of new algorithms based on the wavenumber algorithm for correcting second-order effects are proposed. In addition, a new framework for describing multiple-receiver reconstruction in terms of the bistatic geometry is presented and is a useful aid to understanding. Motion-compensation techniques that allow Fourier-based reconstruction in wide-beam geometries suffering large motion errors are discussed. A motion-compensation algorithm exploiting multiple-receiver geometries is suggested and shown to provide substantial improvement in image quality. New motion-compensation techniques for yaw correction using the wavenumber algorithm are discussed. A common framework for describing phase estimation is presented, and techniques from a number of fields are reviewed within this framework. In addition, a new proof is provided outlining the relationship between eigenvector-based autofocus phase-estimation kernels and the phase-closure techniques used in astronomical imaging. Micronavigation techniques are reviewed, and extensions to the shear-average single-receiver micronavigation technique result in a 3-4 fold performance improvement when operating on high-contrast images. The stripmap phase gradient autofocus (SPGA) algorithm is developed, extending spotlight SAR PGA to the wide-beam, wide-band stripmap geometries common in SAS imaging. SPGA supersedes traditional PGA-based stripmap autofocus algorithms such as mPGA and PCA; the relationships between SPGA and these algorithms are discussed. SPGA's operation is verified on simulated and field-collected data, where it provides significant image improvement.
SPGA with phase-curvature based estimation is shown to perform poorly compared with phase-gradient techniques. The operation of SPGA on data collected from Sydney Harbour is demonstrated, with SPGA able to improve resolution to near the diffraction limit. Additional analysis of practical stripmap autofocus operation in the presence of undersampling and space-invariant blurring is presented, with significant comment regarding the difficulties inherent in autofocusing field-collected data. Field-collected data from trials in Sydney Harbour are presented along with associated autofocus results from a number of algorithms.
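At the core of PGA-family autofocus algorithms such as SPGA is a phase-difference (phase-gradient) estimation kernel applied to isolated bright scatterers. The sketch below shows the standard maximum-likelihood pulse-to-pulse estimator as an illustration of the general idea only, not the wide-beam, wide-band SPGA developed in the thesis; `pga_phase_difference` is a hypothetical helper name.

```python
import numpy as np

def pga_phase_difference(targets):
    """Minimal phase-gradient estimation kernel (illustrative only).

    targets: complex array (num_scatterers, num_pulses), each row a bright
    scatterer already circularly shifted to the scene centre and windowed.
    Returns the estimated phase-error history across pulses, up to an
    irrelevant constant offset.
    """
    g = np.asarray(targets)
    # Maximum-likelihood estimate of the pulse-to-pulse phase increment:
    # sum the conjugate products over scatterers, then take the angle.
    corr = np.sum(np.conj(g[:, :-1]) * g[:, 1:], axis=0)
    dphi = np.angle(corr)
    # Integrate the increments to obtain the phase-error history.
    return np.concatenate([[0.0], np.cumsum(dphi)])
```

Summing the conjugate products before taking the angle weights strong scatterers more heavily, which is what makes the estimator robust on high-contrast scenes.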

    Restoration of Images Taken Through a Turbulent Medium

    This thesis investigates the problem of how information contained in multiple short-exposure images of the same scene, taken through (and distorted by) a turbulent medium (a turbulent atmosphere or a moving water surface), may be extracted and combined to produce a single image with superior quality and higher resolution. This problem is generally termed image restoration. It has many applications in fields as diverse as remote sensing, military intelligence, long-distance surveillance and recognition, and other imaging problems affected by turbulent media such as the atmosphere or a moving water surface. Wide-area/near-to-ground imaging (through the atmosphere) and water imaging are the two main focuses of this thesis. The central technique used to solve these problems is speckle imaging, which processes a large number of images of the object captured with exposure times short enough that the turbulent effect is frozen in each frame. A robust and efficient method using the bispectrum is developed to recover an almost diffraction-limited sharp image using the information contained in the captured short-exposure images. Both the accuracy and the potential of these new algorithms are investigated. Motivated by the lucky imaging technique, which was used to select superior frames in astronomical imaging applications, a new and more efficient technique is proposed. This technique, called lucky region, aims to select image regions of high quality rather than selecting a whole image as a lucky image. A new algorithm using the bicoherence is proposed for lucky region selection. Its performance, as well as practical factors that may affect the performance, are investigated both theoretically and empirically. To further improve the quality of the recovered clean image after the speckle bispectrum processing, we also investigate blind deconvolution.
One of the original contributions is to use natural image sparsity as prior knowledge for the turbulence image restoration problem. A new algorithm is proposed and its performance is validated experimentally. The new methods are extended to the case of water imaging: restoration of images distorted by moving water waves. It is shown that this problem can be effectively solved by techniques developed in this thesis. Possible practical applications include various forms of ocean observation.
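The lucky-imaging idea described above selects superior frames (or regions) by ranking them with an image-quality criterion. As a minimal sketch of the selection step, using a simple gradient-energy sharpness proxy in place of the bicoherence criterion developed in the thesis (`select_lucky_frames` is a hypothetical name):

```python
import numpy as np

def select_lucky_frames(frames, keep_fraction=0.1):
    """Rank frames by gradient energy (a crude sharpness proxy) and keep
    the sharpest fraction for subsequent combination."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    scores = []
    for f in frames:
        gy, gx = np.gradient(f)               # finite-difference gradients
        scores.append(float(np.sum(gx ** 2 + gy ** 2)))
    order = np.argsort(scores)[::-1]          # sharpest first
    n_keep = max(1, int(round(len(frames) * keep_fraction)))
    return [frames[i] for i in order[:n_keep]], scores
```

The same ranking can be applied per region rather than per frame, which is the essence of the lucky-region extension: sharp regions are harvested even from frames that are blurred elsewhere.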

    Electrical and Computer Engineering Annual Report 2016

    Contents: Faculty Directory; Faculty Highlights; Faculty Fellow Program; Multidisciplinary Research Fills Critical Needs; Better, Faster Technology; Metamaterials: Searching for the Perfect Lens; The Nontraditional Power of Demand Dispatch; Space, Solar Power's Next Frontier; Kit Cischke, Award-Winning Senior Lecturer; Faculty Publications; ECE Academy Class of 2016; Staff Profile: Michele Kamppinen; For the Love of Teaching: Jenn Winikus; Graduate Student Highlights; Undergraduate Student Highlights; External Advisory Committee; Contracts and Grants; Department Statistics; AAES National Engineering Award

    Robust Subspace Estimation Using Low-rank Optimization. Theory And Applications In Scene Reconstruction, Video Denoising, And Activity Recognition.

    In this dissertation, we discuss the problem of robust linear subspace estimation using low-rank optimization and propose three formulations of it. We demonstrate how these formulations can be used to solve fundamental computer vision problems, and provide superior performance in terms of accuracy and running time. Consider a set of observations extracted from images (such as pixel gray values, local features, trajectories, etc.). If the assumption that these observations are drawn from a linear subspace (or can be linearly approximated) is valid, then the goal is to represent each observation as a linear combination of a compact basis, while maintaining a minimal reconstruction error. One of the earliest, yet most popular, approaches to achieve this is Principal Component Analysis (PCA). However, PCA can only handle Gaussian noise, and thus suffers when the observations are contaminated with gross and sparse outliers. To this end, in this dissertation we focus on estimating the subspace robustly using low-rank optimization, where the sparse outliers are detected and separated through the ℓ1 norm. The robust estimation has a two-fold advantage: first, the obtained basis better represents the actual subspace because it does not include contributions from the outliers; second, the detected outliers are often of specific interest in many applications, as we show throughout this thesis. We demonstrate four different formulations and applications of low-rank optimization. First, we consider the problem of reconstructing an underwater sequence by removing the turbulence caused by the water waves. The main drawback of most previous attempts to tackle this problem is that they depend heavily on modelling the waves, which is in fact ill-posed since the actual behavior of the waves, along with the imaging process, is complicated and includes several noise components; therefore, their results are not satisfactory.
In contrast, we propose a novel approach which outperforms the state of the art. The intuition behind our method is that in a sequence where the water is static, the frames would be linearly correlated. Therefore, in the presence of water waves, we may consider the frames as noisy observations drawn from the subspace of linearly correlated frames. However, the noise introduced by the water waves is not sparse, and thus cannot be detected directly using low-rank optimization. Therefore, we propose a data-driven two-stage approach, where the first stage “sparsifies” the noise and the second stage detects it. The first stage leverages the temporal mean of the sequence to overcome the structured turbulence of the waves through an iterative registration algorithm. The result of the first stage is a high-quality mean and a better-structured sequence; however, the sequence still contains unstructured sparse noise. Thus, we employ a second stage in which we extract the sparse errors from the sequence through rank minimization. Our method converges faster, and drastically outperforms the state of the art on all testing sequences. Secondly, we consider a closely related situation where an independently moving object is also present in the turbulent video. More precisely, we consider video sequences acquired in desert battlefields, where atmospheric turbulence is typically present in addition to independently moving targets. Typical approaches for turbulence mitigation follow averaging or de-warping techniques. Although these methods can reduce the turbulence, they distort the independently moving objects, which can often be of great interest. Therefore, we address the problem of simultaneous turbulence mitigation and moving object detection. We propose a novel three-term low-rank matrix decomposition approach in which we decompose the turbulence sequence into three components: the background, the turbulence, and the object.
We simplify this extremely difficult problem into a minimization of the nuclear norm, Frobenius norm, and ℓ1 norm. Our method is based on two observations: first, the turbulence causes dense and Gaussian noise, and therefore can be captured by the Frobenius norm, while the moving objects are sparse and thus can be captured by the ℓ1 norm; second, since the object's motion is linear and intrinsically different from the Gaussian-like turbulence, a Gaussian-based turbulence model can be employed to enforce an additional constraint on the search space of the minimization. We demonstrate the robustness of our approach on challenging sequences which are significantly distorted with atmospheric turbulence and include extremely tiny moving objects. In addition to robustly detecting the subspace of the frames of a sequence, we consider using trajectories as observations in the low-rank optimization framework. In particular, in videos acquired by moving cameras, we track all the pixels in the video and use that to estimate the camera motion subspace. This is particularly useful in activity recognition, which typically requires standard preprocessing steps such as motion compensation, moving object detection, and object tracking. The errors from the motion compensation step propagate to the object detection stage, resulting in missed detections, which further complicates the tracking stage, resulting in cluttered and incorrect tracks. In contrast, we propose a novel approach which does not follow the standard steps, and accordingly avoids the aforementioned difficulties. Our approach is based on Lagrangian particle trajectories, which are a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions of a scene. This is done in frames of unaligned video, and no object detection is required. In order to handle the moving camera, we decompose the trajectories into their camera-induced and object-induced components.
Having obtained the relevant object motion trajectories, we compute a compact set of chaotic invariant features, which captures the characteristics of the trajectories. Consequently, an SVM is employed to learn and recognize the human actions using the computed motion features. We performed intensive experiments on multiple benchmark datasets, and obtained promising results. Finally, we consider a more challenging problem referred to as complex event recognition, where the activities of interest are complex and unconstrained. This problem typically poses significant challenges because it involves videos of highly variable content, noise, length, frame size, etc. In this extremely challenging task, high-level features have recently shown a promising direction, as in [53, 129], where core low-level events referred to as concepts are annotated and modelled using a portion of the training data, and each event is then described by its content of these concepts. However, because of the complex nature of the videos, both the concept models and the corresponding high-level features are significantly noisy. In order to address this problem, we propose a novel low-rank formulation which combines the precisely annotated videos used to train the concepts with the rich high-level features. Our approach finds a new representation for each event which is not only low-rank, but also constrained to adhere to the concept annotation, thus suppressing the noise and maintaining a consistent occurrence of the concepts in each event. Extensive experiments on the large-scale real-world TRECVID Multimedia Event Detection 2011 and 2012 datasets demonstrate that our approach consistently improves the discriminativity of the high-level features by a significant margin.
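The low-rank-plus-sparse separation underlying this dissertation can be illustrated with Principal Component Pursuit: minimize ||L||_* + λ||S||_1 subject to D = L + S, so that gross sparse outliers land in S while the subspace structure lands in L. The sketch below solves it with a basic inexact augmented-Lagrangian loop; the parameter choices follow common defaults and the function names are hypothetical, so this is an illustration of the general technique rather than the dissertation's solver.

```python
import numpy as np

def soft_shrink(X, tau):
    """Elementwise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(D, lam=None, iters=200):
    """Decompose D into low-rank L and sparse S via an inexact
    augmented-Lagrangian loop for Principal Component Pursuit."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = (m * n) / (4.0 * np.abs(D).sum())   # common initial penalty
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                     # Lagrange multipliers
    for _ in range(iters):
        # Low-rank update: singular value thresholding.
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft_shrink(s, 1.0 / mu)) @ Vt
        # Sparse update: elementwise soft thresholding.
        S = soft_shrink(D - L + Y / mu, lam / mu)
        # Dual ascent on the constraint D = L + S, then grow the penalty.
        Y = Y + mu * (D - L - S)
        mu = min(mu * 1.1, 1e7)
    return L, S
```

In the video applications described above, each column of D would be a vectorized frame (or a trajectory), so the recovered S directly flags the sparse outliers of interest.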

    Ocean Remote Sensing with Synthetic Aperture Radar

    The ocean covers approximately 71% of the Earth’s surface and 90% of the biosphere, and contains 97% of Earth’s water. Synthetic Aperture Radar (SAR) can image the ocean surface in all weather conditions, day or night. SAR remote sensing for ocean and coastal monitoring has become a research hotspot in geoscience and remote sensing. This book—Progress in SAR Oceanography—provides an update on the current state of the science of ocean remote sensing with SAR. Overall, the book presents a variety of marine applications, such as oceanic surface and internal waves, wind, bathymetry, oil spills, coastline and intertidal zone classification, detection of ships and other man-made objects, as well as remotely sensed data assimilation. The book is aimed at a wide audience, ranging from graduate students, university teachers and working scientists to policy makers and managers. Efforts have been made to highlight general principles as well as state-of-the-art technologies in the field of SAR Oceanography.