
    Automatic refocus and feature extraction of single-look complex SAR signatures of vessels

    In recent years, spaceborne synthetic aperture radar (SAR) technology has been considered as a complement to cooperative vessel surveillance systems thanks to its imaging capabilities. In this paper, a processing chain is presented to explore the potential of basic stripmap single-look complex (SLC) SAR images of vessels for the automatic extraction of their dimensions and heading. Local autofocus is applied to the vessels' SAR signatures to compensate for blurring artefacts in the azimuth direction, improving both their image quality and their estimated dimensions. For the heading, the orientation ambiguities of the vessels' SAR signatures are resolved using the direction of their ground-range velocity, obtained from the analysis of their Doppler spectra. Preliminary results are provided for five vessels extracted from SLC RADARSAT-2 stripmap images. These results show good agreement with the respective ground-truth data from Automatic Identification System (AIS) records at the time of the acquisitions.
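
    A minimal sketch of the kind of Doppler-spectrum analysis the orientation-ambiguity step relies on is shown below; the function name, the PRF parameter and the simple power-weighted centroid are illustrative assumptions rather than the paper's exact processing chain.

```python
import numpy as np

def doppler_centroid_sign(slc_chip, prf):
    """Estimate the sign of the Doppler centroid of a vessel's SLC chip.

    Illustrative helper (not the authors' exact chain): a positive or
    negative centroid indicates ground-range motion towards or away from
    the sensor, which resolves the 180-degree orientation ambiguity of
    the vessel's SAR signature.
    """
    # Azimuth (slow-time) Doppler power spectrum, averaged over range bins.
    spectrum = np.mean(np.abs(np.fft.fft(slc_chip, axis=0)) ** 2, axis=1)
    freqs = np.fft.fftfreq(slc_chip.shape[0], d=1.0 / prf)
    # Power-weighted mean Doppler frequency (the Doppler centroid).
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    return np.sign(centroid), centroid
```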

    A nonquadratic regularization-based technique for joint SAR imaging and model error correction

    Regularization-based image reconstruction algorithms have been applied successfully to the synthetic aperture radar (SAR) imaging problem. Such algorithms assume that the mathematical model of the imaging system is perfectly known. In practice, however, various types of model errors are commonly encountered. One predominant example is phase errors, which appear either because the location of the SAR sensing platform is measured inexactly or because of propagation through atmospheric turbulence. We propose a nonquadratic regularization-based framework for joint image formation and model error correction. This framework leads to an iterative algorithm that cycles through steps of image formation and model parameter estimation. This approach offers advantages over autofocus techniques that post-process a conventionally formed image. We present results on synthetic scenes, as well as on the Air Force Research Laboratory (AFRL) Backhoe data set, demonstrating the effectiveness of the proposed approach.
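
    The iterative structure described above can be sketched as a short alternating-minimization loop; the forward operator A, the fixed step size and the soft-thresholding used in place of the paper's nonquadratic regularizer are all assumptions made for illustration.

```python
import numpy as np

def joint_focus(y, A, n_iter=20, lam=0.1):
    """Sketch of alternating image / phase-error estimation.

    y : observed phase-history data (pulses x samples).
    A : assumed known forward operator mapping a flattened image to
        flattened data. Step size, shrinkage and initialization are
        illustrative, not the paper's exact algorithm.
    """
    f = A.conj().T @ y.ravel()          # crude initial image estimate
    phi = np.zeros(y.shape[0])          # one unknown phase error per pulse
    for _ in range(n_iter):
        # Step 1: image update for fixed phase errors (one gradient step
        # followed by complex soft-thresholding as a simple regularizer).
        y_corr = (np.exp(-1j * phi)[:, None] * y).ravel()
        grad = A.conj().T @ (A @ f - y_corr)
        f = f - 0.1 * grad
        f = np.exp(1j * np.angle(f)) * np.maximum(np.abs(f) - lam, 0.0)
        # Step 2: closed-form phase update for the fixed image: align each
        # predicted pulse with the corresponding measured pulse.
        pred = (A @ f).reshape(y.shape)
        phi = np.angle(np.sum(y * pred.conj(), axis=1))
    return f, phi
```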

    Autofocus for digital Fresnel holograms by use of a Fresnelet-sparsity criterion

    We propose a robust autofocus method for reconstructing digital Fresnel holograms. The numerical reconstruction involves simulating the propagation of a complex wave front to the appropriate distance. Since this distance is difficult to determine manually, it is desirable to rely on an automatic procedure for finding the optimal distance and thereby achieve high-quality reconstructions. Our algorithm maximizes a sharpness metric related to the sparsity of the signal’s expansion in distance-dependent wavelet-like Fresnelet bases. We show results from simulations and experiments that confirm the method's applicability.
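
    A minimal sketch of this kind of distance-scanning autofocus is given below, assuming angular-spectrum propagation and a simple l1/l2 sparsity score as a stand-in for the Fresnelet-sparsity criterion; function and parameter names are illustrative.

```python
import numpy as np

def refocus_distance(hologram, wavelength, dx, distances):
    """Scan candidate reconstruction distances and return the sharpest one.

    Propagation uses the angular-spectrum method; the sharpness score is a
    simple l1/l2 sparsity measure of the reconstructed amplitude (lower is
    sparser), not the paper's Fresnelet-based criterion.
    """
    ny, nx = hologram.shape
    fy = np.fft.fftfreq(ny, d=dx)[:, None]
    fx = np.fft.fftfreq(nx, d=dx)[None, :]
    H0 = np.fft.fft2(hologram)
    best_d, best_score = None, np.inf
    for d in distances:
        # Angular-spectrum transfer function for propagation distance d.
        arg = 1.0 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
        kernel = np.exp(2j * np.pi * d / wavelength * np.sqrt(np.maximum(arg, 0.0)))
        recon = np.fft.ifft2(H0 * kernel)
        # A more concentrated (sparser) amplitude image scores lower.
        score = np.sum(np.abs(recon)) / np.sqrt(np.sum(np.abs(recon) ** 2))
        if score < best_score:
            best_d, best_score = d, score
    return best_d
```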

    3D Capturing with Monoscopic Camera

    This article presents a new concept that uses the auto-focus function of a monoscopic camera sensor to estimate a depth map, avoiding both the use of auxiliary equipment or human interaction and the computational complexity introduced by structure-from-motion (SfM) or depth analysis. The system architecture, which supports capturing, processing and display of both stereo images and video, is discussed. A novel stereo-image-pair generation algorithm based on Z-buffer-based 3D surface recovery is proposed. From the depth map, the disparity map (the distance in pixels between corresponding image points in the two views) is calculated for the image. The presented algorithm takes a single image with depth information (e.g. a z-buffer) as input and produces two images, one for the left eye and one for the right eye.
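
    The depth-to-disparity step can be sketched as follows; the parameter names (focal length in pixels, baseline, near/far planes) and the simple horizontal-shift warping are illustrative assumptions, not the article's Z-buffer-based surface-recovery algorithm.

```python
import numpy as np

def depth_to_stereo(image, depth, focal_px, baseline, z_near, z_far):
    """Sketch of generating a left/right pair from one image plus depth.

    Disparity is derived from depth as d = focal_px * baseline / Z and each
    pixel is shifted by +/- d/2 into the two views. Forward warping like
    this leaves holes; hole filling is omitted for brevity.
    """
    h, w = depth.shape
    # Map a normalized z-buffer in [0, 1] back to metric depth.
    z = z_near + depth * (z_far - z_near)
    disparity = focal_px * baseline / z          # disparity in pixels
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for row in range(h):
        shift = (disparity[row] / 2).astype(int)
        left_cols = np.clip(cols - shift, 0, w - 1)
        right_cols = np.clip(cols + shift, 0, w - 1)
        left[row, left_cols] = image[row]
        right[row, right_cols] = image[row]
    return left, right
```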

    A Semantic Approach To Autonomous Mixing


    Preconditioning and the Application of Convolutional Neural Networks to Classify Moving Targets in SAR Imagery

    Synthetic aperture radar (SAR) is an imaging principle in which transmitted pulses illuminate a scene and the returned echoes are stored and combined to build an image that represents the scene reflectivity. SAR systems can be found on a wide variety of platforms, including satellites, aircraft and, more recently, unmanned platforms such as the Global Hawk unmanned aerial vehicle. The next step is to process, analyze and classify the SAR data. The use of a convolutional neural network (CNN) to analyze SAR imagery is a viable method to achieve automatic target recognition (ATR) in military applications. A CNN is an artificial neural network that uses convolutional layers to detect certain features in an image. These features correspond to a target of interest and are used to train the CNN to recognize and classify future images. Moving targets present a major challenge to current SAR ATR methods due to the “smearing” effect in the image. Past research has shown that combining autofocus techniques with proper training on moving targets improves the accuracy of the CNN at target recognition. The current research includes improvements to the CNN algorithm and preconditioning techniques, as well as a deeper analysis of moving targets with complex motion such as changes in roll, pitch or yaw. The CNN algorithm was developed and verified using computer simulation.
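
    A small, generic CNN of the kind described could look like the PyTorch sketch below; the layer sizes, chip size and class count are placeholders, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class SarAtrCnn(nn.Module):
    """Illustrative CNN for chip-level SAR target classification.

    Input is assumed to be a single-channel 64x64 magnitude chip (ideally
    autofocused before training, as the thesis suggests).
    """
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)          # (N, 32, 16, 16) for a 64x64 input
        return self.classifier(x.flatten(1))

# Example usage: logits = SarAtrCnn()(torch.randn(8, 1, 64, 64))
```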