
    Learning a Dilated Residual Network for SAR Image Despeckling

    In this paper, to break the limits of traditional linear models for synthetic aperture radar (SAR) image despeckling, we propose a novel deep learning approach that learns a non-linear end-to-end mapping between noisy and clean SAR images with a dilated residual network (SAR-DRN). SAR-DRN is based on dilated convolutions, which enlarge the receptive field while maintaining the filter size and layer depth in a lightweight structure. In addition, skip connections and a residual learning strategy are added to the despeckling model to preserve image details and reduce the vanishing gradient problem. The proposed method shows superior performance over both traditional despeckling methods and state-of-the-art approaches in quantitative and visual assessments, especially for strong speckle noise. Comment: 18 pages, 13 figures, 7 tables
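    The abstract's central claim — that dilated convolutions enlarge the receptive field without increasing filter size or layer depth — can be verified with a short receptive-field calculation. A minimal sketch (the seven-layer 1-2-3-4-3-2-1 dilation schedule below is illustrative, not necessarily the exact SAR-DRN configuration):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d  # each layer widens the field by (k-1)*d pixels
    return rf

# Seven 3x3 layers with dilations ramping 1 -> 4 -> 1 (illustrative schedule):
dilated = receptive_field([3] * 7, [1, 2, 3, 4, 3, 2, 1])  # 33-pixel field
plain = receptive_field([3] * 7, [1] * 7)                  # 15-pixel field
```

    With the same number of 3x3 layers, the dilated stack sees more than twice the context of a plain stack, which is what lets the network suppress spatially correlated speckle with a lightweight structure.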

    Moving Target Analysis in ISAR Image Sequences with a Multiframe Marked Point Process Model

    In this paper we propose a Multiframe Marked Point Process model of line segments and point groups for automatic target structure extraction and tracking in Inverse Synthetic Aperture Radar (ISAR) image sequences. To deal with scatterer scintillations and high speckle noise in the ISAR frames, we obtain the resulting target sequence by an iterative optimization process that simultaneously considers the observed image data and various prior geometric interaction constraints between the target appearances in consecutive frames. A detailed quantitative evaluation is performed on 8 real ISAR image sequences of different carrier ship and airplane targets, using a test database containing 545 manually annotated frames.
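    The optimization described above balances a data-fidelity term against inter-frame geometric priors. The following is a deliberately simplified stand-in, not the paper's actual marked point process model: `energy` and `optimize` are hypothetical names, targets are reduced to per-frame 2D centres, and the prior is a plain temporal-smoothness coupling.

```python
import numpy as np

def energy(positions, observations, alpha=1.0):
    """Toy multiframe energy: data fidelity + temporal smoothness prior.
    positions, observations: arrays of shape (T, 2), one target centre per frame."""
    data = np.sum((positions - observations) ** 2)                  # fit observed frames
    prior = alpha * np.sum((positions[1:] - positions[:-1]) ** 2)   # inter-frame coupling
    return data + prior

def optimize(observations, alpha=1.0, iters=200, lr=0.1):
    """Gradient descent on the energy (stand-in for the paper's iterative scheme)."""
    x = observations.copy()
    for _ in range(iters):
        grad = 2.0 * (x - observations)
        diff = x[1:] - x[:-1]
        grad[1:] += 2.0 * alpha * diff
        grad[:-1] -= 2.0 * alpha * diff
        x -= lr * grad
    return x

# A noisy track: the prior pulls the outlier frame back toward its neighbours.
obs = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [10.0, 0.0]])
smoothed = optimize(obs)
```

    The real model optimizes over line-segment and point-group configurations with richer interaction constraints, but the structure — minimize data term plus geometric prior across consecutive frames — is the same.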

    Scaled Synthetic Aperture Radar System Development

    Synthetic Aperture Radar (SAR) systems generate two-dimensional images of a target area using RF energy, as opposed to the light waves used by cameras. When cloud cover or other optical obstructions prevent camera imaging over a target area, SAR can be substituted to generate high resolution images. Linear frequency modulated signals are transmitted and received while a moving imaging platform traverses a target area, and high resolution images are developed through modern digital signal processing (DSP) techniques. The motivation for this joint thesis project is to design and construct a scaled SAR system to support Cal Poly radar projects. Objectives include low-cost, high resolution SAR architecture development for capturing images in desired target areas. To that end, a scaled SAR system was successfully designed, built, and tested. The current SAR system, however, does not perform azimuthal compression or range cell migration correction (image blur reduction). These functionalities can be pursued by future students joining the ongoing radar project. The SAR system includes RF modulating, demodulating, and amplifying circuitry, broadband antenna design, a movement platform, LabView system control, and MATLAB signal processing. Each system block is individually described and analyzed, followed by final measured data. To confirm system operation, images developed from data collected in a single-target environment are presented and compared to the actual configuration.
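    The range-processing principle behind the transmitted linear frequency modulated (chirp) signals can be sketched in a few lines of NumPy: correlating the received echo with a copy of the transmitted pulse (matched filtering, i.e. pulse compression) produces a sharp peak at the target's delay. Parameter values here are illustrative, not the Cal Poly system's.

```python
import numpy as np

fs, T, B = 1e6, 1e-3, 100e3               # sample rate, pulse width, sweep bandwidth
n = int(T * fs)                            # 1000 samples per pulse
t = np.arange(n) / fs
k = B / T                                  # chirp rate in Hz/s
chirp = np.exp(1j * np.pi * k * t**2)      # LFM pulse: instantaneous frequency sweeps 0..B

# Simulate an echo from a point target delayed by 200 samples.
delay = 200
echo = np.concatenate([np.zeros(delay), chirp, np.zeros(100)])

# Matched filter: cross-correlate the echo with the transmitted pulse.
mf = np.correlate(echo, chirp, mode="valid")
peak = int(np.argmax(np.abs(mf)))          # peak index marks the target's range bin
```

    The compressed peak localizes the target to a single range bin even though the transmitted pulse is 1000 samples long — the time-bandwidth product (here B·T = 100) sets the compression gain.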

    Image Simulation in Remote Sensing

    Remote sensing is being actively researched in the fields of environment, military, and urban planning through technologies such as monitoring of natural climate phenomena on the Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions were launched, and remote sensing images are acquired by various observation methods including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade the quality of images or interrupt the capture of the Earth's surface information. One method to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated by using statistical or knowledge-based models, or by using spectral and optic-based models, to create a simulated image in place of the unobtained image at a required time. The proposed methodologies provide economical utility in the generation of image learning materials and time series data through image simulation. The 6 published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need for further in-depth research and development related to image simulation at high spatial and spectral resolution, sensor fusion, and colorization remains. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    2015 Oil Observing Tools: A Workshop Report

    Since 2010, the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) have provided satellite-based pollution surveillance in United States waters to regulatory agencies such as the United States Coast Guard (USCG). These technologies provide agencies with useful information regarding possible oil discharges. Unfortunately, there has been confusion as to how to interpret the images collected by these satellites and other aerial platforms, which can generate misunderstandings during spill events. Remote sensor packages on aircraft and satellites have advantages and disadvantages vis-à-vis human observers, because they do not “see” features or surface oil the same way. In order to improve observation capabilities during oil spills, applicable technologies must be identified, and then evaluated with respect to their advantages and disadvantages for the incident. In addition, differences between sensors (e.g., visual, IR, multispectral sensors, radar) and platform packages (e.g., manned/unmanned aircraft, satellites) must be understood so that reasonable approaches can be made if applicable and then any data must be correctly interpreted for decision support. NOAA convened an Oil Observing Tools Workshop to focus on the above actions and identify training gaps for oil spill observers and remote sensing interpretation to improve future oil surveillance, observation, and mapping during spills. The Coastal Response Research Center (CRRC) assisted NOAA’s Office of Response and Restoration (ORR) with this effort. The workshop was held on October 20-22, 2015 at NOAA’s Gulf of Mexico Disaster Response Center in Mobile, AL. The expected outcome of the workshop was an improved understanding, and greater use of technology to map and assess oil slicks during actual spill events. 
Specific workshop objectives included:
    • Identify new developments in oil observing technologies useful for real-time (or near real-time) mapping of spilled oil during emergency events.
    • Identify merits and limitations of current technologies and their usefulness to emergency response mapping of oil and reliable prediction of oil surface transport and trajectory forecasts. Current technologies include the traditional human aerial observer, unmanned aircraft surveillance systems, aircraft with specialized sensor packages, and satellite earth observing systems.
    • Assess training needs for visual observation (human observers with cameras) and sensor technologies (including satellites) to build skills and enhance proper interpretation for decision support during actual events.

    Artificial Neural Networks and Evolutionary Computation in Remote Sensing

    Artificial neural networks (ANNs) and evolutionary computation methods have been successfully applied in remote sensing applications since they offer unique advantages for the analysis of remotely-sensed images. ANNs are effective in finding underlying relationships and structures within multidimensional datasets. Thanks to new sensors, we have images with more spectral bands at higher spatial resolutions, which clearly recall big data problems. For this purpose, evolutionary algorithms become the best solution for analysis. This book includes eleven high-quality papers, selected after a careful reviewing process, addressing current remote sensing problems. In the chapters of the book, superstructural optimization was suggested for the optimal design of feedforward neural networks, CNN networks were deployed for a nanosatellite payload to select images eligible for transmission to ground, a new weight feature value convolutional neural network (WFCNN) was applied for fine remote sensing image segmentation and extracting improved land-use information, Mask Region-based Convolutional Neural Networks (Mask R-CNN) were employed for extracting valley fill faces, state-of-the-art convolutional neural network (CNN)-based object detection models were applied to automatically detect airplanes and ships in VHR satellite images, a coarse-to-fine detection strategy was employed to detect ships of different sizes, and a deep quadruplet network (DQN) was proposed for hyperspectral image classification.

    Advances in Motion Estimators for Applications in Computer Vision

    Motion estimation is a core task in computer vision, and many applications utilize optical flow methods as fundamental tools to analyze motion in images and videos. Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. Today, optical flow fields are utilized to solve problems in various areas such as object detection and tracking, interpolation, and visual odometry. In this dissertation, three problems from different areas of computer vision, and the solutions that make use of modified optical flow methods, are explained. The contributions of this dissertation are approaches and frameworks that introduce i) a new optical flow-based interpolation method to achieve minimally divergent velocimetry data, ii) a framework that improves the accuracy of change detection algorithms in synthetic aperture radar (SAR) images, and iii) a set of new methods to integrate Proton Magnetic Resonance Spectroscopy (1H-MRSI) data into three-dimensional (3D) neuronavigation systems for tumor biopsies. In the first application, an optical flow-based approach for the interpolation of minimally divergent velocimetry data is proposed. The velocimetry data of incompressible fluids contain signals that describe the flow velocity. The approach uses the additional flow velocity information to guide the interpolation process towards reduced divergence in the interpolated data. In the second application, a framework consisting mainly of optical flow methods and other image processing and computer vision techniques is proposed to improve object extraction from synthetic aperture radar images. The framework distinguishes between actual motion and detected motion due to misregistration in SAR image sets, which can lead to more accurate and meaningful change detection and improve object extraction from SAR datasets.
In the third application, a set of new methods that aim to improve upon the current state-of-the-art in neuronavigation through the use of detailed three-dimensional (3D) 1H-MRSI data is proposed. The result is a progressive form of online MRSI-guided neuronavigation that is demonstrated through phantom validation and clinical application.
Doctoral Dissertation, Electrical Engineering, 201
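    For the first application, the divergence of the 2D velocity field is exactly the quantity the interpolation tries to keep small (an incompressible flow is divergence-free). A minimal discrete-divergence diagnostic — an illustrative sketch, not the dissertation's interpolation method — can be written with central differences:

```python
import numpy as np

def divergence(u, v, dx=1.0, dy=1.0):
    """Discrete divergence of a 2D velocity field (u, v) via finite differences.
    For incompressible flow, interpolation should keep this near zero."""
    du_dx = np.gradient(u, dx, axis=1)  # d(u)/dx along columns
    dv_dy = np.gradient(v, dy, axis=0)  # d(v)/dy along rows
    return du_dx + dv_dy

# A solid-body rotation field u = -y, v = x is exactly divergence-free,
# so the diagnostic returns zero everywhere on the grid.
y, x = np.mgrid[0:8, 0:8].astype(float)
u, v = -y, x
div = divergence(u, v)
```

    An interpolation scheme guided by this diagnostic penalizes interpolated fields whose divergence deviates from zero, which is the sense in which the dissertation's result is "minimally divergent."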