
    The Vela Pulsar in the Near-Infrared

    We report on the first detection of the Vela pulsar in the near-infrared with the VLT/ISAAC in the Js and H bands. The pulsar magnitudes are Js=22.71 +/- 0.10 and H=22.04 +/- 0.16. We compare our results with the available multiwavelength data and show that the dereddened phase-averaged optical spectrum of the pulsar can be fitted with a power law F_nu propto nu^(-alpha_nu) with alpha_nu = 0.12 +/- 0.05, assuming the color excess E(B-V)=0.055 +/- 0.005 based on recent spectral fits of the emission of the Vela pulsar and its supernova remnant in X-rays. The negative slope of the pulsar spectrum is different from the positive slope observed over a wide optical range in the young Crab pulsar spectrum. The near-infrared part of the Vela spectrum appears to have the same slope as the phase-averaged spectrum of the high-energy X-ray tail, obtained in the 2-10 keV range with RXTE. Both of these spectra can be fitted with a single power law, suggesting a common origin. Because the phase-averaged RXTE spectrum in this range is dominated by the second X-ray peak of the pulsar light curve, which coincides with the second main peak of its optical pulse profile, we suggest that this optical peak may be redder than the first one. We also detect two faint extended structures in the 1.5''-3.1'' vicinity of the pulsar, projected on and aligned with the south-east jet and the inner arc of the pulsar wind nebula detected in X-rays with Chandra. We discuss their possible association with the nebula.
    Comment: 12 pages, 8 figures, accepted for publication in A&A; the associated near-infrared images in FITS format are available at http://www.ioffe.ru/astro/NSG/obs/vela-ir
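    The power-law fit quoted above, F_nu propto nu^(-alpha_nu), amounts to a linear regression in log-log space. A minimal sketch of that idea, applied to a synthetic spectrum generated with the reported slope alpha_nu = 0.12; the frequency range and normalization below are illustrative, not the paper's data.

```python
import numpy as np

def fit_powerlaw(nu, f_nu):
    """Fit F_nu = C * nu**(-alpha) by linear regression in log-log space.

    Returns (alpha, C). An illustrative sketch, not the paper's fitting code.
    """
    slope, intercept = np.polyfit(np.log(nu), np.log(f_nu), 1)
    return -slope, np.exp(intercept)

# Synthetic optical/near-IR spectrum with the reported slope alpha_nu = 0.12
nu = np.logspace(14.0, 15.0, 20)       # frequencies in Hz (illustrative)
f_nu = 3.0e-29 * nu ** (-0.12)         # arbitrary normalization
alpha, C = fit_powerlaw(nu, f_nu)
print(round(alpha, 2))                 # -> 0.12
```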

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we encourage remote sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization.
    Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine

    SAR Image Edge Detection: Review and Benchmark Experiments

    Edges are distinct geometric features crucial to higher-level object detection and recognition in remote-sensing processing, which is key to surveillance and the gathering of up-to-date geospatial intelligence. Synthetic aperture radar (SAR) is a powerful form of remote sensing. However, edge detectors designed for optical images tend to perform poorly on SAR images because the strong speckle noise causes false positives (type I errors). Therefore, many researchers have proposed edge detectors tailored specifically to the characteristics of SAR images. Although these edge detectors may achieve good results in their own evaluations, the comparisons tend to include a very limited number of (simulated) SAR images. As a result, the generalized performance of the proposed methods is not truly reflected, since real-world patterns are much more complex and diverse. From this emerges another problem: a quantitative benchmark is missing in the field, so it is not currently possible to fairly evaluate edge detection methods for SAR images. In this paper, we aim to close these gaps by providing an extensive experimental evaluation of edge detection on SAR images. To that end, we propose the first benchmark on SAR image edge detection, established by evaluating various freely available methods, including methods considered to be the state of the art.
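    As context for why difference-based optical edge detectors generate false positives under speckle, here is a minimal sketch of the ratio-of-averages idea underlying many SAR-specific detectors: because speckle is multiplicative, the ratio of local means (rather than their difference) is contrast-invariant in homogeneous regions. This is an illustrative toy detecting horizontal edges only, not one of the benchmarked methods.

```python
import numpy as np

def roa_edge_strength(img, half=3):
    """Ratio-of-averages edge strength for a speckled image (horizontal edges).

    For each pixel, compare the mean intensity of small windows above and
    below it; the ratio min(m1/m2, m2/m1) stays near 1 in homogeneous
    speckle and drops toward 0 at an edge. A sketch of the classic SAR
    ratio-detector idea, not a method from the benchmark.
    """
    h, w = img.shape
    strength = np.ones_like(img, dtype=float)
    for i in range(half, h - half):
        for j in range(w):
            m1 = img[i - half:i, j].mean()          # window above
            m2 = img[i + 1:i + 1 + half, j].mean()  # window below
            strength[i, j] = min(m1 / m2, m2 / m1)
    return strength  # low values indicate likely edges
```

A difference-based detector would fire all over a bright homogeneous region, because multiplicative speckle scales with intensity; the ratio does not.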

    Terrain analysis using radar shape-from-shading

    This paper develops a maximum a posteriori (MAP) probability estimation framework for shape-from-shading (SFS) from synthetic aperture radar (SAR) images. The aim is to use this method to reconstruct surface topography from a single radar image of relatively complex terrain. Our MAP framework makes explicit how the recovery of local surface orientation depends on the whereabouts of terrain edge features and the available radar reflectance information. To apply the resulting process to real-world radar data, we require probabilistic models for the appearance of terrain features and the relationship between the orientation of surface normals and the radar reflectance. We show that the SAR data can be modeled using a Rayleigh-Bessel distribution and use this distribution to develop a maximum likelihood algorithm for detecting and labeling terrain edge features. Moreover, we show how robust statistics can be used to estimate the characteristic parameters of this distribution. We also develop an empirical model for the SAR reflectance function. Using the reflectance model, we perform Lambertian correction so that a conventional SFS algorithm can be applied to the radar data. The initial surface normal direction is constrained to point in the direction of the nearest ridge or ravine feature. Each surface normal must fall within a conical envelope whose axis is in the direction of the radar illuminant. The extent of the envelope depends on the corrected radar reflectance and the variance of the radar signal statistics. We explore various ways of smoothing the field of surface normals using robust statistics. Finally, we show how to reconstruct the terrain surface from the smoothed field of surface normal vectors. The proposed algorithm is applied to various SAR data sets containing relatively complex terrain structure.
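    The conical-envelope constraint described above can be sketched as follows, assuming the simple Lambertian relation cos(angle(n, s)) = R between the corrected reflectance R and the angle of the normal n from the illuminant direction s. Widening the cone by the radar signal variance, as the paper does, is omitted here; this is a geometric illustration, not the paper's implementation.

```python
import numpy as np

def project_to_cone(n, s, reflectance):
    """Constrain a surface normal n to the cone about illuminant s implied
    by Lambertian shading: cos(angle(n, s)) = reflectance.

    The azimuth of n about s is preserved; only its angle from s changes.
    """
    n = n / np.linalg.norm(n)
    s = s / np.linalg.norm(s)
    theta = np.arccos(np.clip(reflectance, -1.0, 1.0))
    # Decompose n into components parallel and perpendicular to s.
    n_par = np.dot(n, s) * s
    n_perp = n - n_par
    if np.linalg.norm(n_perp) < 1e-12:
        # n is along s: pick an arbitrary perpendicular direction.
        n_perp = np.cross(s, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(n_perp) < 1e-12:
            n_perp = np.cross(s, np.array([0.0, 1.0, 0.0]))
    n_perp = n_perp / np.linalg.norm(n_perp)
    # Rebuild the unit normal at exactly angle theta from s.
    return np.cos(theta) * s + np.sin(theta) * n_perp
```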

    Unsupervised multi-scale change detection from SAR imagery for monitoring natural and anthropogenic disasters

    Thesis (Ph.D.) University of Alaska Fairbanks, 2017
    Radar remote sensing can play a critical role in operational monitoring of natural and anthropogenic disasters. Despite its all-weather capabilities and its high performance in mapping and monitoring of change, the application of radar remote sensing in operational monitoring activities has been limited. This has largely been due to: (1) the historically high costs associated with obtaining radar data; (2) slow data processing and delivery procedures; and (3) the limited temporal sampling that was provided by spaceborne radar-based satellites. Recent advances in the capabilities of spaceborne Synthetic Aperture Radar (SAR) sensors have created an environment that now allows SAR to make significant contributions to disaster monitoring. New SAR processing strategies that can take full advantage of these new sensor capabilities are currently being developed. Hence, with this PhD dissertation, I aim to: (i) investigate unsupervised change detection techniques that can reliably extract signatures from time series of SAR images and provide the necessary flexibility for application to a variety of natural and anthropogenic hazard situations; (ii) investigate effective methods to reduce the effects of speckle and other noise on change detection performance; (iii) automate change detection algorithms using probabilistic Bayesian inferencing; and (iv) ensure that the developed technology is applicable to current and future SAR sensors to maximize temporal sampling of a hazardous event. This is achieved by developing new algorithms that rely on image amplitude information only, the sole image parameter that is available for every single SAR acquisition. The motivation and implementation of the change detection concept are described in detail in Chapter 3. In the same chapter, I demonstrate the technique's performance using synthetic data as well as a real-data application to map wildfire progression.
    I applied Radiometric Terrain Correction (RTC) to the data to increase the sampling frequency, while the developed multiscale-driven approach reliably identified changes embedded in largely stationary background scenes. With this technique, I was able to identify the extent of burn scars with high accuracy. I then applied the change detection technology to oil spill mapping. The analysis highlights that the approach described in Chapter 3 can be applied to this drastically different change detection problem with only little modification. While the core of the change detection technique remained unchanged, I modified the pre-processing step to enable change detection in scenes with a continuously varying background. I introduced the Lipschitz regularity (LR) transformation as a technique to normalize the typically dynamic ocean surface, facilitating high-performance oil spill detection independent of the environmental conditions during image acquisition. For instance, I showed that LR processing reduces the sensitivity of change detection performance to variations in surface winds, a known limitation of oil spill detection from SAR. Finally, I applied the change detection technique to aufeis flood mapping along the Sagavanirktok River. Due to the complex nature of aufeis-flooded areas, I substituted the resolution-preserving speckle filter used in Chapter 3 with curvelet filters. In addition to validating the performance of the change detection results, I also provide evidence of the wealth of information that can be extracted about aufeis flooding events once a time series of change detection information has been extracted from SAR imagery. A summary of the developed change detection techniques and suggested future work are presented in Chapter 6.
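    The amplitude-only idea at the core of the dissertation can be illustrated with a minimal log-ratio change detector: taking the log of the ratio of two co-registered amplitude images turns multiplicative speckle into roughly additive noise, after which changed pixels are outliers. This is a hedged sketch of that principle only, not the multiscale Bayesian algorithm developed in the thesis.

```python
import numpy as np

def log_ratio_change(img1, img2, k=2.0):
    """Flag changed pixels between two co-registered SAR amplitude images.

    The log-ratio converts multiplicative speckle into additive noise;
    pixels whose log-ratio deviates from the scene median by more than
    k robust standard deviations are flagged as changed.
    """
    lr = np.log(img2 + 1e-6) - np.log(img1 + 1e-6)
    med = np.median(lr)
    mad = np.median(np.abs(lr - med))  # robust scale estimate
    sigma = 1.4826 * mad               # MAD -> std for Gaussian-like noise
    return np.abs(lr - med) > k * sigma
```

The median/MAD statistics keep the no-change model robust even when a sizable fraction of the scene has actually changed, which is the usual situation in disaster mapping.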

    Effective SAR image despeckling based on bandlet and SRAD

    Despeckling a SAR image without losing its features is a challenging task, as the image is intrinsically affected by multiplicative noise called speckle. This thesis proposes a novel technique to efficiently despeckle SAR images. Using an SRAD filter, a bandlet-transform-based filter, and a guided filter, the speckle noise in SAR images is removed without losing their features. The input SAR image is fed in parallel to both the SRAD and the bandlet-transform-based filters. The SRAD filter despeckles the SAR image, and the despeckled output image is used as the reference image for the guided filter. In the bandlet-transform-based despeckling scheme, the input SAR image is first decomposed using the bandlet transform. The resulting coefficients are then thresholded using a soft-thresholding rule; all coefficients other than the low-frequency ones are adjusted in this way. The generalized cross-validation (GCV) technique is employed to find the most favorable threshold for each subband. The bandlet transform is able to extract edges and fine features in the image because it finds the direction in which the function gives its maximum value and builds extended orthogonal vectors in that direction. Simple soft thresholding with an optimum threshold despeckles the input SAR image, and the guided filter, with the help of the reference image, removes the remaining speckle from the bandlet-transform output. In terms of numerical and visual quality, the proposed filtering scheme surpasses the available despeckling schemes.
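    The soft-thresholding step with a GCV-selected threshold can be sketched per subband as follows. The bandlet decomposition itself is assumed to have been applied already; the GCV score used here is the standard one for wavelet-style shrinkage, which may differ in detail from the thesis's formulation.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-thresholding rule: shrink coefficients toward zero by t,
    zeroing anything smaller than t in magnitude."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def gcv_threshold(coeffs, candidates):
    """Pick the candidate threshold minimizing the GCV score

        GCV(t) = (1/N) * ||c - soft(c, t)||^2 / (N0 / N)^2,

    where N0 is the number of coefficients zeroed by t. Applied per
    subband; a sketch of the GCV rule referenced above.
    """
    c = np.asarray(coeffs, dtype=float).ravel()
    n = c.size
    best_t, best_score = candidates[0], np.inf
    for t in candidates:
        shrunk = soft_threshold(c, t)
        n_zero = np.count_nonzero(shrunk == 0.0)
        if n_zero == 0:
            continue  # GCV undefined when nothing is zeroed
        score = (np.sum((c - shrunk) ** 2) / n) / (n_zero / n) ** 2
        if score < best_score:
            best_t, best_score = t, score
    return best_t
```

GCV is attractive here because it needs no estimate of the noise level, which is awkward to obtain for speckle.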

    Mapping the 2021 October Flood Event in the Subsiding Taiyuan Basin By Multi-Temporal SAR Data

    A flood event induced by heavy rainfall hit the Taiyuan basin in north China in early October 2021. In this study, we map the flood event using multi-temporal synthetic aperture radar (SAR) images acquired by Sentinel-1. First, we develop a spatiotemporal filter based on low-rank tensor approximation (STF-LRTA) to remove speckle noise from the SAR images. Next, we employ the classic log-ratio change indicator and the minimum-error thresholding algorithm to characterize the flood using the filtered images. Finally, we relate the flood inundation to land subsidence in the Taiyuan basin by jointly analyzing the multi-temporal SAR change detection results and pre-flood interferometric SAR (InSAR) time-series measurements. The validation experiments compare the proposed filter with the Refined-Lee filter, the Gamma filter, and an SHPS-based multi-temporal SAR filter. The results demonstrate the effectiveness and advantage of the proposed STF-LRTA method in SAR despeckling and detail preservation, and its applicability to changing scenes. The joint analyses reveal that land subsidence might be an important contributor to the flood event, and that the flood recession process correlates linearly with time and subsidence magnitude.
    This work was financially supported by the National Natural Science Foundation of China (grant numbers 41904001 and 41774006), the China Postdoctoral Science Foundation (grant number 2018M640733), the National Key Research and Development Program of China (grant number 2019YFC1509201), and the National Postdoctoral Program for Innovative Talents (grant number BX20180220)
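    The minimum-error thresholding step applied to the log-ratio image is commonly implemented as the Kittler-Illingworth criterion, which models the histogram as a mixture of two Gaussians (unchanged vs. flooded) and picks the cut minimizing a classification-error score. A sketch of that classic algorithm under the two-Gaussian assumption; the paper's exact variant may differ.

```python
import numpy as np

def min_error_threshold(values, bins=256):
    """Kittler-Illingworth minimum-error threshold.

    Fit two Gaussians to the histogram halves on either side of each
    candidate cut t and minimize
        J(t) = 1 + 2*(w0*ln(s0) + w1*ln(s1)) - 2*(w0*ln(w0) + w1*ln(w1)),
    where w_i, s_i are the class weights and standard deviations.
    """
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_j = centers[0], np.inf
    for k in range(1, bins - 1):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 < 1e-9 or w1 < 1e-9:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        v0 = (p[:k] * (centers[:k] - m0) ** 2).sum() / w0
        v1 = (p[k:] * (centers[k:] - m1) ** 2).sum() / w1
        if v0 < 1e-9 or v1 < 1e-9:
            continue
        j = (1.0 + w0 * np.log(v0) + w1 * np.log(v1)
             - 2.0 * (w0 * np.log(w0) + w1 * np.log(w1)))
        if j < best_j:
            best_t, best_j = centers[k], j
    return best_t
```

Unlike Otsu's method, this criterion accounts for unequal class variances and priors, which suits the typically small flooded fraction of a scene.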