4,380 research outputs found

    Influence of camera distortions on satellite image registration and change detection applications

    Applications such as change detection and digital elevation model extraction from optical images require rigorous modeling of the acquisition geometry. We show that unrecorded satellite jitter during image acquisition and uncertainties in the CCD array geometry are currently the major limiting factors for applications requiring high accuracy. These artifacts are identified and quantified for several optical satellites, namely SPOT, ASTER, QuickBird, and HiRISE.
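    A minimal sketch of how such distortions can be measured, assuming two already orthorectified images of the same scene (`ref` and `img`, hypothetical array names) and using scikit-image's subpixel phase correlation; the window size and upsampling factor are illustrative choices, and the studies above use the dedicated COSI-Corr package rather than this simplified pipeline:

        import numpy as np
        from skimage.registration import phase_cross_correlation

        def columnwise_offsets(ref, img, win=64, step=32, upsample=100):
            """Across-track offsets (in pixels) between two co-registered images.

            A stepped, time-constant offset profile points to CCD misalignments;
            an oscillating profile along track points to uncompensated jitter.
            """
            offsets = []
            for x in range(0, ref.shape[1] - win, step):
                shift, _, _ = phase_cross_correlation(
                    ref[:, x:x + win], img[:, x:x + win],
                    upsample_factor=upsample)
                offsets.append(shift[1])  # column (across-track) component
            return np.asarray(offsets)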

    Providing the Third Dimension: High-resolution Multibeam Sonar as a Tool for Archaeological Investigations - An Example from the D-day Beaches of Normandy

    In general, marine archaeological investigations begin in the archives, using historic maps, coast surveys, and other materials to define submerged areas suspected to contain potentially significant historical sites. Following this research phase, a typical archaeological survey uses sidescan sonar and marine magnetometers as initial search tools. Targets are then examined through direct observation by divers, video, or photographs. Magnetometers can demonstrate the presence, absence, and relative susceptibility of ferrous objects but provide little indication of the nature of the target. Sidescan sonar can present a clear image of the overall nature of a target and its surrounding environment, but the sidescan image is often distorted and contains little information about the true 3-D shape of the object. Optical techniques allow precise identification of objects but suffer from very limited range, even in the best of situations. Modern high-resolution multibeam sonar offers an opportunity to cover a relatively large area from a safe distance above the target while resolving the true three-dimensional (3-D) shape of the object with centimeter-level resolution.

    A clear demonstration of the applicability of high-resolution multibeam sonar to wreck and artifact investigations occurred this summer, when the Naval Historical Center (NHC), the Center for Coastal and Ocean Mapping (CCOM) at the University of New Hampshire, and Reson Inc. collaborated to explore the state of preservation, and the impact on the surrounding environment, of a series of wrecks located off the coast of Normandy, France, adjacent to the American landing sectors. The survey augmented previously collected magnetometer and high-resolution sidescan sonar data using a Reson 8125 high-resolution focused multibeam sonar with 240 0.5° (at nadir) beams distributed over a 120° swath; a back-of-the-envelope check of this geometry follows below. The team investigated 21 areas in water depths ranging from about 3 to 30 meters (m); some areas contained individual targets such as landing craft, barges, a destroyer, and a troop carrier, while others contained multiple smaller targets such as tanks and trucks. Of particular interest were the well-preserved caissons and blockships of the artificial Mulberry Harbor deployed off Omaha Beach. The near-field beam-forming capability of the Reson 8125, combined with 3-D visualization techniques, provided an unprecedented level of detail, including the ability to recognize individual components of the wrecks (ramps, gun turrets, hatches, etc.), the state of preservation of the wrecks, and the impact of the wrecks on the surrounding seafloor.
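    The quoted sonar geometry can be sanity-checked with simple flat-seafloor trigonometry; this is purely illustrative arithmetic, not part of the survey's processing:

        import math

        def nadir_footprint(depth_m, beamwidth_deg=0.5):
            """Approximate across-track footprint of one beam at nadir."""
            return 2 * depth_m * math.tan(math.radians(beamwidth_deg / 2))

        def swath_width(depth_m, swath_deg=120.0):
            """Flat-seafloor swath width covered by the full 120-degree fan."""
            return 2 * depth_m * math.tan(math.radians(swath_deg / 2))

        for d in (3, 30):
            print(f"{d:>2} m depth: footprint {nadir_footprint(d) * 100:.1f} cm, "
                  f"swath {swath_width(d):.0f} m")
        #  3 m depth: footprint 2.6 cm, swath 10 m
        # 30 m depth: footprint 26.2 cm, swath 104 m

    At the surveyed depths, the 0.5° beams thus resolve features at the centimeter-to-decimeter scale, consistent with the resolution claimed above.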

    Pan-Arctic lead detection from MODIS thermal infrared imagery

    Polynyas and leads are key elements of the wintertime Arctic sea-ice cover. They play a crucial role in surface heat loss, potential ice formation, and consequently in the seasonal sea-ice budget. While polynyas are generally sufficiently large to be observed with passive microwave satellite sensors, the monitoring of narrow leads requires data at a higher spatial resolution. We apply and evaluate different lead segmentation techniques based on sea-ice surface temperatures as measured by the Moderate Resolution Imaging Spectroradiometer (MODIS). Daily lead composite maps indicate the presence of cloud artifacts that arise from ambiguities in the segmentation process and shortcomings in the MODIS cloud mask. A fuzzy cloud artifact filter is hence implemented to mitigate these effects and the associated potential misclassification of leads. The filter is adjusted with reference data from thermal infrared image sequences and applied to daily MODIS data from January to April 2008. The daily lead product can be used to infer the structure and dynamics of wintertime sea-ice leads and to assess seasonal divergence patterns of the Arctic Ocean.
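    A much simplified sketch of temperature-based lead segmentation, in the spirit of (but far cruder than) the approach above: leads expose relatively warm ocean water, so they appear as positive anomalies against the local ice background. The kernel size and threshold are illustrative assumptions, not the paper's tuned values, and a cloud-masked, gap-filled ice-surface-temperature field `ist` is assumed as input:

        import numpy as np
        from scipy import ndimage

        def segment_leads(ist, bg_size=51, anomaly_k=2.0):
            """Flag pixels significantly warmer than the local background."""
            background = ndimage.median_filter(ist, size=bg_size)
            anomaly = ist - background
            leads = anomaly > anomaly_k * np.nanstd(anomaly)  # warm => lead candidate
            # Drop isolated single-pixel detections; a crude stand-in for the
            # fuzzy cloud-artifact filter the paper actually applies.
            return ndimage.binary_opening(leads, structure=np.ones((2, 2)))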

    In-Flight CCD Distortion Calibration for Pushbroom Satellites Based on Subpixel Correlation

    We describe a method that allows for accurate in-flight calibration of the interior orientation of any pushbroom camera and that, in particular, solves the problem of modeling the distortions induced by charge coupled device (CCD) misalignments. The distortion induced on the ground by each CCD is measured using subpixel correlation between the orthorectified image to be calibrated and an orthorectified reference image that is assumed distortion free. Distortions are modeled as camera defects, which are assumed constant over time. Our results show that in-flight interior orientation calibration reduces internal camera biases by one order of magnitude. In particular, we fully characterize and model the Satellite Pour l'Observation de la Terre (SPOT) 4-HRV1 sensor, and we conjecture that distortions mostly result from the mechanical strain produced when the satellite was launched rather than from effects of on-orbit thermal variations or aging. The derived calibration models have been integrated into the software package Coregistration of Optically Sensed Images and Correlation (COSI-Corr), freely available from the Caltech Tectonics Observatory website. Such calibration models are particularly useful in reducing biases in digital elevation models (DEMs) generated from stereo matching and in improving the accuracy of change detection algorithms.
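    The "constant camera defect" idea can be sketched in a few lines: jitter and scene-dependent noise vary per acquisition, while CCD misalignment does not, so averaging measured disparities over many acquisitions isolates the static component. The array names and shapes are assumptions, not COSI-Corr's interface:

        import numpy as np

        def static_ccd_correction(disparity_maps):
            """disparity_maps: (n_images, n_columns) across-track offsets
            measured against a distortion-free reference. Averaging over
            images cancels per-acquisition jitter and noise, keeping the
            time-constant component attributed to CCD misalignment."""
            return np.nanmean(disparity_maps, axis=0)

        def apply_correction(column_coords, correction):
            """Shift raw focal-plane column coordinates by the calibrated model."""
            cols = np.asarray(column_coords, dtype=int)
            return column_coords - correction[cols]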

    Weakly Supervised Learning for Multi-Image Synthesis

    Machine learning-based approaches have been achieving state-of-the-art results on many computer vision tasks. While deep learning and convolutional networks have been incredibly popular, these approaches come at the expense of huge amounts of labeled data required for training. Manually annotating large amounts of data, often millions of images in a single dataset, is costly and time consuming. To deal with the problem of data annotation, the research community has been exploring approaches that require smaller amounts of labeled data. The central problem that we consider in this research is image synthesis without any manual labeling. Image synthesis is a classic computer vision task that requires understanding of image contents and their semantic and geometric properties. We propose that image synthesis models can be trained by relying on sequences of videos and using weakly supervised learning. Large amounts of unlabeled data are freely available on the internet. We propose to set up the training in a multi-image setting so that one of the images can be used as the target; this allows us to rely only on images for training and removes the need for manual annotations. We demonstrate three main contributions in this work. First, we present a method of fusing multiple noisy overhead images to make a single, artifact-free image. We present a weakly supervised method that relies on crowd-sourced labels from online maps and a completely unsupervised variant that only requires a series of satellite images as inputs. Second, we propose a single-image novel view synthesis method for complex, outdoor scenes. We propose a learning-based method that uses pairs of nearby images captured on urban roads and their respective GPS coordinates as supervision. We show that a model trained with this automatically captured data can render a new view of a scene that can be as far as 10 meters from the input image. Third, we consider the problem of synthesizing new images of a scene under different conditions, such as time of day and season, based on a single input image. As opposed to existing methods, we do not need manual annotations for transient attributes, such as fog or snow, for training. We train our model by using streams of images captured from outdoor webcams and time-lapse videos. Through these applications, we show several settings where we can train state-of-the-art deep learning methods without manual annotations. This work focuses on three image synthesis tasks. We propose weakly supervised learning and remove requirements for manual annotations by relying on sequences of images. Our approach is in line with research efforts that aim to minimize the labels required for training machine learning methods.
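    The multi-image training setup described above can be sketched as a self-supervised step: one frame of each sequence is held out as the target, so the sequence supervises itself and no manual labels are needed. `model` and the L1 photometric loss are placeholder assumptions, not the thesis's actual architecture or objective:

        import torch
        import torch.nn.functional as F

        def train_step(model, optimizer, frames):
            """frames: (batch, n_frames, C, H, W) images of the same scene."""
            inputs, target = frames[:, :-1], frames[:, -1]  # hold out last frame
            optimizer.zero_grad()
            prediction = model(inputs)          # synthesize the held-out view
            loss = F.l1_loss(prediction, target)  # photometric self-supervision
            loss.backward()
            optimizer.step()
            return loss.item()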

    Big Earth Data for Cultural Heritage in the Copernicus Era

    Digital data is entering its golden age, characterized by the rapid growth of both classical and emerging big Earth data, along with trans- and multidisciplinary methodological approaches and services addressed to the study, preservation, and sustainable exploitation of cultural heritage (CH). The availability of new digital technologies has opened possibilities for cultural heritage that were unthinkable only a few years ago. The currently available digital data, tools, and services, with particular reference to Copernicus initiatives, make it possible to characterize and understand the state of conservation of CH for preventive restoration, and have opened up a frontier of possibilities for discovering archaeological sites from above and for supporting their excavation, monitoring, and preservation. The different areas of intervention require the availability and integration of rigorous information from different sources to improve knowledge and interpretation, risk assessment, and management, so that actions oriented to the preservation of cultural properties are more successful. One of the biggest challenges is to fully involve citizens, also from an emotional point of view, connecting “pixels with people” and “bridging” remote sensing and social sensing.

    Multi-Sensor Data Fusion for Cloud Removal in Global and All-Season Sentinel-2 Imagery

    This work has been accepted by IEEE TGRS for publication. The majority of optical observations acquired via spaceborne Earth imagery are affected by clouds. While there is substantial prior work on reconstructing cloud-covered information, previous studies are often confined to narrowly defined regions of interest, raising the question of whether an approach can generalize to a diverse set of observations acquired at variable cloud coverage or in different regions and seasons. We target the challenge of generalization by curating a large novel data set for training new cloud removal approaches and evaluate on two recently proposed performance metrics of image quality and diversity. Our data set is the first publicly available to contain a global sample of co-registered radar and optical observations, cloudy as well as cloud-free. Based on the observation that cloud coverage varies widely between clear skies and absolute coverage, we propose a novel model that can deal with either extreme and evaluate its performance on our proposed data set. Finally, we demonstrate the superiority of training models on real over synthetic data, underlining the need for a carefully curated data set of real observations. To facilitate future research, our data set is made available online.
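    A minimal sketch of the radar-optical fusion input described above: cloud-penetrating SAR channels (e.g., Sentinel-1 VV/VH) are concatenated channel-wise with the cloudy Sentinel-2 image, and a network regresses the cloud-free optical target. `FusionNet` is an assumed placeholder, not the paper's architecture:

        import torch
        import torch.nn as nn

        class FusionNet(nn.Module):
            def __init__(self, sar_ch=2, opt_ch=13):
                super().__init__()
                # SAR is unaffected by clouds, so it conditions the
                # reconstruction wherever the optical channels are occluded.
                self.net = nn.Sequential(
                    nn.Conv2d(sar_ch + opt_ch, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, opt_ch, 3, padding=1),
                )

            def forward(self, sar, cloudy_optical):
                x = torch.cat([sar, cloudy_optical], dim=1)  # channel-wise fusion
                return self.net(x)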