
    Airborne photogrammetry and LIDAR for DSM extraction and 3D change detection over an urban area: a comparative study

    A digital surface model (DSM) extracted from stereoscopic aerial images, acquired in March 2000, is compared with a DSM derived from airborne light detection and ranging (lidar) data collected in July 2009. Three densely built-up study areas in the city centre of Ghent, Belgium, are selected, each covering approximately 0.4 km². The surface models, generated from the two different 3D acquisition methods, are compared qualitatively and quantitatively as to what extent they are suitable for modelling an urban environment, in particular for the 3D reconstruction of buildings. The data sets, acquired at two different epochs t₁ and t₂, are then investigated as to what extent 3D (building) changes can be detected and modelled over the time interval. A difference model, generated by pixel-wise subtraction of the two DSMs, indicates changes in elevation. Filters are proposed to differentiate 'real' building changes from false alarms caused by model noise, outliers, vegetation, etc. A final 3D building change model maps all demolished and newly constructed buildings within the time interval t₂ − t₁. Based on the change model, the surface area and volume of the building changes can be quantified.
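
    The differencing-and-thresholding step at the heart of the change detection can be sketched as follows. The height threshold, minimum-pixel noise filter, and toy scene are illustrative assumptions, not values from the study.

```python
import numpy as np

def building_change_mask(dsm_t1, dsm_t2, height_thresh=2.5, min_pixels=4):
    """Pixel-wise DSM differencing: return masks of newly built and
    demolished areas (thresholds are assumed, not from the paper)."""
    diff = dsm_t2 - dsm_t1                 # the difference model
    built = diff > height_thresh           # elevation gain -> new construction
    razed = diff < -height_thresh          # elevation loss -> demolition
    # crude stand-in for the paper's filters against model noise,
    # outliers and vegetation: discard masks with too few pixels
    if built.sum() < min_pixels:
        built = np.zeros_like(built)
    if razed.sum() < min_pixels:
        razed = np.zeros_like(razed)
    return built, razed

# toy example: a flat 6x6 scene in which a 5 m tall building appears
t1 = np.zeros((6, 6))
t2 = np.zeros((6, 6))
t2[1:4, 1:4] = 5.0
built, razed = building_change_mask(t1, t2)
volume = (t2 - t1)[built].sum()   # change volume, assuming 1 m^2 cells
```

    Quantifying surface area and volume then reduces to counting masked cells and summing the height differences over them.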

    Mass growth and mergers: direct observations of the luminosity function of LRG satellite galaxies out to z=0.7 from SDSS and BOSS images

    We present a statistical study of the luminosity functions of galaxies surrounding luminous red galaxies (LRGs) at average redshifts ⟨z⟩ = 0.34 and ⟨z⟩ = 0.65. The luminosity functions are derived by extracting source photometry around more than 40,000 LRGs and subtracting foreground and background contamination using randomly selected control fields. We show that at both studied redshifts the average luminosity functions of the LRGs and their satellite galaxies are poorly fitted by a Schechter function, owing to a luminosity gap between the centrals and their most luminous satellites. We utilize a two-component fit of a Schechter function plus a log-normal distribution to demonstrate that LRGs are typically brighter than their most luminous satellite by roughly 1.3 magnitudes. This luminosity gap implies that interactions within LRG environments are typically restricted to minor mergers with mass ratios of 1:4 or lower. The luminosity functions further imply that roughly 35% of the mass in the environment is locked in the LRG itself, supporting the idea that mass growth through major mergers within the environment is unlikely. Lastly, we show that the luminosity gap may be at least partially explained by the selection of LRGs, as the gap can be reproduced by sparsely sampling a Schechter function. In that case LRGs may represent only a small fraction of central galaxies in similar-mass halos.
    Comment: ApJ accepted version
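
    The link between the 1.3-magnitude gap and the quoted merger mass ratios follows from the standard magnitude-to-luminosity conversion. A minimal sketch, with an assumed faint-end slope for the Schechter function (the normalisation and slope below are illustrative, not the paper's fitted values):

```python
import math

def schechter(L, phi_star=1.0, L_star=1.0, alpha=-1.05):
    """Schechter luminosity function dN/dL (arbitrary normalisation;
    parameter values here are assumptions for illustration)."""
    x = L / L_star
    return (phi_star / L_star) * x**alpha * math.exp(-x)

def luminosity_ratio_from_gap(delta_mag):
    """Convert a magnitude gap to a luminosity ratio, which at a fixed
    mass-to-light ratio approximates the mass ratio."""
    return 10 ** (0.4 * delta_mag)

# a 1.3 mag gap corresponds to a central ~3.3x brighter than its
# brightest satellite, i.e. mergers no stronger than roughly 1:3-1:4
ratio = luminosity_ratio_from_gap(1.3)
```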

    Semantic Cross-View Matching

    Matching cross-view images is challenging because the appearance and viewpoints differ significantly. While low-level features based on gradient orientations or filter responses can vary drastically with such changes in viewpoint, the semantic information of an image remains largely invariant. Consequently, semantically labeled regions can be used for cross-view matching. In this paper, we therefore explore this idea and propose an automatic method for detecting and representing the semantic information of an RGB image, with the goal of performing cross-view matching against a (non-RGB) geographic information system (GIS). A segmented image forms the input to our system, with segments assigned to semantic concepts such as traffic signs, lakes, roads, foliage, etc. We design a descriptor that robustly captures both the presence of semantic concepts and the spatial layout of those segments. Pairwise distances between the descriptors extracted from the GIS map and the query image are then used to generate a shortlist of the most promising locations with similar semantic concepts in a consistent spatial layout. An experimental evaluation with challenging query images and a large urban area shows promising results.
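
    One way such a descriptor could be built is a global concept histogram concatenated with per-quadrant histograms, so that both concept presence and coarse spatial layout are encoded. The label set, quadrant layout, and L2 shortlist below are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

CONCEPTS = ["road", "building", "water", "foliage"]   # assumed label set

def semantic_descriptor(label_grid, n_concepts=len(CONCEPTS)):
    """Concatenate a global concept histogram with four quadrant
    histograms: presence plus coarse spatial layout of segments."""
    h, w = label_grid.shape
    parts = [label_grid,
             label_grid[:h//2, :w//2], label_grid[:h//2, w//2:],
             label_grid[h//2:, :w//2], label_grid[h//2:, w//2:]]
    desc = []
    for p in parts:
        hist = np.bincount(p.ravel(), minlength=n_concepts).astype(float)
        desc.append(hist / hist.sum())
    return np.concatenate(desc)

def shortlist(query_desc, map_descs, k=2):
    """Indices of the k most similar GIS map tiles by L2 distance."""
    d = np.linalg.norm(map_descs - query_desc, axis=1)
    return np.argsort(d)[:k]

# toy example: two GIS tiles, query matching the second
tile_a = np.zeros((4, 4), dtype=int)                       # all road
tile_b = np.zeros((4, 4), dtype=int); tile_b[:, 2:] = 1    # road | building
maps = np.stack([semantic_descriptor(tile_a), semantic_descriptor(tile_b)])
best = shortlist(semantic_descriptor(tile_b), maps, k=1)
```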

    The impact of uncertainty in satellite data on the assessment of flood inundation models

    The performance of flood inundation models is often assessed using satellite observed data; however, these data have inherent uncertainty. In this study we assess the impact of this uncertainty when calibrating a flood inundation model (LISFLOOD-FP) for a flood event in December 2006 on the River Dee, North Wales, UK. The flood extent is delineated from an ERS-2 SAR image of the event using an active contour model (snake), and water levels at the flood margin are calculated through intersection of the shoreline vector with LiDAR topographic data. Gauged water levels are used to create a reference water surface slope for comparison with the satellite-derived water levels. Residuals between the satellite observed data points and those from the reference line are spatially clustered into groups of similar values. We show that model calibration achieved using pattern matching of observed and predicted flood extent is negatively influenced by this spatial dependency in the data. By contrast, model calibration using water elevations produces realistic calibrated optimum friction parameters even when spatial dependency is present. To test the impact of removing spatial dependency, a new method of evaluating flood inundation model performance is developed using multiple random subsamples of the water surface elevation data points. By testing for spatial dependency using Moran's I, multiple subsamples of water elevations that have no significant spatial dependency are selected. The model is then calibrated against these data and the results averaged. This gives a near-identical result to calibration using spatially dependent data, but has the advantage of being a statistically robust assessment of model performance in which we can have more confidence. Moreover, by using the variations found in the subsamples of the observed data it is possible to assess the effects of observational uncertainty on the assessment of flooding risk.
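
    The Moran's I screening step can be sketched as follows. The inverse-distance weighting, subsample fraction, and acceptance threshold are illustrative assumptions; in practice significance would be judged against the expected value E[I] = −1/(n−1), e.g. via a permutation test.

```python
import numpy as np

def morans_i(values, coords):
    """Moran's I spatial autocorrelation with inverse-distance
    weights (w_ii = 0). Positive I indicates spatial clustering."""
    x = np.asarray(values, float)
    c = np.asarray(coords, float)
    n = len(x)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
    w = np.where(d > 0, 1.0 / np.maximum(d, 1e-12), 0.0)
    z = x - x.mean()
    return (n / w.sum()) * (w * np.outer(z, z)).sum() / (z @ z)

def select_subsample(values, coords, rng, frac=0.5, i_thresh=0.2, tries=200):
    """Draw random subsamples until one shows no strong spatial
    dependency (threshold is an assumed stand-in for a formal test)."""
    idx = np.arange(len(values))
    for _ in range(tries):
        s = rng.choice(idx, size=max(3, int(frac * len(idx))), replace=False)
        if abs(morans_i(np.asarray(values)[s], np.asarray(coords)[s])) < i_thresh:
            return s
    return None
```

    Calibrating against many such subsamples and averaging the scores removes the influence of spatially clustered residuals on the performance measure.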

    Image Subtraction Reduction of Open Clusters M35 & NGC 2158 In The K2 Campaign-0 Super-Stamp

    Observations were made of the open clusters M35 and NGC 2158 during the initial K2 campaign (C0). Reducing these data to high-precision photometric time series is challenging due to the wide point spread function (PSF) and the blending of stellar light in such dense regions. We developed an image-subtraction-based K2 reduction pipeline that is applicable to both crowded and sparse stellar fields. We applied our pipeline to the data-rich C0 K2 super-stamp, containing the two open clusters, as well as to the neighboring postage stamps. In this paper, we present our image subtraction reduction pipeline and demonstrate that this technique achieves ultra-high photometric precision for sources in the C0 super-stamp. We extract the raw light curves of 3960 stars taken from the UCAC4 and EPIC catalogs and de-trend them for systematic effects. We compare our photometric results with the prior reductions published in the literature. For detrended, TFA-corrected sources in the 12–12.25 Kp magnitude range, we achieve a best 6.5-hour window running rms of 35 ppm, falling to 100 ppm for fainter stars in the 14–14.25 Kp magnitude range. For stars with Kp > 14, our detrended and 6.5-hour binned light curves achieve the highest photometric precision. Moreover, all our TFA-corrected sources have higher precision on all time scales investigated. This work represents the first published image subtraction analysis of a K2 super-stamp. This method will be particularly useful for analyzing the Galactic bulge observations carried out during K2 campaign 9. The raw light curves and the final results of our detrending processes are publicly available at http://k2.hatsurveys.org/archive/.
    Comment: Accepted for publication in PASP. 14 pages, 5 figures, 2 tables. Light curves available from http://k2.hatsurveys.org/archive
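
    A common proxy for the 6.5-hour precision quoted above is the scatter of the light curve after binning to that window, which for white noise improves as 1/√N over the per-point scatter. The K2 long-cadence value and the binning scheme below are assumptions for this sketch, not the paper's exact metric.

```python
import numpy as np

def binned_precision_ppm(flux, cadence_min=29.4, window_hr=6.5):
    """Scatter (rms) of the light curve after binning to the given
    window, in parts per million of the median flux. K2 long cadence
    (~29.4 min) gives ~13 points per 6.5 h bin."""
    n = max(1, int(round(window_hr * 60 / cadence_min)))
    flux = np.asarray(flux, float)
    m = (len(flux) // n) * n
    binned = flux[:m].reshape(-1, n).mean(axis=1)
    return 1e6 * binned.std() / np.median(flux)

# toy light curve: 1000 ppm white noise around unit flux; binning to
# 6.5 h should reduce the scatter by roughly sqrt(13)
rng = np.random.default_rng(0)
noisy = 1.0 + 1e-3 * rng.standard_normal(1300)
p = binned_precision_ppm(noisy)
```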

    Supervised learning on graphs of spatio-temporal similarity in satellite image sequences

    High resolution satellite image sequences are multidimensional signals composed of spatio-temporal patterns associated with numerous and varied phenomena. Bayesian methods have previously been proposed in (Heas and Datcu, 2005) to code the information contained in satellite image sequences in a graph representation. Building on such a representation, this paper presents a supervised learning methodology for the semantics associated with spatio-temporal patterns occurring in satellite image sequences. It enables the recognition and probabilistic retrieval of similar events. Graphs are attached to statistical models of spatio-temporal processes, which in turn describe physical changes in the observed scene. We therefore adjust a parametric model evaluating similarity types between graph patterns in order to represent user-specific semantics attached to spatio-temporal phenomena. The learning step is performed by the incremental definition of similarity types via user-provided spatio-temporal pattern examples attached to positive and/or negative semantics. From these examples, probabilities are inferred using a Bayesian network and a Dirichlet model. This links user interest to a specific similarity model between graph patterns. According to the current state of learning, semantic posterior probabilities are updated for all possible graph patterns so that similar spatio-temporal phenomena can be recognized and retrieved from the image sequence. A few experiments performed on a multi-spectral SPOT image sequence illustrate the proposed spatio-temporal recognition method.
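
    The Dirichlet update behind the incremental learning step can be sketched as a simple count-based posterior over semantic labels. The symmetric prior and the two-class positive/negative example are assumptions for illustration, not the paper's full Bayesian network.

```python
import numpy as np

def dirichlet_posterior(counts, alpha=1.0):
    """Posterior mean of class probabilities under a symmetric
    Dirichlet(alpha) prior, updated with observed example counts."""
    counts = np.asarray(counts, float)
    return (counts + alpha) / (counts.sum() + alpha * len(counts))

# hypothetical user feedback for one similarity type: 3 patterns
# labelled positive ("this event") and 1 labelled negative
post = dirichlet_posterior([3, 1])
```

    Each new user-labelled example simply increments a count, so the posterior over each similarity type can be refreshed incrementally as learning proceeds.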