    Rapid Disaster Analysis based on SAR Techniques

    Due to its day-and-night and all-weather imaging capability, spaceborne SAR is a valuable means for rapid mapping during and after disasters. In this paper, three change detection techniques based on SAR data are discussed: (1) initial coarse change detection, (2) flooded-area detection, and (3) linear-feature change detection. The 2011 Tohoku Earthquake and Tsunami serves as the case study, as the combination of earthquake and tsunami events provides a complex test case. In (1), pre- and post-event TerraSAR-X images are accurately coregistered to produce a false-color composite. Such an image provides a quick, rough overview of potential changes, which is useful for initial decision making and identifies areas worth analysing in more depth. In (2), the post-event TerraSAR-X image is used to extract the flooded area by morphological approaches. In (3), we are interested in detecting changes of linear shape as indicators of modified man-made objects. Morphological approaches, e.g. thresholding, simply extract pixel-based changes in the difference image; however, in this manner many irrelevant changes (e.g. farming activity, speckle) are highlighted as well. In this study, Curvelet filtering is applied to the difference image not only to suppress false alarms but also to enhance linear-shaped change signals (e.g. buildings) in settlements. Afterwards, thresholding is conducted to extract linear-shaped changed areas. The three techniques are designed to be simple and applicable to timely disaster analysis. They are all validated by comparison with the change map produced by the Center for Satellite Based Crisis Information at DLR
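
    As a rough illustration of the pixel-based part of step (3), the following Python sketch builds a log-ratio difference image from coregistered pre- and post-event amplitude images and thresholds it; the function name, the threshold value, and the Gaussian smoothing used here as a stand-in for the Curvelet filtering are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def change_mask(pre_amp: np.ndarray, post_amp: np.ndarray,
                    threshold: float = 1.5, eps: float = 1e-6) -> np.ndarray:
        """Binary mask of strong backscatter changes between two coregistered
        SAR amplitude images (hypothetical helper, not from the paper)."""
        # Log-ratio suppresses the multiplicative nature of speckle.
        diff = np.log((post_amp + eps) / (pre_amp + eps))
        # Stand-in for the Curvelet-domain filtering described in the abstract.
        diff_filtered = gaussian_filter(np.abs(diff), sigma=2.0)
        return diff_filtered > threshold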

    Persistent Scatterer Aided Facade Lattice Extraction in Single Airborne Optical Oblique Images

    We present a new method to extract patterns of regular facade structures from single optical oblique images. To compensate for the missing three-dimensional information, we incorporate structural information derived from Persistent Scatterer (PS) point cloud data into our method. Single oblique images and PS point clouds have never been combined before and offer promising insights into the compatibility of remotely sensed data of different kinds. Even though the appearance of facades differs significantly between the two data types, many characteristics of the prominent patterns are visible in both and can be transferred across the sensor domains. To justify an extraction based on regular facade patterns, we show that regular facades appear rather often in typical airborne oblique imagery of urban scenes. The extraction of regular patterns is based on well-established tools such as cross correlation and is extended by a module that estimates a window lattice model using a genetic algorithm. Among other uses, the results of our approach can help to derive a deeper understanding of the emergence of Persistent Scatterers and of their fusion with optical imagery. To demonstrate the applicability of the approach, we present a concept for data fusion aiming at facade lattice extraction in PS and optical data
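
    The lattice estimation itself (cross correlation plus a genetic algorithm) is beyond the scope of a short snippet, but the following toy Python sketch shows the underlying idea of recovering the repetition period of a regular facade pattern from image intensities via autocorrelation; the function and the minimum-lag parameter are hypothetical and only hint at the first stage of such a pipeline.

    import numpy as np

    def estimate_horizontal_period(facade_gray: np.ndarray, min_lag: int = 5) -> int:
        """Dominant horizontal repetition period (in pixels) of a regular
        facade, estimated from a grayscale image crop (toy example)."""
        profile = facade_gray.mean(axis=0)        # collapse rows to a 1D column profile
        profile = profile - profile.mean()        # zero-mean for correlation
        acf = np.correlate(profile, profile, mode="full")[profile.size - 1:]
        acf = acf / acf[0]                        # normalize by the zero-lag value
        return int(np.argmax(acf[min_lag:]) + min_lag)  # first strong repeat beyond min_lag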

    Monitoring Concepts for Coastal Areas Using LiDAR Data

    SLAM for Indoor Mapping of Wide Area Construction Environments

    Simultaneous localization and mapping (SLAM), i.e., the reconstruction of the environment represented by a (3D) map together with concurrent pose estimation, has made astonishing progress. Meanwhile, large-scale applications aiming at data collection in complex environments like factory halls or construction sites are becoming feasible. However, in contrast to small-scale scenarios with building interiors separated into single rooms, shop floors or construction areas require measurements at larger distances in potentially textureless areas under difficult illumination. Pose estimation is further aggravated since, as is usual for such indoor applications, no GNSS measurements are available. In our work, we realize data collection in a large factory hall by a robot system equipped with four stereo cameras as well as a 3D laser scanner. We apply our state-of-the-art LiDAR and visual SLAM approaches and discuss the respective pros and cons of the different sensor types for trajectory estimation and dense map generation in such an environment. Additionally, dense and accurate depth maps are generated by 3D Gaussian splatting, which we plan to use in the context of our project aiming at automatic construction and site monitoring

    Land subsidence hazard in Iran revealed by country-scale analysis of Sentinel-1 InSAR

    Many areas across Iran are subject to land subsidence, a sign of excessive stress due to the over-extraction of groundwater during the past decades. This paper uses a large dataset of Sentinel-1 imagery, acquired since 2014 in 66 image frames of 250 × 250 km, to identify and monitor land subsidence across Iran. Using a two-step time series analysis, we first identify subsidence zones at a medium resolution of 100 m across the country. For the first time, our results provide a comprehensive nationwide map of subsidence in Iran and characterize its spatial distribution and magnitude. In the second step of the analysis, we quantify the deformation time series at the highest possible resolution to study its impact on civil infrastructure. The results highlight the hazard posed by land subsidence to different types of infrastructure. Examples of roads and railways affected by land subsidence in Tehran and Mashhad, two of the most populated cities in Iran, are presented in this study
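
    The abstract does not spell out the estimator, but a minimal sketch of the kind of per-pixel rate estimation behind such subsidence maps is a least-squares fit of a linear velocity to an InSAR line-of-sight displacement time series; the function below and its example numbers are purely illustrative and not taken from the study.

    import numpy as np

    def los_velocity(dates_years: np.ndarray, displacements_mm: np.ndarray) -> float:
        """Least-squares linear velocity in mm/yr from a displacement series."""
        A = np.vstack([dates_years, np.ones_like(dates_years)]).T
        velocity, _offset = np.linalg.lstsq(A, displacements_mm, rcond=None)[0]
        return velocity

    # Made-up example series: roughly -250 mm/yr of line-of-sight subsidence.
    t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])              # years since first acquisition
    d = np.array([0.0, -120.0, -255.0, -370.0, -500.0])  # LOS displacement in mm
    print(f"estimated rate: {los_velocity(t, d):.0f} mm/yr")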

    Exploring cloud-based platforms for rapid InSAR time series analysis

    The idea of near real-time deformation analysis using Synthetic Aperture Radar (SAR) data as a response to natural and anthropogenic disasters has been an interesting topic in recent years. A major limiting factor for this purpose has been the lack of spatially and temporally homogeneous SAR datasets. This has now been resolved thanks to the SAR data provided by the Sentinel-1A/B missions, freely available at a global scale via the Copernicus program of the European Space Agency (ESA). Efficient InSAR analysis in the Sentinel era demands working with cloud-based platforms to tackle the problems posed by large volumes of data. In this study, we explore a variety of existing cloud-based platforms for Multi-Temporal Interferometric SAR (MTI) analysis and discuss their opportunities and limitations

    Refinement type contracts for verification of scientific investigative software

    Our scientific knowledge is increasingly built on software output. User code which defines data analysis pipelines and computational models is essential for research in the natural and social sciences, but little is known about how to ensure its correctness. The structure of this code and the development process used to build it limit the utility of traditional testing methodology. Formal methods for software verification have seen great success in ensuring code correctness but generally require more specialized training, development time, and funding than is available in the natural and social sciences. Here, we present a Python library which uses lightweight formal methods to provide correctness guarantees without the need for specialized knowledge or substantial time investment. Our package provides runtime verification of function entry and exit condition contracts using refinement types. It allows checking hyperproperties within contracts and offers automated test case generation to supplement online checking. We co-developed our tool with a medium-sized (≈3000 LOC) software package which simulates decision-making in cognitive neuroscience. In addition to helping us locate trivial bugs earlier in the development cycle, our tool was able to locate four bugs which may have been difficult to find using traditional testing methods. It was also able to find bugs in user code which did not contain contracts or refinement type annotations. This demonstrates how formal methods can be used to verify the correctness of scientific software which is difficult to test with mainstream approaches
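
    The paper's own library and API are not reproduced here; the Python sketch below only illustrates, under that caveat, what runtime entry/exit contracts with refinement-type-style predicates can look like, using a hypothetical contract decorator.

    from functools import wraps
    from typing import Callable

    def contract(pre: Callable[..., bool], post: Callable[..., bool]):
        """Check a precondition on the arguments and a postcondition on the
        result at every call (illustrative decorator, not the paper's API)."""
        def decorator(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                assert pre(*args, **kwargs), f"precondition violated in {fn.__name__}"
                result = fn(*args, **kwargs)
                assert post(result), f"postcondition violated in {fn.__name__}"
                return result
            return wrapper
        return decorator

    @contract(pre=lambda xs: len(xs) > 0,          # refinement: non-empty input
              post=lambda m: 0.0 <= m <= 1.0)      # refinement: mean stays in [0, 1]
    def normalized_mean(xs):
        """Mean of values assumed to lie in [0, 1]."""
        return sum(xs) / len(xs)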

    Planck 2015 results. XIV. Dark energy and modified gravity

    We study the implications of Planck data for models of dark energy (DE) and modified gravity (MG), beyond the cosmological constant scenario. We start with cases where the DE only directly affects the background evolution, considering Taylor expansions of the equation of state, principal component analysis and parameterizations related to the potential of a minimally coupled DE scalar field. When estimating the density of DE at early times, we significantly improve present constraints. We then move to general parameterizations of the DE or MG perturbations that encompass both effective field theories and the phenomenology of gravitational potentials in MG models. Lastly, we test a range of specific models, such as k-essence, f(R) theories and coupled DE. In addition to the latest Planck data, for our main analyses we use baryonic acoustic oscillations, type-Ia supernovae and local measurements of the Hubble constant. We further show the impact of measurements of the cosmological perturbations, such as redshift-space distortions and weak gravitational lensing. These additional probes are important tools for testing MG models and for breaking degeneracies that are still present in the combination of Planck and background data sets. All results that include only background parameterizations are in agreement with ΛCDM. When testing models that also change perturbations (even when the background is fixed to ΛCDM), some tensions appear in a few scenarios: the maximum one found is ~2σ for Planck TT+lowP when parameterizing observables related to the gravitational potentials with a chosen time dependence; the tension increases to at most 3σ when external data sets are included. It however disappears when including CMB lensing
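
    As an illustration of the first class of parameterizations mentioned above, the standard first-order Taylor expansion of the dark-energy equation of state in the scale factor a (the widely used linear form, assumed here for concreteness) reads

        w(a) = w_0 + w_a (1 - a),

    where ΛCDM is recovered for w_0 = -1 and w_a = 0.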