Land subsidence hazard in Iran revealed by country-scale analysis of Sentinel-1 InSAR
Many areas across Iran are subject to land subsidence, a sign of excessive stress due to the over-extraction of groundwater during the past decades. This paper uses a large dataset of Sentinel-1 imagery, acquired since 2014 in 66 image frames of 250 × 250 km, to identify and monitor land subsidence across Iran. Using a two-step time series analysis, we first identify subsidence zones at a medium resolution of 100 m across the country. For the first time, our results provide a comprehensive nationwide map of subsidence in Iran, revealing its spatial distribution and magnitude. In the second step of the analysis, we quantify the deformation time series at the highest possible resolution to study its impact on civil infrastructure. The results highlight the hazard posed by land subsidence to different types of infrastructure. Examples of roads and railways affected by land subsidence in Tehran and Mashhad, two of the most populated cities in Iran, are presented in this study.
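Deformation maps of this kind rest on the standard conversion from unwrapped interferometric phase to line-of-sight displacement, d = -λφ/(4π). A minimal sketch of that conversion follows; the function name is illustrative, the sign convention varies between processors, and this is not the paper's actual processing chain:

```python
import math

# Sentinel-1 operates in C band; wavelength is approximately 5.55 cm.
C_BAND_WAVELENGTH_M = 0.0555

def phase_to_los_displacement(phi_rad, wavelength_m=C_BAND_WAVELENGTH_M):
    """Convert unwrapped interferometric phase (radians) to
    line-of-sight displacement (metres) via d = -lambda * phi / (4 * pi).
    Sign convention (motion towards vs. away from the sensor) is
    processor-dependent; this sketch uses one common choice."""
    return -wavelength_m * phi_rad / (4.0 * math.pi)

# One full phase cycle (2*pi radians) corresponds to half a wavelength
# of line-of-sight motion, i.e. about 2.8 cm for Sentinel-1.
d = phase_to_los_displacement(2.0 * math.pi)
print(abs(d))
```

The half-wavelength-per-fringe relation is what makes C-band InSAR sensitive to centimetre-scale subsidence over the revisit intervals mentioned above.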
Exploring cloud-based platforms for rapid InSAR time series analysis
The idea of near real-time deformation analysis using Synthetic Aperture Radar (SAR) data in response to natural and anthropogenic disasters has been a topic of great interest in recent years. A major limiting factor for this purpose has been the unavailability of spatially and temporally homogeneous SAR datasets. This has now been resolved thanks to the SAR data provided by the Sentinel-1A/B missions, freely available at a global scale via the Copernicus programme of the European Space Agency (ESA). Efficient InSAR analysis in the Sentinel era demands working with cloud-based platforms to tackle the problems posed by large volumes of data. In this study, we explore a variety of existing cloud-based platforms for Multi-Temporal Interferometric SAR (MTI) analysis and discuss their opportunities and limitations.
Refinement type contracts for verification of scientific investigative software
Our scientific knowledge is increasingly built on software output. User code which defines data analysis pipelines and computational models is essential for research in the natural and social sciences, but little is known about how to ensure its correctness. The structure of this code and the development process used to build it limit the utility of traditional testing methodology. Formal methods for software verification have seen great success in ensuring code correctness but generally require more specialized training, development time, and funding than is available in the natural and social sciences. Here, we present a Python library which uses lightweight formal methods to provide correctness guarantees without the need for specialized knowledge or substantial time investment. Our package provides runtime verification of function entry and exit condition contracts using refinement types. It allows checking hyperproperties within contracts and offers automated test case generation to supplement online checking. We co-developed our tool with a medium-sized (3000 LOC) software package which simulates decision-making in cognitive neuroscience. In addition to helping us locate trivial bugs earlier in the development cycle, our tool was able to locate four bugs which may have been difficult to find using traditional testing methods. It was also able to find bugs in user code which did not contain contracts or refinement type annotations. This demonstrates how formal methods can be used to verify the correctness of scientific software which is difficult to test with mainstream approaches.
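As an illustration of the general idea of runtime entry/exit contract checking with refinement-style predicates, not the paper's actual library or API, a minimal sketch in Python (all names here are hypothetical):

```python
from functools import wraps

def contract(pre=None, post=None):
    """Hypothetical decorator checking an entry condition on the
    arguments and an exit condition on the return value at call time.
    A refinement type is expressed here as a plain predicate."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise ValueError(f"precondition violated in {fn.__name__}")
            result = fn(*args, **kwargs)
            if post is not None and not post(result):
                raise ValueError(f"postcondition violated in {fn.__name__}")
            return result
        return wrapper
    return decorate

# Refinement on entry: a non-empty list of non-negative numbers.
# Refinement on exit: a non-negative result.
@contract(pre=lambda xs: len(xs) > 0 and all(x >= 0 for x in xs),
          post=lambda r: r >= 0)
def mean_rate(xs):
    return sum(xs) / len(xs)

print(mean_rate([1.0, 2.0, 3.0]))  # 2.0
```

Calling `mean_rate([])` trips the precondition at runtime instead of raising a less informative `ZeroDivisionError` deep inside the function, which is the kind of early, localized failure the abstract describes.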
Planck 2015 results. XIV. Dark energy and modified gravity
We study the implications of Planck data for models of dark energy (DE) and modified gravity (MG), beyond the cosmological constant scenario. We start with cases where the DE only directly affects the background evolution, considering Taylor expansions of the equation of state, principal component analysis and parameterizations related to the potential of a minimally coupled DE scalar field. When estimating the density of DE at early times, we significantly improve present constraints. We then move to general parameterizations of the DE or MG perturbations that encompass both effective field theories and the phenomenology of gravitational potentials in MG models. Lastly, we test a range of specific models, such as k-essence, f(R) theories and coupled DE. In addition to the latest Planck data, for our main analyses we use baryonic acoustic oscillations, type-Ia supernovae and local measurements of the Hubble constant. We further show the impact of measurements of the cosmological perturbations, such as redshift-space distortions and weak gravitational lensing. These additional probes are important tools for testing MG models and for breaking degeneracies that are still present in the combination of Planck and background data sets. All results that include only background parameterizations are in agreement with LCDM. When testing models that also change perturbations (even when the background is fixed to LCDM), some tensions appear in a few scenarios: the maximum found is ~2 sigma for Planck TT+lowP when parameterizing observables related to the gravitational potentials with a chosen time dependence; the tension increases to at most 3 sigma when external data sets are included. It disappears, however, when CMB lensing is included.
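For reference, the first-order Taylor expansion of the equation of state mentioned above is the widely used CPL form, w(a) = w0 + wa(1 - a), which admits a standard closed-form density evolution. A minimal sketch of these textbook formulas (not the Planck analysis pipeline) follows:

```python
import math

def w_cpl(a, w0=-1.0, wa=0.0):
    """CPL equation of state: first-order Taylor expansion of w
    in (1 - a) around the present day (a = 1)."""
    return w0 + wa * (1.0 - a)

def rho_de_ratio(a, w0=-1.0, wa=0.0):
    """rho_DE(a) / rho_DE(a=1) for the CPL parameterization:
    a^{-3(1 + w0 + wa)} * exp(-3 wa (1 - a)), the standard
    closed-form integral of the continuity equation."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

# A cosmological constant (w0 = -1, wa = 0) gives constant DE density
# at any scale factor, recovering the LCDM limit.
print(rho_de_ratio(0.5))  # 1.0
```

Any departure of (w0, wa) from (-1, 0) makes the DE density evolve with the scale factor, which is what the background-only constraints in the abstract test against LCDM.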
The association between adverse pregnancy outcomes and maternal human papillomavirus infection: a systematic review protocol
- …