FLOOD-WATER LEVEL ESTIMATION FROM SOCIAL MEDIA IMAGES
In the event of a flood, being able to build accurate flood-level maps is essential for supporting emergency response operations. In order to build such maps, it is important to collect observations from the disaster area. Social media platforms can be useful sources of information in this case, as people located in the flood area tend to share text and pictures depicting the current situation. Developing an effective and fully automated method able to retrieve data from social media and extract useful information in real time is crucial for a quick and proper response to these catastrophic events. In this paper, we propose a method to quantify flood water from images gathered from social media. If no prior information about the zone where the picture was taken is available, one possible way to estimate the flood level is to assess how far the objects appearing in the image are submerged in water. Several factors make this task difficult: i) the precise size of the objects appearing in the image might not be known; ii) flood water appearing in different zones of the image scene might have different heights; iii) objects may be only partially visible, as they can be submerged in water. To address these problems, we propose a method that first locates selected classes of objects whose sizes are approximately known and then leverages this property to estimate the water level. To prove the validity of this approach, we first build a flood-water image dataset and then use it to train a deep learning model. We finally show the ability of our trained model to recognize objects and, at the same time, correctly predict the flood-water level.
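The core geometric idea described above can be sketched in a few lines. This is an illustrative simplification, not the authors' implementation: the class names, reference heights, and the notion of a detector-supplied "visible fraction" are all hypothetical placeholders.

```python
# Hypothetical sketch: infer water depth from how much of an object of
# roughly known height is submerged. Heights below are illustrative guesses.
REFERENCE_HEIGHTS_M = {"person": 1.7, "car": 1.5, "bicycle": 1.0}

def estimate_water_level(obj_class, visible_fraction):
    """Estimate water depth (m) covering an object of approximately known height.

    visible_fraction: portion of the object's full height still above water,
    e.g. as judged from a detector's bounding box versus the expected extent.
    """
    full_height = REFERENCE_HEIGHTS_M[obj_class]
    return full_height * (1.0 - visible_fraction)

def estimate_scene_level(detections):
    """Average per-object estimates; water height may vary across the scene."""
    levels = [estimate_water_level(cls, frac) for cls, frac in detections]
    return sum(levels) / len(levels)
```

In practice the paper trains a deep model end to end, but this back-of-the-envelope geometry conveys why objects of approximately known size make the problem tractable.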
Domain Adaptation for Semantic Segmentation of Historical Panchromatic Orthomosaics in Central Africa
Multitemporal environmental and urban studies are essential to guide policy making and ultimately improve human wellbeing in the Global South. Land-cover products derived from historical aerial orthomosaics acquired decades ago can provide important evidence to inform long-term studies. To reduce the manual labelling effort by human experts and to scale to large, meaningful regions, we investigate in this study how domain adaptation techniques and deep learning can help to efficiently map land cover in Central Africa. We propose and evaluate a methodology based on unsupervised adaptation to reduce the cost of generating reference data for several cities and across different dates. We present the first application of domain adaptation based on fully convolutional networks for semantic segmentation of a dataset of historical panchromatic orthomosaics for land-cover generation for two focus cities, Goma-Gisenyi and Bukavu. Our experimental evaluation shows that the domain adaptation methods can reach an overall accuracy between 60% and 70% for different regions. Adding a small amount of labelled data from the target domain yields further performance gains.
Robust Damage Estimation of Typhoon Goni on Coconut Crops with Sentinel-2 Imagery
Typhoon Goni crossed several provinces in the Philippines where agriculture has high socioeconomic importance, including the top three provinces in terms of planted coconut trees. We used a computational model to infer coconut tree density from satellite images before and after the typhoon's passage, and in this way estimated the number of damaged trees. Our area of study around the typhoon's path covers 15.7 Mha and includes 47 of the 87 provinces in the Philippines. In validation areas, our model predicts coconut tree density with a Mean Absolute Error of 5.9 trees/ha. In Camarines Sur, we estimated that 3.5 M of the 4.6 M existing coconut trees were damaged by the typhoon. Overall, we estimated that 14.1 M coconut trees were affected by the typhoon inside our area of study. Our validation images confirm that trees are rarely uprooted and that damage is largely due to reduced canopy cover of standing trees. On validation areas, our model was able to detect affected coconut trees with 88.6% accuracy, 75% precision and 90% recall. Our method delivers spatially fine-grained change maps for coconut plantations in the area of study, distinguishing unchanged, damaged and new trees. Beyond immediate damage assessment, gradual changes in coconut density may serve as a proxy for future changes in yield.
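The change-mapping step described above amounts to differencing predicted tree densities before and after the event and thresholding the result. A minimal sketch, assuming gridded density rasters and a hypothetical change threshold in trees/ha (both the grid and the threshold value are illustrative, not taken from the paper):

```python
import numpy as np

# Illustrative sketch (not the authors' pipeline): classify per-cell change
# in predicted coconut-tree density before vs. after the typhoon.
def change_map(density_before, density_after, threshold=2.0):
    """Label each grid cell: -1 damaged (density drop), +1 new trees, 0 unchanged.

    threshold: hypothetical minimum density change (trees/ha) to count as change.
    """
    diff = np.asarray(density_after, float) - np.asarray(density_before, float)
    labels = np.zeros(diff.shape, dtype=int)
    labels[diff <= -threshold] = -1   # damaged / reduced canopy
    labels[diff >= threshold] = 1     # newly detected trees
    return labels
```

The interesting modelling work is of course in producing reliable density maps from Sentinel-2 imagery; the differencing itself is deliberately simple so the change labels remain interpretable.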
Deep-doLCE: A deep learning approach for the color reconstruction of digitized lenticular film
Some of the first home movies in color were shot on 16 mm lenticular film from the 1920s to the 1940s. This very special film is embossed with a vertical array of hundreds of tiny cylindrical lenses that allowed color scenes to be recorded on a black-and-white silver emulsion. The most efficient approach to obtaining digital color images from these historical motion pictures is to scan the silver emulsion in high resolution and let software extract the encoded color information. The present work focuses on the localization of the lenticular screen, which is the first and most complicated step of the color reconstruction. A 'classic' signal processing method proved to deliver successful results in some cases, but adverse factors (damaged or warped film, scanning problems) often hinder the successful localization of the lenticular screen. Deep-doLCE explores a more advanced and robust method, using a large dataset of digitized lenticular films to train a new deep learning model. The aim is to create an easy-to-use software tool that revives awareness of the lenticular color processes, thus making these precious historical color movies available to the public again and securing them for posterity.
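To make the 'classic' signal-processing baseline concrete: since the lenticular screen is a periodic vertical pattern, its pitch appears as a peak in the Fourier spectrum of the row-averaged intensity profile. The sketch below is a hedged illustration of that general idea, not the doLCE code; the function name and the simple peak-picking are assumptions.

```python
import numpy as np

# Illustrative sketch of a frequency-domain approach to lenticular-screen
# localization: find the dominant horizontal period of the lens array.
def estimate_lens_pitch(image):
    """Return the estimated lens pitch in pixels from a 2-D grayscale scan."""
    profile = np.asarray(image, float).mean(axis=0)   # average over rows
    profile = profile - profile.mean()                # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size)
    peak = np.argmax(spectrum[1:]) + 1                # skip the zero-frequency bin
    return 1.0 / freqs[peak]                          # period in pixels
```

This is exactly the kind of method that fails on warped or damaged film, where the pattern is no longer strictly periodic, which motivates the learned approach the abstract describes.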
Modeling of Residual GNSS Station Motions through Meteorological Data in a Machine Learning Approach
Long-term Global Navigation Satellite System (GNSS) height residual time series contain signals that are related to environmental influences. A large part of the residuals can be explained by environmental surface loadings, expressed through physical models. This work aims to find a model that connects raw meteorological parameters with the GNSS residuals. The approach is to train a Temporal Convolutional Network (TCN) on 206 GNSS stations in central Europe, after which the resulting model is applied to 68 test stations in the same area. When comparing the Root Mean Square (RMS) error reduction of the time series achieved by the physical models with that achieved by the TCN model, the latter reduction rate is, on average, 0.8% lower. In a second experiment, the TCN is used to further reduce the RMS of the time series from which the loading models were already subtracted. This yields an additional 2.7% RMS reduction on average, resulting in a mean RMS reduction of 28.6% overall. The results suggest that a TCN using meteorological features as input data is able to reconstruct the reductions at almost the same level as the physical models. Trained on the residuals reduced by environmental loadings, the TCN is still able to slightly increase the overall reduction of variations in the GNSS station position time series.
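The evaluation metric used throughout this abstract, relative RMS reduction after subtracting a model's prediction from a residual series, can be written down in a few lines. This is a minimal sketch of the standard metric, not the authors' evaluation code:

```python
import math

# Minimal sketch of the RMS-reduction metric: how much smaller (in percent)
# the residual series becomes once a model's prediction is subtracted.
def rms(series):
    """Root mean square of a sequence of values."""
    return math.sqrt(sum(x * x for x in series) / len(series))

def rms_reduction(residuals, model_prediction):
    """Percent RMS reduction after subtracting the model from the residuals."""
    reduced = [r - m for r, m in zip(residuals, model_prediction)]
    return 100.0 * (1.0 - rms(reduced) / rms(residuals))
```

Under this metric, the abstract's numbers say the TCN applied on top of the loading models lifts the mean reduction to 28.6% overall.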
Metal ion release from fine particulate matter sampled in the Po Valley to an aqueous solution mimicking fog water: Kinetics and solubility
Metals are among the key aerosol components exerting adverse health effects. Their toxic properties may vary depending on their chemical form and solubility, which can be affected by aqueous processing during aerosol atmospheric lifetime. In this work, fine particulate matter (PM2.5) was collected in the city centre of Padua in the Po Valley (Italy) during a winter campaign. Parts of the sampling filters were used to measure the kinetics by which metal ions and other elements can leach from PM2.5 into an aqueous solution mimicking winter fog water in temperate climate regions (pH 4.7, 5°C). The leaching process was interpreted with first-order kinetics, and fitting the experimental data allowed us to obtain the leaching kinetic constants and the equilibrium concentrations (i.e., at infinite time) for all elements. The remaining filter parts were mineralised through two subsequent extraction steps, and the extracts were analysed by ICP-MS to obtain the total elemental content of the PM for a large number of elements. We found that elements can leach from PM with half-times generally between 10 and 40 minutes, a timescale compatible with atmospheric aqueous processing during fog events. For instance, aluminium in PM2.5 dissolved with an average k = 0.0185 min-1 and t1/2 = 37.5 min. Nevertheless, a fraction of the elements was solubilised immediately after contact with the extraction solution, suggesting that metal ion solubilisation may have already started during the particles' lifetime in the atmosphere.
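The first-order kinetic model used to interpret the leaching data has a simple closed form, C(t) = C_eq (1 - exp(-k t)), with half-time t1/2 = ln(2)/k. The sketch below is a direct transcription of that model (the function names are my own), and it confirms that the aluminium figures quoted above are mutually consistent:

```python
import math

# First-order leaching model used to interpret the data:
#   C(t) = C_eq * (1 - exp(-k * t)),  with half-time t_1/2 = ln(2) / k.
def leached_fraction(k, t):
    """Fraction of the equilibrium concentration released after time t (min)."""
    return 1.0 - math.exp(-k * t)

def half_time(k):
    """Time (min) to reach half of the equilibrium concentration."""
    return math.log(2) / k

# Check against the abstract's aluminium values:
# k = 0.0185 min^-1 gives t_1/2 = ln(2) / 0.0185 ≈ 37.5 min.
```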
Digital taxonomist: Identifying plant species in community scientists' photographs
Automatic identification of plant specimens from amateur photographs could improve species range maps, thus supporting ecosystems research as well as conservation efforts. However, classifying plant specimens based on image data alone is challenging: some species exhibit large variations in visual appearance, while at the same time different species are often visually similar; additionally, species observations follow a highly imbalanced, long-tailed distribution due to differences in abundance as well as observer biases. On the other hand, most species observations are accompanied by side information about the spatial, temporal and ecological context. Moreover, biological species are not an unordered list of classes but are embedded in a hierarchical taxonomic structure. We propose a multimodal deep learning model that takes these additional cues into account in a unified framework. Our Digital Taxonomist is able to identify plant species in photographs better than a classifier trained on the image content alone, with a performance gain of over 6 percentage points in accuracy.
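One simple way to see why spatial context helps, separate from the paper's unified deep architecture, is a Bayes-style fusion of image-based class probabilities with a location prior such as known species ranges. This sketch is purely illustrative and is not the authors' model, which learns the fusion jointly:

```python
import numpy as np

# Illustrative sketch (not the Digital Taxonomist architecture): combine an
# image classifier's per-species probabilities with a spatial prior, e.g.
# species ranges, by elementwise multiplication and renormalisation.
def fuse_with_prior(image_probs, location_prior):
    """Reweight per-species image probabilities with a context prior."""
    combined = np.asarray(image_probs, float) * np.asarray(location_prior, float)
    return combined / combined.sum()
```

Even this crude reweighting breaks ties between visually similar species that occur in different regions, which is the intuition behind feeding spatial, temporal and ecological side information into the classifier.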