
    FLOOD-WATER LEVEL ESTIMATION FROM SOCIAL MEDIA IMAGES

    In the event of a flood, being able to build accurate flood-level maps is essential for supporting emergency operations. Building such maps requires collecting observations from the disaster area. Social media platforms can be useful sources of information in this case, as people located in the flood area tend to share text and pictures depicting the current situation. Developing an effective, fully automated method that retrieves data from social media and extracts useful information in real time is crucial for a quick and proper response to these catastrophic events. In this paper, we propose a method to quantify flood-water levels from images gathered from social media. If no prior information about the zone where the picture was taken is available, one possible way to estimate the flood level is to assess how far the objects appearing in the image are submerged in water. Several factors make this task difficult: i) the precise size of the objects appearing in the image might not be known; ii) flood-water appearing in different zones of the image scene might have different heights; iii) objects may be only partially visible, as they can be submerged in water. To solve these problems, we propose a method that first locates selected classes of objects whose sizes are approximately known, and then leverages this property to estimate the water level. To prove the validity of this approach, we first build a flood-water image dataset and then use it to train a deep learning model. We finally show the ability of our trained model to recognize objects and, at the same time, correctly predict the flood-water level.
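
    The core idea of the abstract, inferring water level from how far known-size objects are submerged, reduces to simple arithmetic once a detector reports each object's visible fraction. The sketch below is purely illustrative: the object classes, reference heights, and detection tuples are invented, not taken from the paper's model.

```python
# Hypothetical sketch: estimate local flood level from partially submerged
# objects whose real-world heights are roughly known. Reference heights and
# detections are illustrative assumptions, not the paper's actual values.

REFERENCE_HEIGHT_M = {   # approximate full heights of common street objects
    "car": 1.5,
    "person": 1.7,
    "traffic_sign": 2.5,
}

def water_level_from_object(obj_class, visible_fraction):
    """Submerged height = full height * (1 - visible fraction)."""
    full = REFERENCE_HEIGHT_M[obj_class]
    return full * (1.0 - visible_fraction)

def estimate_flood_level(detections):
    """Average the per-object estimates; since water height can vary across
    the scene, a robust aggregate such as the median would also be sensible."""
    levels = [water_level_from_object(cls, frac) for cls, frac in detections]
    return sum(levels) / len(levels)

# A detector might report: (class, fraction of the object still visible)
detections = [("car", 0.5), ("traffic_sign", 0.7), ("person", 0.6)]
print(round(estimate_flood_level(detections), 2))  # mean submerged height in metres
```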

    Writing in Britain and Ireland, c. 400 to c. 800

    No abstract available

    Domain Adaptation for Semantic Segmentation of Historical Panchromatic Orthomosaics in Central Africa

    Multitemporal environmental and urban studies are essential to guide policy making and ultimately improve human wellbeing in the Global South. Land-cover products derived from historical aerial orthomosaics acquired decades ago can provide important evidence to inform long-term studies. To reduce the manual labelling effort by human experts and to scale to large, meaningful regions, we investigate in this study how domain adaptation techniques and deep learning can help to efficiently map land cover in Central Africa. We propose and evaluate a methodology based on unsupervised adaptation to reduce the cost of generating reference data for several cities and across different dates. We present the first application of domain adaptation based on fully convolutional networks for semantic segmentation of a dataset of historical panchromatic orthomosaics for land-cover generation, for the two focus cities of Goma-Gisenyi and Bukavu. Our experimental evaluation shows that the domain adaptation methods can reach an overall accuracy between 60% and 70% for different regions. Adding a small amount of labelled data from the target domain yields further performance gains.
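
    A minimal flavor of unsupervised domain adaptation, far simpler than the fully convolutional approach the abstract evaluates, is to align the first- and second-order statistics of target-domain pixels with the source domain before running a model trained on the source. The sketch below is a generic baseline under that assumption; the intensity values are synthetic.

```python
# Cheap unsupervised adaptation step: shift/scale target-domain pixel values
# so their mean and standard deviation match the source domain. This is a
# generic statistics-matching baseline, not the paper's FCN-based method.
import statistics

def match_statistics(target_pixels, source_pixels):
    """Standardize target values with target stats, rescale with source stats."""
    t_mu, t_sd = statistics.mean(target_pixels), statistics.pstdev(target_pixels)
    s_mu, s_sd = statistics.mean(source_pixels), statistics.pstdev(source_pixels)
    return [(p - t_mu) / t_sd * s_sd + s_mu for p in target_pixels]

source = [50, 60, 70, 80, 90]        # e.g. panchromatic intensities, source city
target = [120, 130, 140, 150, 160]   # brighter acquisition, target city
aligned = match_statistics(target, source)
print(round(statistics.mean(aligned), 1), round(statistics.pstdev(aligned), 1))
```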

    Robust Damage Estimation of Typhoon Goni on Coconut Crops with Sentinel-2 Imagery

    Typhoon Goni crossed several provinces in the Philippines where agriculture has high socioeconomic importance, including the three provinces with the most planted coconut trees. We have used a computational model to infer coconut tree density from satellite images before and after the typhoon’s passage, and in this way estimate the number of damaged trees. Our area of study around the typhoon’s path covers 15.7 Mha and includes 47 of the 87 provinces in the Philippines. In validation areas our model predicts coconut tree density with a Mean Absolute Error of 5.9 trees/ha. In Camarines Sur we estimated that 3.5 M of the 4.6 M existing coconut trees were damaged by the typhoon. Overall we estimated that 14.1 M coconut trees were affected by the typhoon inside our area of study. Our validation images confirm that trees are rarely uprooted and damages are largely due to reduced canopy cover of standing trees. On validation areas, our model was able to detect affected coconut trees with 88.6% accuracy, 75% precision and 90% recall. Our method delivers spatially fine-grained change maps for coconut plantations in the area of study, including unchanged, damaged and new trees. Beyond immediate damage assessment, gradual changes in coconut density may serve as a proxy for future changes in yield.
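
    The arithmetic behind such a damage estimate is straightforward: compare modelled tree density before and after the event over a known area, and threshold per-pixel density changes into a change map. The densities and threshold below are invented for illustration; only the headline figures in the abstract (e.g. 3.5 M of 4.6 M trees in Camarines Sur) come from the paper.

```python
# Illustrative damage arithmetic: density drop * area, plus a simple
# per-pixel change classification. All numbers here are made up.

def damaged_trees(density_before, density_after, area_ha):
    """Trees lost = density drop (trees/ha) * area (ha), floored at zero."""
    return max(0.0, density_before - density_after) * area_ha

def classify_change(before, after, threshold=5.0):
    """Per-pixel change map label: unchanged / damaged / new plantation."""
    if after < before - threshold:
        return "damaged"
    if after > before + threshold:
        return "new"
    return "unchanged"

print(damaged_trees(100.0, 24.0, 1000.0))  # 76000.0 trees lost over 1000 ha
print(classify_change(100.0, 24.0))        # damaged
```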

    Deep-doLCE. A deep learning approach for the color reconstruction of digitized lenticular film

    Some of the first home movies in color were shot on 16 mm lenticular film during the 1920s to 1940s. This very special film is embossed with a vertical array of hundreds of tiny cylindrical lenses that allowed color scenes to be recorded on a black-and-white silver emulsion. The most efficient approach to obtaining digital color images from these historical motion pictures is to scan the silver emulsion at high resolution and let software extract the encoded color information. The present work focuses on the localization of the lenticular screen, which is the first and most complicated step of the color reconstruction. A ‘classic’ signal processing method proved to deliver successful results in some cases, but adverse factors such as damaged or warped film and scanning problems often hinder the successful localization of the lenticular screen. Deep-doLCE explores a more advanced and robust method, using a large dataset of digitized lenticular films to train a new deep learning model. The aim is to create an easy-to-use software tool that revives awareness of the lenticular color processes, making these precious historical color movies available to the public again and securing them for posterity.
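
    In the ‘classic’ signal-processing setting the abstract mentions, locating the periodic lenticular raster amounts to finding the dominant period in a row of scanned pixel intensities, for instance via autocorrelation. The sketch below demonstrates this on a synthetic sinusoidal row; it is a toy stand-in, not the doLCE implementation.

```python
# Toy periodicity detector: the lag with the highest autocorrelation is
# an estimate of the lenticular lens pitch in pixels. Synthetic input only.
import math

def dominant_period(signal, min_lag=2, max_lag=None):
    """Return the lag in [min_lag, max_lag] maximizing the autocorrelation."""
    n = len(signal)
    max_lag = max_lag or n // 2
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    def ac(lag):
        return sum(centered[i] * centered[i + lag] for i in range(n - lag))
    return max(range(min_lag, max_lag + 1), key=ac)

row = [math.sin(2 * math.pi * i / 8) for i in range(200)]  # period-8 raster
print(dominant_period(row))  # 8
```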

    Modeling of Residual GNSS Station Motions through Meteorological Data in a Machine Learning Approach

    Long-term Global Navigation Satellite System (GNSS) height residual time series contain signals related to environmental influences. A large part of the residuals can be explained by environmental surface loadings, expressed through physical models. This work aims to find a model that connects raw meteorological parameters with the GNSS residuals. The approach is to train a Temporal Convolutional Network (TCN) on 206 GNSS stations in central Europe and then apply the resulting model to 68 test stations in the same area. Comparing the Root Mean Square (RMS) error reduction achieved by the physical models and by the TCN model, the latter's reduction rate is, on average, 0.8% lower. In a second experiment, the TCN is used to further reduce the RMS of the time series from which the loading models were already subtracted. This yields an additional 2.7% of RMS reduction on average, resulting in a mean overall RMS reduction of 28.6%. The results suggest that a TCN using meteorological features as input data is able to reconstruct the reductions at almost the same level as the physical models. Trained on the residuals with environmental loadings removed, the TCN is still able to slightly increase the overall reduction of variations in the GNSS station position time series.
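
    The RMS-reduction metric used to compare the physical models against the TCN measures how much a correction shrinks the scatter of a residual series. A minimal sketch with synthetic numbers (the residual values below are invented, not from the study):

```python
# RMS reduction: percentage by which a correction shrinks the RMS of a
# residual time series. The series here are synthetic placeholders.
import math

def rms(series):
    return math.sqrt(sum(x * x for x in series) / len(series))

def rms_reduction_pct(raw, corrected):
    """100 * (1 - RMS(corrected) / RMS(raw))."""
    return (1.0 - rms(corrected) / rms(raw)) * 100.0

raw       = [4.0, -3.0, 5.0, -4.0, 2.0]   # e.g. GNSS height residuals (mm)
corrected = [2.0, -1.5, 2.5, -2.0, 1.0]   # after subtracting a model
print(round(rms_reduction_pct(raw, corrected), 1))  # 50.0
```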

    Digital taxonomist: Identifying plant species in community scientists’ photographs

    Automatic identification of plant specimens from amateur photographs could improve species range maps, thus supporting ecosystem research as well as conservation efforts. However, classifying plant specimens based on image data alone is challenging: some species exhibit large variations in visual appearance, while different species are often visually similar; additionally, species observations follow a highly imbalanced, long-tailed distribution due to differences in abundance as well as observer biases. On the other hand, most species observations are accompanied by side information about the spatial, temporal and ecological context. Moreover, biological species are not an unordered list of classes but are embedded in a hierarchical taxonomic structure. We propose a multimodal deep learning model that takes these additional cues into account in a unified framework. Our Digital Taxonomist is able to identify plant species in photographs better than a classifier trained on the image content alone, with an accuracy gain of over 6 percentage points.
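
    One simple way to exploit the side information the abstract mentions is Bayes-style late fusion: multiply the image classifier's scores by a prior over which species occur at the observation's location, then renormalize. The species names and probabilities below are invented, and the paper's actual model learns this fusion end-to-end in a deep network rather than applying it as a post-hoc step.

```python
# Hypothetical late fusion of image scores with a spatial occurrence prior.
# Species names and all probabilities are illustrative assumptions.

def fuse(image_scores, location_prior):
    """Multiply per-species scores by the location prior and renormalize."""
    raw = {sp: image_scores[sp] * location_prior.get(sp, 0.0) for sp in image_scores}
    total = sum(raw.values())
    return {sp: v / total for sp, v in raw.items()}

image_scores   = {"quercus_robur": 0.45, "quercus_petraea": 0.40, "fagus_sylvatica": 0.15}
location_prior = {"quercus_robur": 0.10, "quercus_petraea": 0.60, "fagus_sylvatica": 0.30}

fused = fuse(image_scores, location_prior)
print(max(fused, key=fused.get))  # the prior flips the decision to quercus_petraea
```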