The agricultural impact of the 2015–2016 floods in Ireland as mapped through Sentinel 1 satellite imagery
Peer-reviewed | Irish Journal of Agricultural and Food Research | Volume 58, Issue 1
R. O’Hara, S. Green and T. McCarthy
DOI: https://doi.org/10.2478/ijafr-2019-0006 | Published online: 11 Oct 2019
Abstract
The capability of Sentinel 1 C-band (5 cm wavelength) synthetic aperture radar (SAR) for flood mapping is demonstrated, and this approach is used to map the extent of the extensive floods that occurred throughout the Republic of Ireland in the winter of 2015–2016. Thirty-three Sentinel 1 images were used to map the area and duration of floods over a 6-mo period from November 2015 to April 2016. Flood maps for 11 separate dates charted the development and persistence of floods nationally. The maximum flood extent during this period was estimated to be ~24,356 ha. Flood magnitude was influenced by rainfall depth in the preceding 5 d and, to a lesser degree, over more extended periods. Reduced photosynthetic activity on farms affected by flooding was observed in Landsat 8 vegetation index difference images compared with the previous spring. The accuracy of the flood map was assessed against reports of flooding from affected farms, as well as against other satellite-derived maps from the Copernicus Emergency Management Service and Sentinel 2. Monte Carlo simulated elevation data (20 m resolution, 2.5 m root mean square error [RMSE]) were used to estimate flood depth and volume. Although the modelled flood height showed a strong correlation with measured river heights, differences of several metres were observed. Future mapping strategies are discussed, including high-temporal-resolution soil moisture data as part of an integrated multisensor approach to flood response over a range of spatial scales.
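Open water acts as a smooth reflector and appears dark in SAR imagery, so the core of this kind of flood mapping can be sketched as a threshold on backscatter. The threshold value, array names, and pixel size below are illustrative assumptions, not the study's calibrated parameters:

```python
import numpy as np

def map_flood(backscatter_db, threshold_db=-18.0):
    """Classify pixels as water where SAR backscatter (in dB) falls
    below a threshold; -18 dB is an illustrative value, not the
    paper's calibration."""
    return backscatter_db < threshold_db

def flooded_area_ha(water_mask, pixel_size_m=10.0):
    """Convert a boolean water mask to hectares (Sentinel 1 GRD
    products are typically resampled to 10 m pixels)."""
    return water_mask.sum() * (pixel_size_m ** 2) / 10_000.0

# Toy 3x3 scene with one dark (flooded) pixel at -22 dB.
scene = np.array([[-10.0, -9.0, -8.5],
                  [-22.0, -11.0, -9.5],
                  [-10.5, -9.8, -12.0]])
mask = map_flood(scene)
print(flooded_area_ha(mask))  # one 10 m pixel -> 0.01 ha
```

Per-date masks like this, stacked over the 11 acquisition dates, would give the duration maps the abstract describes.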
Decomposition-based and Interference Perception for Infrared and Visible Image Fusion in Complex Scenes
Infrared and visible image fusion has emerged as a prominent research area in
computer vision. However, little attention has been paid to fusion in complex
scenes, so existing techniques produce sub-optimal results when subjected to
real-world interference. To fill this gap, we propose a decomposition-based,
interference-aware image fusion method. Specifically, we classify the pixels of
the visible image by the degree of scattering of light transmission, and on
that basis separate the detail and energy information of the image. This
refined decomposition helps the proposed model identify more interfering pixels
in complex scenes. To strike a balance between denoising and detail
preservation, we propose an adaptive denoising scheme for fusing the detail
components. Meanwhile, we propose a new weighted fusion rule that considers the
distribution of image energy information from multiple directions. Extensive
experiments on complex-scene fusion covering adverse weather, noise, blur,
overexposure, and fire, as well as downstream tasks including semantic
segmentation, object detection, salient object detection and depth estimation,
consistently indicate the effectiveness and superiority of the proposed method
compared with recent representative methods.
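The abstract's detail/energy split belongs to the broad family of two-scale decomposition fusion. As a generic illustration of that family (not the authors' scattering-based scheme), a base/detail split with averaged energy and max-absolute detail selection can be sketched:

```python
import numpy as np

def box_blur(img, k=3):
    """Separable mean filter used as a crude low-pass 'energy' estimate."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def fuse(ir, vis):
    """Generic two-scale fusion: average the low-frequency (energy)
    parts, keep the stronger of the two high-frequency (detail) parts."""
    base_ir, base_vis = box_blur(ir), box_blur(vis)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    fused_base = 0.5 * (base_ir + base_vis)
    fused_detail = np.where(np.abs(det_ir) >= np.abs(det_vis),
                            det_ir, det_vis)
    return fused_base + fused_detail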
Is it Raining Outside? Detection of Rainfall using General-Purpose Surveillance Cameras
In integrated surveillance systems based on visual cameras, the mitigation of
adverse weather conditions is an active research topic. Within this field, rain
removal algorithms have been developed that artificially remove rain streaks
from images or video. In order to deploy such rain removal algorithms in a
surveillance setting, one must detect if rain is present in the scene. In this
paper, we design a system for the detection of rainfall by the use of
surveillance cameras. We reimplement the former state-of-the-art method for
rain detection and compare it against a modern CNN-based method that utilizes 3D
convolutions. The two methods are evaluated on our new AAU Visual Rain Dataset
(VIRADA) that consists of 215 hours of general-purpose surveillance video from
two traffic crossings. The results show that the proposed 3D CNN outperforms
the previous state-of-the-art method by a large margin on all metrics, for both
of the traffic crossings. Finally, it is shown that the choice of
region-of-interest has a large influence on performance when trying to
generalize the investigated methods. The AAU VIRADA dataset and our
implementation of the two rain detection algorithms are publicly available at
https://bitbucket.org/aauvap/aau-virada.
Comment: 10 pages, 7 figures, CVPR 2019 V4AS workshop.
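What distinguishes the 3D CNN from frame-by-frame 2D models is that its kernels slide over time as well as space, so they can respond to the transient streaks rain produces. The operation itself (not the paper's architecture) can be sketched in plain NumPy:

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive valid-mode 3D convolution (cross-correlation, as in deep
    learning) over a (T, H, W) video volume."""
    kt, kh, kw = kernel.shape
    T, H, W = volume.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = (volume[t:t+kt, i:i+kh, j:j+kw] * kernel).sum()
    return out

# A temporal-difference kernel responds to frame-to-frame change,
# the kind of cue a transient rain streak produces.
video = np.zeros((4, 5, 5))
video[2, 2, 2] = 1.0                       # a pixel bright in one frame only
kernel = np.zeros((2, 1, 1))
kernel[0, 0, 0], kernel[1, 0, 0] = -1.0, 1.0
response = conv3d(video, kernel)           # peaks where the pixel changes
```

In a trained network such kernels are learned rather than hand-set, and stacked with nonlinearities; this toy version only shows why the temporal dimension matters.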
Driving in the Rain: A Survey toward Visibility Estimation through Windshields
Rain can significantly impair a driver’s sight and affect their performance when driving in wet conditions. Evaluation of driver visibility in harsh weather, such as rain, has garnered considerable research attention since the advent of autonomous vehicles and the emergence of intelligent transportation systems. In recent years, advances in computer vision and machine learning have led to a significant number of new approaches to address this challenge. However, the literature is fragmented and should be reorganised and analysed to make progress in this field. There is still no comprehensive survey article that summarises driver visibility methodologies, including classic and recent data-driven/model-driven approaches on the windshield in rainy conditions, and compares their generalisation performance fairly. Most advanced driver assistance system (ADAS) and autonomous driving (AD) systems are based on object detection. Thus, rain visibility plays a key role in the efficiency of ADAS/AD functions used in semi- or fully autonomous driving. This study fills this gap by reviewing current state-of-the-art solutions in rain visibility estimation used to reconstruct the driver’s view for object detection-based autonomous driving. These solutions are classified as rain visibility estimation systems that work on (1) the perception components of the ADAS/AD function, (2) the control and other hardware components of the ADAS/AD function, and (3) the visualisation and other software components of the ADAS/AD function. Limitations and unsolved challenges are also highlighted for further research.
WEATHER LORE VALIDATION TOOL USING FUZZY COGNITIVE MAPS BASED ON COMPUTER VISION
Published Thesis
The creation of scientific weather forecasts is troubled by many technological challenges (Stern
& Easterling, 1999), while their utilization is generally dismal. Consequently, the majority of
small-scale farmers in Africa continue to consult some forms of weather lore to reach various
cropping decisions (Baliscan, 2001). Weather lore is a body of informal folklore (Enock, 2013),
associated with the prediction of the weather, and based on indigenous knowledge and human
observation of the environment. As such, it tends to be more holistic, and more localized to the
farmers’ context. However, weather lore has limitations; for instance, it is unable to offer
forecasts beyond a season. Different types of weather lore exist, utilizing almost all available
human senses (feel, smell, sight and hearing). Out of all the types of weather lore in existence, it
is the visual or observed weather lore that is mostly used by indigenous societies, to come up
with weather predictions.
On the other hand, meteorologists continue to treat this knowledge as superstition, partly because
there is no means to scientifically evaluate and validate it. The visualization and characterization
of visual sky objects (such as the moon, clouds, stars, and rainbows) in forecasting weather are
significant subjects of research. To realize the integration of visual weather lore in modern
weather forecasting systems, there is a need to represent and scientifically substantiate this form
of knowledge. This research was aimed at developing a method for verifying visual weather lore that is used by
traditional communities to predict weather conditions. To realize this verification, fuzzy
cognitive mapping was used to model and represent causal relationships between selected visual
weather lore concepts and weather conditions. The traditional knowledge used to produce these
maps was attained through case studies of two communities (in Kenya and South Africa). These
case studies were aimed at understanding the weather lore domain as well as the causal effects
between metrological and visual weather lore. In this study, common astronomical weather lore
factors related to cloud physics were identified as: bright stars, dispersed clouds, dry weather,
dull stars, feathery clouds, gathering clouds, grey clouds, high clouds, layered clouds, low
clouds, stars, medium clouds, and rounded clouds. Relationships between the concepts were also
identified and formally represented using fuzzy cognitive maps.
On implementing the verification tool, machine vision was used to recognize sky objects
captured using a sky camera, while pattern recognition was employed in benchmarking and
scoring the objects. A wireless weather station was used to capture real-time weather parameters.
The visualization tool was then designed and realized in a form of software artefact, which
integrated both computer vision and fuzzy cognitive mapping for experimenting visual weather
lore, and verification using various statistical forecast skills and metrics. The tool consists of four
main sub-components: (1) machine vision, which recognizes sky objects using support vector
machine classifiers with shape-based feature descriptors; (2) pattern recognition, to benchmark
and score objects using pixel orientations, Euclidean distance, Canny edge detection, and the
grey-level co-occurrence matrix; (3) fuzzy cognitive mapping, used to represent knowledge (the
active Hebbian learning algorithm was used to learn until convergence); and (4) a statistical
computing component used for verification and forecast skill metrics, including the Brier score
and contingency tables for deterministic forecasts.
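The heart of the fuzzy-cognitive-mapping component is the iterative activation update: each concept's next activation is a squashed, weighted sum of the activations of its causes, iterated until the map settles. A minimal sketch, with an illustrative two-concept map rather than the thesis's learned weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fcm_simulate(weights, activations, steps=50, tol=1e-6):
    """Iterate the standard FCM update A(t+1) = f(A(t) + W^T A(t))
    until the concept-state vector converges. weights[i, j] is the
    causal influence of concept i on concept j, in [-1, 1]."""
    a = np.asarray(activations, dtype=float)
    for _ in range(steps):
        a_next = sigmoid(a + weights.T @ a)
        if np.max(np.abs(a_next - a)) < tol:
            return a_next
        a = a_next
    return a

# Toy map: concept 0 ('gathering clouds') strongly promotes
# concept 1 ('rain'); the 0.8 weight is purely illustrative.
W = np.array([[0.0, 0.8],
              [0.0, 0.0]])
state = fcm_simulate(W, [1.0, 0.0])   # rain activation settles above 0.5
```

In the thesis, the vision component would supply the initial activations (presence of sky objects) and the converged state would be read off as the weather-outcome prediction.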
Rigorous evaluation of the verification tool was carried out using independent (not used in the
training and testing phases) real-time images from Bloemfontein, South Africa, and Voi, Kenya.
The real-time images were captured using a sky camera with GPS location services. The results
of the implementation were tested for the selected weather conditions (for example, rain, heat, cold, and dry conditions), and found to be acceptable (the verified prediction accuracies were
over 80%). The recommendation in this study is to apply the implemented method for processing
tasks, towards verifying all other types of visual weather lore. In addition, the use of the method
developed also requires the implementation of modules for processing and verifying other types
of weather lore, such as sounds and symbols of nature.
Since time immemorial, from Australia to Asia and from Africa to Latin America, local communities have
continued to rely on weather lore observations to predict seasonal weather as well as its effects
on their livelihoods (Alcock, 2014). This is mainly based on many years of personal experiences
in observing weather conditions. However, when it comes to predictions for longer lead-times
(i.e. over a season), weather lore is uncertain (Hornidge & Antweiler, 2012). This uncertainty has
partly contributed to the current status where meteorologists and other scientists continue to treat
weather lore as superstition (United-Nations, 2004), and not capable of predicting weather.
One of the problems in testing confidence in weather lore as a predictor of weather is the
wide variety of weather lore found in the details of indigenous sayings, which are
tightly coupled to locality and pattern variations (Oviedo et al., 2008). This traditional knowledge
is entrenched within the day-to-day socio-economic activities of the communities using it and is
not globally available for comparison and validation (Huntington, Callaghan, Fox, & Krupnik,
2004). Further, this knowledge is based on local experience that lacks benchmarking techniques;
so that harmonizing and integrating it within the science-based weather forecasting systems is a
daunting task (Hornidge & Antweiler, 2012). It is partly for this reason that the question of
validation of weather lore has not yet been substantially investigated. Sufficiently expanded
processes of gathering weather observations, combined with comparison and validation, can produce useful information. Since forecasting weather accurately is a challenge even with
the latest supercomputers (BBC News Magazine, 2013), validated weather lore can be useful if it
is incorporated into modern weather prediction systems.
Validation of traditional knowledge is a necessary step in building integrated
knowledge-based systems. Traditional knowledge incorporated into knowledge-based systems
has to be verified for enhancing systems’ reliability. Weather lore knowledge exists in different
forms as identified by traditional communities; hence it needs to be tied together for comparison
and validation. The development of a weather lore validation tool that can integrate a framework
for acquiring weather data and methods of representing the weather lore in verifiable forms can
be a significant step in the validation of weather lore against actual weather records using
conventional weather-observing instruments. The success of validating weather lore could
stimulate the opportunity for integrating acceptable weather lore with modern systems of weather prediction to improve actionable information for decision making that relies on seasonal weather
prediction.
In this study a hybrid method is developed that includes computer vision and fuzzy cognitive
mapping techniques for verifying visual weather lore. The verification tool was designed with
forecasting based on mimicking visual perception, and fuzzy thinking based on the cognitive
knowledge of humans. The method provides meaning to humanly perceivable sky objects so that
computers can understand, interpret, and approximate visual weather outcomes.
Questionnaires were administered in two case study locations (KwaZulu-Natal province in South
Africa, and Taita-Taveta County in Kenya), between the months of March and July 2015. The
two case studies were conducted by interviewing respondents on how visual astronomical and
meteorological weather concepts cause weather outcomes. The two case studies were used to
identify causal effects of visual astronomical and meteorological objects to weather conditions.
This was followed by finding variations and comparisons between the visual weather lore
knowledge in the two case studies. The results from the two case studies were aggregated in
terms of seasonal knowledge. The causal links between visual weather concepts were
investigated using these two case studies; results were compared and aggregated to build up
common knowledge. The joint averages of the majority of responses from the case studies were determined for each set of interacting concepts.
The modelling of the weather lore verification tool consists of input, processing, and output
components. The input data to the system are sky image scenes and actual weather observations from
wireless weather sensors. The image recognition component performs three sub-tasks, including:
detection of objects (concepts) from image scenes, extraction of detected objects, and
approximation of the presence of the concepts by comparing extracted objects to ideal objects.
The prediction process involves the use of approximated concepts generated in the recognition
component to simulate scenarios using the knowledge represented in the fuzzy cognitive maps.
The verification component evaluates the variation between the predictions and actual weather
observations to determine prediction errors and accuracy.
To evaluate the tool, daily system simulations were run to predict and record probabilities of
weather outcomes (i.e. rain, heat index/hotness, dry, cold index). Weather observations were
captured periodically using a wireless weather station. This process was repeated several times until there was sufficient data to use for the verification process. To match the range of the
predicted weather outcomes, the actual weather observations (measurements) were transformed
and normalized to the range [0, 1]. In the verification process, comparisons were made between the
actual observations and weather outcome prediction values by computing residuals (error values)
from the observations. The error values and the squared error were used to compute the Mean
Squared Error (MSE), and the Root Mean Squared Error (RMSE), for each predicted weather
outcome.
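The verification arithmetic described above — min–max normalising observations to [0, 1] to match the predicted probabilities, then computing MSE and RMSE over the residuals — can be sketched as follows (the sample values are illustrative, not the thesis's data):

```python
import math

def min_max_normalise(values):
    """Rescale raw observations to [0, 1] so they are comparable
    with predicted weather-outcome probabilities."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def mse_rmse(predicted, observed):
    """Mean squared error and its root over paired residuals."""
    residuals = [p - o for p, o in zip(predicted, observed)]
    mse = sum(r * r for r in residuals) / len(residuals)
    return mse, math.sqrt(mse)

# Illustrative numbers: three raw observations and three predictions.
obs = min_max_normalise([12.0, 30.0, 21.0])   # -> [0.0, 1.0, 0.5]
mse, rmse = mse_rmse([0.1, 0.8, 0.5], obs)
```

One such MSE/RMSE pair would be computed per predicted weather outcome (rain, heat index, dry, cold index).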
Finally, the validity of the visual weather lore verification model was assessed using data from a
different geographical location. Actual data in the form of daily sky scenes and weather
parameters were acquired from Voi, Kenya, from December 2015 to January 2016. The results on
the use of hybrid techniques for the verification of weather lore are expected to provide an incentive
for integrating indigenous knowledge on weather with modern numerical weather prediction
systems for accurate and downscaled weather forecasts.
A Deep Learning approach for monitoring severe rainfall in urban catchments using consumer cameras. Models development and deployment on a case study in Matera (Italy)
In the last 50 years, flooding has been the most frequent and widespread natural disaster globally. Extreme precipitation events stemming from climate change could alter the hydro-geological regime, resulting in increased flood risk. Near real-time precipitation monitoring at the local scale is essential for flood risk mitigation in urban and suburban areas, due to their high vulnerability. Presently, most rainfall data is obtained from ground-based measurements or remote sensing, which provide limited information in terms of temporal or spatial resolution; high costs pose further problems. Furthermore, rain gauges are unevenly spread and usually placed away from urban centers. In this context, great potential lies in the use of innovative techniques to develop low-cost monitoring systems. Despite the diversity of purposes, methods and epistemological fields, the literature on the visual effects of rain supports the idea of camera-based rain sensors but tends to be device-specific. The present thesis aims to investigate the use of easily available photographic devices as rain detector-gauges, to develop a dense network of low-cost rainfall sensors that support traditional methods with an expeditious solution embeddable into smart devices. As opposed to existing works, the study focuses on maximizing the number of image sources (smartphones, general-purpose surveillance cameras, dashboard cameras, webcams, digital cameras, etc.). This encompasses cases where it is not possible to adjust the camera parameters or obtain shots in timelines or videos. Using a Deep Learning approach, rainfall characterization can be achieved through the analysis of the perceptual aspects that determine whether and how a photograph represents a rainy condition.
The first scenario of interest for the supervised learning was a binary classification; the binary output (presence or absence of rain) allows the detection of precipitation: the cameras act as rain detectors. Similarly, the second scenario of interest was a multi-class classification; the multi-class output describes a range of quasi-instantaneous rainfall intensity: the cameras act as rain estimators. Using Transfer Learning with Convolutional Neural Networks, the developed models were compiled, trained, validated, and tested. The preparation of the classifiers included assembling a suitable dataset encompassing unconstrained, verisimilar settings: open data, several datasets owned by the National Research Institute for Earth Science and Disaster Prevention (NIED) (dashboard cameras in Japan coupled with high-precision multi-parameter radar data), and experimental activities conducted in the NIED Large Scale Rainfall Simulator. The outcomes were applied to a real-world scenario, with experimentation through a pre-existing surveillance camera using 5G connectivity provided by Telecom Italia S.p.A. in the city of Matera (Italy). The analysis unfolded on several levels, providing an overview of generic issues relating to the urban flood risk paradigm and of specific territorial questions inherent in the case study. These include the context aspects, the important role of rainfall, from driving the city's millennial urban evolution to determining present criticalities, and the components of a Web prototype for flood risk communication at the local scale. The results and the model deployment raise the possibility that low-cost technologies and local capacities can help retrieve rainfall information for flood early warning systems based on the identification of a significant meteorological state. The binary model reached accuracy and F1 score values of 85.28% and 0.86 for the test, and 83.35% and 0.82 for the deployment.
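The reported figures follow the standard definitions of accuracy and F1 (and, for the multi-class model, the macro average of per-class F1 scores). A minimal sketch on toy labels (the label names are illustrative):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Per-class F1 (harmonic mean of precision and recall),
    averaged with equal weight per class."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Illustrative labels, not the thesis's data.
y_true = ["rain", "rain", "dry", "dry"]
y_pred = ["rain", "dry", "dry", "dry"]
acc = accuracy(y_true, y_pred)   # 3 of 4 correct -> 0.75
f1 = macro_f1(y_true, y_pred)
```

Macro averaging weights every intensity class equally, which is why the 6-way and 5-class figures quoted below are sensitive to the rarer heavy-rain classes.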
The multi-class model reached a test average accuracy and macro-averaged F1 score of 77.71% and 0.73 for the 6-way classifier, and 78.05% and 0.81 for the 5-class one. The best performances were obtained in heavy-rainfall and no-rain conditions, whereas mispredictions related to less severe precipitation. The proposed method has limited operational requirements and can be easily and quickly implemented in real use cases, exploiting pre-existing devices with a parsimonious use of economic and computational resources. The classification can be performed on single photographs taken in disparate conditions by commonly used acquisition devices, i.e. by static or moving cameras without adjusted parameters. This approach is especially useful in urban areas where measurement methods such as rain gauges encounter installation difficulties or operational limitations, or in contexts where remote sensing data are unavailable. The system does not suit scenes that are misleading even for human visual perception. The approximations inherent in the output are acknowledged. Additional data may be gathered to address apparent gaps and improve the accuracy of the precipitation intensity prediction. Future research might explore integration with further field experiments and crowdsourced data, to promote communication, participation, and dialogue among stakeholders and to increase public awareness, emergency response, and civic engagement through the smart community idea.
In Arcadia: Landscape filming in a toxic wasteland. Game engine affordances and post-game narratives
Videogames, whether immersive simulations or abstract puzzlers, impose their own set of internal logics upon the player. If the player decides to transgress or subvert the rules or normal behaviour without directly affecting the software system itself through modification or hacking, these same internal logics still shape the player’s, or ‘subvertor’s’, behaviour; the videogame has its own set of affordances, the properties of an artefact or system that influence interaction.
- …