Flood dynamics derived from video remote sensing
Flooding is by far the most pervasive natural hazard, and the human impacts of floods are expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, can potentially be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a valuable opportunity to enhance the predictive capacity of these models.
Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated against observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, in which river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate, contextually informed flood segmentation, this chapter exhibits the potential value of satellite video for validating two-dimensional hydraulic model simulations. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographic data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is then used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows represents a methodological advance, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications for flood modelling science.
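To illustrate the discharge estimation step described above, the following is a minimal sketch of the standard velocity-area method that converts image-velocimetry surface velocities and a surveyed cross-section into a discharge estimate. It is not the thesis code; the station/depth values and the 0.85 velocity index (ratio of depth-averaged to surface velocity) are commonly assumed illustrative defaults.

```python
# Minimal sketch of the velocity-area method used with image-velocimetry
# surface velocities. All numbers below are illustrative assumptions.
import numpy as np

def discharge_velocity_area(stations, depths, surface_velocities, alpha=0.85):
    """Estimate discharge Q (m^3/s) across a surveyed section.

    stations           -- cross-channel positions of verticals (m)
    depths             -- water depth at each vertical (m)
    surface_velocities -- LSPIV/UAV-derived surface speeds (m/s)
    alpha              -- velocity index: depth-averaged / surface velocity
    """
    v_mean = alpha * np.asarray(surface_velocities)
    widths = np.gradient(np.asarray(stations))   # panel width at each vertical (m)
    areas = widths * np.asarray(depths)          # panel cross-section areas (m^2)
    return float(np.sum(areas * v_mean))         # mid-section summation

# Example: five verticals across a 10 m wide channel
q = discharge_velocity_area(
    stations=[0.0, 2.5, 5.0, 7.5, 10.0],
    depths=[0.2, 0.8, 1.1, 0.9, 0.3],
    surface_velocities=[0.1, 0.6, 0.9, 0.7, 0.2],
)
print(f"Q ~= {q:.2f} m^3/s")
```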
Arctic tundra shrubification can obscure increasing levels of soil erosion in NDVI assessments of land cover derived from satellite imagery
This research was supported by the St Andrews World Leading Scholarship. Monitoring soil erosion in the Arctic tundra is complicated by the highly fragmented nature of the landscape and the limited spatial resolution of even high-resolution satellite data. The expansion of shrubs across the Arctic has led to substantial changes in vegetation composition that alter spectral reflectance and directly affect vegetation indices such as the normalized difference vegetation index (NDVI), which is widely applied for environmental monitoring. This change can mask soil erosion if datasets with too coarse a spatial resolution are used, as increases in NDVI driven by shrub expansion can obscure concurrent increases in barren land cover. Here we created land cover maps from multispectral uncrewed aerial vehicle (UAV) imagery and a land cover survey, and assessed satellite imagery from PlanetScope, Sentinel-2 and Landsat-8 for several areas in north-eastern Iceland. Additionally, we used a novel application of the Shannon evenness index (SHEI) to evaluate levels of pixel mixing. Our results show that shrub expansion can lead to spectral confusion, which can obscure soil erosion processes, and emphasize the importance of considering spatial resolution when monitoring highly fragmented landscapes. We demonstrate that remote sensing data with a resolution < 3 m greatly improves the amount of information captured in an Icelandic tundra environment. The spatial resolution of Landsat data (30 m) is inadequate for environmental monitoring in our study area. We found that the best platform for monitoring tundra land cover is Sentinel-2, used in combination with multispectral UAV acquisitions for validation. Our study has the potential to improve environmental monitoring capabilities by introducing the use of SHEI to assess pixel mixing and determine optimal spatial resolutions. This approach, combined with comparing remote sensing imagery across spatial and temporal scales, significantly advances our comprehension of land cover changes, including greening and soil degradation, in the Arctic tundra.
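Both indices named in this abstract have compact closed forms. Below is a minimal sketch, not the authors' code, of NDVI computed from red/NIR bands and of the Shannon evenness index computed from land-cover class proportions inside one coarse satellite pixel; the class counts in the example are invented for illustration.

```python
# Minimal sketch: NDVI per pixel, and SHEI as a measure of pixel mixing.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

def shannon_evenness(class_counts):
    """SHEI = H / ln(S): ~0 for a pure pixel, ~1 for a maximally mixed one.

    class_counts -- per-class cover counts (e.g. UAV-mapped sub-pixels)
    inside one coarse pixel; S is the number of classes present.
    """
    p = np.asarray(class_counts, float)
    p = p[p > 0] / p.sum()
    if p.size <= 1:
        return 0.0
    H = -np.sum(p * np.log(p))          # Shannon diversity
    return float(H / np.log(p.size))    # normalize by maximum diversity

# A 30 m Landsat pixel containing UAV-mapped shrub, graminoid and barren cover:
print(shannon_evenness([450, 350, 200]))  # ~0.95 -> heavily mixed pixel
```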
Self-supervised learning for transferable representations
Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process: it is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable shift in recent times toward approaches that leverage only raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. We then focus on the computer vision modality, performing a comprehensive benchmark evaluation of state-of-the-art self-supervised models across a diverse set of downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks transition beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalise to real-world transformations. This begins to explain the differing empirical performances achieved by self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation. Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
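The contrastive learners discussed above are typically trained with an InfoNCE-style objective. The following is a minimal numpy sketch of the NT-Xent loss used by methods such as SimCLR; it is illustrative only and not the thesis implementation, and the batch size, embedding dimension and temperature are arbitrary assumptions.

```python
# Minimal sketch of the NT-Xent (InfoNCE) contrastive objective:
# two augmented views of the same image are pulled together, all
# other pairs in the batch act as negatives.
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """z1, z2 -- (N, D) embeddings of two views of the same N images."""
    z = np.concatenate([z1, z2])                       # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine-similarity space
    sim = z @ z.T / tau                                # temperature-scaled sims
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return float(-log_prob.mean())

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 32))
z2 = z1 + 0.05 * rng.normal(size=(8, 32))  # a "second view" close to the first
print(nt_xent(z1, z2))  # much lower than for unrelated embeddings
```

The augmentation trade-off studied in the thesis enters through how z1 and z2 are produced: stronger spatial augmentations (crops) versus appearance augmentations (colour jitter) bake different invariances into the learned representation.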
A novel segmentation approach for crop modeling using a plenoptic light-field camera: going from 2D to 3D
Crop phenotyping is a valuable task in crop characterization, since it allows the farmer to make early decisions and therefore be more productive.
This research is motivated by the development of tools for rice crop phenotyping within the OMICAS research ecosystem framework. It proposes implementing image processing and artificial intelligence techniques through a multisensory approach with multispectral information. Three main stages are covered: (i) a segmentation approach that identifies the biological material associated with plants, whose main contribution is the GFKuts segmentation approach; (ii) a strategy for sensory fusion between three different cameras (a 3D camera, an infrared multispectral camera, and a thermal multispectral camera), developed through a complex object detection approach; and (iii) the characterization of a 4D model that generates topological relationships from the point cloud information, whose main contribution is the improvement of the point cloud captured by the 3D sensor; in this sense, this stage improves the acquisition of any 3D sensor.
This research presents a development that receives information from multiple sensors, especially infrared 2D sensors, and generates a single 4D model in geometric space [X, Y, Z]. This model integrates the color information of 5 channels with topological information relating the points in space. Overall, the research allows the integration of 3D information from any sensor/technology with the multispectral channels from any multispectral camera, to generate direct, non-invasive measurements on the plant.
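The fusion step described above amounts to projecting 3D points into a calibrated 2D multispectral image so each point picks up extra channels. Below is a minimal sketch of that idea under a standard pinhole camera model; it is not the GFKuts pipeline, and the function name and calibration inputs are illustrative assumptions.

```python
# Minimal sketch: attach multispectral channels to a point cloud by
# projecting each 3D point into a calibrated 2D camera image.
import numpy as np

def colorize_points(points, image, K, R, t):
    """points -- (N, 3) XYZ in world coordinates
    image  -- (H, W, C) multispectral image (e.g. C = 5 channels)
    K      -- (3, 3) camera intrinsic matrix
    R, t   -- rotation (3, 3) and translation (3,), world -> camera
    Returns (N, 3 + C): XYZ plus sampled channels (NaN if off-image).
    """
    cam = points @ R.T + t                 # world -> camera frame
    uv = cam @ K.T                         # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]            # normalize by depth
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w, c = image.shape
    valid = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    channels = np.full((len(points), c), np.nan)
    channels[valid] = image[v[valid], u[valid]]
    return np.hstack([points, channels])
```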
Review on Automatic Variable-Rate Spraying Systems Based on Orchard Canopy Characterization
Pesticide consumption and environmental pollution in orchards can be greatly decreased by combining variable-rate spray treatments with proportional control systems. Variable-rate canopy spraying allows farmers to apply crop protection chemicals only where they are required, providing environmentally friendly and cost-effective plant protection. At the same time, restricting the use of pesticides as Plant Protection Products (PPP) while maintaining adequate canopy deposition is a serious challenge. Automatic sprayers that adjust their application rates to the size and shape of orchard plantings have shown significant potential for reducing pesticide use, and existing research applies artificial intelligence and machine learning to automate spraying. Spraying efficiency can also be increased by lowering spray losses from ground deposition and off-target drift. This study therefore provides a thorough examination of existing variable-rate spraying techniques in orchards. In addition to providing examples of their predictions and briefly addressing the influences on spraying parameters, it presents various alternatives for avoiding pesticide overuse and explores their advantages and disadvantages.
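As a concrete illustration of the proportional control idea behind such systems, the sketch below scales nozzle flow to the canopy volume treated per second. It is a generic example, not any of the reviewed systems; the dose coefficient and flow limits are placeholder assumptions.

```python
# Minimal sketch of proportional variable-rate control: nozzle flow is
# scaled to sensed canopy volume. All coefficients are assumptions.
def section_flow_rate(canopy_width_m, canopy_height_m, forward_speed_ms,
                      dose_l_per_m3=0.05, min_flow=0.0, max_flow=2.0):
    """Return nozzle flow (L/min) for one boom section.

    Canopy cross-section area times travel speed gives canopy volume
    treated per second; multiplying by a dose rate gives liquid flow.
    """
    volume_per_s = canopy_width_m * canopy_height_m * forward_speed_ms  # m^3/s
    flow_l_min = dose_l_per_m3 * volume_per_s * 60.0
    return max(min_flow, min(flow_l_min, max_flow))  # clamp to nozzle limits

# A sensed 1.2 m wide, 2.0 m tall canopy, sprayer travelling at 1.5 m/s:
print(section_flow_rate(1.2, 2.0, 1.5))  # 2.0 L/min (clamped at nozzle max)
```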
Airborne Drones for Water Quality Mapping in Inland, Transitional and Coastal Waters-MapEO Water Data Processing and Validation
Using airborne drones to monitor water quality in inland, transitional or coastal surface waters is an emerging research field. Airborne drones can fly under clouds at preferred times, capturing data at cm resolution and filling a significant gap between existing in situ, airborne and satellite remote sensing capabilities. Suitable drones and lightweight cameras are readily available on the market, whereas deriving water quality products from the captured images is not straightforward: vignetting effects, georeferencing, the dynamic nature and high light absorption efficiency of water, and sun glint and sky glint effects all require careful data processing. This paper presents the data processing workflow behind MapEO Water, an end-to-end cloud-based solution that deals with the complexities of observing water surfaces and retrieves water-leaving reflectance and water quality products such as turbidity and chlorophyll-a (Chl-a) concentration. MapEO Water supports common camera types and performs geometric and radiometric corrections with subsequent conversion to reflectance and water quality products. This study shows validation results of water-leaving reflectance, turbidity and Chl-a maps derived using DJI Phantom 4 Pro and MicaSense cameras for several lakes across Europe. Coefficient of determination values of 0.71 and 0.93 are obtained for turbidity and Chl-a, respectively. We conclude that airborne drone data has major potential to be embedded in operational monitoring programmes and can form useful links between satellite and in situ observations.
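To make the retrieval chain concrete, here is a minimal sketch of two of the steps named above: removing reflected sky light to obtain water-leaving reflectance, and a single-band turbidity retrieval. The air-water interface factor (~0.028, after Mobley 1999) and the generic red-band coefficients of Nechad et al. (2009) are commonly used literature values, not MapEO Water's calibrated parameters.

```python
# Minimal sketch of glint correction and single-band turbidity retrieval,
# using generic literature coefficients (not the MapEO Water calibration).
import numpy as np

def water_leaving_reflectance(rho_surface, rho_sky, rho_f=0.028):
    """rho_w = rho_surface - rho_f * rho_sky, per band.

    rho_surface -- at-sensor reflectance of the water surface
    rho_sky     -- measured/modelled sky reflectance
    rho_f       -- air-water interface reflectance factor (assumed ~0.028)
    """
    return np.asarray(rho_surface) - rho_f * np.asarray(rho_sky)

def turbidity_from_red(rho_w_red, a=228.1, c=0.1641):
    """Nechad et al. (2009) form: T = a * rho_w / (1 - rho_w / c),
    with generic red-band (~645 nm) coefficients, turbidity in FNU."""
    return a * rho_w_red / (1.0 - rho_w_red / c)

print(turbidity_from_red(0.02))  # ~5.2 FNU for a moderately turbid lake
```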
Scalable Exploration of Complex Objects and Environments Beyond Plain Visual Replication
Digital multimedia content and presentation technologies are rapidly increasing in sophistication and are now capable of describing detailed representations of the physical world. 3D exploration experiences allow people to appreciate, understand and interact with intrinsically virtual objects.
Communicating information about objects requires the ability to explore them from different angles, and to mix highly photorealistic or illustrative presentations of the objects themselves with further data that provides additional insight, typically represented in the form of annotations. Effectively providing these capabilities requires solving important problems in visualization and user interaction.
In this thesis, I studied these problems in the cultural heritage computing domain, focusing on the very common and important special case of mostly planar but visually, geometrically, and semantically rich objects. These include roughly flat objects with a standard frontal viewing direction (e.g., paintings, inscriptions, bas-reliefs), as well as visualizations of fully 3D objects from a particular point of view (e.g., canonical views of buildings or statues). Selecting a precise application domain and a specific presentation mode allowed me to concentrate on the well-defined use case of exploring annotated relightable stratigraphic models (in particular, for local and remote museum presentation).
My main results and contributions to the state of the art have been a novel technique for interactively controlling visualization lenses while automatically maintaining good focus-and-context parameters, a novel approach for avoiding clutter in an annotated model and for guiding users towards interesting areas, and a method for structuring audio-visual object annotations into a graph and for using that graph to improve guidance and support storytelling and automated tours.
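As a rough illustration of the third contribution, the sketch below structures annotations as a linked graph and walks it to produce an ordered tour. It is a minimal sketch, not the thesis implementation; all names and fields are invented for the example.

```python
# Minimal sketch: object annotations as a graph, walked to build a
# storytelling tour. Structure and field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    ident: str
    region: tuple          # (x, y, w, h) anchor region on the object
    narration: str         # audio-visual content played when visited
    links: list = field(default_factory=list)  # ids of related annotations

def tour(annotations, start, max_stops=10):
    """Depth-first walk over annotation links -> ordered guided tour."""
    by_id = {a.ident: a for a in annotations}
    visited, order, stack = set(), [], [start]
    while stack and len(order) < max_stops:
        ident = stack.pop()
        if ident in visited:
            continue
        visited.add(ident)
        order.append(by_id[ident])
        stack.extend(reversed(by_id[ident].links))  # keep authored link order
    return order

stops = tour([
    Annotation("overview", (0, 0, 100, 60), "The whole relief...", ["detail1"]),
    Annotation("detail1", (10, 5, 20, 15), "Upper-left scene...", []),
], start="overview")
print([s.ident for s in stops])  # ['overview', 'detail1']
```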
We demonstrated the effectiveness and potential of our techniques by performing interactive exploration sessions on various screen sizes and types ranging from desktop devices to large-screen displays for a walk-up-and-use museum installation.
KEYWORDS - Computer Graphics, Human-Computer Interaction, Interactive Lenses, Focus-and-Context, Annotated Models, Cultural Heritage Computing