
    GPU-accelerated 3D visualisation and analysis of migratory behaviour of long lived birds

    As improvements in tagging technology increase the volume of data we collect, the methods we previously applied take ever longer to run. As we move forward, it is important that the methods we develop evolve with the data we collect. Maritime visualisation has already begun to leverage parallel processing to accelerate rendering. However, some of these techniques require distributed computing, which, while useful for datasets containing billions of points, is harder to deploy because of its hardware requirements. Here we show that movement ecology can also benefit significantly from parallel processing, using GPGPU acceleration to enable the use of a single workstation. With only minor adjustments, algorithms can be implemented in parallel, enabling computation to complete in real time. We demonstrate this by first implementing a GPGPU-accelerated visualisation of global environmental datasets. Using OpenGL and CUDA, it is possible to visualise a dataset containing over 25 million data points per timestamp and to swap between timestamps in 5 ms, allowing environmental context to be considered when visualising trajectories in real time. These can then be combined with other GPU-accelerated visualisation methods, such as aggregate flow diagrams, to explore large datasets interactively. We also apply GPGPU acceleration to the analysis of migratory data through parallel primitives. With these primitives we show that GPGPU acceleration lets researchers accelerate their workflows without needing to master the complexities of GPU programming, yielding computation times orders of magnitude faster than sequential CPU methods.
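The parallel-primitive workflow described above can be sketched as a sort-by-key followed by a segmented reduction, two classic primitives available in Thrust/CUB and largely mirrored by CuPy's NumPy-compatible API. The sketch below uses NumPy with synthetic data (the bird IDs and speeds are invented for illustration, not the study's data); swapping in CuPy would move the same calls to the GPU, though coverage of individual functions varies.

```python
import numpy as np

# Hypothetical trajectory data: one speed value per GPS fix, keyed by bird ID.
rng = np.random.default_rng(0)
n_points = 100_000
n_birds = 200
bird_id = rng.integers(0, n_birds, n_points)
speed = rng.random(n_points)

# Primitive 1: sort-by-key, grouping each bird's points into a contiguous run.
order = np.argsort(bird_id, kind="stable")
sorted_ids = bird_id[order]
sorted_speed = speed[order]

# Primitive 2: segmented reduction (reduce_by_key) over each bird's run,
# giving a per-bird sum in one pass instead of a Python loop per bird.
boundaries = np.flatnonzero(np.diff(sorted_ids)) + 1
segment_starts = np.concatenate(([0], boundaries))
segment_sums = np.add.reduceat(sorted_speed, segment_starts)
segment_sizes = np.diff(np.concatenate((segment_starts, [n_points])))
mean_speed = segment_sums / segment_sizes  # mean speed per bird
print(mean_speed.shape)
```

On a GPU, both primitives parallelise well: the sort is a radix sort across thousands of threads and the reduction is a tree reduction per segment, which is where the orders-of-magnitude speedups over sequential CPU loops come from.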

    Engineering a Low-Cost Remote Sensing Capability for Deep-Space Applications

    Systems engineering (SE) has been a useful tool for providing objective processes for breaking down complex technical problems into simpler tasks, while concurrently generating metrics to provide assurance that the solution is fit for purpose. Tailored forms of SE have also been used by cubesat mission designers to reduce risk by providing iterative feedback and key artifacts that give managers the evidence to adjust resources and tasking for success. Cubesat-sized spacecraft are being planned, built and, in some cases, flown to provide a lower-cost entry point for deep-space exploration. This is particularly important for agencies and countries with smaller space exploration budgets, where specific mission objectives can be used to develop tailored payloads within tighter constraints while still returning useful scientific results or engineering data. In this work, a tailored SE tradespace approach was used to determine how a six-unit (6U) cubesat could be built from commercial-off-the-shelf (COTS) components and undertake remote sensing missions near Mars or near-Earth asteroids. The primary purpose of these missions is to carry a hyperspectral sensor sensitive to 600-800 nm wavelengths (hereafter defined as "red-edge") that will investigate mineralogy characteristics commonly associated with oxidizing and hydrating environments. Minerals of this type remain of high interest as indicators of present or past habitability for life, or of active geologic processes. The implications of operating in a deep-space environment were considered as engineering constraints on the design, including the reduced availability of solar energy, changes in the thermal and background-radiation environment, and vastly increased communications distances. The engineering tradespace analysis identified realistic COTS options that could satisfy mission objectives for the 6U cubesat bus while accommodating a reasonable degree of risk.
The exception was the communication subsystem, for which suitable capability was restricted to a single option. This analysis was used to support a further trade investigation into the sensors most suitable for building the red-edge hyperspectral payload. The investigation was constrained by requiring not only readily available COTS sensors, but also affordability, particularly in a geopolitical environment affecting component supply surety and access to manufacturing facilities. A number of sensor options were found to be available for designing a useful instrument, although the rapid development cycles and life-of-type issues of COTS sensors limited the ability to obtain useful metrics on their performance in the space environment. Additional engineering testing was conducted by constructing hyperspectral instruments from sensors popular in science, technology, engineering and mathematics (STEM) contexts. Engineering and performance metrics were gathered for the payload containing these sensors, and their performance was assessed in relevant analogue environments. A selection of materials exhibiting spectral phenomenology in the red-edge portion of the spectrum was used to produce metrics on sensor performance. Low-cost cameras were able to distinguish between most minerals, although they required a wider spectral range to do so. Additionally, while Raspberry Pi cameras have been popular in scientific applications, a low-cost camera without a Bayer filter markedly improved spectral sensitivity. Space-environment testing was also trialed in additional experiments using high-altitude balloons to reach the near-space environment. The sensor payloads experienced conditions approximating the surface of Mars, and results were compared with Landsat 7, a heritage Earth-sensing satellite, using a popular vegetation index.
The selected Raspberry Pi cameras were able to provide useful results from near space that could be compared with space imagery. Further testing incorporated comparative analysis of custom-built sensors using readily available Raspberry Pi and astronomy cameras against results from the Mastcam and Mastcam-Z instruments currently on the surface of Mars. Two sensor designs were trialed in field settings possessing Mars-analogue materials, and a subset of these materials was analysed using a laboratory-grade spectroradiometer. Results showed the Raspberry Pi multispectral camera would be best suited for broad-scale indications of mineralogy that could then be targeted by the pushbroom sensor. The pushbroom sensor was found to possess a narrower spectral range than Mastcam and Mastcam-Z but was sensitive to a greater number of bands within that range, and it returned data on spectral phenomenology associated with minerals of the type found on Mars. The measured performance of the payload under appropriate conditions provided critical information for de-risking future designs, and the successful outcomes of the trials reduced risk for their application in a deep-space environment. The SE and practical performance testing conducted in this thesis could be developed further to design, build and fly a hyperspectral sensor, sensitive to red-edge wavelengths, on a deep-space cubesat mission. Such a mission could be flown at reasonable cost yet return useful scientific and engineering data.
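The abstract does not name the vegetation index used for the Landsat 7 comparison; the most common choice for red-edge-adjacent bands is NDVI, sketched below. The reflectance values are invented for illustration, not measured data from the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy vegetation reflects strongly in the near-infrared and absorbs
    red light, so it scores near +1; bare mineral surfaces sit near 0.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Illustrative reflectances: vigorous vegetation vs. a bare mineral surface.
print(ndvi(0.50, 0.08))  # vegetation-like: strongly positive
print(ndvi(0.22, 0.20))  # mineral-like: near zero
```

Because the index is a simple band ratio, it transfers readily between instruments with comparable red and NIR bands, which is what makes a Landsat 7 comparison against low-cost balloon-borne cameras feasible.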

    SEnSeI: A Deep Learning Module for Creating Sensor Independent Cloud Masks

    We introduce a novel neural network architecture -- Spectral ENcoder for SEnsor Independence (SEnSeI) -- by which several multispectral instruments, each with a different combination of spectral bands, can be used to train a single generalised deep learning model. We focus on the problem of cloud masking, using several pre-existing datasets and a new, freely available dataset for Sentinel-2. Our model achieves state-of-the-art performance on the satellites it was trained on (Sentinel-2 and Landsat 8), and is able to extrapolate to sensors it has not seen during training, such as Landsat 7, PerúSat-1, and Sentinel-3 SLSTR. Model performance improves when multiple satellites are used in training, approaching or surpassing the performance of specialised, single-sensor models. This work is motivated by the fact that the remote sensing community has access to data taken with a huge variety of sensors. This has inevitably led to labelling efforts being undertaken separately for each sensor, which limits the performance of deep learning models, given their need for large training sets to perform optimally. Sensor independence enables deep learning models to draw on multiple datasets for training simultaneously, boosting performance and making them much more widely applicable. This may lead to deep learning approaches being used more frequently for on-board applications and in ground-segment data processing, which generally require models to be ready at launch or soon afterwards.
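The core idea behind a sensor-independent encoder can be illustrated schematically: pair each band's value with a descriptor of that band (here just its central wavelength), pass every pair through a shared encoder, and pool with a symmetric operation so the output does not depend on how many bands a sensor has or their order. This is a toy sketch with random weights, not SEnSeI's actual architecture; the band values and wavelengths are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((2, 16))  # shared per-band encoder weights
W2 = rng.standard_normal((16, 8))

def encode_bands(values, wavelengths_um):
    # Each band becomes (value, central wavelength) -> shared MLP -> pooled.
    x = np.stack([values, wavelengths_um], axis=-1)  # (n_bands, 2)
    h = np.maximum(x @ W1, 0.0) @ W2                 # same weights per band
    return h.max(axis=0)                             # symmetric pooling

# Two "sensors" with different band counts map into the same 8-d space.
s2_like = encode_bands(np.array([0.1, 0.2, 0.3, 0.4]),
                       np.array([0.49, 0.56, 0.665, 0.842]))
l8_like = encode_bands(np.array([0.15, 0.25]),
                       np.array([0.48, 0.655]))
print(s2_like.shape, l8_like.shape)  # both (8,)
```

Because the pooled representation has a fixed size regardless of the input band set, a downstream cloud-masking network can be trained jointly on datasets labelled for different instruments.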

    Integrated Applications of Geo-Information in Environmental Monitoring

    This book focuses on fundamental and applied research on geo-information technology, notably optical and radar remote sensing and algorithm improvements, and their applications in environmental monitoring. This Special Issue presents ten high-quality research papers covering up-to-date research on land cover change and desertification analyses, geo-disaster risk and damage evaluation, mining area restoration assessments, the improvement and development of algorithms, and coastal environmental monitoring and object targeting. Its purpose is to promote exchange and communication, share the research outcomes of scientists worldwide, and bridge the gap between scientific research and its application for the advancement and improvement of society.

    On the Modeling of Dynamic-Systems using Sequence-based Deep Neural-Networks

    The objective of this thesis is the adaptation and development of sequence-based neural networks (NNs) applied to the modeling of dynamic systems. More specifically, we focus on two sub-problems: the modeling of time series, and the modeling and control of multiple-input multiple-output (MIMO) systems. These two sub-problems are explored through the modeling of crops, and the modeling and control of robots. To solve these problems, we build on NNs and training schemes that allow our models to outperform state-of-the-art results in their respective fields. In the irrigation field, we show that NNs are powerful tools capable of modeling the water consumption of crops while observing only a portion of the inputs currently required by reference methods. We further demonstrate the potential of NNs by inferring irrigation recommendations in real time. In robotics, we show that prioritization techniques can be used to learn better robot dynamic models. We apply the models learned with these methods inside a Model Predictive Control (MPC) controller, further demonstrating their benefits. Additionally, we leverage Dreamer, a Model-Based Reinforcement Learning (MBRL) agent, to solve visuomotor tasks. We demonstrate that MBRL controllers can be used for sensor-based control on real robots without being trained on the real systems. Building on this result, we developed a physics-guided variant of Dreamer that is more flexible and designed for mobile robots. This novel framework enables reusing previously learned dynamics and transferring environment knowledge to other robots. Furthermore, using this new model, we train agents to reach various goals without interacting with the system, which increases the reusability of the learned models and makes for a highly data-efficient learning scheme. It also allows for efficient dynamics randomization, creating robust agents that transfer well to unseen dynamics.
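The MPC setting mentioned above can be illustrated with a minimal random-shooting controller. The sketch below stands a known toy double integrator in for the thesis's learned sequence models; the controller structure (sample candidate action sequences, roll each through the model, execute the first action of the cheapest rollout) is the generic receding-horizon pattern, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(state, action):
    # Toy 1-D double integrator standing in for a learned dynamics model:
    # the action changes velocity, which in turn changes position.
    pos, vel = state
    vel = vel + 0.1 * action
    pos = pos + 0.1 * vel
    return np.array([pos, vel])

def mpc_action(state, goal, horizon=10, n_samples=256):
    # Random shooting: sample action sequences, simulate each through the
    # model, and return the first action of the lowest-cost rollout.
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    for i, seq in enumerate(candidates):
        s = state
        for a in seq:
            s = dynamics(s, a)
            costs[i] += (s[0] - goal) ** 2 + 0.01 * a ** 2
    return candidates[np.argmin(costs), 0]

# Receding-horizon loop: re-plan at every step, execute only the first action.
state = np.array([0.0, 0.0])
for _ in range(100):
    state = dynamics(state, mpc_action(state, goal=1.0))
print(state[0])  # position should settle near the goal of 1.0
```

Replacing `dynamics` with a learned sequence model gives exactly the pattern in which better-learned models translate directly into better control, which is why the thesis's prioritization techniques for model learning pay off inside the MPC loop.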

    Proceedings of the 2021 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    In 2021, the annual joint workshop of the Fraunhofer IOSB and KIT IES was hosted at the IOSB in Karlsruhe. For a week, from 2 to 6 July, the doctoral students presented extensive reports on the status of their research. The results and ideas presented at the workshop are collected in this book in the form of detailed technical reports.
