
    Raster Time Series: Learning and Processing

    As the amount of remote sensing data increases at a high rate, due to great improvements in sensor technology, efficient processing capabilities are of utmost importance. Remote sensing data from satellites is crucial in many scientific domains, such as biodiversity and climate research. Because weather and climate are of particular interest for almost all living organisms on earth, the efficient classification of clouds is one of the most important problems. Geostationary satellites such as Meteosat Second Generation (MSG) offer the only possibility to generate long-term cloud data sets with high spatial and temporal resolution. This work therefore addresses research problems in the efficient and parallel processing of MSG data to enable new applications and insights. First, we address the lack of a suitable processing chain to generate a long-term Fog and Low Stratus (FLS) time series. We present an efficient MSG data processing chain that processes multiple tasks simultaneously and raster data in parallel using the Open Computing Language (OpenCL). The processing chain delivers a uniform FLS classification that combines day and night approaches in a single method. As a result, a full year of FLS rasters can be computed with ease. Second, we present the application of Convolutional Neural Networks (CNN) for cloud classification. Conventional approaches to cloud detection often classify only single pixels and ignore the fact that clouds are highly dynamic and spatially continuous entities. We therefore propose a new method based on deep learning. Using a CNN image segmentation architecture, the presented Cloud Segmentation CNN (CS-CNN) classifies all pixels of a scene simultaneously. We show that CS-CNN is capable of processing multispectral satellite data to identify continuous phenomena such as highly dynamic clouds. The proposed approach provides excellent results on MSG satellite data in terms of quality, robustness, and runtime in comparison to Random Forest (RF), another widely used machine learning method. Finally, we present the processing of raster time series with a system for Visualization, Transformation, and Analysis (VAT) of spatio-temporal data. It enables data-driven research with explorative workflows and uses time as an integral dimension. The combination of various raster and vector data time series enables new applications and insights. We present an application that combines weather information and aircraft trajectories to identify patterns in bad weather situations.
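
    The OpenCL-based parallelism in the first part lends itself to a compact illustration. Below is a minimal pyopencl sketch of the kind of per-pixel test such a processing chain evaluates in parallel over a whole raster; the channel names, threshold value, and the simple brightness-temperature-difference rule are illustrative assumptions, not the chain's actual FLS classifier.

```python
# Minimal pyopencl sketch: evaluate a per-pixel brightness-temperature-
# difference test over a raster tile in parallel on an OpenCL device.
# Channels and threshold are illustrative assumptions, not the FLS
# classifier from the thesis.
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void btd_mask(__global const float *bt108,   // 10.8 um channel
                       __global const float *bt039,   // 3.9 um channel
                       __global uchar *mask,
                       const float threshold)
{
    int i = get_global_id(0);               // one work-item per pixel
    mask[i] = (bt108[i] - bt039[i]) > threshold ? 1 : 0;
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, KERNEL).build()

# Toy tile; a full MSG SEVIRI disk would be 3712 x 3712 pixels.
n = 1024 * 1024
bt108 = (200 + 100 * np.random.rand(n)).astype(np.float32)
bt039 = (200 + 100 * np.random.rand(n)).astype(np.float32)
mask = np.empty(n, dtype=np.uint8)

mf = cl.mem_flags
d_a = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=bt108)
d_b = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=bt039)
d_m = cl.Buffer(ctx, mf.WRITE_ONLY, mask.nbytes)

prg.btd_mask(queue, (n,), None, d_a, d_b, d_m, np.float32(2.0))
cl.enqueue_copy(queue, mask, d_m)
print("flagged pixels:", int(mask.sum()))
```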

    High Performance Free Surface LBM on GPUs


    The readying of applications for heterogeneous computing

    High performance computing is approaching a potentially significant change in architectural design. With pressure on cost and the sheer amount of power consumed, additional architectural features are emerging which require a rethink of the programming models deployed over the last two decades. Today's emerging high performance computing (HPC) systems maximise performance per unit of power consumed, resulting in systems whose constituent parts are made up of a range of different specialised building blocks, each with its own purpose. This heterogeneity is not limited to the hardware components but extends to the mechanisms that exploit them. These multiple levels of parallelism, instruction sets and memory hierarchies result in truly heterogeneous computing in all aspects of the global system. These emerging architectural solutions will require software to exploit tremendous amounts of on-node parallelism, and programming models to address this are indeed emerging. In theory, the application developer can design new software using these models to exploit emerging low power architectures. In practice, however, real industrial scale applications last the lifetimes of many architectural generations and therefore require a migration path to these next generation supercomputing platforms. Identifying that migration path is non-trivial: with applications spanning many decades, consisting of many millions of lines of code and multiple scientific algorithms, any change to the programming model will be extensive and invasive, and may turn out to be the incorrect model for the application in question. This makes exploration of these emerging architectures and programming models using the applications themselves problematic. Additionally, the source code of many industrial applications is not available, due to either commercial or security sensitivity constraints. This thesis highlights this problem by assessing current and emerging hardware with an industrial strength code and demonstrating the issues described. It then examines the methodology of using proxy applications in place of real industry applications to assess their suitability for the next generation of low power HPC offerings. It shows there are significant benefits to be realised in using proxy applications, in that fundamental issues inhibiting exploration of a particular architecture are easier to identify and hence address. The maturity and performance portability of a number of alternative programming methodologies are evaluated on a number of architectures, highlighting the broader adoption of these proxy applications, both within the author's own organisation and across the industry as a whole.

    Reconfigurable Antenna Systems: Platform implementation and low-power matters

    Antennas are a necessary and often critical component of all wireless systems, of which they share the ever-increasing complexity and the challenges of present and emerging trends. 5G, massive low-orbit satellite architectures (e.g. OneWeb), Industry 4.0, the Internet of Things (IoT), satcom on-the-move, Advanced Driver Assistance Systems (ADAS) and autonomous vehicles all call for highly flexible systems, and antenna reconfigurability is an enabling part of these advances. The terminal segment is particularly crucial in this sense, encompassing both very compact and low-profile antennas, all with various adaptability/reconfigurability requirements. This thesis deals with the hardware implementation of Radio Frequency (RF) antenna reconfigurability, in particular on low-power General Purpose Platforms (GPP); the work has encompassed Software Defined Radio (SDR) implementation as well as embedded low-power platforms (in particular the STM32 Nucleo family of microcontrollers). The hardware-software platform work has been complemented by the design and fabrication of reconfigurable antennas in standard technology, and the resulting systems have been tested. The selected antenna technology was an antenna array with a continuously steerable beam, controlled by voltage-driven phase-shifting circuits. Applications notably included a Wireless Sensor Network (WSN) deployed in the Italian scientific mission in Antarctica, a traffic-monitoring case study (EU H2020 project), and an innovative Global Navigation Satellite Systems (GNSS) antenna concept (patent application submitted). The SDR implementation focused on a low-cost, low-power open-source SDR platform with IEEE 802.11 a/g/p wireless communication capability. In a second embodiment, the flexibility of the SDR paradigm was traded off to avoid the power consumption associated with the underlying operating system. The application field of reconfigurable antennas is, however, not limited to better management of energy consumption. The analysis has also been extended to satellite positioning applications. A novel beamforming method is presented, demonstrating improvements in the quality of signals received from satellites; for those working with positioning algorithms, this advancement helps improve the precision of the estimated position.
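
    The continuously steerable beam can be made concrete with a short numerical sketch: each element receives a voltage-controlled phase shift so that the array factor peaks at the desired angle. Below is a minimal numpy illustration; the uniform linear array with half-wavelength spacing is an assumption of ours, and the function names are illustrative, not from the thesis.

```python
# Minimal numpy sketch of continuous beam steering with per-element
# phase shifts, the role played by voltage-driven phase-shifting
# circuits in a linear array. Geometry and names are assumptions.
import numpy as np

def steering_phases(n_elem, d_over_lambda, steer_deg):
    """Phase shift (radians) to apply at each element so the main
    beam points steer_deg away from broadside."""
    n = np.arange(n_elem)
    return -2.0 * np.pi * d_over_lambda * n * np.sin(np.radians(steer_deg))

def array_factor(phases, d_over_lambda, angle_deg):
    """Normalised array response towards one observation angle."""
    n = np.arange(len(phases))
    geom = 2.0 * np.pi * d_over_lambda * n * np.sin(np.radians(angle_deg))
    return np.abs(np.sum(np.exp(1j * (geom + phases)))) / len(phases)

phases = steering_phases(n_elem=8, d_over_lambda=0.5, steer_deg=20.0)
angles = np.linspace(-90.0, 90.0, 361)
af = np.array([array_factor(phases, 0.5, a) for a in angles])
print("beam peaks near", angles[af.argmax()], "degrees")  # ~20.0
```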

    Modelling of Extreme Ocean Waves Using High Performance Computing

    This thesis describes the development of a fully nonlinear numerical model for the simulation of surface water waves. The model has the ability to compute the evolution of both limiting and overturning waves arising from the focussing of wave components in realistic ocean spectra. To accomplish this task, a multiple-flux implementation of a boundary element method is used to describe the evolution of a free surface in the time domain over an arbitrary bed geometry. Unfortunately, boundary element methods are inherently computationally expensive, and although approximations exist to reduce the complexity of the problem, the effects of their use in physical space are unclear. To overcome some of the computational intensity, the present work employs novel computational approaches both to reduce the run time of the simulations and to make accessible predictions of wave fields that were previously infeasible. The advances in computational aspects are made through the use of parallel algorithms running in a distributed computing environment. Further acceleration is gained by running parts of the algorithm on many-core co-processing devices in the form of the so-called graphics processing unit. Once a reasonably efficient implementation of the boundary element method is achieved, attention is turned to further algorithmic optimisations, particularly in respect of computing the kinematics field underlying the extreme wave events. The flexibility of the model is demonstrated through the accurate simulation of extreme wave events, including near-breaking and overturning wave phenomena. Finally, by harnessing the power of high performance computing technologies, the model is applied to an engineering design problem concerning the wave-induced loading of an offshore jacket structure. The work presented is not merely a study of a single wave event and its interaction with a structure, but rather a whole multitude of wave-structure interaction events that could not have been computed within a realistic time frame were it not for the use of high performance computing. The outcome of this work is the harnessing of distributed and accelerated computing to enable the rapid calculation of numerous fully nonlinear wave loading events, providing a game-changing outlook on structural design and the reliability of offshore structures; such calculations have not previously been possible.
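
    To give a flavour of what makes boundary element methods expensive, here is a deliberately tiny sketch: an indirect single-layer formulation of the 2-D Laplace equation on a circle with constant elements, showing the dense influence matrix at the heart of the cost. This is a toy illustration under our own simplifying assumptions, not the thesis's multiple-flux, free-surface implementation.

```python
# Toy single-layer BEM for the 2-D Laplace equation on a circle:
# assemble a dense influence matrix, solve for a source density, and
# evaluate the potential inside. Illustrative only.
import numpy as np

def G(r):
    """Free-space Green's function of the 2-D Laplace equation."""
    return -np.log(r) / (2.0 * np.pi)

# Discretise the boundary of a circle (radius R) into N flat elements.
N, R = 64, 0.5
t = 2.0 * np.pi * (np.arange(N) + 0.5) / N      # element midpoints
xm, ym = R * np.cos(t), R * np.sin(t)           # collocation points
L = 2.0 * R * np.sin(np.pi / N)                 # element (chord) length

# Influence matrix: potential at midpoint i induced by a unit source
# density on element j. Midpoint rule off the diagonal, an analytic
# integral for the singular self-term on the diagonal.
A = np.empty((N, N))
for i in range(N):
    r = np.hypot(xm - xm[i], ym - ym[i])
    A[i, :] = G(np.where(r == 0.0, 1.0, r)) * L
    A[i, i] = (L / (2.0 * np.pi)) * (1.0 - np.log(L / 2.0))

# Dirichlet data phi = x on the boundary; solve for the source density.
sigma = np.linalg.solve(A, xm)

# The single-layer potential reproduces the harmonic function u = x
# at any interior point.
xi, yi = 0.2, 0.1
u = np.sum(G(np.hypot(xm - xi, ym - yi)) * sigma * L)
print(u, "should be close to", xi)
```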

    High Performance Computing Facility Operational Assessment, 2012 Oak Ridge Leadership Computing Facility


    Simulations of complex atmospheric flows using GPUs - the model ASAMgpu -

    This work describes the development of the high-resolution atmospheric model ASAMgpu. It is a so-called large-eddy model, in which the coarser structures in the atmospheric boundary layer, with typical scales of tens of metres to kilometres, are explicitly resolved. Higher-frequency components and their dissipation must then be treated either explicitly with a turbulence model or, as in the model described here, implicitly. To this end, the advection operator was discretised with a dissipative third-order upwind scheme. The model includes a two-moment scheme to describe microphysical processes. Another important aspect is the thermodynamic variable used, which combines several advantages of conventional approaches: for adiabatic processes it is a conserved quantity, the sources and sinks in the case of phase changes are easy to derive, and the required quantities temperature and pressure can be computed explicitly. The entire model was implemented in C++ and uses OpenGL and the OpenGL Shading Language (GLSL) to perform the necessary computations on graphics cards. With this approach, simulations that previously required supercomputers can be carried out very cheaply and energy-efficiently. In addition to the model description, results of several successful test simulations are presented, among them three cases of a cloudy marine boundary layer with shallow cumulus clouds.
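
    The third-order upwind discretisation mentioned above can be sketched in a few lines. The following 1-D numpy illustration shows the upwind-biased stencil and its built-in dissipation; it is our own toy example (ASAMgpu itself is three-dimensional and runs in C++/GLSL on graphics cards).

```python
# 1-D sketch of a dissipative third-order upwind discretisation of the
# advection term, with SSP Runge-Kutta time stepping. Illustrative only.
import numpy as np

def upwind3_dphidx(phi, u, dx):
    """Third-order upwind-biased approximation of d(phi)/dx on a
    periodic grid. For u > 0 the stencil is
        (phi[i-2] - 6*phi[i-1] + 3*phi[i] + 2*phi[i+1]) / (6*dx),
    mirrored for u < 0; the odd-order bias supplies the dissipation."""
    pm2, pm1 = np.roll(phi, 2), np.roll(phi, 1)
    pp1, pp2 = np.roll(phi, -1), np.roll(phi, -2)
    dpos = (pm2 - 6.0 * pm1 + 3.0 * phi + 2.0 * pp1) / (6.0 * dx)
    dneg = -(pp2 - 6.0 * pp1 + 3.0 * phi + 2.0 * pm1) / (6.0 * dx)
    return np.where(u > 0.0, dpos, dneg)

# Advect a Gaussian bump once around a periodic domain, using a
# third-order SSP Runge-Kutta scheme in time.
nx, u, cfl = 200, 0.5, 0.25
dx = 1.0 / nx
dt = cfl * dx / u
x = (np.arange(nx) + 0.5) * dx
phi = np.exp(-200.0 * (x - 0.5) ** 2)

rhs = lambda p: -u * upwind3_dphidx(p, u, dx)
for _ in range(int(round(1.0 / (u * dt)))):     # one full revolution
    p1 = phi + dt * rhs(phi)
    p2 = 0.75 * phi + 0.25 * (p1 + dt * rhs(p1))
    phi = phi / 3.0 + (2.0 / 3.0) * (p2 + dt * rhs(p2))

print("peak after one revolution:", phi.max())  # slightly below 1
```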

    Context Awareness for Navigation Applications

    This thesis examines the topic of context awareness for navigation applications and asks the question, “What are the benefits and constraints of introducing context awareness in navigation?” Context awareness can be defined as a computer’s ability to understand the situation or context in which it is operating. In particular, we are interested in how context awareness can be used to understand the navigation needs of people using mobile computers, such as smartphones, but context awareness can also benefit other types of navigation users, such as maritime navigators. There are countless other potential applications of context awareness, but this thesis focuses on applications related to navigation. For example, if a smartphone-based navigation system can understand when a user is walking, driving a car, or riding a train, then it can adapt its navigation algorithms to improve positioning performance. We argue that the primary set of tools available for generating context awareness is machine learning. Machine learning is, in fact, a collection of many different algorithms and techniques for developing “computer systems that automatically improve their performance through experience” [1]. This thesis systematically examines the ability of existing machine learning algorithms to endow computing systems with context awareness. Specifically, we apply machine learning techniques to tackle three different tasks related to context awareness, with applications in the field of navigation: (1) recognizing the activity of a smartphone user in an indoor office environment, (2) recognizing the mode of motion that a smartphone user is undergoing outdoors, and (3) determining the optimal path of a ship traveling through ice-covered waters. The diversity of these tasks was chosen intentionally to demonstrate the breadth of problems encompassed by the topic of context awareness. During the course of studying context awareness, we adopted two conceptual “frameworks”, which we find useful for solidifying the abstract concepts of context and context awareness. The first framework is based strongly on the writings of a rhetorician from Hellenistic Greece, Hermagoras of Temnos, who defined seven elements of “circumstance”; we adopt these seven elements to describe contextual information. The second framework, which we dub the “context pyramid”, describes the processing of raw sensor data into contextual information in terms of six different levels. At the top of the pyramid is “rich context”, where the information is expressed in prose and the goal for the computer is to mimic the way a human would describe a situation. We are still a long way from computers matching a human’s ability to understand and describe context, but this thesis improves the state of the art in context awareness for navigation applications. For some particular tasks, machine learning has succeeded in outperforming humans, and in the future there are likely to be tasks in navigation where computers outperform humans. One example might be the route optimization task described above: many different types of information must be fused in non-obvious ways, and it may be that computer algorithms can find better routes through ice-covered waters than even well-trained human navigators. This thesis provides only preliminary evidence of this possibility, and future work is needed to further develop the techniques outlined here. The same can be said of the other two navigation-related tasks examined in this thesis.
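
    As a concrete illustration of task (2), a standard recipe is to window the smartphone's accelerometer signal, extract simple statistical features per window, and train a classifier. The sketch below follows that recipe with synthetic data; the features, motion modes, and signal model are illustrative assumptions of ours, not the thesis's exact pipeline.

```python
# Mode-of-motion recognition sketch: windowed accelerometer magnitude
# -> hand-crafted features -> random forest. Synthetic data stands in
# for real smartphone recordings; all choices here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc_norm):
    """Simple statistics over one window of accelerometer magnitude."""
    return [acc_norm.mean(), acc_norm.std(), acc_norm.min(),
            acc_norm.max(), np.abs(np.diff(acc_norm)).mean()]

rng = np.random.default_rng(0)

def synthetic_window(mode, n=256, fs=50.0):
    """Crude stand-in for real data: each motion mode gets a distinct
    dominant frequency and amplitude on top of gravity plus noise."""
    t = np.arange(n) / fs
    freq, amp = {"still": (0.0, 0.05),
                 "walk": (2.0, 1.0),
                 "train": (0.3, 0.3)}[mode]
    return 9.81 + amp * np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.1, n)

modes = ["still", "walk", "train"]
X = [window_features(synthetic_window(m)) for m in modes for _ in range(100)]
y = [m for m in modes for _ in range(100)]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([window_features(synthetic_window("walk"))]))  # ['walk']
```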