421 research outputs found

    High Performance Computing Applications in Remote Sensing Studies for Land Cover Dynamics

    Global and regional land cover studies require the ability to apply complex models to selected subsets of the large multi-sensor and multi-temporal data sets that have been derived from raw instrument measurements using widely accepted pre-processing algorithms. The computational and storage requirements of most such studies far exceed what is possible in a single-workstation environment. We have been pursuing a new approach that couples scalable and open distributed heterogeneous hardware with the development of high performance software for processing, indexing, and organizing remotely sensed data. Hierarchical data management tools are used to ingest raw data, create metadata, and organize the archived data so as to automatically achieve computational load balancing among the available nodes and minimize I/O overheads. We illustrate our approach with four specific examples. The first is the development of the first fast operational scheme for the atmospheric correction of Landsat TM scenes, while the second focuses on image segmentation using a novel hierarchical connected components algorithm. The third is the retrieval of global BRDF (Bidirectional Reflectance Distribution Function) in the red and near-infrared wavelengths using four years (1983 to 1986) of the Pathfinder AVHRR Land (PAL) data set. The fourth is the development of a hierarchical data organization scheme that allows on-demand processing and retrieval of regional and global AVHRR data sets. Our results show that substantial improvements in computational times can be achieved by using high performance computing technology.
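    The paper's hierarchical parallel algorithm is not reproduced here, but the following minimal sketch of classic two-pass, union-find connected-components labeling shows the serial building block that such segmentation schemes decompose across nodes; the binary mask and 4-connectivity are illustrative assumptions, not details from the paper.

```python
import numpy as np

def label_components(mask: np.ndarray) -> np.ndarray:
    """Label 4-connected foreground regions of a boolean 2D mask."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    parent = [0]                           # union-find forest; index 0 = background

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    rows, cols = mask.shape
    # Pass 1: assign provisional labels, recording equivalences between
    # the up- and left-neighbors as unions in the forest.
    for r in range(rows):
        for c in range(cols):
            if not mask[r, c]:
                continue
            up = labels[r - 1, c] if r > 0 else 0
            left = labels[r, c - 1] if c > 0 else 0
            if not up and not left:
                parent.append(len(parent))         # brand-new provisional label
                labels[r, c] = len(parent) - 1
            elif up and left:
                ru, rl = find(up), find(left)
                labels[r, c] = min(ru, rl)
                parent[max(ru, rl)] = min(ru, rl)  # merge the two regions
            else:
                labels[r, c] = up or left
    # Pass 2: replace each provisional label by its root label.
    for r in range(rows):
        for c in range(cols):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels

demo = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [1, 1, 0, 1]], dtype=bool)
print(label_components(demo))   # two distinct labeled regions
```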

    D13.2 Techniques and performance analysis on energy- and bandwidth-efficient communications and networking

    Deliverable D13.2 of the European project NEWCOM#. The report presents the status of the research work of the various Joint Research Activities (JRAs) in WP1.3 and the results developed up to the second year of the project. For each activity there is a description, an illustration of its adherence and relevance to the identified fundamental open issues, a short presentation of the main results, and a roadmap for future joint research. In the Annex, the main technical details of each JRA's specific scientific activities are described in detail. Peer reviewed; postprint (published version).

    A Survey of the DVB-T Spectrum: Opportunities for Cognitive Mobile Users


    Evaluating Tessellation and Screen-Space Ambient Occlusion in WebGL-Based Real-Time Application

    Tessellation and Screen-Space Ambient Occlusion are algorithms that have been widely used in real-time rendering over the past decade. They aim to enhance the detail of a mesh, cast better shadow effects, and improve the quality of rendered images in real time. WebGL is a web-based graphics library derived from OpenGL ES and used for rendering in web applications. It is relatively new and rapidly evolving, and as a result it supports only a subset of the rendering features normally available to desktop applications. This thesis evaluates Curved PN-Triangles tessellation together with Screen-Space Ambient Occlusion (SSAO), Horizon-Based Ambient Occlusion (HBAO), and Horizon-Based Ambient Occlusion Plus (HBAO+) in a WebGL-based real-time application, compares its performance to a desktop-based application, and discusses the capabilities, limitations, and bottlenecks of WebGL 1.0.
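    The thesis's actual shader code is not shown here; as a rough illustration of the SSAO family of techniques it evaluates, the sketch below computes a crude ambient-occlusion term from a depth buffer on the CPU. The kernel radius, sample count, and bias are invented parameters, and a real-time implementation would run the same per-pixel logic in a GLSL fragment shader instead.

```python
import numpy as np

def ssao(depth: np.ndarray, radius: int = 4,
         samples: int = 16, bias: float = 0.02) -> np.ndarray:
    """Per-pixel ambient-occlusion factor in [0, 1] from a depth buffer."""
    rng = np.random.default_rng(0)
    # Fixed random 2D pixel offsets within a square of side 2*radius.
    offsets = rng.integers(-radius, radius + 1, size=(samples, 2))
    occlusion = np.zeros_like(depth)
    for dy, dx in offsets:
        # Compare against a shifted copy of the depth buffer. np.roll
        # wraps at the image borders; a production version would clamp.
        neighbor = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        # A neighbor occludes this pixel if it is closer to the camera
        # by more than `bias` (the bias suppresses self-occlusion noise).
        occlusion += (depth - neighbor > bias).astype(depth.dtype)
    return np.clip(1.0 - occlusion / samples, 0.0, 1.0)

# Demo: a step edge in depth; pixels just behind the edge get darkened.
depth = np.where(np.arange(64)[None, :] < 32, 0.3, 0.6) * np.ones((64, 64))
ao = ssao(depth)
print(ao.min(), ao.max())
```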

    Distributed workflows with Jupyter

    The designers of a new coordination interface enacting complex workflows have to tackle a dichotomy: choosing a language-independent or language-dependent approach. Language-independent approaches decouple workflow models from the host code's business logic and advocate portability. Language-dependent approaches foster flexibility and performance by adopting the same host language for business and coordination code. Jupyter Notebooks, with their capability to describe both imperative and declarative code in a unique format, allow taking the best of both approaches, maintaining a clear separation between the application and coordination layers while still providing a unified interface to both aspects. We advocate the Jupyter Notebooks' potential to express complex distributed workflows, identifying the general requirements for a Jupyter-based Workflow Management System (WMS) and introducing a proof-of-concept portable implementation working on hybrid Cloud-HPC infrastructures. As a byproduct, we extended the vanilla IPython kernel with workflow-based parallel and distributed execution capabilities. The proposed Jupyter-workflow (Jw) system is evaluated on common scenarios for High Performance Computing (HPC) and Cloud, showing its potential to lower the barriers between prototypical Notebooks and production-ready implementations.
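    A toy illustration of the coordination idea, not the actual Jupyter-workflow (Jw) API: cells declare which results they consume, and a scheduler executes them in dependency order. The decorator, registry, and serial runner below are hypothetical stand-ins for what a real WMS would distribute across cloud and HPC nodes.

```python
from graphlib import TopologicalSorter

CELLS = {}          # cell name -> (function, names of the cells it consumes)

def cell(name, inputs=()):
    """Register a notebook-cell-like function and its data dependencies."""
    def register(fn):
        CELLS[name] = (fn, tuple(inputs))
        return fn
    return register

@cell("load")
def load():
    return list(range(10))

@cell("square", inputs=("load",))
def square(xs):
    return [x * x for x in xs]

@cell("report", inputs=("square",))
def report(ys):
    print("sum of squares:", sum(ys))

def run():
    graph = {name: set(deps) for name, (_, deps) in CELLS.items()}
    results = {}
    # A real WMS would ship independent cells to cloud or HPC nodes here;
    # this sketch simply runs them serially in topological order.
    for name in TopologicalSorter(graph).static_order():
        fn, deps = CELLS[name]
        results[name] = fn(*(results[d] for d in deps))

run()
```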

    A Flexible FPGA-based Control Platform for Superconducting Multi-Qubit Experiments


    Redundant disk arrays: Reliable, parallel secondary storage

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays can provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. This work investigates the data encoding, performance, and reliability of redundant disk arrays. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
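    The single-failure-correcting parity code mentioned above can be demonstrated in a few lines: XOR a parity block across the data blocks, lose any one block (disk failures are self-identifying), and reconstruct it from the survivors. The block contents below are arbitrary illustrative bytes.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (the parity code)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]  # three data disks
parity = xor_blocks(data)                                    # dedicated parity disk

lost = 1                                   # disk 1 fails (self-identifying)
survivors = [d for i, d in enumerate(data) if i != lost] + [parity]
recovered = xor_blocks(survivors)
assert recovered == data[lost]             # survivors reconstruct it exactly
print(recovered)                           # the lost block's bytes
```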

    Workflow models for heterogeneous distributed systems

    The role of data in modern scientific workflows is becoming more and more crucial. The unprecedented amount of data available in the digital era, combined with recent advances in Machine Learning and High-Performance Computing (HPC), has let computers surpass human performance in a wide range of fields, such as Computer Vision, Natural Language Processing, and Bioinformatics. However, a solid data management strategy is crucial for key aspects like performance optimisation, privacy preservation, and security. Most modern programming paradigms for Big Data analysis adhere to the principle of data locality: moving computation closer to the data to remove transfer-related overheads and risks. Still, there are scenarios in which it is worthwhile, or even unavoidable, to transfer data between different steps of a complex workflow. The contribution of this dissertation is twofold. First, it defines a novel methodology for distributed modular applications, allowing topology-aware scheduling and data management while separating business logic, data dependencies, parallel patterns, and execution environments. In addition, it introduces computational notebooks as a high-level and user-friendly interface to this new kind of workflow, aiming to flatten the learning curve and improve the adoption of the methodology. Each of these contributions is accompanied by a full-fledged, open-source implementation, which has been used for evaluation purposes and allows the interested reader to experience the related methodology first-hand. The validity of the proposed approaches has been demonstrated on five real scientific applications in the domains of Deep Learning, Bioinformatics, and Molecular Dynamics Simulation, executing them on large-scale hybrid cloud-HPC infrastructures.
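    As a rough sketch of the topology-aware scheduling principle argued for above (not the dissertation's actual implementation), the snippet below places a workflow step at the site that already holds most of its input data, subject to the step's capability requirements. Site names, capabilities, and data sizes are hypothetical.

```python
def place_step(step, data_location, sites):
    """Pick the capable site holding the most input bytes for this step."""
    by_site = {}
    for dataset, (site, size) in data_location.items():
        if dataset in step["inputs"]:
            by_site[site] = by_site.get(site, 0) + size
    # Prefer sites by local input volume, keeping only those whose
    # capabilities cover the step's requirements.
    candidates = [s for s in sorted(by_site, key=by_site.get, reverse=True)
                  if step["needs"] <= sites[s]]
    # Fall back to any capable site; a real scheduler would also plan
    # the resulting data transfer rather than ignore it.
    return candidates[0] if candidates else next(
        s for s in sites if step["needs"] <= sites[s])

sites = {"hpc": {"gpu", "mpi"}, "cloud": {"gpu"}}
data_location = {"genome": ("hpc", 500), "model": ("cloud", 5)}  # sizes in GB
step = {"inputs": {"genome", "model"}, "needs": {"gpu"}}
print(place_step(step, data_location, sites))   # -> "hpc" (500 GB already local)
```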

    A distributed simulation environment for multibody physics

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 1998. Includes bibliographical references (leaves 128-134). A distributed simulation environment, which can be used to model multibody physics, is developed. The software design is based on the object-oriented paradigm and is implemented in C++ to run on a single workstation or on multiple processors in parallel. It provides facilities to set up a multibody physics simulation, including arbitrary 3D geometric representation, particle interactions such as contacts and constraints, and visualization for postprocessing. Contact detection, the process of automatically identifying the geometric overlap between objects, is generally the most time-consuming procedure in the overall discrete element analysis pipeline. The computational cost of contact detection grows as a function of both the number of particles and the complexity of the geometric representation of each body. This thesis presents algorithms that significantly reduce the computational cost of the contact detection problem. The hashtable-based spatial reasoning algorithm demonstrates O(M) performance, where M is the number of particles in the simulation system, for a restricted set of particles. The discrete function representation (DFR) scheme is employed to model the surface geometry of complex 3D objects. DFR-based contact detection between a pair of objects exhibits an O(N) running time, where N is the number of surface points used to represent each object. In practice this results in a significant speedup over traditional techniques. A distributed DEM simulation environment is built on top of a set of software tools which exploit the parallelism embedded in the DEM analysis and take advantage of a high-speed communications network to achieve good parallel performance. The goal of reducing the overall computing time of large-scale simulation problems to order O(N) is shown to be achievable using the algorithms described. By Jen-Diann Chiou, Ph.D.
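    The hashtable-based spatial reasoning idea can be sketched compactly: bin particle centers into a uniform grid keyed by integer cell coordinates, then probe only neighboring cells for candidate contacts, which yields near-linear behavior for roughly uniform particle sizes. The equal-radius spheres and cell size below are illustrative assumptions, not the thesis's DFR scheme.

```python
from collections import defaultdict
from itertools import product

def find_contacts(centers, radius):
    """Return index pairs of equal-radius spheres that touch or overlap."""
    cell = 2 * radius                     # cell edge >= sphere diameter
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(centers):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    contacts = []
    for (cx, cy, cz), members in grid.items():
        # Candidates can only live in this cell or the 26 surrounding ones.
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j:             # orient pairs so each is tested once
                        p, q = centers[i], centers[j]
                        if sum((a - b) ** 2 for a, b in zip(p, q)) <= (2 * radius) ** 2:
                            contacts.append((i, j))
    return contacts

print(find_contacts([(0, 0, 0), (0.9, 0, 0), (5.0, 5.0, 5.0)], radius=0.5))
# -> [(0, 1)]: only the first two spheres are in contact
```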

    Spectrum Utilisation and Management in Cognitive Radio Networks
