
    A Component-Based Middleware for a Reliable Distributed and Reconfigurable Spacecraft Onboard Computer

    Emerging applications for space missions require increasing processing performance from the onboard computers. DLR's project “Onboard Computer - Next Generation” (OBC-NG) develops a distributed, reconfigurable computer architecture to provide increased performance while maintaining the high reliability of classical spacecraft computer architectures. Growing system complexity requires an advanced onboard middleware, handling distributed (real-time) applications and error mitigation by reconfiguration. The OBC-NG middleware follows the Component-Based Software Engineering (CBSE) approach. Using composite components, applications and management tasks can easily be distributed and relocated on the processing nodes of the network. Additionally, reuse of components for future missions is facilitated. This paper presents the flexible middleware architecture, the composite component framework, the middleware services and the model-driven Application Programming Interface (API) design of OBC-NG. Tests are conducted to validate the middleware concept and to investigate the reconfiguration efficiency as well as the reliability of the system. A relevant use case shows the advantages of CBSE for the development of distributed reconfigurable onboard software.
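    The core idea of the abstract — components that can be stopped on one processing node and restarted on another during reconfiguration — can be sketched as follows. This is a minimal, hypothetical illustration of the pattern, not the OBC-NG middleware API; all names (`Component`, `Node`, `Middleware.relocate`) are invented for this sketch.

    ```java
    // Hypothetical sketch of component relocation, NOT the OBC-NG API.
    import java.util.HashMap;
    import java.util.Map;

    interface Component {
        void start();
        void stop();
        String name();
    }

    class Node {
        private final Map<String, Component> hosted = new HashMap<>();
        final String id;
        Node(String id) { this.id = id; }

        void deploy(Component c) { hosted.put(c.name(), c); c.start(); }

        Component undeploy(String name) {
            Component c = hosted.remove(name);
            if (c != null) c.stop();
            return c;
        }

        boolean hosts(String name) { return hosted.containsKey(name); }
    }

    class Middleware {
        // Error mitigation by reconfiguration: stop the component on the
        // degraded node and restart it on a healthy one.
        static void relocate(String name, Node from, Node to) {
            Component c = from.undeploy(name);
            if (c != null) to.deploy(c);
        }
    }
    ```

    In a real middleware the relocation step would also migrate component state and re-bind communication channels; the sketch only shows the lifecycle choreography.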

    Autonomic behavioural framework for structural parallelism over heterogeneous multi-core systems.

    With the continuous advancement in hardware technologies, significant research has been devoted to designing and developing high-level parallel programming models that allow programmers to exploit the latest developments in heterogeneous multi-core/many-core architectures. Structured programming paradigms propose a viable solution for efficiently programming modern heterogeneous multi-core architectures equipped with one or more programmable Graphics Processing Units (GPUs). Applying structured programming paradigms, it is possible to subdivide a system into building blocks (modules, skeletons or components) that can be independently created and then used in different systems to derive multiple functionalities. Exploiting such systematic divisions, it is possible to address extra-functional features such as application performance, portability and resource utilisation from the component level in heterogeneous multi-core architectures. While the computing function of a building block can vary for different applications, the behaviour (semantics) of the block remains intact. Therefore, by understanding the behaviour of building blocks and their structural compositions in parallel patterns, the process of constructing and coordinating a structured application can be automated. In this thesis we have proposed the Structural Composition and Interaction Protocol (SKIP) as a systematic methodology to exploit the structured programming paradigm (the building block approach in this case) for constructing a structured application and extracting/injecting information from/to the structured application. Using the SKIP methodology, we have designed and developed the Performance Enhancement Infrastructure (PEI) as a SKIP-compliant autonomic behavioural framework to automatically coordinate structured parallel applications based on the extracted extra-functional properties related to the parallel computation patterns.
We have used 15 different PEI-based applications (from large-scale applications with heavy input workloads that take hours to execute to small-scale applications that take seconds to execute) to evaluate PEI in terms of overhead and performance improvements. The experiments have been carried out on 3 different heterogeneous (CPU/GPU) multi-core architectures (including one cluster machine with 4 symmetric nodes with one GPU per node, and 2 single machines with one GPU per machine). Our results demonstrate that with less than 3% overhead, we can achieve up to one order of magnitude speed-up when using PEI for enhancing application performance.
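    The separation the abstract describes — a varying computing function inside a building block whose coordination behaviour stays fixed — can be illustrated with a classic "farm" pattern. This is an assumed, minimal sketch of such a block; it is not taken from the thesis, and the `Farm` class and its parallelism-degree parameter are illustrative names only.

    ```java
    // Hypothetical "farm" building block: the worker function varies per
    // application, while the coordination semantics (scatter tasks, gather
    // results in input order) remain intact.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.function.Function;

    class Farm<I, O> {
        private final Function<I, O> worker; // the varying computation
        private final int degree;            // extra-functional knob: parallelism degree

        Farm(Function<I, O> worker, int degree) {
            this.worker = worker;
            this.degree = degree;
        }

        // Fixed coordination behaviour of the block.
        List<O> apply(List<I> inputs) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(degree);
            try {
                List<Future<O>> futures = new ArrayList<>();
                for (I in : inputs)
                    futures.add(pool.submit(() -> worker.apply(in)));
                List<O> out = new ArrayList<>();
                for (Future<O> f : futures)
                    out.add(f.get()); // gather in submission order
                return out;
            } finally {
                pool.shutdown();
            }
        }
    }
    ```

    An autonomic framework in the spirit of PEI could then tune the `degree` knob at runtime based on observed performance, without touching the worker function.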

    A Methodology for Transforming Java Applications Towards Real-Time Performance

    The development of real-time systems has traditionally been based on low-level programming languages, such as C and C++, as these provide fine-grained control of an application's temporal behavior. However, the usage of such programming languages suffers from increased complexity and high error rates compared to high-level languages such as Java. The Java programming language provides many benefits to software development, such as automatic memory management and platform independence. However, Java is unable to provide any real-time guarantees, as the high-level benefits come at the cost of unpredictable temporal behavior. This thesis investigates the temporal characteristics of the Java language and analyses several possibilities for introducing real-time guarantees, including official language extensions and commercial runtime environments. Based on this analysis, a new methodology is proposed for Transforming Java Applications towards Real-time Performance (TJARP). This method motivates a clear definition of timing requirements, followed by an analysis of the system through use of the formal modeling language VDM-RT. Finally, the method provides a set of structured guidelines to facilitate the choice of strategy for obtaining real-time performance using Java. To further support this choice, an analysis is presented of available solutions, supported by a simple case study and a series of benchmarks. Furthermore, this thesis applies the TJARP method to a complex industrial case study provided by a leading supplier of mission-critical systems. The case study shows how the TJARP method is able to analyze an existing and complex system, and successfully introduce hard real-time guarantees in critical sub-components.
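    The unpredictability the abstract refers to shows up as deadline misses of periodic tasks in standard Java. The sketch below, an assumed illustration rather than anything from the thesis, monitors such misses; real-time Java extensions (e.g. the RTSJ) would instead provide scheduler-backed periodic threads with this accounting built in.

    ```java
    // Hypothetical sketch: counting deadline misses of a periodic task in
    // plain Java. GC pauses or an overrunning job show up as missed releases.
    class PeriodicTask {
        static int run(Runnable job, long periodNanos, int iterations) {
            int missed = 0;
            long next = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                job.run();
                next += periodNanos;
                if (System.nanoTime() > next) {
                    missed++;                 // job overran its period
                    next = System.nanoTime(); // resynchronise the release time
                } else {
                    // Busy-wait until the next release (illustration only;
                    // a real system would use a timer or scheduler).
                    while (System.nanoTime() < next) { }
                }
            }
            return missed;
        }
    }
    ```

    A benchmark in this style can quantify jitter before and after applying a transformation strategy, which is the kind of measurement the thesis's benchmarks presumably rely on.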

    LABEC, the INFN ion beam laboratory of nuclear techniques for environment and cultural heritage

    The LABEC laboratory, the INFN ion beam laboratory of nuclear techniques for environment and cultural heritage, located in the Scientific and Technological Campus of the University of Florence in Sesto Fiorentino, started its operational activities in 2004. In 2001, INFN had decided to provide our applied nuclear physics group with a large laboratory dedicated to applications of accelerator-related analytical techniques, based on a new 3 MV Tandetron accelerator. The new accelerator greatly improved the performance of existing Ion Beam Analysis (IBA) applications (for which we had been using an old single-ended Van de Graaff accelerator since the 1980s) and in addition allowed us to start a novel activity of Accelerator Mass Spectrometry (AMS), in particular for 14C dating. Switching between IBA and AMS operation became very easy and fast, which gave us high flexibility in programming the activities, mainly focused on studies of cultural heritage and atmospheric aerosol composition, but also including applications to biology, geology, material science and forensics, ion implantation, tests of radiation damage to components, detector performance tests and low-energy nuclear physics. This paper describes the facilities presently available in the LABEC laboratory, their technical features and some success stories of recent applications.

    MITRA: Robust Architecture for Distributed Metadata Indexing

    In post-exascale-era storage systems, a fundamental challenge faced by the research community is efficient and scalable access to the stored information while meeting the high-performance requirements of big data applications. In this dissertation, we studied the limitations of the existing state-of-the-art architectures and proposed a system to address the challenges of scalability and high performance. Our proposed solution, called MITRA, supports several scientific formats, i.e., Hierarchical Data Format (HDF), network Common Data Form (netCDF), and Comma-Separated Values (CSV), and is composed of several software components that work together to provide high I/O throughput to user applications. The key novelty of MITRA lies in supporting a variety of file formats, generating and indexing metadata for scientific datasets, and optimizing data lookup time while providing scalability of the storage subsystem with increasing amounts of data. MITRA generates and manages indices using a relational database, which can be effectively accessed using conventional application programming interfaces (APIs). We evaluated the performance of MITRA and compared it with traditional approaches for ingestion speed, content processing, lookup time, and scalability of the generated indices. Our evaluation reveals that the rich metadata indices of MITRA improve lookup performance by reducing the search space for metadata that is not present in the indices. Moreover, MITRA outperforms the existing approaches in terms of scalability as indices grow in size, by balancing the load between available hardware resources.
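    The lookup-pruning idea described in the abstract — using metadata indices to avoid scanning files that cannot match a query — can be sketched with a toy in-memory inverted index. This stands in for MITRA's relational-database indices purely for illustration; the class and attribute names below are invented, and a real deployment would back this with SQL tables and conventional APIs as the abstract describes.

    ```java
    // Toy inverted index over dataset metadata (illustrative only, not MITRA).
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;
    import java.util.TreeSet;

    class MetadataIndex {
        // "attribute=value" -> files whose metadata contains that pair.
        private final Map<String, Set<String>> index = new HashMap<>();

        // Ingestion: record every metadata attribute of a file.
        void ingest(String file, Map<String, String> attrs) {
            for (Map.Entry<String, String> e : attrs.entrySet())
                index.computeIfAbsent(e.getKey() + "=" + e.getValue(),
                                      k -> new TreeSet<>()).add(file);
        }

        // Lookup touches only the candidate files; everything else is
        // pruned from the search space without being opened.
        Set<String> lookup(String attr, String value) {
            return index.getOrDefault(attr + "=" + value, Set.of());
        }
    }
    ```

    The same schema maps naturally onto a relational table `(attribute, value, file)` with an index on `(attribute, value)`, which is one plausible way the described relational-database backend could realise this lookup.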