
    Refactoring Software in the Automotive Domain for Execution on Heterogeneous Platforms

    The most important way to achieve higher performance in computer systems is through heterogeneous computing, i.e., by adopting hardware platforms containing more than one type of processor, such as CPUs, GPUs, and FPGAs. Several types of algorithms can be executed significantly faster on a heterogeneous platform. However, migrating CPU-executable software to other types of execution platforms poses a number of challenges to software engineering. Significant effort is required in this type of migration, particularly for re-architecting and re-implementing the software. Further, optimizing it in terms of performance and other runtime properties can be very challenging, making the process complex, expensive, and error-prone. Therefore, a systematic approach based on explicit and justified architectural decisions is needed for a successful refactoring process from a homogeneous to a heterogeneous platform. In this paper, we propose a decision framework that supports engineers when refactoring software systems to accommodate heterogeneous platforms. It includes the assessment of important factors in order to minimize the risk of recurrent problems in the process. Through a set of questions, practitioners are able to formulate answers that will help in making appropriate architectural decisions to accommodate heterogeneous platforms. The contents of the framework have been developed and evolved based on discussions with architects and developers in the automotive domain.
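
    As an illustration of the kind of refactoring this paper targets (not an example taken from it), the sketch below moves a simple, hypothetical image-thresholding loop from CPU-only C++ to an OpenMP target offload that can run on a GPU, assuming an OpenMP-offload-capable compiler.

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Original CPU version of a simple image-thresholding loop (hypothetical kernel).
        void threshold_cpu(std::vector<std::uint8_t>& img, std::uint8_t limit) {
            for (std::size_t i = 0; i < img.size(); ++i)
                img[i] = img[i] > limit ? 255 : 0;
        }

        // Refactored version: the same loop offloaded to an accelerator via OpenMP.
        // Without an offload-capable compiler the pragma is ignored and the loop
        // still runs correctly on the CPU.
        void threshold_offload(std::vector<std::uint8_t>& img, std::uint8_t limit) {
            std::uint8_t* data = img.data();
            std::size_t n = img.size();
            #pragma omp target teams distribute parallel for map(tofrom: data[0:n])
            for (std::size_t i = 0; i < n; ++i)
                data[i] = data[i] > limit ? 255 : 0;
        }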

    HPM-Frame: A Decision Framework for Executing Software on Heterogeneous Platforms

    Heterogeneous computing is one of the most important computational solutions to meet rapidly increasing demands on system performance. It typically allows the main flow of applications to be executed on a CPU while the most computationally intensive tasks are assigned to one or more accelerators, such as GPUs and FPGAs. The refactoring of systems for execution on such platforms is highly desired but also difficult to perform, mainly due to the inherent increase in software complexity. After exploration, we have identified a current need for a systematic approach that supports engineers in the refactoring process -- from CPU-centric applications to software that is executed on heterogeneous platforms. In this paper, we introduce a decision framework that assists engineers in the task of refactoring software to incorporate heterogeneous platforms. It covers the software engineering lifecycle through five steps, each consisting of questions to be answered in order to successfully address the aspects relevant to the refactoring procedure. We evaluate the feasibility of the framework in two ways. First, we capture practitioners' impressions, concerns, and suggestions through a questionnaire. Then, we conduct a case study showing the step-by-step application of the framework using a computer vision application in the automotive domain. Comment: Manuscript submitted to the Journal of Systems and Software.
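
    To illustrate the execution pattern the abstract describes (main flow on the CPU, compute-intensive work on an accelerator), here is a minimal, hypothetical C++/OpenMP sketch; the function names and the placeholder kernel are assumptions, not part of HPM-Frame itself.

        #include <cstddef>
        #include <vector>

        static void preprocess_on_cpu(std::vector<float>& frame) {
            for (float& v : frame) v *= 0.5f;            // trivial placeholder
        }

        static void other_cpu_work() { /* e.g. bookkeeping, logging */ }

        static void postprocess_on_cpu(const std::vector<float>& features) {
            (void)features;                              // trivial placeholder
        }

        void process_frame(std::vector<float>& frame, std::vector<float>& features) {
            preprocess_on_cpu(frame);                    // main flow stays on the CPU
            features.resize(frame.size());

            float* in = frame.data();
            float* out = features.data();
            std::size_t n = frame.size();

            // Compute-intensive stage offloaded to an accelerator (e.g. a GPU),
            // started asynchronously so the CPU can keep working in the meantime.
            #pragma omp target teams distribute parallel for \
                    map(to: in[0:n]) map(from: out[0:n]) nowait
            for (std::size_t i = 0; i < n; ++i)
                out[i] = in[i] * in[i];                  // stand-in for the heavy kernel

            other_cpu_work();                            // overlaps with the offload
            #pragma omp taskwait                         // rejoin before using the results
            postprocess_on_cpu(features);
        }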

    A Process-oriented Approach for Migrating Software to Heterogeneous Platforms

    Context: Heterogeneous computing, i.e., computing performed on processors of different types, such as a combination of CPUs and GPUs, or CPUs and FPGAs, has been shown to be a feasible path towards higher performance and lower energy consumption. However, this approach imposes a number of challenges on the software side that must be addressed in order to achieve the aforementioned advantages. Objective: The objective of this thesis is to improve the process of software deployment on heterogeneous platforms. Through a detailed analysis of the state-of-the-art and state-of-the-practice, we aim to provide a reasoning framework for engineers to migrate software to be executed on such platforms. Method: To achieve our goal, we conducted: (i) a literature review in the form of a systematic mapping study on software deployment on heterogeneous platforms; (ii) a multiple case study in industry that highlights the main challenges and concerns in the state-of-the-practice in the area; and (iii) a study in which we propose and evaluate a decision framework to guide engineers in migrating software for execution on heterogeneous platforms, with a case study in the automotive domain. Results: In the mapping study, we provided a thorough classification of the identified concerns and approaches to deploying software on heterogeneous platforms. Among other findings, we discovered a lack of holistic approaches that include development processes, as well as few validation studies in industrial contexts. In the second study, we discovered and analyzed common practices and challenges that companies face when using heterogeneous platforms. One such challenge is the lack of approaches that cover the software development lifecycle. In the third study, we proposed a decision framework that guides engineers in reasoning about migrating software for execution on heterogeneous platforms. It consists of five stages (assessing, re-architecting, developing, deploying, evaluating), each containing a set of aspects to be addressed through the answers to predefined questions. Conclusions: This thesis addresses a gap identified in both theory and practice concerning the lack of holistic approaches to migrating software for execution on heterogeneous platforms. Our proposed approach addresses the problem through systematic guidance for engineers. Future work: In the future, we intend to further refine the proposed framework through case studies in domains other than automotive. We will explore its integration with existing software engineering processes in industrial contexts, performing in-depth analysis of the required adaptations and providing detailed solutions within the stages of the framework.
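
    As a rough illustration of the framework's shape described above (five stages, each with guiding questions), the following C++ sketch encodes the stages and example questions as plain data; the questions shown are invented placeholders, not the framework's actual contents.

        #include <iostream>
        #include <string>
        #include <vector>

        struct Stage {
            std::string name;
            std::vector<std::string> questions;
        };

        int main() {
            // Placeholder questions; the thesis defines the framework's actual questions.
            std::vector<Stage> framework = {
                {"Assessing",       {"Which parts of the system dominate execution time?"}},
                {"Re-architecting", {"Which components map naturally onto an accelerator?"}},
                {"Developing",      {"Which programming model fits the target platform?"}},
                {"Deploying",       {"How are CPU and accelerator artifacts packaged and configured?"}},
                {"Evaluating",      {"Do performance and other runtime properties meet the requirements?"}},
            };
            for (const Stage& stage : framework) {
                std::cout << stage.name << '\n';
                for (const std::string& q : stage.questions)
                    std::cout << "  - " << q << '\n';
            }
        }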

    Bao: A Lightweight Static Partitioning Hypervisor for Modern Multi-Core Embedded Systems


    Design Criteria to Architect Continuous Experimentation for Self-Driving Vehicles

    The software powering today's vehicles surpasses mechatronics as the dominant engineering challenge due to its fast-evolving and innovative nature. In addition, the software and system architecture for upcoming vehicles with automated driving functionality is already processing ~750 MB/s, corresponding to over 180 simultaneous 4K video streams from popular video-on-demand services. Hence, self-driving cars will run so much software that they resemble "small data centers on wheels" rather than just transportation vehicles. Continuous Integration, Deployment, and Experimentation have been successfully adopted for software-only products as an enabling methodology for feedback-based software development. For example, a popular search engine conducts ~250 experiments each day to improve its software based on its users' behavior. This work investigates design criteria for the software architecture and the corresponding software development and deployment process for complex cyber-physical systems, with the goal of enabling Continuous Experimentation as a way to achieve continuous software evolution. Our research involved reviewing related literature on the topic to extract relevant design requirements. The study is concluded by describing the software development and deployment process and software architecture adopted by our self-driving vehicle laboratory, both based on the extracted criteria. Comment: Copyright 2017 IEEE. Paper submitted to and accepted at the 2017 IEEE International Conference on Software Architecture. 8 pages, 2 figures. Published in IEEE Xplore Digital Library, URL: http://ieeexplore.ieee.org/abstract/document/7930218
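
    As a loose illustration of the Continuous Experimentation idea described above (not taken from the paper), the sketch below assigns a vehicle to an experiment arm from its ID, toggles a single tunable parameter between two variants, and emits a telemetry record that could feed the experiment's evaluation; all names and values are hypothetical.

        #include <cstdint>
        #include <iostream>

        enum class Variant { Control, Treatment };

        // Deterministic assignment of a vehicle to an experiment arm (10% rollout).
        Variant assign_variant(std::uint64_t vehicle_id) {
            return (vehicle_id % 100 < 10) ? Variant::Treatment : Variant::Control;
        }

        int main() {
            std::uint64_t vehicle_id = 4711;             // hypothetical vehicle ID
            Variant v = assign_variant(vehicle_id);

            // The experiment toggles a single tunable parameter between two variants.
            double detection_threshold = (v == Variant::Treatment) ? 0.60 : 0.70;

            // Telemetry record that would later feed the experiment's evaluation.
            std::cout << "vehicle=" << vehicle_id
                      << " variant=" << (v == Variant::Treatment ? "treatment" : "control")
                      << " detection_threshold=" << detection_threshold << '\n';
        }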

    Parallel Optimisations of Perceived Quality Simulations

    Processor architectures have changed significantly, with fast single-core processors replaced by a diverse range of multicore processors. New architectures require code to be executed in parallel to realize these performance gains. This is straightforward for small applications, where analysis and refactoring are simple or existing tools can parallelise automatically; however, the organic growth and large, complicated data structures common in mature industrial applications can make parallelisation more difficult. One such application is studied: a mature Windows C++ application used for the visualisation of Perceived Quality (PQ). PQ simulations enable the visualisation of how manufacturing variations affect the look of the final product. The application is commonly used but suffers from performance issues, and previous parallelisation attempts have failed. The issues associated with parallelising a mature industrial application are investigated. A methodology to investigate, analyse, and evaluate the available methods and tools is produced. The shortfalls of these methods and tools are identified, and the methods used to overcome them explained. The parallel version of the software is evaluated for performance. Case studies centring on the significant use cases of the application help to understand the impact on the user. Automatically parallelising compilers provided no parallelism, while manual parallelisation using OpenMP required significant refactoring. A number of data dependency issues resulted in some serialised code. Performance scaled with the number of physical cores for certain problems; however, the unresolved bottlenecks led to mixed results for users. Use in verification did benefit, but use in early design stages did not. Without tools to aid the analysis of complex data structures, parallelism could remain out of reach for industrial applications. Methods used successfully here, such as serialisation and code isolation, could be used effectively by such tools.
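
    To illustrate the kind of data-dependency handling mentioned above (not the application's actual code), the following C++/OpenMP sketch replaces a shared "worst result so far" update, which would otherwise serialise the loop, with a max reduction; the simulated computation is a hypothetical stand-in for the PQ simulation.

        #include <cmath>
        #include <vector>

        // Stand-in for one PQ simulation step (not the application's real computation).
        static double simulate_panel_gap(int panel, double variation) {
            return std::fabs(std::sin(panel * variation));
        }

        // The shared "worst gap" would be a data dependency in a naive parallel loop;
        // a max reduction gives each thread a private copy that is combined at the end.
        double worst_gap(const std::vector<double>& variations) {
            double worst = 0.0;
            #pragma omp parallel for reduction(max : worst)
            for (int i = 0; i < static_cast<int>(variations.size()); ++i) {
                double gap = simulate_panel_gap(i, variations[i]);
                if (gap > worst) worst = gap;
            }
            return worst;
        }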

    08331 Abstracts Collection -- Perspectives Workshop: Model Engineering of Complex Systems (MECS)

    From 10.08.2008 to 13.08.2008, the Dagstuhl Seminar 08331 "Perspectives Workshop: Model Engineering of Complex Systems (MECS)" was held at the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Evaluation of the parallel computational capabilities of embedded platforms for critical systems

    Modern critical systems need higher performance, which cannot be delivered by the simple architectures used so far. The latest embedded architectures feature multi-cores and GPUs, which can be used to satisfy this need. In this thesis we parallelise relevant applications from multiple critical domains represented in the GPU4S benchmark suite, and perform a comparison of the parallel capabilities of candidate platforms for use in critical systems. In particular, we port the open-source GPU4S Bench benchmarking suite to the OpenMP programming model, and we benchmark the candidate embedded heterogeneous multi-core platforms of the H2020 UP2DATE project, NVIDIA TX2, NVIDIA Xavier and Xilinx Zynq Ultrascale+, in order to drive the selection of the research platform which will be used in the next phases of the project. Our results indicate that in terms of CPU and GPU performance, the NVIDIA Xavier is the highest-performing platform.
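
    As an illustration of what such a port to OpenMP can look like (not the suite's actual code), here is a minimal C++/OpenMP version of a naive matrix-multiplication kernel, used here as a generic benchmark building block, parallelised across the CPU cores of an embedded multi-core platform.

        #include <vector>

        // Naive O(n^3) matrix multiplication, C = A * B (row-major), with the two
        // outer loops collapsed and distributed across the available CPU cores.
        void matmul_omp(const std::vector<float>& A, const std::vector<float>& B,
                        std::vector<float>& C, int n) {
            #pragma omp parallel for collapse(2)
            for (int i = 0; i < n; ++i) {
                for (int j = 0; j < n; ++j) {
                    float acc = 0.0f;
                    for (int k = 0; k < n; ++k)
                        acc += A[i * n + k] * B[k * n + j];
                    C[i * n + j] = acc;
                }
            }
        }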