5,284 research outputs found

    CONTREX: Design of embedded mixed-criticality CONTRol systems under consideration of EXtra-functional properties

    The increasing processing power of today’s HW/SW platforms leads to the integration of more and more functions in a single device. Additional design challenges arise when these functions share computing resources and belong to different criticality levels. CONTREX complements current activities in the area of predictable computing platforms and segregation mechanisms with techniques to consider extra-functional properties, i.e., timing constraints, power, and temperature. CONTREX enables energy-efficient and cost-aware design through analysis and optimization of these properties with regard to application demands at different criticality levels. This article presents an overview of the CONTREX European project, its main innovative technology (extension of a model-based design approach, functional and extra-functional analysis with executable models, and run-time management) and the final results of three industrial use cases from different domains (avionics, automotive and telecommunication). The work leading to these results has received funding from the European Community’s Seventh Framework Programme FP7/2007-2011 under grant agreement no. 611146.

    Everybody Needs Somebody Sometimes: Validation of Adaptive Recovery in Robotic Space Operations

    This work assesses an adaptive approach to fault recovery in autonomous robotic space operations, which uses indicators of opportunity, such as physiological state measurements and observations of past human assistant performance, to inform future assistant selections. We validated our reinforcement learning approach using data we collected from humans executing simulated mission scenarios. We present a method of structuring human-factors experiments that permits collection of relevant indicator-of-opportunity and assigned assistance task performance data, as well as evaluation of our adaptive approach, without requiring large numbers of test subjects. Application of our reinforcement learning algorithm to our experimental data shows that our adaptive assistant selection approach can achieve lower cumulative regret than existing non-adaptive baseline approaches when using real human data. Our work has applications beyond space robotics to any domain in which autonomy failures requiring external intervention may occur.
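    As a rough illustration of the regret framing only (not the paper's algorithm or data), the sketch below treats assistant selection as a stochastic multi-armed bandit solved with UCB1 and tracks cumulative pseudo-regret against the best assistant in hindsight; the success rates, horizon and function names are invented.

```python
# Minimal, self-contained sketch (not the paper's algorithm): assistant selection
# framed as a multi-armed bandit, with cumulative pseudo-regret tracked against
# the best assistant in hindsight. All names and reward values are hypothetical.
import math
import random

def ucb1_select(counts, means, t):
    """Pick the assistant with the highest UCB1 index."""
    for a, n in enumerate(counts):
        if n == 0:
            return a  # try each assistant at least once
    return max(range(len(counts)),
               key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))

def run(true_success_rates, horizon=1000, seed=0):
    rng = random.Random(seed)
    k = len(true_success_rates)
    counts, means = [0] * k, [0.0] * k
    regret, best = 0.0, max(true_success_rates)
    for t in range(1, horizon + 1):
        a = ucb1_select(counts, means, t)
        reward = 1.0 if rng.random() < true_success_rates[a] else 0.0
        counts[a] += 1
        means[a] += (reward - means[a]) / counts[a]   # incremental mean update
        regret += best - true_success_rates[a]        # cumulative pseudo-regret
    return regret

print(run([0.55, 0.70, 0.40]))  # lower is better; always picking arm 0 would accrue ~150
```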

    A high-level power model for MPSoC on FPGA

    FLECSim-SoC: A Flexible End-to-End Co-Design Simulation Framework for System on Chips

    Hardware accelerators for deep neural networks (DNNs) have established themselves over the past decade. Most developments have worked towards higher efficiency with an individual application in mind, which highlights how tightly the accelerator must be co-designed with the requirements of the application. A structured design flow, however, currently lacks a tool to evaluate a DNN accelerator embedded in a System on Chip (SoC) platform. To address this gap in the state of the art, we introduce FLECSim, a tool framework that enables an end-to-end simulation of an SoC with dedicated accelerators, CPUs and memories. FLECSim offers flexible configuration of the system and straightforward integration of new accelerator models in both SystemC and RTL, which allows for early design verification. During the simulation, FLECSim provides metrics of the SoC, which can be used to explore the design space. Finally, we present the capabilities of FLECSim, perform an exemplary evaluation with a systolic-array-based accelerator and explore the design parameters in terms of accelerator size, power and performance.
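    The kind of size/power/performance sweep such a framework enables can be illustrated with a purely analytical toy model; the sketch below is not the FLECSim API, and every constant (MAC count, clock frequency, energy per MAC, static power) is a made-up assumption.

```python
# Illustrative design-space sweep only -- NOT the FLECSim API, just a toy
# analytical model of how systolic-array size might trade off against runtime
# and power for one GEMM-like layer. All constants are invented.
def estimate(array_dim, macs=512 * 512 * 512, freq_hz=500e6,
             pj_per_mac=0.5, static_w=0.2):
    macs_per_cycle = array_dim * array_dim          # ideal systolic utilisation
    cycles = macs / macs_per_cycle
    runtime_s = cycles / freq_hz
    dynamic_w = macs * pj_per_mac * 1e-12 / runtime_s
    return runtime_s, dynamic_w + static_w

for dim in (8, 16, 32, 64):
    t, p = estimate(dim)
    print(f"{dim:>2}x{dim:<2}  runtime={t*1e3:7.2f} ms  power={p:5.2f} W")
```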

    Generating high-performance arithmetic operators for FPGAs

    This article addresses the development of complex, heavily parameterized and flexible operators to be used in FPGA-based floating-point accelerators. Languages such as VHDL or Verilog are not ideally suited for this task. The main difficulty is automating tasks such as parameter-directed or target-directed architectural optimization, pipeline optimization, and the generation of relevant test benches. This article introduces FloPoCo, an open, object-oriented software framework designed to address these issues. Written in C++, it takes as input an operator specification, a target FPGA and an objective frequency, and outputs synthesisable VHDL fine-tuned for this FPGA at this frequency. Its design choices are discussed and validated on various operators.
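    The core idea of frequency-directed pipelining can be sketched independently of FloPoCo's actual C++/VHDL implementation: given per-block combinational delays and a target frequency, decide where pipeline registers must be inserted so that no stage exceeds the clock period. The function below and its delay values are hypothetical.

```python
# Toy sketch of frequency-directed pipelining, the idea behind generators such as
# FloPoCo (whose real implementation is C++ emitting VHDL -- this is not its API).
# Given per-block combinational delays and a target frequency, decide where to
# insert pipeline registers so no stage exceeds the target clock period.
def place_registers(delays_ns, target_mhz, register_delay_ns=0.1):
    period = 1000.0 / target_mhz          # target clock period in ns
    stages, current, acc = [], [], register_delay_ns
    for i, d in enumerate(delays_ns):
        if acc + d > period and current:  # would violate timing: cut here
            stages.append(current)
            current, acc = [], register_delay_ns
        current.append(i)
        acc += d
    stages.append(current)
    return stages                         # list of block indices per pipeline stage

# e.g. made-up delays of the sub-blocks of a floating-point adder datapath:
print(place_registers([1.2, 0.8, 2.1, 0.9, 1.5, 0.6], target_mhz=400))
```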

    Digital LDO modelling techniques for performance estimation at early design stage

    This work studies the transient responses and steady-state ripples of digital low-dropout (LDO) voltage regulators. Simulation models as well as closed-form expressions are provided for estimating the LDO output settling behaviour after load-current or reference-voltage changes. Estimation equations for the magnitude and frequency of the LDO output steady-state ripple are also presented. The accuracy of the developed models is verified by comparing the estimates with results obtained from circuit simulations. The use of the developed estimation equations in design space exploration is also demonstrated.
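    For intuition only, the sketch below uses textbook first-order approximations for a shift-register-controlled digital LDO (ripple set by one unit-current step integrated on the output capacitor per clock, settling set by how many cycles the controller needs to absorb a load step); these are not the closed-form expressions developed in the paper, and all component values are assumptions.

```python
# First-order, back-of-the-envelope estimates for a shift-register (bang-bang)
# digital LDO -- illustrative only, not the paper's closed-form expressions.
def dldo_estimates(i_unit_a, c_out_f, f_clk_hz, delta_i_load_a):
    # One unit of current error integrated on C_out for one clock ~ ripple amplitude
    ripple_v = i_unit_a / (c_out_f * f_clk_hz)
    # The controller switches one unit per cycle, so a load step of delta_i
    # needs roughly delta_i / i_unit cycles to be absorbed
    settle_cycles = delta_i_load_a / i_unit_a
    settle_s = settle_cycles / f_clk_hz
    return ripple_v, settle_s

ripple, settle = dldo_estimates(i_unit_a=50e-6, c_out_f=1e-9,
                                f_clk_hz=100e6, delta_i_load_a=2e-3)
print(f"ripple ~ {ripple*1e3:.2f} mV, settling ~ {settle*1e9:.0f} ns")
```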

    Bayesian online learning for energy-aware resource orchestration in virtualized RANs

    Proceedings of: IEEE International Conference on Computer Communications, 10-13 May 2021, Vancouver, BC, Canada. Radio Access Network Virtualization (vRAN) will spearhead the quest towards supple radio stacks that adapt to heterogeneous infrastructure: from energy-constrained platforms deploying cells-on-wheels (e.g., drones) or battery-powered cells to green edge clouds. We perform an in-depth experimental analysis of the energy consumption of virtualized Base Stations (vBSs) and draw two conclusions: (i) characterizing performance and power consumption is intricate, as it depends on factors driven by human behavior such as network load or user mobility; and (ii) there are many control policies, and some of them have non-linear and monotonic relations with power and throughput. Driven by our experimental insights, we argue that machine learning holds the key for vBS control. We formulate two problems and two algorithms: (i) BP-vRAN, which uses Bayesian online learning to balance performance and energy consumption, and (ii) SBP-vRAN, which augments our Bayesian optimization approach with safe controls that maximize performance while respecting hard power constraints. We show that our approaches are data-efficient and have provable performance, which is paramount for carrier-grade vRANs. We demonstrate the convergence and flexibility of our approach and assess its performance using an experimental prototype. This work was supported by the European Commission through Grant No. 856709 (5Growth) and Grant No. 101017109 (DAEMON), and by SFI through Grant No. SFI 17/CDA/4760.
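    In the spirit of the Bayesian-online-learning formulation, but without reproducing BP-vRAN or SBP-vRAN, the sketch below runs a GP-UCB loop over a single hypothetical control knob (an "airtime cap") to maximise throughput minus weighted power; the measurement model, kernel choice and all constants are assumptions.

```python
# Toy Bayesian-online-learning loop, loosely inspired by (but not identical to)
# BP-vRAN: pick a vBS control knob, here a hypothetical "airtime cap", to
# maximise reward = throughput - w * power, using a GP surrogate and UCB.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
caps = np.linspace(0.1, 1.0, 19).reshape(-1, 1)     # candidate control values

def measure_reward(cap, w=0.5):
    # Stand-in for a real vBS measurement: made-up throughput/power curves + noise
    throughput = 10 * cap - 4 * cap ** 2
    power = 2 + 6 * cap ** 3
    return float(throughput - w * power + rng.normal(0, 0.1))

X, y = [], []
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for t in range(30):
    if len(X) < 3:                                  # a few random probes first
        x = caps[rng.integers(len(caps))]
    else:
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(caps, return_std=True)
        x = caps[np.argmax(mu + 2.0 * sigma)]       # UCB acquisition
    X.append(x)
    y.append(measure_reward(float(x[0])))
print("best cap found:", float(X[int(np.argmax(y))][0]))
```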

    Energy and performance-aware application mapping for inhomogeneous 3D networks-on-chip

    Three-dimensional Networks-on-Chip (3D NoCs) have evolved as an ideal solution to the communication demands and complexity of future high-density many-core architectures. However, the design practicality of 3D NoCs faces several challenges, such as thermal issues, the high power consumption and area overhead of 3D routers, and the high complexity and cost of vertical link implementation. To mitigate the manufacturing cost and overheads of 3D NoCs, inhomogeneous architectures have emerged that combine 2D and 3D routers, yielding lower area and energy consumption while maintaining the performance of homogeneous 3D NoCs. Due to the limited number of vertical links, application mapping on inhomogeneous 3D NoCs can be complex, yet mapping has a great impact on the performance and energy consumption of NoCs. This paper presents an energy- and performance-aware application mapping algorithm for inhomogeneous 3D NoCs. The algorithm has been evaluated with various realistic traffic patterns and compared with existing mapping algorithms. Experimental results show that NoCs mapped with the proposed algorithm have lower energy consumption and significantly reduced packet delays compared to the existing algorithms, and an average packet latency comparable to Branch-and-Bound.
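    To make the mapping problem concrete, the sketch below shows a simple greedy heuristic (not the paper's algorithm): tasks with the heaviest communication are placed first, each on the free core that minimises hop-weighted traffic to already-placed neighbours. The mesh, traffic volumes and distance model are invented, and a real inhomogeneous 3D NoC model would additionally route vertical traffic through the limited set of routers that have vertical links.

```python
# Simple greedy mapping heuristic (not the paper's algorithm): place tasks with
# the heaviest communication first, each on the free core that minimises
# hop-distance-weighted traffic to already-placed neighbours.
from itertools import product

def hops(a, b):
    # Manhattan distance in (x, y, z); an inhomogeneous 3D NoC model would also
    # detour z-traffic through the nearest router that has a vertical link.
    return sum(abs(p - q) for p, q in zip(a, b))

def greedy_map(traffic, cores):
    """traffic: {(task1, task2): volume}; cores: list of (x, y, z) coordinates."""
    tasks = {t for edge in traffic for t in edge}
    order = sorted(tasks, key=lambda t: -sum(v for e, v in traffic.items() if t in e))
    placement, free = {}, list(cores)
    for t in order:                                   # heaviest communicators first
        def cost(c):
            return sum(v * hops(c, placement[u])
                       for (a, b), v in traffic.items() if t in (a, b)
                       for u in (a, b) if u != t and u in placement)
        best = min(free, key=cost)
        placement[t] = best
        free.remove(best)
    return placement

cores = list(product(range(2), range(2), range(2)))   # 2x2x2 mesh
traffic = {("A", "B"): 10, ("B", "C"): 4, ("A", "C"): 2, ("C", "D"): 1}
print(greedy_map(traffic, cores))
```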

    ALOJA: A framework for benchmarking and predictive analytics in Hadoop deployments

    This article presents the ALOJA project and its analytics tools, which leverage machine learning to interpret Big Data benchmark performance data and guide tuning. ALOJA is part of a long-term collaboration between BSC and Microsoft to automate the characterization of cost-effectiveness of Big Data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment, where costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository, featuring over 40,000 Hadoop job executions and their performance details. The repository is accompanied by a test-bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters and Cloud services. Despite early success within ALOJA, a comprehensive study requires automation of the modeling procedures to allow analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system for knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behaviors, predicting execution times for new configurations and hardware choices, which also enables model-based anomaly detection and efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from the ALOJA data-sets and framework to improve the design and deployment of Big Data applications. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 639595). This work is partially supported by the Ministry of Economy of Spain under contracts TIN2012-34557 and 2014SGR1051.
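    The modelling step can be illustrated with a generic regression workflow; the sketch below trains on synthetic configuration features rather than the real ALOJA repository and does not reproduce ALOJA-ML's actual model pipeline.

```python
# Illustrative only: a regression model predicting job execution time from
# configuration features, in the spirit of ALOJA-ML (synthetic data, not the
# real ALOJA repository, and not its actual model pipeline).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.integers(2, 33, n),         # number of mappers
    rng.integers(1, 17, n),         # number of reducers
    rng.choice([0, 1], n),          # compression on/off
    rng.choice([128, 256, 512], n)  # HDFS block size (MB)
])
# Made-up ground-truth cost surface plus noise
y = 3000 / X[:, 0] + 800 / X[:, 1] + 120 * (1 - X[:, 2]) \
    + 0.2 * X[:, 3] + rng.normal(0, 20, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out configs:", round(model.score(X_te, y_te), 3))
```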

    Margin allocation and trade-off in complex systems design and optimization

    Presented is an approach for interactive margin management. Existing methods enable a fixed set of allowable margin combinations to be identified, but they have limitations in supporting interactive exploration of the effects of: 1) margins on other margins, 2) margins on performance, and 3) margins on the probabilities of constraint satisfaction. To this end, the concept of a margin space is introduced. It is bi-directionally linked to the design space, enabling the designer to understand how assigning margins to certain parameters limits the allowable margins that can be assigned to other parameters. A novel framework has also been developed. It incorporates the margin space concept as well as enablers, including interactive visualization techniques, which can aid the designer in dynamically exploring the margin and design spaces, as well as the effects of margins on the probability of constraint satisfaction and on performance. The framework was implemented in a prototype software tool, AirCADia, which was used for a qualitative evaluation by practicing designers. The evaluation, conducted as part of the EU TOICA project, demonstrated the usefulness of the approach.
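    A minimal illustration of the link between allocated margins and the probability of constraint satisfaction (not the AirCADia implementation or the paper's framework) is a Monte Carlo estimate over two uncertain parameters; all distributions, budgets and numbers below are invented.

```python
# Toy margin-space exploration (not AirCADia): estimate how the probability of
# satisfying a total-mass constraint changes as margins are allocated to two
# uncertain components. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

def p_satisfied(margin_a, margin_b, n=20_000, budget=100.0):
    # Each component is designed "margin" below its allocation; the realised
    # value scatters around that design point.
    a = (65.0 - margin_a) + rng.normal(0, 3.0, n)
    b = (38.0 - margin_b) + rng.normal(0, 2.0, n)
    return float(np.mean(a + b <= budget))

for ma in (0.0, 2.0, 4.0):
    row = [f"{p_satisfied(ma, mb):.2f}" for mb in (0.0, 2.0, 4.0)]
    print(f"margin_a={ma:3.1f} ->", row)
```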