    Performance and Energy Trade-offs Analysis of L2 on-Chip Cache Architectures for Embedded MPSoCs

    On-chip memory organization is one of the most important aspects influencing overall system behavior in multi-processor systems. Following the trend set by high-performance processors, high-end embedded cores are moving from single-level on-chip caches to a two-level on-chip cache hierarchy. Whereas there is general consensus in the embedded world on private L1 caches, for L2 there is still no dominant architectural paradigm. Cache architectures that work for high-performance computers turn out to be inefficient for embedded systems, mainly due to power-efficiency issues. This paper presents a virtual platform for design space exploration of L2 cache architectures in low-power Multi-Processor Systems-on-Chip (MPSoCs). The tool contains several L2 cache templates, and new architectures can easily be added through our flexible plugin system. Given a set of constraints for a specific system (power, area, performance), the tool performs an extensive exploration to find the cache organization that best suits the system's needs. Through practical experiments, we show how it is possible to select the optimal L2 cache and how this kind of tool can help designers avoid some common misconceptions. Benchmarking results in the experiments section show that, for a case study with multiple processors running communicating tasks allocated on different cores, the private L2 cache organization still performs better than the shared one.
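
    Below is a minimal sketch of the kind of constraint-driven selection loop such an exploration tool enables; the configuration fields, cost figures and scoring rule are hypothetical illustrations, not the paper's actual platform.

        // Hypothetical sketch of constraint-driven L2 cache selection.
        // Configuration fields and cost numbers are illustrative only.
        #include <iostream>
        #include <optional>
        #include <string>
        #include <vector>

        struct L2Config {
            std::string organization;  // "private" or "shared"
            int size_kb;
            double power_mw;           // assumed simulation output
            double area_mm2;           // assumed simulation output
            double exec_time_ms;       // assumed simulation output
        };

        struct Constraints {
            double max_power_mw;
            double max_area_mm2;
        };

        // Pick the feasible configuration with the lowest execution time.
        std::optional<L2Config> explore(const std::vector<L2Config>& space,
                                        const Constraints& c) {
            std::optional<L2Config> best;
            for (const auto& cfg : space) {
                if (cfg.power_mw > c.max_power_mw || cfg.area_mm2 > c.max_area_mm2)
                    continue;  // violates the power/area budget
                if (!best || cfg.exec_time_ms < best->exec_time_ms)
                    best = cfg;
            }
            return best;
        }

        int main() {
            // Candidate organizations with made-up simulation results.
            std::vector<L2Config> space = {
                {"private", 256, 120.0, 1.8, 9.5},
                {"shared",  512, 150.0, 2.1, 10.2},
            };
            Constraints c{200.0, 2.5};
            if (auto best = explore(space, c))
                std::cout << "selected: " << best->organization << " L2, "
                          << best->size_kb << " KB\n";
        }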

    Simulation of High-Performance Memory Allocators

    Current general-purpose memory allocators do not provide sufficient speed or flexibility for modern high-performance applications. To optimize metrics like performance, memory usage and energy consumption, software engineers often write custom allocators from scratch, which is a difficult and error-prone process. In this paper, we present a flexible and efficient simulator to study Dynamic Memory Managers (DMMs), a composition of one or more memory allocators. This novel approach allows programmers to simulate custom and general DMMs, which can be composed without incurring any additional runtime overhead or programming cost. We show that this infrastructure simplifies DMM construction, mainly because the target application does not need to be recompiled every time a new DMM must be evaluated. Within a search procedure, the system designer can choose the "best" allocator by simulation for a particular target application. In our evaluation, we show that our scheme delivers better performance, lower memory usage and lower energy consumption than a single memory allocator.
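
    The following sketch illustrates the idea of composing allocators behind a common interface and replaying an allocation trace against the resulting DMM without recompiling the target application; the interface, dispatch policy and metrics are assumptions made for illustration and do not reproduce the simulator itself.

        // Hypothetical sketch of composing allocators into a DMM and replaying a
        // trace against it; the interface and metrics are illustrative only.
        #include <cstddef>
        #include <iostream>
        #include <memory>
        #include <vector>

        // Common interface every simulated allocator implements.
        struct Allocator {
            virtual ~Allocator() = default;
            virtual std::size_t allocate(std::size_t bytes) = 0;  // bytes reserved
        };

        // Example component: a segregated-fit-style allocator that rounds
        // requests up to a fixed block size.
        struct FixedBlockAllocator : Allocator {
            std::size_t block;
            explicit FixedBlockAllocator(std::size_t b) : block(b) {}
            std::size_t allocate(std::size_t bytes) override {
                return ((bytes + block - 1) / block) * block;
            }
        };

        // A DMM composed of two allocators: small requests go to the first,
        // the rest to the second (a toy dispatch policy).
        struct DMM {
            std::unique_ptr<Allocator> small, large;
            std::size_t threshold;
            std::size_t used = 0;  // accumulated simulated memory usage
            void allocate(std::size_t bytes) {
                used += (bytes <= threshold ? small : large)->allocate(bytes);
            }
        };

        int main() {
            DMM dmm{std::make_unique<FixedBlockAllocator>(16),
                    std::make_unique<FixedBlockAllocator>(256), 128};
            // Replay a recorded allocation trace without recompiling the app.
            std::vector<std::size_t> trace = {8, 24, 200, 512, 40};
            for (std::size_t req : trace) dmm.allocate(req);
            std::cout << "simulated memory usage: " << dmm.used << " bytes\n";
        }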

    A Study of Efficiency Improvement in Test Automation for Electronic Invoicing Software Solutions

    Testing is the process of experimenting with or evaluating a system, either manually or automatically, to verify that it meets specified requirements or to identify differences between expected and observed results. Software testing, in turn, covers the dynamically performed verification activities that check the expected behavior of software, drawn from an effectively unbounded space of possible behaviors, using a limited number of appropriately selected tests. Test automation is an important step of Continuous Deployment: by following the test-early, test-often principle it reduces bug-fixing costs, and automation projects also increase quality and reliability. However, the automated tests themselves must run quickly enough to keep pace with development. In this project, I aim to improve testing efficiency and methodology. During my internship, my supervisor and team leader expected me to improve the tests: to find ways to make them run faster, or to identify what makes them slow and how those parts can be removed or isolated.
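
    As a rough illustration of the "find what makes the tests slow" step, the sketch below times each automated test and reports the slowest first; the test names, bodies and durations are placeholders, not the actual invoicing test suite or framework.

        // Hypothetical sketch of per-test timing to locate slow automated tests.
        #include <algorithm>
        #include <chrono>
        #include <functional>
        #include <iostream>
        #include <string>
        #include <thread>
        #include <vector>

        struct TimedTest { std::string name; std::function<void()> body; double ms = 0; };

        int main() {
            // Placeholder tests; real ones would exercise the software under test.
            std::vector<TimedTest> tests = {
                {"parse_invoice",  [] { std::this_thread::sleep_for(std::chrono::milliseconds(5)); }},
                {"submit_invoice", [] { std::this_thread::sleep_for(std::chrono::milliseconds(40)); }},
            };
            for (auto& t : tests) {
                auto start = std::chrono::steady_clock::now();
                t.body();
                t.ms = std::chrono::duration<double, std::milli>(
                           std::chrono::steady_clock::now() - start).count();
            }
            // Report the slowest tests first so they can be sped up or isolated.
            std::sort(tests.begin(), tests.end(),
                      [](const TimedTest& a, const TimedTest& b) { return a.ms > b.ms; });
            for (const auto& t : tests)
                std::cout << t.name << ": " << t.ms << " ms\n";
        }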

    Simulation of High-Performance Memory Allocators

    This study presents a single-core and a multi-core processor architecture for health monitoring systems in which slow biosignal events coexist with highly parallel computations. The single-core architecture is composed of a processing core (PC), an instruction memory (IM) and a data memory (DM), while the multi-core architecture consists of several PCs, an individual IM for each core, a shared DM and an interconnection crossbar between the cores and the DM. These architectures are compared with respect to power vs. performance trade-offs for a multi-lead electrocardiogram signal-conditioning application exploiting near-threshold computing. The results show that the multi-core solution consumes 66% less power for high computation requirements (50.1 MOps/s), but 10.4% more power for low computation needs (681 kOps/s).
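
    The reported trade-off can be read as a selection rule on required throughput. In the sketch below, the two operating points come from the abstract, while the encoding (power of the multi-core relative to the single-core) and the decision logic are illustrative assumptions rather than the paper's power model.

        // Illustrative sketch of the reported power vs. performance trade-off.
        #include <iostream>
        #include <string>

        struct OperatingPoint {
            double throughput_ops_s;   // required signal-processing throughput
            double multi_vs_single;    // multi-core power relative to single-core
        };

        int main() {
            // From the abstract: at 50.1 MOps/s the multi-core uses 66% less power,
            // at 681 kOps/s it uses 10.4% more power than the single-core.
            OperatingPoint high{50.1e6, 1.0 - 0.66};
            OperatingPoint low{681e3, 1.0 + 0.104};
            for (const auto& p : {low, high}) {
                std::string pick = p.multi_vs_single < 1.0 ? "multi-core" : "single-core";
                std::cout << p.throughput_ops_s << " ops/s -> " << pick
                          << " (multi-core power ratio " << p.multi_vs_single << ")\n";
            }
        }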

    A Parallel Evolutionary Algorithm to Optimize Dynamic Memory Managers in Embedded Systems

    For the last 30 years, several dynamic memory managers (DMMs) have been proposed, including first-fit, best-fit, segregated-fit and buddy systems. Since the performance, memory usage and energy consumption of each DMM differ, software engineers often face difficult choices in selecting the most suitable approach for their applications. This issue has special impact in the field of portable consumer embedded systems, which must execute a limited set of multimedia applications (e.g., 3D games, video players, signal processing software) that demand high performance and extensive memory usage at low energy consumption. Recently, we developed a novel methodology based on genetic programming to automatically design custom DMMs, optimizing performance, memory usage and energy consumption. Although this process is automatic and faster than state-of-the-art optimizations, it demands intensive computation, resulting in a time-consuming process. Parallel processing can therefore be very useful, both to explore more solutions in the same amount of time and to implement new algorithms. In this paper we present a novel parallel evolutionary algorithm for DMM optimization in embedded systems, based on the Discrete Event Specification (DEVS) formalism over a Service Oriented Architecture (SOA) framework. Parallelism significantly improves the performance of the sequential exploration algorithm. On the one hand, when the number of generations is the same in both approaches, our parallel optimization framework reaches a speed-up of 86.40% compared with other state-of-the-art approaches. On the other hand, it improves the global quality (i.e., level of performance, low memory usage and low energy consumption) of the final DMM by 36.36% with respect to two well-known general-purpose DMMs and two state-of-the-art optimization methodologies.
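
    A minimal sketch of the parallel part of such a search is shown below: fitness evaluations of DMM candidates within one generation run concurrently. The candidate encoding and fitness function are made up, and the DEVS/SOA infrastructure described in the paper is not modeled here.

        // Minimal sketch of parallel fitness evaluation inside one generation of
        // an evolutionary DMM search; encoding and fitness are placeholders.
        #include <algorithm>
        #include <cstddef>
        #include <future>
        #include <iostream>
        #include <vector>

        struct DMMCandidate {
            int fit_policy;     // e.g. 0 = first fit, 1 = best fit (assumed encoding)
            int pool_count;     // number of segregated pools (assumed encoding)
        };

        // Placeholder fitness: in a real flow this would run the DMM simulator and
        // combine performance, memory usage and energy into one score.
        double evaluate(const DMMCandidate& c) {
            return 1.0 / (1 + c.fit_policy) + 0.1 * c.pool_count;
        }

        int main() {
            std::vector<DMMCandidate> population = {{0, 2}, {1, 4}, {0, 8}, {1, 1}};

            // Launch one asynchronous evaluation per candidate (the parallel part).
            std::vector<std::future<double>> scores;
            for (const auto& c : population)
                scores.push_back(std::async(std::launch::async, evaluate, c));

            // Collect results and report the best candidate of this generation.
            std::vector<double> fitness;
            for (auto& f : scores) fitness.push_back(f.get());
            std::size_t best =
                std::max_element(fitness.begin(), fitness.end()) - fitness.begin();
            std::cout << "best candidate: policy " << population[best].fit_policy
                      << ", pools " << population[best].pool_count << "\n";
        }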

    Systematic Dynamic Memory Management Design Methodology for Reduced Memory Footprint

    New portable consumer embedded devices must execute multimedia and wireless network applications that demand an extensive memory footprint. Moreover, they must rely heavily on Dynamic Memory (DM) due to the unpredictability of the input data (e.g., 3D stream features) and of the system behavior (e.g., the number of concurrently running applications, which is defined by the user). Within this context, consistent design methodologies that can efficiently tackle the complex DM behavior of these multimedia and network applications are greatly needed. In this paper, we present a new methodology for designing custom DM management mechanisms with a reduced memory footprint for this kind of dynamic application. First, our methodology describes the large design space of DM management decisions for multimedia and wireless network applications. Then, we propose a suitable way to traverse this design space and construct custom DM managers that minimize the DM used by these highly dynamic applications. As a result, our methodology improves memory footprint by 60% on average in real case studies over the current state-of-the-art DM managers used for these types of dynamic applications.
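
    As a toy illustration of traversing such a design space, the sketch below enumerates a few DM-manager design decisions and keeps the one with the smallest estimated footprint; the decision axes and the footprint model are hypothetical stand-ins for the methodology's profiling-based models.

        // Toy sketch of traversing a DM-manager design space and keeping the
        // choice with the smallest estimated footprint (illustrative only).
        #include <iostream>
        #include <limits>
        #include <string>
        #include <vector>

        struct DmDesign {
            std::string fit_policy;     // e.g. "first-fit", "best-fit"
            int block_size;             // fixed block size for small requests
        };

        // Placeholder footprint model; a real flow would profile the target
        // application's allocation behavior instead.
        double estimated_footprint_kb(const DmDesign& d) {
            double fragmentation = (d.fit_policy == "best-fit") ? 1.05 : 1.20;
            return fragmentation * (512.0 + d.block_size * 0.5);
        }

        int main() {
            // Enumerate the (small) design space of decisions.
            std::vector<DmDesign> space;
            for (std::string policy : {"first-fit", "best-fit"})
                for (int block : {16, 32, 64})
                    space.push_back({policy, block});

            DmDesign best{};
            double best_kb = std::numeric_limits<double>::max();
            for (const auto& d : space) {
                double kb = estimated_footprint_kb(d);
                if (kb < best_kb) { best_kb = kb; best = d; }
            }
            std::cout << "smallest footprint: " << best.fit_policy << ", "
                      << best.block_size << "-byte blocks, ~" << best_kb << " KB\n";
        }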