
    Evolutionary Design of the Memory Subsystem

    The memory hierarchy has a high impact on the performance and power consumption of the system. Moreover, current embedded systems, such as those in mobile devices, are specifically designed to run multimedia applications, which are memory intensive. This increases the pressure on the memory subsystem and affects performance and energy consumption. The resulting thermal problems, performance degradation and high energy consumption can cause irreversible damage to the devices. We address the optimization of the whole memory subsystem with three approaches integrated as a single methodology. First, the thermal impact of the register file is analyzed and optimized. Second, the cache memory is addressed by optimizing the cache configuration according to the running applications, improving both performance and power consumption. Finally, we simplify the design and evaluation process of general-purpose and customized dynamic memory managers in the main memory. To this aim, we apply different evolutionary algorithms in combination with memory simulators and profiling tools. This way, we are able to evaluate the quality of each candidate solution and take advantage of the exploration of solutions performed by the optimization algorithm. We also provide an experimental evaluation in which our proposal is assessed using well-known benchmark applications.
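    The evolutionary search described above can be sketched as a loop that asks a simulator to score each candidate memory configuration. The search space, cost model and algorithm parameters below are illustrative assumptions, not values from the paper; a real flow would replace `simulate` with calls to a memory simulator and profiling tools.

```python
import random

random.seed(0)

# Hypothetical search space for one cache level (sizes in KB, ways, line bytes).
CACHE_SIZES = [8, 16, 32, 64]
ASSOCIATIVITIES = [1, 2, 4, 8]
LINE_SIZES = [32, 64, 128]

def random_config():
    return (random.choice(CACHE_SIZES),
            random.choice(ASSOCIATIVITIES),
            random.choice(LINE_SIZES))

def simulate(config):
    """Stand-in for a memory simulator: returns a cost combining a rough
    energy estimate and a miss penalty (both invented for illustration)."""
    size, ways, line = config
    energy = size * ways * 0.5 + line * 0.1   # bigger caches cost more energy
    misses = 1000.0 / (size * ways)           # bigger caches miss less
    return energy + misses

def mutate(config):
    size, ways, line = config
    choice = random.randrange(3)
    if choice == 0:
        size = random.choice(CACHE_SIZES)
    elif choice == 1:
        ways = random.choice(ASSOCIATIVITIES)
    else:
        line = random.choice(LINE_SIZES)
    return (size, ways, line)

def evolve(generations=50, pop_size=20):
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate)                  # lower cost is better
        survivors = pop[: pop_size // 2]        # elitist selection
        children = [mutate(random.choice(survivors)) for _ in survivors]
        pop = survivors + children
    return min(pop, key=simulate)

best = evolve()
```

    Each candidate is evaluated only through the simulator, which is what lets the optimization algorithm explore configurations without rebuilding or rerunning the target application.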

    Simulation of High-Performance Memory Allocators

    Current general-purpose memory allocators do not provide sufficient speed or flexibility for modern high-performance applications. To optimize metrics such as performance, memory usage and energy consumption, software engineers often write custom allocators from scratch, which is a difficult and error-prone process. In this paper, we present a flexible and efficient simulator to study Dynamic Memory Managers (DMMs), a composition of one or more memory allocators. This novel approach allows programmers to simulate custom and general DMMs, which can be composed without incurring any additional runtime overhead or programming cost. We show that this infrastructure simplifies DMM construction, mainly because the target application does not need to be recompiled every time a new DMM must be evaluated. Within a search procedure, the system designer can choose the "best" allocator by simulation for a particular target application. In our evaluation, we show that our scheme delivers better performance, less memory usage and less energy consumption than single memory allocators.
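    The "choose the best allocator by simulation" step can be sketched as replaying an allocation trace through a cost model of each candidate DMM and keeping the cheapest. The trace, size classes and cost weights below are invented for illustration; they are not the paper's simulator.

```python
# Request sizes, in bytes, recorded from a hypothetical target application.
TRACE = [24, 40, 8, 120, 64, 16, 200, 32, 8, 48]

def segregated_fit_cost(size_classes, trace):
    """Toy cost model for a segregated-fit allocator: each request is
    rounded up to its size class (internal fragmentation) plus a fixed
    cost per size class scanned during lookup."""
    waste = 0
    lookups = 0
    for req in trace:
        for limit in size_classes:
            lookups += 1
            if req <= limit:
                waste += limit - req
                break
        else:
            waste += req          # falls through to a general heap
    return waste + lookups

# Candidate DMM configurations, each defined by its size-class boundaries.
CANDIDATES = {
    "coarse": [64, 256],
    "fine":   [16, 32, 64, 128, 256],
    "binary": [8, 16, 32, 64, 128, 256],
}

best = min(CANDIDATES,
           key=lambda name: segregated_fit_cost(CANDIDATES[name], TRACE))
```

    Because only the trace is replayed, swapping in a new candidate DMM never requires recompiling or rerunning the application itself, which is the core saving the abstract describes.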

    Simulation of High-Performance Memory Allocators

    This study presents a single-core and a multi-core processor architecture for health monitoring systems, where slow biosignal events and highly parallel computations coexist. The single-core architecture is composed of a processing core (PC), an instruction memory (IM) and a data memory (DM), while the multi-core architecture consists of multiple PCs, individual IMs for each core, a shared DM and an interconnection crossbar between the cores and the DM. These architectures are compared with respect to power vs. performance trade-offs for a multi-lead electrocardiogram signal conditioning application exploiting near-threshold computing. The results show that the multi-core solution consumes 66% less power for high computation requirements (50.1 MOps/s), but 10.4% more power for low computation needs (681 kOps/s).
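    The two reported operating points imply a break-even throughput somewhere between them. Assuming the multi-core/single-core power ratio varies log-linearly with throughput (an assumption of this sketch, not a claim of the study), the crossover can be estimated as:

```python
import math

# Reported operating points: at 681 kOps/s the multi-core design draws
# 10.4% MORE power (ratio 1.104); at 50.1 MOps/s it draws 66% LESS
# (ratio 0.34). Log-linear interpolation between them is our assumption.
X_LO, R_LO = math.log(681e3), 1.104
X_HI, R_HI = math.log(50.1e6), 0.34

def power_ratio(ops_per_s):
    """Estimated multi-core vs. single-core power ratio at a given load."""
    x = math.log(ops_per_s)
    return R_LO + (R_HI - R_LO) * (x - X_LO) / (X_HI - X_LO)

def crossover():
    """Throughput at which both designs draw equal power (ratio == 1)."""
    x = X_LO + (1.0 - R_LO) * (X_HI - X_LO) / (R_HI - R_LO)
    return math.exp(x)
```

    Under this interpolation the break-even point lands roughly in the low MOps/s range, i.e. the multi-core design pays off only once the workload is well above the low-computation operating point.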

    Reinforcement Learning

    Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly putting forward and performing actions. Learning is a very important aspect of it. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning. The remaining 11 chapters show that there is already wide usage in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators remains to specify goals on increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in this field.
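    The "performing actions to achieve a goal" idea can be illustrated with a minimal tabular Q-learning agent. The corridor environment and hyperparameters below are illustrative, not taken from the book.

```python
import random

random.seed(0)

# A 1-D corridor of 5 cells; the agent starts at cell 0 and earns
# reward 1 for reaching the goal at cell 4. Actions step left or right.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for _ in range(200):                  # episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection; ties are broken randomly.
        if random.random() < eps or Q[(s, -1)] == Q[(s, +1)]:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls clamp movement
        r = 1.0 if s2 == GOAL else 0.0
        # Standard Q-learning update toward the bootstrapped target.
        target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# After training, the greedy policy should step right in every cell.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)}
```

    Reward only arrives at the goal, yet the bootstrapped update propagates its value backward through the table, which is exactly the mechanism that lets reinforcement learning handle goals no hand-designed controller spells out step by step.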

    Evolutionary Computation

    This book presents several recent advances in Evolutionary Computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. It also presents new algorithms based on several analogies and metaphors, one of which is rooted in philosophy, specifically the philosophy of praxis and dialectics. Interesting applications to bioinformatics are also presented, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. This book therefore features representative work in the field of evolutionary computation and applied sciences. The intended audience is graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research work in this field.
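    As a taste of the swarm methods mentioned above, here is a minimal particle swarm optimizer minimizing a toy 2-D function. The coefficients are common textbook defaults, not values from the book.

```python
import random

random.seed(1)

def f(p):
    """Toy objective: the 2-D sphere function, minimized at the origin."""
    return p[0] ** 2 + p[1] ** 2

W, C1, C2 = 0.7, 1.5, 1.5            # inertia, cognitive and social weights
particles = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20)]
velocities = [[0.0, 0.0] for _ in particles]
pbest = [p[:] for p in particles]     # each particle's best-seen position
gbest = min(pbest, key=f)[:]          # the swarm's best-seen position

for _ in range(100):
    for i, p in enumerate(particles):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            # Velocity blends momentum, pull toward the personal best,
            # and pull toward the swarm best.
            velocities[i][d] = (W * velocities[i][d]
                                + C1 * r1 * (pbest[i][d] - p[d])
                                + C2 * r2 * (gbest[d] - p[d]))
            p[d] += velocities[i][d]
        if f(p) < f(pbest[i]):
            pbest[i] = p[:]
        if f(p) < f(gbest):
            gbest = p[:]
```

    In the gene-expression application the book describes, the objective would score how well a candidate pattern explains the microarray data instead of this toy function.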

    Human Machine Interaction

    In this book, the reader will find a set of papers divided into two sections. The first section presents different proposals focused on the human-machine interaction development process. The second section is devoted to different aspects of interaction, with a special emphasis on physical interaction.

    Traceability improvement for software miniaturization

    On the one hand, software companies try to reach the maximum number of customers, which often translates into integrating more features into their programs, leading to an increase in size, memory footprint, screen complexity, and so on. On the other hand, hand-held devices are now pervasive, and their customers ask for programs similar to those they use every day on their desktop computers. Companies are left with two options: either develop new software for hand-held devices or perform manual refactoring to port existing software to them, but both options are expensive and laborious. Software miniaturization can aid companies in porting their software to hand-held devices. However, traceability is the backbone of software miniaturization: without up-to-date traceability links it becomes difficult to recover the desired artefacts for miniaturized software. Unfortunately, due to continuous changes, keeping traceability links up-to-date is a tedious and time-consuming task, and links often become outdated or vanish completely. Several traceability recovery approaches have been developed in the past, each with its own benefits and limitations, but these approaches do not tell which factors can affect the traceability recovery process. Our current research proposal is based on the premise that controlling potential quality factors and combining different traceability approaches can improve traceability quality for software miniaturization. In this research proposal, we introduce the traceability improvement for software miniaturization (TISM) process. TISM has three sub-processes, namely traceability factor controller (TFC), hybrid traceability (HT), and software miniaturization optimization (SMO). TFC is a semi-automatic process that provides a solution for the factors that can affect the traceability process; it uses a generic format to document the factors affecting trace quality. TFC results will help practitioners and researchers to improve their tools, techniques, and approaches. In HT, different traceability recovery approaches are combined to trace functional and non-functional requirements; HT also works on improving precision and recall with the help of TFC. Finally, the recovered links are used by SMO to identify the required artefacts and optimize them using scalability, performance, and portability parameters. We will conduct two case studies to aid TISM. The contributions of this research proposal can be summarised as follows: (i) traceability support for software miniaturization and optimization; (ii) a hybrid approach that combines the best of the available traceability approaches to trace functional and non-functional requirements, and provides return-on-investment analysis; (iii) a traceability quality factor controller that records the quality factors and provides support for avoiding or controlling them.
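    The information-retrieval style of traceability recovery that hybrid approaches like HT typically combine can be sketched as ranking source artefacts against a requirement by cosine similarity of their term vectors. The requirement and artefact texts below are invented examples, not from the proposal.

```python
import math
from collections import Counter

def tokens(text):
    """Naive tokenizer: lowercase, split on whitespace, drop short words."""
    return [t for t in text.lower().split() if len(t) > 2]

def cosine(a, b):
    """Cosine similarity between the term-frequency vectors of two texts."""
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

requirement = "the user shall export the report as pdf"
artefacts = {
    "ReportExporter": "export report pdf writer stream report",
    "LoginManager":   "authenticate user session password",
    "PdfRenderer":    "render pdf page layout font",
}

# Artefacts ranked most-similar first; top-ranked ones become candidate
# traceability links for the requirement.
ranking = sorted(artefacts,
                 key=lambda name: cosine(requirement, artefacts[name]),
                 reverse=True)
```

    A controller in the spirit of TFC would act on factors that distort such rankings, for example inconsistent identifier vocabulary between requirements and code, before the similarity scores are trusted as links.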

    Curriculum Change 2008-2009

    Course Description
