1,058 research outputs found

    Dynamically Tuning Processor Resources with Adaptive Processing

    Get PDF
    The productivity of modern society has become inextricably linked to its ability to produce energy-efficient computing technology. Increasingly sophisticated mobile computing systems, powered for hours solely by batteries, continue to proliferate rapidly throughout society, while battery technology improves at a much slower pace. In large data centers that handle everything from online orders for a dot-com company to sophisticated Web searches, row upon row of tightly packed computers may be warehoused in a city block. Microprocessor energy wastage in such a facility directly translates into higher electric bills. Simply receiving sufficient electricity from utilities to power such a center is no longer certain. Given this situation, energy efficiency has rapidly moved to the forefront of modern microprocessor design. The adaptive processing approach to improving microprocessor energy efficiency dynamically tunes major microprocessor resources—such as caches and hardware queues—during execution to better match varying application needs [1, 2]. This tuning usually involves reducing the size of a resource when its full capabilities are not needed, then restoring the disabled portions when they are needed again. Dynamically tailoring processor resources in active use contrasts sharply with techniques that simply turn off entire sections of a processor when they become idle. Presenting the application with the required amount of hardware—and nothing more—throughout its execution can achieve a potentially significant reduction in energy consumption. The challenges facing adaptive processing lie in achieving this greater efficiency with reasonable hardware and software overhead, and doing so without incurring undue performance loss. Unlike reconfigurable computing, which typically uses very different technology such as FPGAs, adaptive processing exploits the dynamic superscalar design approach that developers have used successfully in many generations of general-purpose processors. Whereas reconfigurable processors must demonstrate performance or energy savings large enough to overcome very large clock frequency and circuit density disadvantages, adaptive processors typically have baseline overheads of only a few percent.
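
    As a minimal sketch of the tuning loop such a design implies (not taken from the article), consider a hypothetical processor that exposes a cache way-enable register and per-interval access/miss counters; the register addresses, counter names, and thresholds below are invented for illustration:

        #include <stdint.h>

        /* Hypothetical memory-mapped registers exposed by the adaptive cache
         * (addresses and names invented for this sketch). */
        #define CACHE_WAY_ENABLE        (*(volatile uint32_t *)0xFFFF0000u) /* bit mask of active ways */
        #define MISSES_THIS_INTERVAL    (*(volatile uint32_t *)0xFFFF0004u)
        #define ACCESSES_THIS_INTERVAL  (*(volatile uint32_t *)0xFFFF0008u)

        #define CACHE_WAYS_MAX 8u
        #define CACHE_WAYS_MIN 1u

        /* Called periodically (e.g. from a timer interrupt): disable ways while the
         * miss rate stays low, re-enable them when pressure returns, so the
         * application sees only as much cache as it currently needs.
         * Thresholds are illustrative only. */
        static void tune_cache_ways(void)
        {
            static uint32_t active_ways = CACHE_WAYS_MAX;
            uint32_t misses = MISSES_THIS_INTERVAL;
            uint32_t accesses = ACCESSES_THIS_INTERVAL;
            uint32_t misses_per_1k = accesses ? (misses * 1000u) / accesses : 0u;

            if (misses_per_1k < 5u && active_ways > CACHE_WAYS_MIN)
                active_ways--;                       /* working set fits: shrink */
            else if (misses_per_1k > 20u && active_ways < CACHE_WAYS_MAX)
                active_ways++;                       /* pressure rising: restore */

            CACHE_WAY_ENABLE = (1u << active_ways) - 1u;  /* enable the low 'active_ways' ways */
            MISSES_THIS_INTERVAL = 0u;
            ACCESSES_THIS_INTERVAL = 0u;
        }

    The same shrink-when-idle, restore-when-needed policy generalizes to issue queues and the other resizable structures mentioned in the abstract.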

    Implementation and evaluation of the sensornet protocol for Contiki

    Get PDF
    Sensornet Protocol (SP) is a link abstraction layer between the network layer and the link layer for sensor networks. SP was proposed as the core of a future-oriented sensor node architecture that allows flexible and optimized combinations of multiple coexisting protocols. This thesis implements SP on the Contiki operating system in order to evaluate the effectiveness of the original SP services and to explore further requirements and implementation trade-offs uncovered by the original proposal. We analyze the original SP design and the TinyOS implementation of SP in order to design the Contiki port. We implement the data sending and receiving part of SP using Contiki processes, and the neighbor management part as a group of global routines. The evaluation consists of a single-hop traffic throughput test and a multihop convergecast test, both conducted through simulation and experimentation. We conclude from the evaluation results that SP's link-level abstraction effectively improves modularity in protocol construction without sacrificing performance, and that our SP implementation on Contiki lays a good foundation for future protocol innovations in wireless sensor networks.
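
    A rough sketch of that structure is shown below, using Contiki's real process macros but with invented SP names; the message type, event, and link/neighbor hooks are placeholders for illustration, not the thesis's actual code:

        #include "contiki.h"
        #include <stdint.h>

        /* Hypothetical SP message descriptor handed down by network-layer protocols. */
        struct sp_message {
          const void *payload;
          uint16_t len;
          uint8_t dest;
        };

        /* Hypothetical hooks into the rest of the port. */
        void sp_link_send(uint8_t dest, const void *payload, uint16_t len);
        void sp_neighbor_update(uint8_t addr, int16_t rssi);  /* neighbor table kept in global routines */

        static process_event_t sp_send_event;  /* posted by upper layers with a struct sp_message */

        PROCESS(sp_send_process, "SP send");
        AUTOSTART_PROCESSES(&sp_send_process);

        PROCESS_THREAD(sp_send_process, ev, data)
        {
          static struct sp_message *msg;

          PROCESS_BEGIN();

          sp_send_event = process_alloc_event();

          while(1) {
            /* Sleep until a network-layer protocol posts a message to send through SP. */
            PROCESS_WAIT_EVENT_UNTIL(ev == sp_send_event);
            msg = (struct sp_message *)data;
            sp_link_send(msg->dest, msg->payload, msg->len);
          }

          PROCESS_END();
        }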

    A portable real-time operating system for embedded platforms

    Get PDF
    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2004. Includes bibliographical references (leaves: 55). Text in English; abstract in Turkish and English. ix, 74 leaves.
    In today's world, from TV sets to washing machines or cars, almost every electronic device is controlled by an embedded system. These systems handle many tasks simultaneously. Using an operating system allows these different tasks to be handled in a more standardized fashion. The purpose of this thesis is to design and write a portable real-time operating system for embedded systems, which can be compiled with any application by using an ANSI C compiler. The main target is to make it small enough to fit the smallest microcontrollers. Other targets are high flexibility, optimal modularity, and high readability and maintainability of the source code.
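
    As an illustration of the smallest core such a kernel can have (a sketch in plain ANSI C, not the thesis's actual design), a cooperative round-robin scheduler needs little more than a task table and a dispatch loop; all names below are invented:

        #include <stddef.h>

        #define MAX_TASKS 8

        typedef void (*task_fn)(void);

        struct task {
            task_fn run;          /* one cooperative step of the task */
            unsigned char ready;  /* 1 if the task wants CPU time     */
        };

        static struct task task_table[MAX_TASKS];
        static int task_count = 0;

        /* Register a task; returns its id or -1 if the table is full. */
        int os_task_create(task_fn fn)
        {
            if (task_count >= MAX_TASKS || fn == NULL)
                return -1;
            task_table[task_count].run = fn;
            task_table[task_count].ready = 1;
            return task_count++;
        }

        /* Simple round-robin loop: every ready task gets one step per cycle.
         * A preemptive kernel would instead save and restore task contexts
         * from a timer interrupt, which is the port-specific (non-ANSI) part. */
        void os_run(void)
        {
            int i;
            for (;;) {
                for (i = 0; i < task_count; i++) {
                    if (task_table[i].ready)
                        task_table[i].run();
                }
            }
        }

    In a portable kernel of this kind, essentially everything except the context-switch and interrupt-entry code can stay in ANSI C; those two pieces form the per-architecture porting layer.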

    Power reduction techniques for TFT LCD displays

    Get PDF

    High-level services for networks-on-chip

    Get PDF
    Future technology trends envision that next-generation Multiprocessor Systems-on-Chip (MPSoCs) will be composed of a large number of processing and storage elements interconnected by complex communication architectures. Communication and interconnection between these basic blocks play a role of crucial importance as the number of elements increases. Enabling reliable communication channels between cores therefore becomes a challenge for system designers. Networks-on-Chip (NoCs) appeared as a strategy for connecting and managing the communication between the several design elements and IP blocks required in complex Systems-on-Chip (SoCs). The topic can be considered a multidisciplinary synthesis of the multiprocessing, parallel computing, networking, and on-chip communication domains. In addition to standard communication services, Networks-on-Chip can be employed to provide support for the implementation of system-level services. This dissertation will demonstrate how high-level services can be added to an MPSoC platform by embedding appropriate hardware/software support in the network interfaces (NIs) of the NoC. In this dissertation, the implementation of innovative modules acting in parallel with protocol translation and data transmission in the NIs is proposed and evaluated. These modules can support the execution of high-level services in the NoC at a relatively low cost in terms of area and energy consumption. Three types of services will be addressed and discussed: security, monitoring, and fault tolerance. With respect to the security aspect, this dissertation will discuss the implementation of an innovative data protection mechanism for detecting and preventing illegal accesses to protected memory blocks and/or memory-mapped peripherals. The second aspect will be addressed by proposing the implementation of a monitoring system based on programmable multipurpose monitoring probes aimed at detecting NoC internal events and run-time characteristics. As a last topic, new architectural solutions for the design of fault-tolerant network interfaces will be presented and discussed.
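
    The data-protection service lends itself to a small software model; the sketch below (invented structures and names, not the dissertation's hardware design) shows the kind of range-and-rights lookup an initiator-side network interface could perform before forwarding a request onto the NoC:

        #include <stdbool.h>
        #include <stdint.h>

        /* One entry of a (hypothetical) protection table held in the initiator's
         * network interface: an address range plus a bitmask of cores allowed to
         * read or write it. */
        struct prot_entry {
            uint32_t base;
            uint32_t limit;
            uint32_t read_mask;   /* bit i set => core i may read  */
            uint32_t write_mask;  /* bit i set => core i may write */
        };

        #define PROT_ENTRIES 16
        static struct prot_entry prot_table[PROT_ENTRIES];

        /* Returns true if 'core' may perform the access. A real NI would perform
         * this check in parallel with protocol translation and drop or flag
         * illegal requests instead of forwarding them. */
        bool ni_access_allowed(unsigned core, uint32_t addr, bool is_write)
        {
            for (unsigned i = 0; i < PROT_ENTRIES; i++) {
                const struct prot_entry *e = &prot_table[i];
                if (addr >= e->base && addr <= e->limit) {
                    uint32_t mask = is_write ? e->write_mask : e->read_mask;
                    return (mask >> core) & 1u;
                }
            }
            return false;   /* default-deny for addresses not covered by the table */
        }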

    Energy Concerns with HPC Systems and Applications

    Full text link
    For various reasons, including those related to climate change, energy has become a critical concern in all relevant activities and technical designs. For the specific case of computer activities, the problem is exacerbated by the emergence and pervasiveness of so-called intelligent devices. From the application side, we point out the special topic of Artificial Intelligence, which clearly needs efficient computing support in order to succeed in its purpose of being a ubiquitous assistant. There are mainly two contexts where energy is one of the top priority concerns: embedded computing and supercomputing. For the former, power consumption is critical because the amount of energy available to the devices is limited. For the latter, the heat dissipated is a serious source of failure, and the financial cost of energy is likely to be a significant part of the maintenance budget. On a single computer, the problem is commonly considered through the electrical power consumption. In this paper, written in the form of a survey, we depict the landscape of energy concerns in computer activities, both from the hardware and the software standpoints.
    Comment: 20 pages
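
    On that single-computer view, energy is simply power integrated over time; as a purely illustrative aside (not from the paper), a sampled power trace can be reduced to an energy figure in a few lines of C:

        #include <stddef.h>

        /* Trapezoidal integration of a power trace sampled every 'dt_s' seconds:
         * E [joules] = integral of P [watts] over time. */
        double energy_joules(const double *power_w, size_t n, double dt_s)
        {
            double e = 0.0;
            for (size_t i = 1; i < n; i++)
                e += 0.5 * (power_w[i - 1] + power_w[i]) * dt_s;
            return e;
        }

        /* Example: a node drawing a steady 200 W for one hour consumes 720 kJ (0.2 kWh). */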