
    Low-power system design techniques for mobile computers

    Portable products are being used increasingly. Because these systems are battery powered, reducing power consumption is vital. In this report we describe the properties of low-power design and techniques to exploit them at the architecture level of the system. We focus on: minimizing capacitance, avoiding unnecessary and wasteful activity, and reducing voltage and frequency. We review energy reduction techniques in the architecture and design of a hand-held computer and the wireless communication system, including error control, system decomposition, communication and MAC protocols, and low-power short-range networks.

    Design techniques for low-power systems

    Portable products are being used increasingly. Because these systems are battery powered, reducing power consumption is vital. In this report we describe the properties of low-power design and techniques to exploit them at the architecture level of the system. We focus on: minimizing capacitance, avoiding unnecessary and wasteful activity, and reducing voltage and frequency. We review energy reduction techniques in the architecture and design of a hand-held computer and the wireless communication system, including error control, system decomposition, communication and MAC protocols, and low-power short-range networks.
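
    The three levers named above correspond to the standard dynamic-power model for CMOS logic, P ≈ α·C·V²·f. The sketch below is a minimal illustration of how voltage and frequency scaling interact in that model; all numeric values are assumptions chosen for the example, not figures from the report.

        # Dynamic power of CMOS logic: P = alpha * C * V^2 * f
        # alpha: switching activity factor, C: switched capacitance (F),
        # V: supply voltage (V), f: clock frequency (Hz).
        def dynamic_power(alpha, capacitance, voltage, frequency):
            return alpha * capacitance * voltage ** 2 * frequency

        # Illustrative operating points (assumed values, not from the report).
        baseline = dynamic_power(alpha=0.2, capacitance=1e-9, voltage=3.3, frequency=100e6)

        # Halving voltage and frequency cuts power by roughly 8x; since the same
        # work now takes twice as long, energy per task still drops by roughly 4x.
        scaled = dynamic_power(alpha=0.2, capacitance=1e-9, voltage=1.65, frequency=50e6)

        print(f"baseline: {baseline:.3f} W, voltage/frequency scaled: {scaled:.4f} W")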

    The Design of a System Architecture for Mobile Multimedia Computers

    This chapter discusses the system architecture of a portable computer, called Mobile Digital Companion, which provides support for handling multimedia applications in an energy-efficient way. Because battery life is limited and battery weight is an important factor for the size and the weight of the Mobile Digital Companion, energy management plays a crucial role in the architecture. As the Companion must remain usable in a variety of environments, it has to be flexible and adaptable to various operating conditions. The Mobile Digital Companion has an unconventional architecture that saves energy by using system decomposition at different levels of the architecture and exploits locality of reference with dedicated, optimised modules. The approach is based on dedicated functionality and the extensive use of energy reduction techniques at all levels of system design. The system has an architecture with a general-purpose processor accompanied by a set of heterogeneous autonomous programmable modules, each providing an energy-efficient implementation of dedicated tasks. A reconfigurable internal communication network switch exploits locality of reference and eliminates wasteful data copies.
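
    As a rough illustration of the decomposition idea, each autonomous module can manage its own power state and the switch only activates the modules a data stream actually traverses. The module names and power figures below are hypothetical, introduced only for this sketch.

        # Hypothetical sketch: per-module power management in a decomposed
        # architecture; names and wattages are illustrative assumptions.
        class Module:
            def __init__(self, name, active_w, sleep_w):
                self.name, self.active_w, self.sleep_w = name, active_w, sleep_w
                self.active = False

            def power(self):
                return self.active_w if self.active else self.sleep_w

        class Switch:
            """Activates only the modules on a stream's path; the rest stay asleep."""
            def __init__(self, modules):
                self.modules = {m.name: m for m in modules}

            def route(self, path):
                for m in self.modules.values():
                    m.active = m.name in path
                return sum(m.power() for m in self.modules.values())

        modules = [Module("radio", 0.30, 0.01), Module("dsp", 0.25, 0.01),
                   Module("display", 0.40, 0.02), Module("cpu", 0.50, 0.05)]
        switch = Switch(modules)
        # An audio stream touches only the radio and DSP modules.
        print(f"audio path draws about {switch.route({'radio', 'dsp'}):.2f} W")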

    Multi-engine packet classification hardware accelerator

    As line rates increase, the task of designing high-performance architectures with reduced power consumption for the processing of router traffic remains important. In this paper, we present a multi-engine packet classification hardware accelerator, which gives increased performance and reduced power consumption. It follows the basic idea of decision-tree based packet classification algorithms, such as HiCuts and HyperCuts, in which the hyperspace represented by the ruleset is recursively divided into smaller subspaces according to some heuristics. Each classification engine consists of a Trie Traverser, which is responsible for finding the leaf node corresponding to the incoming packet, and a Leaf Node Searcher, which reports the matching rule in the leaf node. The packet classification engine exploits the ultra-wide memory words provided by FPGA block RAM to store the decision-tree data structure, in an attempt to reduce the number of memory accesses needed for classification. Since the clock rate of an individual engine cannot match that of the internal memory, multiple classification engines are used to increase the throughput. Implementations in two different FPGAs show that this architecture can reach a searching speed of 169 million packets per second (mpps) with synthesized ACL, FW and IPC rulesets. Further analysis reveals that, compared to state-of-the-art TCAM solutions, a power saving of up to 72% and an increase in throughput of up to 27% can be achieved.
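
    The two engine stages can be paraphrased in software as a walk down a HiCuts/HyperCuts-style decision tree followed by a linear scan of the rules held in the leaf. The sketch below is a simplified software analogue of that flow, not the hardware pipeline; the field names and cut representation are assumptions.

        # Simplified software analogue of decision-tree packet classification
        # (HiCuts/HyperCuts style). In hardware each node fits one wide
        # block-RAM word; here nodes are plain objects.
        class Node:
            def __init__(self, dim=None, cuts=None, children=None, rules=None):
                self.dim = dim            # header field this node cuts on
                self.cuts = cuts          # boundaries partitioning that field
                self.children = children  # one child per sub-interval
                self.rules = rules or []  # rules stored in a leaf

            def is_leaf(self):
                return self.children is None

        def classify(root, packet):
            # Stage 1 (Trie Traverser): descend to the leaf covering the packet.
            node = root
            while not node.is_leaf():
                value = packet[node.dim]
                index = sum(1 for boundary in node.cuts if value >= boundary)
                node = node.children[index]
            # Stage 2 (Leaf Node Searcher): first matching rule wins (priority order).
            for rule in node.rules:
                if all(lo <= packet[field] <= hi for field, (lo, hi) in rule["ranges"].items()):
                    return rule["id"]
            return None  # no rule matched; fall back to the default action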

    Evaluation of unidirectional background push content download services for the delivery of television programs

    This thesis presents background push content download services as an efficient mechanism for delivering pre-produced television content over existing broadcast networks. Nowadays, network operators dedicate a considerable amount of network resources to live streaming of television content, through both broadcast and unicast connections. This service offering responds solely to commercial requirements: content must be available anytime and anywhere. However, from a strictly academic point of view, live streaming is only a requirement for live content, not for pre-produced content. Moreover, broadcasting is only efficient when the content is sufficiently popular. The services under study in this thesis use residual capacity in broadcast networks to push popular, pre-produced content to storage in customer premises equipment. The proposal responds only to efficiency requirements. On one hand, it creates value from network resources that would otherwise go unused. On the other hand, it delivers popular pre-produced content in the most efficient way: through broadcast download services. The results include models for the popularity and the duration of television content, valuable for any research work dealing with file-based delivery of television content. The thesis then evaluates the residual capacity available in broadcast networks through empirical studies. These results are used in simulations to evaluate the performance of background push content download services in different scenarios and for different applications. The evaluation shows that this kind of service can become a great asset for the delivery of television content.
    Fraile Gil, F. (2013). Evaluation of unidirectional background push content download services for the delivery of television programs [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31656
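
    The efficiency argument, that pushing an item over broadcast only pays off once it is popular enough, can be captured by a simple break-even comparison. The sketch below uses purely illustrative cost figures and an assumed overhead factor; none of the numbers come from the thesis.

        # Break-even popularity: push a pre-produced item over broadcast versus
        # serving each request over unicast. All figures are assumptions.
        def unicast_cost(viewers, size_gb, cost_per_gb=0.01):
            # Every viewer triggers a separate transfer of the whole file.
            return viewers * size_gb * cost_per_gb

        def broadcast_cost(size_gb, cost_per_gb=0.01, overhead=1.2):
            # A single transmission reaches everyone; overhead covers repetition/FEC.
            return size_gb * cost_per_gb * overhead

        size_gb = 2.0  # hypothetical size of a pre-produced programme
        for viewers in (1, 2, 50):
            u, b = unicast_cost(viewers, size_gb), broadcast_cost(size_gb)
            print(f"{viewers:3d} viewers: unicast {u:.3f}, broadcast {b:.3f} -> "
                  f"{'broadcast' if b < u else 'unicast'} cheaper")
        # Broadcast wins once the expected audience exceeds the overhead factor (1.2 here).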

    Cost-effective Information and Communication Technology (ICT) infrastructure for Tanzania

    The research conducted an Information and Communication Technology (ICT) field survey; the results revealed that Tanzania is still lagging behind in the ICT sector due to the lack of an internationally connected terrestrial ICT infrastructure. Internet connectivity to the rest of the world is via expensive satellite links, leaving the majority of the population unable to access Internet services because of the high cost. Therefore, an ICT backbone infrastructure is designed that exploits optical DWDM network technology, which unlocks bandwidth bottlenecks and provides the higher capacity needed to deliver ICT services such as Internet, voice, video and other multimedia interactions at an affordable cost to the majority of the people who live in the urban and rural areas of Tanzania. The research analyses and compares the performance, and system impairments, of a DWDM system at data transmission rates of 2.5 Gb/s and 10 Gb/s per wavelength channel. The simulation results show that a data transmission rate of 2.5 Gb/s can be successfully transmitted over a greater distance than 10 Gb/s with minimum system impairments, and that operating at the lower data rate delivers good system performance for the required ICT services. A forty-channel DWDM system will provide a bandwidth of 100 Gb/s. A cost analysis demonstrates the economic worth of incorporating existing optical fibre installations into an optical DWDM network for the creation of an affordable ICT backbone infrastructure; this approach is compared with building a completely new optical fibre DWDM network or a SONET/SDH network. The results show that the ICT backbone infrastructure built with existing SSMF DWDM network technology is a good investment, in terms of profitability, even if Internet charges are reduced to half the current rates. The case for building a completely new optical fibre DWDM network or a SONET/SDH network is difficult to justify using current financial data.
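
    The 100 Gb/s figure follows from straightforward channel arithmetic: forty DWDM wavelength channels at the favoured 2.5 Gb/s line rate. A one-line check:

        # Aggregate capacity of the proposed DWDM backbone.
        channels = 40            # DWDM wavelength channels
        rate_per_channel = 2.5   # Gb/s, the rate favoured by the impairment analysis
        print(channels * rate_per_channel, "Gb/s aggregate")  # -> 100.0 Gb/s aggregate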

    Design and evaluation of content distribution networks for multimedia streaming services

    Traditional Internet-based services for distributing files, such as Web browsing and sending e-mail, are offered from a single central server. More recent network services such as interactive digital television or video-on-demand, however, require strong quality-of-service (QoS) guarantees, such as low and constant network delay, and consume a considerable amount of network bandwidth. Architectures with a single central server can hardly provide these guarantees and therefore no longer meet the high demands of next-generation multimedia applications. This research therefore studies new network architectures that can support such service quality. Both peer-to-peer mechanisms, as used for exchanging music files between end users, and server-based solutions, such as distributed caches and content distribution networks (CDNs), are considered. Depending on the service under study and the network technologies and architecture used, centralized network design algorithms are proposed. These algorithms optimize the placement of servers or network caches and determine the required capacity of the servers and network links. The dynamic placement of the offered files across the different network elements is adapted to the current state of the network and to the varying request patterns of the end users. Server selection, rerouting of requests, and spreading the load across the entire network are also addressed.
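
    As a rough stand-in for the centralized design step (the thesis algorithms themselves are not reproduced here), a greedy facility-location heuristic picks cache sites one at a time, each time choosing the node that most reduces total client-to-nearest-cache distance. The topology and node names below are invented for the example.

        # Greedy cache-placement heuristic, an illustrative stand-in for the
        # centralized network-design algorithms described above.
        def greedy_placement(clients, candidate_sites, distance, k):
            """Pick k cache sites minimizing total client-to-nearest-cache distance."""
            chosen = []
            for _ in range(k):
                best_site, best_cost = None, float("inf")
                for site in candidate_sites:
                    if site in chosen:
                        continue
                    trial = chosen + [site]
                    cost = sum(min(distance(c, s) for s in trial) for c in clients)
                    if cost < best_cost:
                        best_site, best_cost = site, cost
                chosen.append(best_site)
            return chosen

        # Toy topology: positions on a line stand in for network distance.
        positions = {"a": 0, "b": 2, "c": 5, "d": 9, "e": 10}

        def dist(x, y):
            return abs(positions[x] - positions[y])

        print(greedy_placement(clients=list(positions), candidate_sites=list(positions),
                               distance=dist, k=2))  # e.g. ['c', 'd']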

    Complexity in Spanish optical fiber and SDH transport networks

    Complex networks are important instances of technology-related complex systems. In this work we apply tools from complexity science to characterise two Telefónica España transport network systems: the optical fiber network and the SDH transport network. We compare both cases and derive their most important properties. Remarkably, our results show that in both cases several features of heterogeneous, hierarchical complex networks arise.
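
    Heterogeneity and hierarchy claims of this kind are usually backed by statistics such as the node-degree distribution and clustering. The sketch below shows that style of characterisation with the networkx library on a stand-in synthetic graph, since the Telefónica topologies themselves are not reproduced here.

        # Characterising a network with standard complexity metrics.
        # The graph is a synthetic stand-in, not the Telefónica data.
        import networkx as nx
        from collections import Counter

        graph = nx.barabasi_albert_graph(n=200, m=2, seed=1)  # heterogeneous toy topology

        degrees = [d for _, d in graph.degree()]
        print("nodes:", graph.number_of_nodes(), "links:", graph.number_of_edges())
        print("mean degree:", sum(degrees) / len(degrees), "max degree:", max(degrees))
        print("average clustering:", nx.average_clustering(graph))
        print("degree histogram:", sorted(Counter(degrees).items()))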

    Towards adaptive balanced computing (ABC) using reconfigurable functional caches (RFCs)

    The general-purpose computing processor performs a wide range of functions. Although the performance of general-purpose processors has been steadily increasing, certain software technologies like multimedia and digital signal processing applications demand ever more computing power. Reconfigurable computing has emerged to combine the versatility of general-purpose processors with the customization ability of ASICs. The basic premise of reconfigurability is to provide better performance and higher computing density than fixed-configuration processors. Most of the research in reconfigurable computing is dedicated to on-chip functional logic. If computing resources are adaptable to the computing requirement, the maximum performance can be achieved. To overcome the gap between processor and memory technology, the size of on-chip cache memory has been consistently increasing. The larger cache memory capacity, though beneficial in general, does not guarantee higher performance for all applications, as they may not utilize all of the cache efficiently. To utilize on-chip resources effectively and to accelerate the performance of multimedia applications specifically, we propose a new architecture, Adaptive Balanced Computing (ABC). ABC uses dynamic resource configuration of on-chip cache memory by integrating Reconfigurable Functional Caches (RFC). An RFC can work as a conventional cache or as a specialized computing unit when necessary. In order to convert a cache memory to a computing unit, we include additional logic to embed multi-bit output LUTs into the cache structure. We add the reconfigurability of cache memory to a conventional processor with minimal modification to the load/store microarchitecture and with minimal compiler assistance. The ABC architecture utilizes resources more efficiently by dynamically reconfiguring the cache memory into computing units. The area penalty for this reconfiguration is about 50--60% of the memory-cell-array-only cache area, with a faster cache access time. In a base array cache (parallel decoding caches), the area penalty is 10--20% of the data array with a 1--2% increase in the cache access time. However, compared with implementing all these units separately (such as ASICs), we save 27% in area for FIR and 44% for DCT/IDCT with respect to the memory cell array cache, and about 80% for both applications with respect to the base array cache. The simulations with multimedia and DSP applications (DCT/IDCT and FIR/IIR) show that resource configuration with the RFC yields speedups ranging from 1.04X to 3.94X for the overall applications and from 2.61X to 27.4X in the core computations. The simulations with various parameters indicate that the impact of reconfiguration can be minimized if an appropriate cache organization is selected.
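
    The gap between the core-computation speedups and the smaller overall-application speedups reported above is essentially Amdahl's law: the gain is diluted by the fraction of runtime the RFC does not accelerate. A minimal sketch with an assumed accelerated fraction (not a figure from the dissertation):

        # Amdahl's law: overall speedup when only part of the runtime is accelerated.
        def overall_speedup(accelerated_fraction, core_speedup):
            return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / core_speedup)

        # If 60% of an application's time is spent in the kernel the RFC accelerates,
        # a 10x core speedup yields only about 2.2x overall (illustrative numbers).
        print(round(overall_speedup(0.60, 10.0), 2))  # -> 2.17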