
    Improved multimedia server I/O subsystems

    The main function of a continuous media server is to concurrently stream data from storage to multiple clients over a network. The resulting streams congest the host CPU bus and reduce access to the system's main memory, which degrades CPU performance. This paper investigates ways of improving the I/O subsystems of continuous media servers. Several improved I/O subsystem architectures are presented and their performance is evaluated. The proposed architectures use an existing device, the Intel i960RP I/O processor; the objective of using an I/O processor is to move the stream and its control off the host processor and main memory. The ultimate aim is to identify the requirements for an integrated I/O subsystem for a high-performance, scalable media-on-demand server.
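
    As a rough, back-of-envelope illustration of why such offloading helps (not taken from the paper; the stream count, bitrate, and message sizes below are assumptions), one can compare how many bytes cross the host bus in the two designs:

```python
# Rough, illustrative model of host CPU-bus traffic in a streaming server.
# All figures (stream count, bitrate, message sizes) are assumptions, not
# the paper's measurements.

STREAMS = 200                  # concurrent clients
BITRATE = 6e6 / 8              # ~6 Mbit/s per stream, in bytes per second

# Host-centric path: disk -> main memory -> NIC, i.e. every payload byte
# crosses the host bus twice (once in, once out).
host_bus_bytes = STREAMS * BITRATE * 2

# I/O-processor path (e.g. an i960RP on the I/O bus): payload moves
# disk -> I/O-subsystem memory -> NIC and never touches the host bus;
# only per-request control messages do.
CONTROL_BYTES_PER_SEC = 64 * 100   # assumed 100 small control messages/s per stream
iop_bus_bytes = STREAMS * CONTROL_BYTES_PER_SEC

print(f"host-mediated : {host_bus_bytes / 1e6:8.1f} MB/s on the host bus")
print(f"I/O-processor : {iop_bus_bytes / 1e6:8.1f} MB/s on the host bus")
```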

    VXA: A Virtual Architecture for Durable Compressed Archives

    Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and permits decoders access to performance-enhancing architecture features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130 KB each, can be amortized across many archived files sharing the same compression method.
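
    A quick way to see the amortization argument is a toy calculation. The decoder size below comes from the range quoted in the abstract; the archive compositions are illustrative assumptions:

```python
# Toy amortization of VXA decoder storage, using the abstract's 30-130 KB
# decoder size range; archive composition is an illustrative assumption.

decoder_size_kb = 130                    # worst case quoted in the abstract
files_per_codec = [10, 100, 1000, 10000]

for n in files_per_codec:
    overhead_kb = decoder_size_kb / n    # decoder stored once, shared by n files
    print(f"{n:6d} files sharing one decoder -> {overhead_kb:8.3f} KB overhead per file")
```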

    Interposing Flash between Disk and DRAM to Save Energy for Streaming Workloads

    In computer systems, the storage hierarchy, composed of a disk drive and DRAM, is responsible for a large portion of the total energy consumed. This work studies the energy merit of interposing flash memory as a streaming buffer between the disk drive and the DRAM. Doing so, we extend the spin-off period of the disk drive and cut down on the DRAM capacity at the cost of (extra) flash. We study two different streaming applications: mobile multimedia players and media servers. Our simulated results show that for light workloads, a system with flash as a buffer between the disk and the DRAM consumes up to 40% less energy than the same system without a flash buffer. For heavy workloads, savings of at least 30% are possible. We also address the wear-out of flash and present a simple solution to extend its lifetime.
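
    A minimal sketch of the disk-side energy argument: the flash buffer lets the disk burst-read and then stay spun down while the buffer drains. All power figures, throughput, and buffer size below are placeholder assumptions, not values from the study:

```python
# Simple steady-state energy model for streaming playback, with and without a
# flash buffer in front of the disk. All numbers are placeholder assumptions.

BITRATE     = 256e3 / 8       # 256 kbit/s stream, in bytes per second
FLASH_BUF   = 64 * 2**20      # 64 MiB flash streaming buffer (assumed)
P_DISK_ACT  = 2.0             # W, disk active (reading)
P_DISK_IDLE = 0.1             # W, disk spun down / standby
DISK_RATE   = 20e6            # bytes/s sustained disk throughput
P_FLASH     = 0.05            # W, average flash read/write power (assumed)

def avg_power(buffer_bytes):
    """Average power of the disk (+ flash) for one playback stream."""
    if buffer_bytes == 0:
        # Disk must stay spinning to feed the stream continuously.
        return P_DISK_ACT
    refill_time = buffer_bytes / DISK_RATE   # burst-read to fill the buffer
    drain_time  = buffer_bytes / BITRATE     # disk can stay spun down meanwhile
    cycle       = refill_time + drain_time
    disk_energy = P_DISK_ACT * refill_time + P_DISK_IDLE * drain_time
    return disk_energy / cycle + P_FLASH

print(f"no buffer   : {avg_power(0):.3f} W")
print(f"flash buffer: {avg_power(FLASH_BUF):.3f} W")
```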

    Implementing and Evaluating Jukebox Schedulers Using JukeTools

    Scheduling jukebox resources is important to build efficient and flexible hierarchical storage systems. JukeTools is a toolbox that helps with the complex tasks of implementing and evaluating jukebox schedulers. It allows the fast development of jukebox schedulers, which can be tested in numerous environments, both real and simulated. JukeTools helps the developer to easily detect errors in the schedules. Analyzer tools create detailed reports on the behavior and performance of any of the schedulers and provide comparisons between different schedulers. This paper describes the functionality offered by JukeTools, with special emphasis on how the toolbox can be used to develop jukebox schedulers.
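
    The abstract does not describe JukeTools' programming interface, so the sketch below is purely hypothetical: it only illustrates the general idea of pluggable jukebox schedulers that a toolbox could load, run against real or simulated jukeboxes, and compare. All names and signatures are invented.

```python
# Hypothetical plug-in shape for a jukebox scheduler; JukeTools' real API is
# not described in the abstract, so names and signatures are invented.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Request:
    media_id: str      # which platter/tape holds the data
    offset: int        # start position on the medium
    length: int        # bytes to read
    deadline: float    # seconds until the data is needed

class JukeboxScheduler(ABC):
    @abstractmethod
    def schedule(self, pending: list[Request], free_drives: int) -> list[Request]:
        """Return the requests to service next, in order, given idle drives."""

class EarliestDeadlineFirst(JukeboxScheduler):
    def schedule(self, pending, free_drives):
        # Serve the most urgent requests first, one per idle drive.
        return sorted(pending, key=lambda r: r.deadline)[:free_drives]
```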

    Analysis and implementation of the Large Scale Video-on-Demand System

    The Next Generation Network (NGN) provides multimedia services over broadband networks, supporting high-definition TV (HDTV) and DVD-quality video-on-demand content. Video services can thus be seen as merging three areas: computing, communication, and broadcasting. Although video-on-demand has numerous advantages, its large-scale deployment still needs further exploration because of economic and design constraints; full service provision requires significant initial investment, so a VOD system network must be planned efficiently. This paper presents bandwidth estimates for different network topologies. The methodology investigates the network bandwidth requirements of a VOD system based on centralized servers and on distributed local proxies. Network traffic models are developed to evaluate the VOD system's operational bandwidth requirements for these two architectures, giving an efficient estimate of the bandwidth requirement for each.
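
    The core of such a bandwidth comparison can be sketched in a few lines; the client count, stream bitrate, and proxy hit ratio below are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope backbone bandwidth for two VOD layouts:
#  (a) one centralized server farm, (b) distributed local proxies.
# All numbers are illustrative assumptions.

CLIENTS   = 10_000
BITRATE   = 4e6          # 4 Mbit/s per stream (roughly DVD-quality video)
HIT_RATIO = 0.8          # fraction of requests served from a local proxy

centralized_backbone = CLIENTS * BITRATE                    # every stream crosses the core
proxy_backbone       = CLIENTS * BITRATE * (1 - HIT_RATIO)  # only proxy misses do

print(f"centralized servers : {centralized_backbone / 1e9:6.2f} Gbit/s over the backbone")
print(f"local proxies       : {proxy_backbone / 1e9:6.2f} Gbit/s over the backbone")
```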

    Energy challenges for ICT

    The energy consumption from the expanding use of information and communications technology (ICT) is unsustainable with present drivers, and it will impact heavily on future climate change. However, ICT devices have the potential to contribute significantly to the reduction of CO2 emissions and enhance resource efficiency in other sectors, e.g., transportation (through intelligent transportation and advanced driver assistance systems and self-driving vehicles), heating (through smart building control), and manufacturing (through digital automation based on smart autonomous sensors). To address the energy sustainability of ICT and capture the full potential of ICT in resource efficiency, a multidisciplinary ICT-energy community needs to be brought together, covering devices, microarchitectures, ultra large-scale integration (ULSI), high-performance computing (HPC), energy harvesting, energy storage, system design, embedded systems, efficient electronics, static analysis, and computation. In this chapter, we introduce challenges and opportunities in this emerging field and a common framework to strive towards energy-sustainable ICT.

    Optical memory disks in optical information processing

    We describe the use of optical memory disks as elements in optical information processing architectures. The optical disk is an optical memory device with a storage capacity approaching 10^10 bits, which is naturally suited to parallel access. We discuss optical disk characteristics which are important in optical computing systems, such as contrast, diffraction efficiency, and phase uniformity. We describe techniques for holographic storage on optical disks and present reconstructions of several types of computer-generated holograms. Various optical information processing architectures are described for applications such as database retrieval, neural network implementation, and image correlation. Selected systems are experimentally demonstrated.

    The Design of a System Architecture for Mobile Multimedia Computers

    This chapter discusses the system architecture of a portable computer, called the Mobile Digital Companion, which provides support for handling multimedia applications energy-efficiently. Because battery life is limited and battery weight is an important factor for the size and the weight of the Mobile Digital Companion, energy management plays a crucial role in the architecture. As the Companion must remain usable in a variety of environments, it has to be flexible and adaptable to various operating conditions. The Mobile Digital Companion has an unconventional architecture that saves energy by using system decomposition at different levels of the architecture and exploits locality of reference with dedicated, optimised modules. The approach is based on dedicated functionality and the extensive use of energy reduction techniques at all levels of system design. The system has an architecture with a general-purpose processor accompanied by a set of heterogeneous autonomous programmable modules, each providing an energy-efficient implementation of dedicated tasks. A reconfigurable internal communication network switch exploits locality of reference and eliminates wasteful data copies.