
    Designing an Effective Web-Based Coding Environment for Novice Learners


    Better Admission Control and Disk Scheduling for Multimedia Applications

    General purpose operating systems have been designed to provide fast, loss-free disk service to all applications. However, multimedia applications can tolerate some data loss but are very sensitive to variation in disk service timing. Current research efforts to handle multimedia applications assume pessimistic disk behaviour when deciding to admit new multimedia connections, so as not to violate real-time application constraints. However, since multimedia applications are "soft" real-time applications that can tolerate some loss, we propose an optimistic scheme for admission control which uses average-case values for disk access. Typically, disk scheduling mechanisms for multimedia applications reduce disk access times by only trying to minimize movement to subsequent blocks after sequencing based on Earliest Deadline First. We propose to implement a disk scheduling algorithm that uses knowledge of the media stored and the permissible loss and jitter for each client, in addition to the physical parameters used by other scheduling algorithms. We will evaluate our approach by implementing our admission control policy and disk scheduling algorithm in Linux and measuring the quality of various multimedia streams. If successful, the contributions of this thesis are new admission control and flexible disk scheduling algorithms for improved multimedia quality of service.
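The optimistic admission test described above can be sketched as follows. This is a hypothetical illustration, not the thesis's implementation: admit a new stream if the round's expected disk service time, computed from average-case (rather than worst-case) access costs, still fits within the service round. All names and parameter values are illustrative assumptions.

```python
def admit(streams, new_stream, round_period_ms, avg_access_ms,
          avg_transfer_ms_per_kb):
    """Optimistic admission: True if the new stream fits the round.

    Uses average-case access cost per stream (the optimistic scheme)
    instead of worst-case seek + rotational latency (the pessimistic one).
    """
    candidates = streams + [new_stream]
    # Expected service time per round: one average seek/rotation per
    # stream plus transfer time for the blocks it consumes each round.
    expected = sum(avg_access_ms + s["kb_per_round"] * avg_transfer_ms_per_kb
                   for s in candidates)
    return expected <= round_period_ms

existing = [{"kb_per_round": 256}, {"kb_per_round": 128}]
new = {"kb_per_round": 192}
print(admit(existing, new, round_period_ms=100,
            avg_access_ms=8.0, avg_transfer_ms_per_kb=0.02))
```

Because average-case costs are smaller than worst-case bounds, this test admits more concurrent streams; the soft real-time assumption is what makes the occasional missed deadline acceptable.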

    A distributed network architecture for video-on-demand

    The objective of this thesis is to design a distributed network architecture that provides video-on-demand services to public subscribers. This architecture is proposed as an alternative to a centralized video service system; the latter is currently being developed by Oracle Corporation and NCube Corporation. A simulator is developed to compare the performance of the distributed and centralized video server architectures. Moreover, an estimate of the cost of both systems is derived using current price data. It is shown that the distributed video server architecture offers a better cost/performance trade-off than the centralized system. In addition, the distributed system can be scaled up incrementally to increase system capacity and throughput. Finally, the distributed system is more robust: in the presence of component failure, it can be configured to isolate or bypass failed components. Thus, it allows for graceful performance degradation, which is difficult to achieve in a centralized system.

    Synthetic movies

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Architecture, 1989. Includes bibliographical references (leaves 67-70). By John A. Watlington.

    Retreat of the Institute of Telematics, Schloss Dagstuhl, 29 March to 1 April 2000

    This report gives an overview of current research at the Institute of Telematics at the University of Karlsruhe (TH). The institute works in a subfield of computer science shaped by the convergence of computing and communication technology into telematics. It is organized into the research groups Telematics, Telecooperation Office (TecO), Cooperation & Management, High-Performance Networks and Network Management, and Decentralized Systems and Network Services. The "Telematics" group (Prof. Dr. Dr. h.c. mult. G. Krüger) focuses on quality of service, mobile communication, and distributed systems. Its common goal is the integration of heterogeneous networks (fixed and wireless), computer systems (from workstations to PDAs), and software components, in order to deliver a variety of integrated services to users efficiently and with the highest possible quality. The Telecooperation Office (TecO, Prof. Dr. Dr. h.c. mult. G. Krüger) is a branch of the institute that pursues application-oriented telematics research in cooperation with industry. Its focus is the innovative use of communication infrastructures, with emphases on software engineering for web applications, new forms of telecooperation, and wearable and ubiquitous technologies (ubiquitous computing). The core competence of the "Cooperation & Management" group (Prof. Dr. S. Abeck) lies in process-oriented network, system, and application management. Tool-supported management solutions for operational processes are developed and tested in real-world scenarios; one important scenario is the multimedia information system "NEXUS", which serves as the platform for a Europe-wide distributed teaching and learning system. The "High-Performance Networks & Network Management" group (Prof. Dr. W. Juling) works on the technology and concepts of modern high-performance networks, as well as on all aspects of managing these mostly large-scale networks. To align research activities closely with operational practice, synergies between the institute and the computing center are actively pursued. The work of the "Decentralized Systems and Network Services" group (Prof. Dr. L. Wolf) addresses support for distributed multimedia systems, including components with wireless access and the architectures and infrastructures suited to them. The focus is on communication-system aspects such as protocol mechanisms, resource management, and adaptive and heterogeneous systems.

    Design and analysis of an accelerated seed generation stage for BLASTP on the Mercury system - Master's Thesis, August 2006

    NCBI BLASTP is a popular sequence analysis tool used to study the evolutionary relationship between two protein sequences. Protein databases continue to grow exponentially as entire genomes of organisms are sequenced, making sequence analysis a computationally demanding task. For example, a search of the E. coli K-12 proteome against the GenBank Non-Redundant database takes 36 hours on a standard workstation. In this thesis, we address the problem by accelerating protein searching using Field Programmable Gate Arrays. We focus our attention on the BLASTP heuristic, building on earlier work accelerating DNA searching on the Mercury platform. We analyze the performance characteristics of the BLASTP algorithm and explore the design space of the seed generation stage in detail. We propose a hardware/software architecture and evaluate the performance of the individual stage, as well as its effect on the overall BLASTP pipeline running on the Mercury system. The seed generation stage is 13x faster than the software equivalent, and the integrated BLASTP pipeline is predicted to yield a speedup of 50x over NCBI BLASTP. Mercury BLASTP also shows a 2.5x speed improvement over the only other BLASTP-like accelerator for FPGAs, while consuming far fewer logic resources.
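The seed generation stage that the thesis accelerates can be illustrated in software. The sketch below is a simplified, hypothetical version (not the Mercury implementation): slide a window of length w over the query, build a lookup table of query words, then scan the database sequence and emit (query_pos, db_pos) seed hits wherever a word matches. Real BLASTP also admits near-matching "neighborhood" words scored against a substitution matrix such as BLOSUM62; that is omitted here for brevity.

```python
from collections import defaultdict

def seed_hits(query, database, w=3):
    """Emit (query_pos, db_pos) pairs where a length-w word matches.

    Simplified BLASTP-style seeding: exact word matches only; the
    neighborhood-word expansion of real BLASTP is intentionally omitted.
    """
    # Index every length-w word of the query by its starting positions.
    table = defaultdict(list)
    for i in range(len(query) - w + 1):
        table[query[i:i + w]].append(i)
    # Stream the database sequence past the table, emitting hits.
    hits = []
    for j in range(len(database) - w + 1):
        for i in table.get(database[j:j + w], []):
            hits.append((i, j))
    return hits

print(seed_hits("MKVLA", "AVMKVL"))
```

The streaming structure of this loop (a fixed table consulted once per database position) is what makes the stage a natural fit for an FPGA pipeline.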

    Accelerating Pattern Recognition Algorithms On Parallel Computing Architectures

    The move to more parallel computing architectures places more responsibility on the programmer to achieve greater performance. The programmer must now have a deeper understanding of the underlying architecture and the inherent algorithmic parallelism. Using parallel computing architectures to exploit algorithmic parallelism can be a complex task. This dissertation demonstrates various techniques for doing so. Specifically, three pattern recognition (PR) approaches are examined for acceleration across multiple parallel computing architectures, namely field programmable gate arrays (FPGAs) and general purpose graphical processing units (GPGPUs). Phase-only filter correlation for fingerprint identification was studied as the first PR approach. This approach's sensitivity to angular rotations, scaling, and missing data was surveyed. Additionally, a novel FPGA implementation of this algorithm was created using fixed-point computations, deep pipelining, and four computation phases. Communication and computation were overlapped to efficiently process large fingerprint galleries. The FPGA implementation showed approximately a 47 times speedup over a central processing unit (CPU) implementation with negligible impact on precision. For the second PR approach, a spiking neural network (SNN) algorithm for a character recognition application was examined. A novel FPGA implementation of the approach was developed, incorporating a scalable modular SNN processing element (PE) to efficiently perform neural computations. The modular SNN PE incorporated streaming memory, fixed-point computation, and deep pipelining. This design showed speedups of approximately 3.3 and 8.5 times over CPU implementations for 624- and 9,264-neuron networks, respectively. Results indicate that the PE design could easily scale to larger networks.
Finally, for the third PR approach, cellular simultaneous recurrent networks (CSRNs) were investigated for GPGPU acceleration. In particular, the applications of maze traversal and face recognition were studied. Novel GPGPU implementations were developed employing varying amounts of task-level, data-level, and instruction-level parallelism to achieve efficient runtime performance. Furthermore, the performance of the face recognition application was examined across a heterogeneous cluster of multi-core and GPGPU architectures. A combination of multi-core processors and GPGPUs achieved roughly a 996 times speedup over a single-core CPU implementation. From examining these PR approaches for acceleration, this dissertation presents useful techniques and insight applicable to other algorithms to improve performance when designing a parallel implementation.
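The first PR approach above, phase-only filter correlation, can be sketched in a few lines: correlate two images using only the phase of their spectra, which produces a sharp peak at their relative translation. This is a minimal NumPy illustration of the general technique, not the dissertation's fixed-point FPGA design; the array sizes and shift values are arbitrary.

```python
import numpy as np

def phase_only_correlation(a, b):
    """Correlation surface using spectral phase only (magnitude discarded)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12   # normalize: keep phase, drop magnitude
    return np.real(np.fft.ifft2(cross))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, 9), axis=(0, 1))  # circular translation
surface = phase_only_correlation(shifted, img)
peak = np.unravel_index(np.argmax(surface), surface.shape)
print(peak)   # the peak location recovers the applied (5, 9) shift
```

Because every pixel of the surface is an independent inverse-FFT output, the magnitude normalization and the FFT butterflies pipeline naturally in fixed-point hardware, which is the property the FPGA implementation exploits.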