
    Exploring Discrete Cosine Transform for Multi-resolution Analysis

    Multi-resolution analysis has been a very popular technique in recent years. Wavelets have been used extensively to perform multi-resolution image expansion and analysis. The DCT, however, has been used for image compression but not for multi-resolution image analysis. This thesis is an attempt to explore the possibilities of using the DCT for multi-resolution image analysis. A naive implementation of the block DCT for multi-resolution expansion has many difficulties that lead to signal distortion. One of the main causes of distortion is the blocking artifacts that appear when reconstructing images transformed by the DCT. The new algorithm is based on the line DCT, which eliminates the need for block processing. The line DCT operates on a one-dimensional array formed by cascading the image rows and columns into a single transform operation. Several images have been used to test the algorithm at various resolution levels. The reconstruction mean square error is used as an indication of the success of the method. The proposed algorithm has also been tested against the traditional block DCT.
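
    The cascading idea lends itself to a compact illustration. Below is a minimal sketch, assuming SciPy's DCT-II: rows are cascaded into a single 1-D signal, a low-frequency prefix of the coefficients is kept to emulate a coarser resolution level, and the reconstruction MSE is measured. The cascading order, truncation rule, and all names are illustrative assumptions, not the thesis's exact algorithm.

        # Minimal sketch of a "line DCT" multi-resolution expansion (assumed details).
        import numpy as np
        from scipy.fft import dct, idct

        def line_dct_approx(image, keep_fraction):
            """Approximate an image via a truncated 1-D DCT over its cascaded rows."""
            signal = image.astype(float).ravel()       # cascade rows into one line
            coeffs = dct(signal, norm='ortho')         # one 1-D DCT, no 8x8 blocks
            cutoff = int(len(coeffs) * keep_fraction)  # coarser level = shorter prefix
            coeffs[cutoff:] = 0.0                      # discard high frequencies
            return idct(coeffs, norm='ortho').reshape(image.shape)

        def mse(a, b):
            return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

        image = np.random.default_rng(0).integers(0, 256, (64, 64))
        for frac in (0.5, 0.25, 0.125):                # three resolution levels
            print(frac, mse(image, line_dct_approx(image, frac)))

    Because the whole image is treated as one signal, there are no block boundaries, which is what removes the blocking artifacts the abstract attributes to the block DCT.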

    A layered multicast packet video system

    Software-based desktop videoconferencing tools are developed to demonstrate techniques necessary for video delivery in heterogeneous packet networks. Using the current network infrastructure and no network resource reservation, a one-to-many implementation is designed around a two-layer pyramidal video coder. During periods of congestion, the network routers give priority to the base layer, which by itself allows reconstruction of reasonable-quality video. Receiver feedback is used to lower the output rate of the encoder's low-priority pyramidal layer when all receivers are suffering high packet loss. Each of the two layers is transmitted on a separate multicast channel. Under persistent congestion, an individual receiver will discard the low-priority pyramidal layer, which allows the network to prune the multicast tree back and avoid congestion. A new scheme is examined where, if the other receivers are agreeable, the source will respond to a receiver pruning its pyramidal layer by lowering its rate, allowing the receiver to quickly rejoin the pyramidal layer at a quality level higher than what the high-priority base layer can provide by itself. Another new scheme is described where an agent on the receiver's local router provides spare-capacity information to assist the receiver in its decision to rejoin the pyramidal layer.
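
    The layer-dropping behaviour maps naturally onto IP multicast group membership. The following sketch shows receiver-side control logic under that assumption: the enhancement (low-priority pyramidal) layer is carried on its own multicast group, which the receiver leaves under persistent loss so routers can prune the tree, and rejoins when conditions improve. The group address, port, and thresholds are hypothetical.

        # Receiver-side layer control sketch; group address and thresholds assumed.
        import socket, struct

        ENHANCEMENT_GROUP, PORT = '239.1.2.4', 5006   # hypothetical channel
        LEAVE_LOSS, REJOIN_LOSS = 0.20, 0.02          # hypothetical thresholds

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('', PORT))
        mreq = struct.pack('4s4s', socket.inet_aton(ENHANCEMENT_GROUP),
                           socket.inet_aton('0.0.0.0'))

        def set_enhancement(subscribe):
            """Join or leave the enhancement layer's multicast group."""
            opt = socket.IP_ADD_MEMBERSHIP if subscribe else socket.IP_DROP_MEMBERSHIP
            sock.setsockopt(socket.IPPROTO_IP, opt, mreq)

        subscribed = True
        set_enhancement(True)

        def on_loss_report(loss_rate):
            """Drop the pyramidal layer under persistent congestion; rejoin when clear."""
            global subscribed
            if subscribed and loss_rate > LEAVE_LOSS:
                set_enhancement(False)                # router prunes this branch
                subscribed = False
            elif not subscribed and loss_rate < REJOIN_LOSS:
                set_enhancement(True)
                subscribed = True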

    Low-complexity video coding for receiver-driven layered multicast

    In recent years, the “Internet Multicast Backbone,” or MBone, has risen from a small research curiosity to a large-scale and widely used communications infrastructure. A driving force behind this growth was the development of multipoint audio, video, and shared-whiteboard conferencing applications. Because these real-time media are transmitted at a uniform rate to all of the receivers in the network, a source must either run at the bottleneck rate or overload portions of its multicast distribution tree. We overcome this limitation by moving the burden of rate adaptation from the source to the receivers with a scheme we call receiver-driven layered multicast, or RLM. In RLM, a source distributes a hierarchical signal by striping the different layers across multiple multicast groups, and receivers adjust their reception rate by simply joining and leaving multicast groups. In this paper, we describe a layered video compression algorithm which, when combined with RLM, provides a comprehensive solution for scalable multicast video transmission in heterogeneous networks. In addition to a layered representation, our coder has low complexity (admitting an efficient software implementation) and high loss resilience (admitting robust operation in loosely controlled environments like the Internet). Even with these constraints, our hybrid DCT/wavelet-based coder exhibits good compression performance. It outperforms all publicly available Internet video codecs while maintaining comparable run-time performance. We have implemented our coder in a “real” application: the UCB/LBL videoconferencing tool vic. Unlike previous work on layered video compression and transmission, we have built a fully operational system that is currently being deployed on a very large scale over the MBone.
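
    The join/leave mechanics above can be summarized as a small receiver state machine. The sketch below is a deliberately reduced reading of RLM: drop the top layer on sustained loss, and occasionally run a join-experiment to probe for spare capacity. The thresholds are invented, and the real protocol additionally coordinates experiments among receivers via shared learning.

        # Simplified RLM receiver adaptation loop (illustrative thresholds).
        class RlmReceiver:
            def __init__(self, num_layers, join=None, leave=None):
                self.num_layers = num_layers
                self.level = 1                              # always keep the base layer
                self.join = join or (lambda layer: None)    # hook: subscribe to a group
                self.leave = leave or (lambda layer: None)  # hook: leave a group

            def on_loss_estimate(self, loss_rate):
                """Back off one layer when sustained loss indicates congestion."""
                if loss_rate > 0.10 and self.level > 1:
                    self.leave(self.level)
                    self.level -= 1

            def join_experiment(self):
                """Probe the next layer; a later loss report undoes a failed probe."""
                if self.level < self.num_layers:
                    self.level += 1
                    self.join(self.level)

        r = RlmReceiver(num_layers=4)
        r.join_experiment()            # probe layer 2
        r.on_loss_estimate(0.01)       # no loss: keep it
        r.join_experiment()            # probe layer 3
        r.on_loss_estimate(0.20)       # congestion: fall back to layer 2
        print('layers subscribed:', r.level)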

    Improving the File-Processing Chain in Operating Systems by Exploiting the Scalability of Modern Compression Methods

    Motivated by current challenges in the computerized processing of multimedia information, this work contributes to research on data processing and file management within computer systems. It presents novel techniques that enhance existing file and operating systems by exploiting the scalability of modern media formats. For this purpose, the compression formats JPEG 2000 and H.264 SVC are presented, with a focus on how they achieve scalability. An analysis of the limiting hardware and software components in a computer system for this application area is presented. In particular, the restrictions of the utilized storage devices, data interfaces, and file systems are laid out, and workarounds to compensate for the performance bottlenecks are described. Observing that compensating for these deficiencies requires considerable extra effort, new solution approaches utilizing scalable media are subsequently derived and examined. The present work introduces new concepts for managing scalable media files, comprising a new rights-management scheme as well as a use-case-optimized storage technique. The rights management allows access to be granted to certain parts of a file, by which the scalability of the media files can be exploited so that users see different variants depending on their access rights. The use-case-optimized storage technique increases the throughput of hard disk drives when the subsequent access pattern to the media content is known a priori. In addition, enhancements for existing data workflows are proposed that take advantage of scalable media. Based on the substitution strategy, in which missing data from a scalable file is compensated for, a real-time-capable procedure for reading files is shown. Using this strategy, the image sequence comprising a video can be shown at a given frame rate without interruptions caused by insufficient throughput of the storage device or the low-speed interfaces used to connect it. Adapted caching strategies allow more images to reside in the cache than with non-scalable variants. Additionally, a concept called Parameterizable File-Access is introduced, which allows users to request a certain variant of a scalable file directly from the file system by adding side-information to a virtual file name.
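
    The Parameterizable File-Access concept can be pictured as a thin parsing layer between the application and the file system. The sketch below assumes a '#key=value' suffix syntax; the thesis's concrete encoding of the side-information, and the parameter names used here, may well differ.

        # Hypothetical virtual-file-name parser for Parameterizable File-Access.
        from pathlib import Path

        def parse_virtual_name(virtual_name):
            """Split 'clip.jp2#res=2;quality=1' into (real path, variant params)."""
            base, sep, spec = virtual_name.partition('#')
            params = {}
            if sep:
                for field in spec.split(';'):
                    key, _, value = field.partition('=')
                    params[key] = int(value)
            return Path(base), params

        path, variant = parse_virtual_name('movie/frame_0001.jp2#res=2;quality=1')
        print(path, variant)    # movie/frame_0001.jp2 {'res': 2, 'quality': 1}
        # A file-system layer would then read only the codestream parts needed
        # for that resolution/quality variant instead of the whole file.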

    Unification of Reliable Multicast Protocols, Optimizing Scalability and Delay

    There are many distributed applications that require a reliable multicast service, including distributed databases, distributed operating systems, distributed interactive simulation systems, and applications for the distribution of software, publications, or news. Although the application domain of distributed systems of this type was originally confined to a single subnetwork (for example, a Local Area Network), it later became necessary to extend their applicability to internetworks. The traditional approach to the reliable multicast problem in internetworks is based mainly on the following two points: (1) provide many service guarantees in one and the same protocol (for example, reliability, atomicity, and ordering), and different levels of guarantee in some cases, without taking into account that many multicast applications that require reliability do not need other guarantees; and (2) extend solutions adopted in the unicast environment to the multicast environment without taking into account their distinctive characteristics. As a result, the attempted solutions to the multicast reliability problem were end-to-end protocols (transport protocols) using error recovery schemes that are centralized (retransmissions made from a single point, normally the source) and global (the requested packets are retransmitted to the whole group). Generally, these approaches have resulted in protocols that are inefficient in execution time, have scaling problems, do not make optimum use of network resources, and are not suitable for delay-sensitive applications. In this thesis, the multicast reliability problem is investigated in internetworks operating in datagram mode, and a new way of approaching the problem is presented: it is better to solve the multicast reliability problem at the network level and to separate reliability from other service guarantees, which can be supplied by a higher-level protocol or by the application itself. A reliable multicast protocol that operates at the network level (called RMNP) has been designed on the basis of this new approach. The most representative characteristics of the RMNP are as follows: (1) it takes a transmitter-oriented approach, which provides for a very high reliability level; (2) it uses an error recovery scheme that is distributed (retransmissions are made from intermediate routers that will always be closer to the members than the source itself) and of restricted scope (the scope of the retransmissions is confined to a given number of members); this scheme makes it possible to optimize the mean distribution delay and reduce the overhead caused by retransmissions; (3) some routers include control-packet aggregation and filtering functions that prevent implosion problems and reduce the traffic flowing towards the source. Simulation tests have been performed in order to evaluate the behaviour of the protocol. The main conclusions are that the RMNP scales correctly with group size, makes optimum use of network resources, and is suitable for delay-sensitive applications.
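
    Point (3), the aggregation and filtering of control packets, is the easiest to sketch. Under assumed names and data structures, an intermediate router forwards at most one NACK per missing sequence number upstream and serves retransmissions locally when it holds the packet, which corresponds to the restricted-scope recovery the abstract describes.

        # Illustrative NACK aggregation/filtering at an intermediate RMNP router.
        from collections import defaultdict

        class RmnpRouter:
            def __init__(self, send_upstream, retransmit_local):
                self.cache = {}                    # seq -> packet held locally
                self.pending = defaultdict(set)    # seq -> members awaiting it
                self.send_upstream = send_upstream
                self.retransmit_local = retransmit_local

            def on_nack(self, member, seq):
                if seq in self.cache:              # restricted scope: serve locally,
                    self.retransmit_local(member, self.cache[seq])  # not to the group
                    return
                first_request = not self.pending[seq]
                self.pending[seq].add(member)
                if first_request:                  # filtering: one NACK per seq
                    self.send_upstream(seq)        # flows towards the source

            def on_retransmission(self, seq, packet):
                self.cache[seq] = packet
                for member in self.pending.pop(seq, ()):
                    self.retransmit_local(member, packet)

        router = RmnpRouter(send_upstream=lambda s: print('NACK', s, '-> upstream'),
                            retransmit_local=lambda m, p: print('retx to', m))
        router.on_nack('A', 7)
        router.on_nack('B', 7)                     # filtered: no second upstream NACK
        router.on_retransmission(7, b'payload')    # both members served locally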