5 research outputs found

    Performance studies of file system design choices for two concurrent processing paradigms

    An architecture for MHEG objects

    Hypermedia applications are among the most recent and most demanding computer uses. It is widely accepted that one of the main impediments to their widespread use is the lack of standards and of open systems that allow documents to be interchanged between different hardware and software platforms. Several standards are emerging, one of which is being developed by ISO/IEC WG12, known as the Multimedia and Hypermedia Information Coding Expert Group (MHEG). As desktop systems become more powerful, the home market is becoming one of the main users of hypermedia applications, so it is important to have standards and applications suitable for those platforms. This work reviews existing proposals for hypermedia architectures and interchange standards. It then assesses the suitability of the MHEG standard for use in open, distributed, and extensible hypermedia systems. An architecture for the implementation of MHEG objects, taking into account the limitations imposed by current desktop computers, is also proposed. To assess the suitability of the proposed architecture, a prototype has been implemented. An analysis of the performance obtained with the prototype is presented, and conclusions on the requirements for future implementations are drawn. Finally, some suggestions to improve the MHEG standard are made.

    Handling Audio and Video Streams in a Distributed Environment

    Handling audio and video in a digital environment requires timely delivery of data. This paper describes the principles adopted in the design of the Pandora networked multi-media system. They attempt to give the user the best possible service while dealing with error and overload conditions. Pandora uses a sub-system to handle the multi-media peripherals. It uses transputers and associated Occam code to implement the time-critical functions. Stream implementation is based on self-contained segments of data containing information for delivery, synchronisation and error recovery. Decoupling buffers are used to allow concurrent operation of multiple processing elements. Clawback buffers are used to resynchronise streams at their destinations with minimum latency. The system has proved robust in normal use, under overload, and in the presence of errors. It has been in use for a number of years. The principles involved in this design are now being used in the development of two complemen..
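    The decoupling buffers mentioned in the abstract can be pictured as bounded FIFOs sitting between processing elements, so a producer and a consumer can run concurrently at independent short-term rates. The sketch below is a hypothetical illustration of that idea in Python (the `DecouplingBuffer` class and its capacity are assumptions, not Pandora's actual Occam/transputer implementation):

    ```python
    import queue
    import threading

    # Hypothetical sketch: a decoupling buffer as a bounded FIFO between two
    # concurrently running processing elements. Blocking on put/get is what
    # absorbs short-term rate mismatches between producer and consumer.
    class DecouplingBuffer:
        def __init__(self, capacity=4):
            self._q = queue.Queue(maxsize=capacity)

        def put(self, segment):
            self._q.put(segment)   # blocks when the consumer lags behind

        def get(self):
            return self._q.get()   # blocks when the producer lags behind

    buf = DecouplingBuffer(capacity=4)
    delivered = []

    def consumer():
        # Consume ten self-contained segments in arrival order.
        for _ in range(10):
            delivered.append(buf.get())

    t = threading.Thread(target=consumer)
    t.start()
    for seq in range(10):          # producer: ten numbered segments
        buf.put(seq)
    t.join()
    print(delivered)
    ```

    The buffer preserves segment order while letting both sides run concurrently; real stream segments would also carry the delivery, synchronisation and error-recovery information the abstract describes.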
