61 research outputs found

    ATOM: a distributed system for video retrieval via ATM networks

    The convergence of high-speed networks, powerful personal computer processors and improved storage technology has led to the development of video-on-demand services to the desktop that provide interactive controls and deliver Client-selected video information on a Client-specified schedule. This dissertation presents the design of a video-on-demand system for Asynchronous Transfer Mode (ATM) networks, incorporating an optimised topology for the nodes in the system and an architecture for Quality of Service (QoS). The system is called ATOM, which stands for Asynchronous Transfer Mode Objects. Real-time video playback over a network consumes large bandwidth and requires strict bounds on delay and error in order to satisfy the visual and auditory needs of the user. Streamed video is a fundamentally different type of traffic to conventional IP (Internet Protocol) data since files are viewed in real time, not downloaded and then viewed. This streaming data must arrive at the Client decoder when needed or it loses its interactive value. Characteristics of multimedia data are investigated, including the use of compression to reduce the excessive bit rates and storage requirements of digital video. The suitability of MPEG-1 for video-on-demand is presented. Having considered the bandwidth, delay and error requirements of real-time video, the next step in designing the system is to evaluate current models of video-on-demand. The distributed nature of four such models is considered, focusing on how Clients discover Servers and locate videos. This evaluation eliminates a centralized approach in which Servers have no logical or physical connection to any other Servers in the network, and also introduces the concept of a selection strategy to find alternative Servers when Servers are fully loaded. During this investigation, it becomes clear that another entity (called a Broker) could provide a central repository for Server information. Clients have logical access to all videos on every Server simply by connecting to a Broker. The ATOM Model for distributed video-on-demand is then presented by way of a diagram of the topology showing the interconnection of Servers, Brokers and Clients; a description of each node in the system; a list of the connectivity rules; a description of the protocol; a description of the Server selection strategy; and the protocol to follow if a Broker fails. A sample network is provided with an example of video selection, and design issues are raised and resolved, including how nodes discover each other, the justification for using a mesh topology for the Broker connections, how Connection Admission Control (CAC) is achieved, how customer billing is performed and how information security is maintained. A calculation of the number of Servers and Brokers required to service a particular number of Clients is presented. The advantages of ATOM are described. The underlying distributed connectivity is abstracted away from the Client. Redundant Server/Broker connections are eliminated and the total number of connections in the system is minimized by the rule stating that Clients and Servers may only connect to one Broker at a time. This reduces the total number of Switched Virtual Circuits (SVCs), which are a performance hindrance in ATM. ATOM can be easily scaled by adding more Servers, which increases the total system capacity in terms of storage and bandwidth. In order to transport video satisfactorily, a guaranteed end-to-end Quality of Service architecture must be in place.
The design methodology for such an architecture is investigated, starting with a review of current QoS architectures in the literature which highlights important definitions, including a flow, a service contract and flow management. A flow is a single media source which traverses resource modules between Server and Client. The concept of a flow is important because it enables the identification of the areas requiring consideration when designing a QoS architecture. It is shown that ATOM adheres to the principles motivating the design of a QoS architecture, namely the Integration, Separation and Transparency principles. The issue of mapping human requirements to network QoS parameters is investigated and the action of a QoS framework is introduced, including several possible causes of QoS degradation. The design of the ATOM Quality of Service Architecture (AQOSA) is then presented. AQOSA consists of 11 modules which interact to provide end-to-end QoS guarantees for each stream. Several important results arise from the design. It is shown that intelligent choice of stored videos in respect of peak bandwidth can improve overall system capacity. The concept of disk striping over a disk array is introduced and a Data Placement Strategy is designed which eliminates disk hot spots (i.e. overuse of some disks whilst others lie idle). A novel parameter (the B-P Ratio) is presented which can be used by the Server to predict future bursts from each video stream. The use of Traffic Shaping to decrease the load on the network from each stream is presented. Having investigated four algorithms for rewind and fast-forward in the literature, a rewind and fast-forward algorithm is presented. The method produces a significant decrease in bandwidth, and the resultant stream has a near-constant rate, reducing the chance that the stream will add to network congestion. The C++ classes of the Server, Broker and Client are described, emphasizing the interaction between classes. The use of ATOM in the Virtual Private Network and the multimedia teaching laboratory is considered. Conclusions and recommendations for future work are presented. It is concluded that digital video applications require high-bandwidth, low-error, low-delay networks; a video-on-demand system to support large Client volumes must be distributed, not centralized; control and operation (transport) must be separated; the number of ATM Switched Virtual Circuits (SVCs) must be minimized; the increased number of connections caused by the Broker mesh is justified by the distributed information gain; and a Quality of Service solution must address end-to-end issues. It is recommended that a web front-end for Brokers be developed; that the system be tested in a wide area ATM network; that the Broker protocol be tested by forcing failure of a Broker; and that a proprietary file format for disk striping be implemented.
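    To make the Broker's role concrete, the sketch below shows one way a Broker could track Server load and answer a Client request, falling back to an alternative Server when the preferred one is fully loaded. It is only an illustration of the selection-strategy idea described above; the class and method names (Server, Broker, select_server) are hypothetical, and the thesis's actual protocol and C++ classes are more elaborate.

        # Minimal Python sketch of a Broker choosing a Server for a Client request.
        # Names and the load metric are assumptions, not taken from the ATOM thesis.
        from dataclasses import dataclass, field

        @dataclass
        class Server:
            name: str
            capacity: int                       # maximum concurrent streams
            active: int = 0                     # streams currently being served
            videos: set = field(default_factory=set)

            def can_serve(self) -> bool:
                return self.active < self.capacity

        @dataclass
        class Broker:
            servers: list                       # every Server registered with this Broker

            def select_server(self, video_id: str):
                """Return the least-loaded Server holding the video, or None if all
                holders are saturated (the Client would then be refused or redirected)."""
                holders = [s for s in self.servers if video_id in s.videos]
                holders.sort(key=lambda s: s.active / s.capacity)   # least loaded first
                for server in holders:
                    if server.can_serve():
                        server.active += 1      # simplified admission bookkeeping
                        return server
                return None

        # A Client only ever talks to its Broker, which knows about both Servers.
        broker = Broker([Server("S1", capacity=2, active=2, videos={"movie-42"}),
                         Server("S2", capacity=4, videos={"movie-42", "movie-7"})])
        print(broker.select_server("movie-42").name)    # "S2": S1 is fully loaded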

    Distributed multimedia systems

    A distributed multimedia system (DMS) is an integrated communication, computing, and information system that enables the processing, management, delivery, and presentation of synchronized multimedia information with quality-of-service guarantees. Multimedia information may include discrete media data, such as text, data, and images, and continuous media data, such as video and audio. Such a system enhances human communications by exploiting both visual and aural senses and provides the ultimate flexibility in work and entertainment, allowing one to collaborate with remote participants, view movies on demand, access on-line digital libraries from the desktop, and so forth. In this paper, we present a technical survey of a DMS. We give an overview of distributed multimedia systems, examine the fundamental concept of digital media, identify the applications, and survey the important enabling technologies.

    Scalable on-demand streaming of stored complex multimedia

    Previous research has developed a number of efficient protocols for streaming popular multimedia files on-demand to potentially large numbers of concurrent clients. These protocols can achieve server bandwidth usage that grows much slower than linearly with the file request rate, and with the inverse of client start-up delay. This thesis makes the following three main contributions to the design and performance evaluation of such protocols. The first contribution is an investigation of the network bandwidth requirements for scalable on-demand streaming. The results suggest that the minimum required network bandwidth for scalable on-demand streaming typically scales as K/ln(K) as the number of client sites K increases for fixed request rate per client site, and as ln(N/(ND+1)) as the total file request rate N increases or client start-up delay D decreases, for a fixed number of sites. Multicast delivery trees configured to minimize network bandwidth usage rather than latency are found to only modestly reduce the minimum required network bandwidth. Furthermore, it is possible to achieve close to the minimum possible network and server bandwidth usage simultaneously with practical scalable delivery protocols. Second, the thesis addresses the problem of scalable on-demand streaming of a more complex type of media than is typically considered, namely variable bit rate (VBR) media. A lower bound on the minimum required server bandwidth for scalable on-demand streaming of VBR media is derived. The lower bound analysis motivates the design of a new immediate service protocol termed VBR bandwidth skimming (VBRBS) that uses constant bit rate streaming, when sufficient client storage space is available, yet fruitfully exploits the knowledge of a VBR profile. Finally, the thesis proposes non-linear media containing parallel sequences of data frames, among which clients can dynamically select at designated branch points, and investigates the design and performance issues in scalable on-demand streaming of such media. Lower bounds on the minimum required server bandwidth for various non-linear media scalable on-demand streaming approaches are derived, practical non-linear media scalable delivery protocols are developed, and, as a proof-of-concept, a simple scalable delivery protocol is implemented in a non-linear media streaming prototype system.
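    The logarithmic scaling quoted above can be illustrated numerically. The snippet below evaluates one commonly used form of the lower bound on server bandwidth for on-demand streaming of a single file, ln((N + ND + 1)/(ND + 1)), which behaves as ln(N/(ND + 1)) for large N. It is a sketch of the scaling behaviour only, not the thesis's own derivation, and the function name and example values are made up.

        import math

        def min_server_bandwidth(n_requests: float, startup_delay: float) -> float:
            """Lower bound on server bandwidth, in multiples of the media play rate.

            n_requests    -- N: requests for the file per playback duration
            startup_delay -- D: tolerated client start-up delay, as a fraction of
                             the playback duration
            The bound ln((N + N*D + 1) / (N*D + 1)) reduces to ln(N + 1) for
            immediate service (D = 0) and scales as ln(N / (N*D + 1)) for large N.
            """
            n, d = n_requests, startup_delay
            return math.log((n + n * d + 1.0) / (n * d + 1.0))

        # Server bandwidth grows only logarithmically with the request rate ...
        for n in (10, 100, 1000, 10000):
            print(f"N={n:>6}, D=0.000: {min_server_bandwidth(n, 0.0):5.2f} x play rate")

        # ... and drops quickly as clients tolerate a small start-up delay.
        for d in (0.001, 0.01, 0.1):
            print(f"N=  1000, D={d:.3f}: {min_server_bandwidth(1000, d):5.2f} x play rate")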

    An optimal bandwidth allocation strategy for the delivery of compressed prerecorded video

    The transportation of prerecorded, compressed video data without loss of picture quality requires the network and video servers to support large fluctuations in bandwidth requirements. Fully utilizing a client-side buffer for smoothing bandwidth requirements can limit the fluctuations in bandwidth required from the underlying network and the video-on-demand servers. This paper shows that, for a fixed-size buffer constraint, the critical bandwidth allocation technique results in plans for continuous playback of stored video that have (1) the minimum number of bandwidth increases, (2) the smallest peak bandwidth requirements, and (3) the largest minimum bandwidth requirements. The paper also introduces an optimal bandwidth allocation algorithm which, in addition to satisfying the three critical bandwidth allocation properties, minimizes the total number of bandwidth changes necessary for continuous playback. A comparison between the optimal bandwidth allocation algorithm and other critical bandwidth-based algorithms using 17 full-length movie videos and 3 seminar videos is also presented.
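    The idea of using a fixed client buffer to smooth a stored VBR stream can be sketched as follows. The code computes a piecewise-constant transmission plan that always stays between the underflow envelope (data the decoder has consumed) and the overflow envelope (consumed data plus the buffer size). It is a simplified greedy illustration of bandwidth smoothing under a buffer constraint, not the paper's critical or optimal bandwidth allocation algorithm, and the frame sizes in the example are invented.

        def smooth_plan(frame_sizes, buffer_size):
            """Piecewise-constant transmission plan (list of (slots, rate) runs) that
            avoids decoder underflow and client-buffer overflow. Simplified greedy,
            for illustration only."""
            n = len(frame_sizes)
            cum = [0.0]
            for s in frame_sizes:
                cum.append(cum[-1] + s)
            under = cum                                          # data needed by slot k
            over = [min(cum[k] + buffer_size, cum[n]) for k in range(n + 1)]
            plan, k, sent = [], 0, 0.0
            while k < n:
                r_lo, r_hi, j = 0.0, float("inf"), k
                best_j, best_rate = k + 1, 0.0
                while j < n:
                    j += 1
                    span = j - k
                    r_lo = max(r_lo, (under[j] - sent) / span)   # no underflow at slot j
                    r_hi = min(r_hi, (over[j] - sent) / span)    # no overflow at slot j
                    if r_lo > r_hi:
                        break                                    # a single rate is no longer feasible
                    best_j, best_rate = j, r_lo                  # lowest feasible rate so far
                plan.append((best_j - k, best_rate))
                sent += best_rate * (best_j - k)
                k = best_j
            return plan

        # A short VBR trace (frame sizes, e.g. in kilobits) and an 80-unit client buffer.
        print(smooth_plan([10, 60, 10, 10, 90, 10, 10, 10], 80.0))
        # -> [(4, 35.0), (1, 40.0), (3, 10.0)]: three constant-rate runs instead of
        #    transmitting each frame at its own instantaneous rate.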

    Quality of Service based Retrieval Strategy for Distributed Video on Demand on Multiple Servers

    The recent advances and development of inexpensive computers and high speed networking technology have enabled the Video on Demand (VoD) application to connect to shared-computing servers, replacing the traditional computing environments where each application had its own dedicated computing hardware. The VoD application enables the viewer to select his favorite video file from a list of video files and watch its reproduction at will. Early video on demand applications were based on a single video server from which all video streams were initiated; as the number of clients interested in VoD services grew, the focus shifted to Distributed VoD (DVoD) architectures, where the context of distribution may be distributed system components, distributed streaming servers, distributed media content, etc. The VoD server must handle several issues in order to be able to present a successful service. It has to receive the clients’ requests and analyze them, calculate the necessary resources for each request, and decide whether a request can be admitted or not. Once a request is admitted, the server must schedule it, retrieve the required video data and send the video data in a timely manner so that the client does not suffer data starvation in his buffer during the video reproduction. The overall objective of a VoD service provider is therefore to provide a better Quality of Service (QoS); issues related to QoS include efficient use of bandwidth and better throughput. One of the important issues is to retrieve the video data from the servers in minimum time and to start the playback of the video at the client side with a minimum waiting time. The overall time elapsed in retrieving the video data and starting the playback is known as access time. The thesis presents an efficient retrieval strategy for a distributed VoD environment where the basic objective is to minimize the access time while maintaining presentation continuity at the client side. We have neglected some of the network parameters which may affect the access time by assuming a high speed network between the servers and the client. The performance of the strategy has been analyzed and compared with the referenced PAR (Play After Retrieval) strategy. Further, the strategy is also analyzed under an availability condition, which is a more realistic approach.
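    A minimal sketch of the access-time idea: given the average play-out rate of a video and the bandwidth a candidate server can sustain towards the client, one can compute the smallest start-up wait that still guarantees presentation continuity, and then pick the server that minimizes that wait. This is an illustration under a constant-rate assumption only; the function and server names are hypothetical and the thesis's retrieval strategy is more involved.

        def min_startup_wait(video_size_mb, duration_s, download_mbps):
            """Smallest wait (seconds) before playback can start without the client
            buffer ever starving, assuming a constant download rate."""
            playback_mbps = (video_size_mb * 8) / duration_s      # average play-out rate
            if download_mbps >= playback_mbps:
                return 0.0                                        # playback can start at once
            # Otherwise the whole shortfall must be prefetched before playback begins.
            download_time_s = (video_size_mb * 8) / download_mbps
            return download_time_s - duration_s

        def pick_server(servers, video_size_mb, duration_s):
            """Choose the candidate server that yields the smallest access time."""
            return min(servers, key=lambda s: min_startup_wait(video_size_mb, duration_s, s[1]))

        # Hypothetical candidates as (name, available Mbit/s towards this client).
        servers = [("edge-1", 3.0), ("edge-2", 6.0), ("core-1", 4.5)]
        best = pick_server(servers, video_size_mb=2500, duration_s=5400)
        print(best[0], min_startup_wait(2500, 5400, best[1]))
        # -> edge-2 0.0  (it can sustain the ~3.7 Mbit/s play-out rate, so no wait)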

    A cross-layer quality-oriented energy-efficient scheme for multimedia delivery in wireless local area networks

    Wireless communication technologies, although they emerged only a few decades ago, have grown fast in both popularity and technical maturity. As a result, mobile devices such as Personal Digital Assistants (PDAs) or smart phones equipped with embedded wireless cards have seen remarkable growth in popularity and are quickly becoming one of the most widely used communication tools. This is mainly due to the flexibility, convenience and relatively low costs associated with these devices and wireless communications. Multimedia applications have become by far one of the most popular applications among mobile users. However, this type of application has very high bandwidth requirements, seriously restricting its use on portable devices. Moreover, wireless technology involves increased energy consumption and consequently puts huge pressure on the limited battery capacity, which presents many design challenges in the context of battery-powered devices. As a consequence, power management has gained attention in both the research and industrial communities, and huge efforts have been invested in energy conservation techniques and strategies deployed within different components of mobile devices. Our research presented in this thesis focuses on energy-efficient data transmission in wireless local area networks, and makes the following main contributions:
    1. Static STELA, a Medium Access Control (MAC) layer solution that adapts the sleep/wake-up schedule of the radio transceiver according to the bursty nature of data traffic and real-time observation of data packet arrival times. The algorithm involves three phases: a slow start phase, an exponential increase phase, and a linear increase phase. The initiation and termination of each phase are self-adapted to real-time traffic and user configuration. It is designed to provide either maximum energy efficiency or best Quality of Service (QoS) according to user preference.
    2. Dynamic STELA, a MAC layer solution deployed on the mobile devices that provides balanced performance between energy efficiency and QoS. Dynamic STELA consists of the three-phase algorithm used in static STELA, and additionally employs a traffic modeling algorithm to analyze historical traffic data and estimate the arrival time of the next burst. Dynamic STELA achieves energy saving through intelligent and adaptive increase of the Wireless Network Interface Card (WNIC) sleep interval in the second and third phases, and at the same time guarantees delivery performance by waking the WNIC just before the estimated arrival of a new data burst.
    3. Q-PASTE, a quality-oriented cross-layer solution for multimedia content delivery with two components employed at different network layers. The first component, the Packet/ApplicaTion manager (PAT), is deployed at the application layer of both the service gateway and the client host. The gateway-level PAT utilizes fast start, a widely supported technique for multimedia content delivery, to achieve high QoS, and shapes traffic into bursts to reduce the wireless transceiver’s duty cycle. Additionally, the gateway-side PAT informs the client host of the start and end times of fast start to assist parameter tuning. The client-side PAT monitors each active session and informs the MAC layer about its traffic-related behavior. The second component, dynamic STELA, deployed at the MAC layer, adaptively adjusts the sleep/wake-up behavior of the mobile device’s wireless interfaces in order to reduce energy consumption while maintaining high QoS levels.
    4. A comprehensive survey of energy-efficiency standards and some of the most important state-of-the-art energy-saving technologies, provided as part of the work.
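    The three-phase idea behind STELA can be illustrated with a toy controller: while no packets arrive, the sleep interval first grows exponentially and then linearly up to a cap, and an arriving burst resets it to the slow-start value. All thresholds, units and the reset rule below are assumptions for illustration, not the parameters used in the thesis.

        class SleepScheduler:
            """Toy three-phase sleep-interval controller (slow start, exponential
            increase, linear increase). Illustrative values, in milliseconds."""

            def __init__(self, base_ms=10, exp_limit_ms=160, max_ms=500, step_ms=40):
                self.base = base_ms             # slow-start (initial) sleep interval
                self.exp_limit = exp_limit_ms   # switch point from exponential to linear growth
                self.max = max_ms               # cap, to bound the added delay (QoS)
                self.step = step_ms             # linear-phase increment
                self.interval = base_ms

            def on_idle(self):
                """No packets arrived during the last wake period: sleep longer."""
                if self.interval < self.exp_limit:
                    self.interval = min(self.interval * 2, self.exp_limit)    # exponential phase
                else:
                    self.interval = min(self.interval + self.step, self.max)  # linear phase
                return self.interval

            def on_burst(self):
                """A data burst arrived: serve it, then restart from the slow-start value."""
                self.interval = self.base
                return self.interval

        sched = SleepScheduler()
        events = ["idle"] * 7 + ["burst"] + ["idle"] * 2
        print([sched.on_burst() if e == "burst" else sched.on_idle() for e in events])
        # -> [20, 40, 80, 160, 200, 240, 280, 10, 20, 40]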

    Some aspects of traffic control and performance evaluation of ATM networks

    The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate, through statistical multiplexing, large numbers of traffic sources having a broad range of statistical characteristics and different Quality of Service (QOS) requirements. To achieve high utilisation of network resources while maintaining the QOS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. The thesis studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed. It is shown that the neural controller is very effective in providing general QOS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at the input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated. In this method, a feedforward neural network is employed to model the nonlinear relationship between the effective bandwidth and both the traffic situation and a QOS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation has indicated that CAC schemes based on effective bandwidth approximation can be very conservative and prevent optimal use of network resources. A modified effective bandwidth CAC approach is therefore proposed to overcome the drawback of conventional methods. Considering statistical multiplexing between traffic sources, we directly calculate the effective bandwidth of the aggregate traffic, which is modelled by a two-state Markov modulated Poisson process obtained by matching four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. In addition, a more accurate estimation method for ATM QOS parameters based on the Bahadur-Rao theorem is proposed, which is a refinement of the original effective bandwidth approximation and can lead to higher link utilisation.
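    As a concrete reference point for the effective bandwidth discussion, the sketch below uses the standard analytical effective bandwidth of a two-state on-off Markov fluid source and the usual admission rule: admit the set of sources if the sum of their effective bandwidths fits in the link, with the space parameter derived from the buffer size and the target loss probability. It is the kind of conventional closed-form scheme discussed above, not the thesis's neural-network method, and the traffic figures in the example are invented.

        import math

        def onoff_effective_bandwidth(peak, mean_on_s, mean_off_s, theta):
            """Effective bandwidth of a two-state on-off Markov fluid source
            (standard large-deviations form), in the same units as `peak`."""
            a = 1.0 / mean_off_s                # off -> on transition rate
            b = 1.0 / mean_on_s                 # on -> off transition rate
            x = peak * theta - a - b
            return (x + math.sqrt(x * x + 4.0 * a * peak * theta)) / (2.0 * theta)

        def admit(sources, link_capacity, buffer_size, loss_target):
            """Admit the whole set of sources if their aggregate effective bandwidth
            fits in the link; theta is chosen so that P(overflow) ~ exp(-theta * B)."""
            theta = math.log(1.0 / loss_target) / buffer_size
            total = sum(onoff_effective_bandwidth(p, on, off, theta)
                        for p, on, off in sources)
            return total <= link_capacity, total

        # Ten identical hypothetical video sources: 10 Mbit/s peak, 40 ms on, 60 ms off,
        # offered to an 80 Mbit/s link with a 0.5 Mbit buffer and a 1e-6 loss target.
        sources = [(10.0, 0.040, 0.060)] * 10
        ok, needed = admit(sources, link_capacity=80.0, buffer_size=0.5, loss_target=1e-6)
        print(ok, round(needed, 1))   # with so little buffering each source is charged
                                      # close to its peak rate, so the set is rejected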

    Contribution to the improvement of the performance of wireless mesh networks providing real time services

    Nowadays, people's expectations of ubiquitous connectivity are continuously growing. Cities are moving towards the smart city paradigm. Electricity companies aim to become part of smart grids. The Internet is no longer exclusive to humans; we now assume the Internet of everything. We consider that Wireless Mesh Networks (WMNs) have a set of valuable features that will make them an important part of such environments. WMNs can also be used in less favored areas thanks to their low-cost deployment. This is socially relevant since it facilitates reducing the digital divide and could help to improve the population's quality of life. Research and industry have been working in recent years on open and proprietary mesh solutions. Standardization efforts and real deployments establish a solid starting point. We expect that WMNs will be a supporting part for a large number of new applications from a variety of fields: community networking, intelligent transportation systems, health systems, public safety, disaster management, advanced metering, etc. For all these cases, the growing need of users for real-time and multimedia information is evident. On this basis, this thesis proposes a set of contributions to improve the performance of an application service of this type and to promote the better use of two critical resources (memory and energy) of WMNs. For the offered service, this work focuses on a Video on Demand (VoD) system. One of the requirements of such a system is support for high capacity. This is mainly achieved by distributing the video contents among various distribution points, which in turn consist of several video servers. Each client request that arrives at such a video server cluster must be handled by a specific server in a way that balances the load. For this task, this thesis proposes a mechanism to select a specific video server such that the transfer time at the cluster is minimized. On the other hand, the mesh routers that form the mesh backbone are equipped with multiple interfaces of different technologies and channel types. An important resource is the amount of memory intended for buffers. The quality of service perceived by the users is largely affected by the size of such buffers, because important network performance parameters such as packet loss probability, delay, and channel utilization depend heavily on buffer sizes. Efficient use of memory for buffering, in addition to facilitating the scalability of mesh devices, also prevents the problems associated with excessively large buffers. Most current works associate the buffer sizing problem with the dynamics of the TCP congestion control mechanism. Since this work focuses on real-time services, in which the use of TCP is unfeasible, this thesis proposes a dynamic buffer sizing mechanism dedicated mainly to such real-time flows. The approach is based on the maximum entropy principle and allows each device to dynamically self-configure its buffers to achieve more efficient memory utilization. The performance of the proposal has been extensively evaluated on wired and wireless interfaces; classical infrastructure-based wireless and multi-hop mesh interfaces have been considered. Finally, when the WMN is built by the interconnection of user hand-helds, energy is a limited and scarce resource, and therefore any approach to optimize its use is valuable. For this case, this thesis proposes a topology control mechanism based on centrality metrics.
The main idea is that, instead of having all the devices executing routing functionalities, only a subset of nodes is selected for this task. We evaluate different centralities, from both centralized and distributed perspectives. In addition to the common random mobility models, we include an analysis of the proposal with a socially aware mobility model that generates networks with a community structure.
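    To give a feel for the buffer-sizing contribution, the sketch below sizes a buffer from a maximum-entropy style (geometric) queue-occupancy model parameterised only by the measured utilisation: the buffer is the smallest one whose tail probability stays below a loss target. This is an illustration of the general idea only, with hypothetical names and thresholds; the thesis's mechanism additionally adapts the size on-line per interface and traffic mix.

        import math

        def me_buffer_size(utilization, loss_target, max_packets=1000):
            """Smallest buffer (in packets) whose tail probability stays below the
            loss target under a geometric (maximum-entropy) occupancy model,
            P[N > B] ~ utilization ** (B + 1)."""
            if utilization <= 0.0:
                return 1
            if utilization >= 1.0:
                return max_packets                     # overloaded: fall back to the cap
            b = math.log(loss_target) / math.log(utilization) - 1.0
            return min(max(1, math.ceil(b)), max_packets)

        # Lightly loaded interfaces need only a few packets of buffering; heavily
        # loaded ones need more. Each node could recompute this as its load changes.
        for rho in (0.3, 0.6, 0.9, 0.98):
            print(rho, me_buffer_size(rho, loss_target=1e-3))
        # -> 5, 13, 65 and 341 packets respectively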