41 research outputs found

    Analysis of generic discrete-time buffer models with irregular packet arrival patterns

    The quality of the multimedia services offered over today's broadband communication networks is determined to a large extent by the performance of the buffers located in the various network elements (such as switching nodes, routers, modems, access multiplexers, network interfaces, ...). In this dissertation we study the performance of such a buffer by means of a suitable stochastic discrete-time queueing model, in which we consider the case of multiple output channels and (not necessarily identical) packet sources, with packet transmission times initially equal to one slot. The bursty, or correlated, nature of the packet stream generated by a source is characterised by means of a general D-BMAP (discrete-batch Markovian arrival process), which creates a generic framework for describing a superposition of such information streams. At a later stage we extend our study to the case of transmission times with a general distribution, where we restrict ourselves to a buffer with a single output channel. The analysis of these queueing models is carried out mainly by means of a particular mathematical-analytical approach that makes extensive use of probability generating functions, which allows the various performance measures to be expressed (more or less explicitly) as functions of the system parameters. This in turn results in efficient and accurate algorithms for computing these quantities, which can be implemented in a relatively straightforward manner
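The discrete-time buffer described above can be illustrated with a minimal simulation sketch. This is only a numerical illustration under simplifying assumptions: it uses an i.i.d. batch arrival distribution in place of the full correlated D-BMAP, and single-slot transmission times; the thesis itself derives exact results analytically via probability generating functions.

```python
import random

def simulate_buffer(num_slots, num_servers, batch_dist, seed=0):
    """Simulate a discrete-time buffer: in each slot, up to `num_servers`
    packets depart (one per output channel), then a random batch arrives.
    `batch_dist` maps batch sizes to their probabilities."""
    rng = random.Random(seed)
    sizes = list(batch_dist)
    weights = [batch_dist[s] for s in sizes]
    q = 0
    total = 0
    for _ in range(num_slots):
        q = max(q - num_servers, 0)           # departures: one per channel
        q += rng.choices(sizes, weights)[0]   # batch arrival this slot
        total += q
    return total / num_slots                  # mean buffer occupancy

# Example: two output channels, bursty arrivals with mean 1.5 packets/slot
mean_q = simulate_buffer(100_000, 2, {0: 0.5, 1: 0.1, 2: 0.1, 4: 0.3})
```

Even this crude sketch reproduces the qualitative effect the thesis quantifies exactly: burstier batch distributions with the same mean load yield larger mean buffer occupancy.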

    From burstiness characterisation to traffic control strategy : a unified approach to integrated broadband networks

    The major challenge in the design of an integrated network is the integration and support of a wide variety of applications. To provide the requested performance guarantees, a traffic control strategy has to allocate network resources according to the characteristics of the input traffic. In particular, how traffic is characterised plays a central role in network design. In this thesis, a traffic stream is characterised based on a virtual queue principle. This approach provides the necessary link between network resource allocation and traffic control. It is difficult to guarantee performance without prior knowledge of the worst behaviour under statistical multiplexing. Accordingly, we investigate the worst-case scenarios in a statistical multiplexer. We evaluate upper bounds on the probability of buffer overflow in a multiplexer, and on the data loss of an input stream. It is found that in networks without traffic control, simply controlling the utilisation of a multiplexer does not improve the ability to guarantee performance. Instead, the available buffer capacity and the degree of correlation among the input traffic streams dominate the loss performance. The leaky bucket mechanism has been proposed to protect ATM networks from performance degradation due to congestion. We study the leaky bucket mechanism as a regulation element that protects an input stream. We evaluate the optimal parameter settings and analyse the worst-case performance. To investigate its effectiveness, we analyse the delay performance of a leaky-bucket-regulated multiplexer. Numerical results show that the leaky bucket mechanism can provide well-behaved traffic with a guaranteed delay bound in the presence of misbehaving traffic. Using the leaky bucket mechanism, a general strategy based on burstiness characterisation, called the LB-Dynamic policy, is developed for packet scheduling. 
This traffic control strategy is closely related to the allocation of both bandwidth and buffer in each switching node. In addition, the LB-Dynamic policy monitors the allocated network resources and guarantees the network performance of each established connection, irrespective of the traffic intensity and arrival patterns of incoming packets. Simulation studies demonstrate that the LB-Dynamic policy is able to provide the requested service quality for heterogeneous traffic in integrated broadband networks
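The regulation element studied above can be sketched as a token-style leaky bucket. This is a generic textbook formulation, not the thesis' exact parameterisation: the names `rate` and `bucket_size` are illustrative stand-ins for the token generation rate and burst tolerance.

```python
from collections import deque

class LeakyBucket:
    """Token-style leaky bucket regulator: packets conforming to the
    token rate (with burst tolerance `bucket_size`) pass immediately;
    non-conforming packets wait in a queue for tokens."""

    def __init__(self, rate, bucket_size):
        self.rate = rate                  # tokens generated per slot
        self.bucket = bucket_size         # current token count (starts full)
        self.bucket_size = bucket_size    # burst tolerance
        self.queue = deque()              # packets waiting for a token

    def slot(self, arrivals):
        """Advance one time slot; return the packets released this slot."""
        self.bucket = min(self.bucket + self.rate, self.bucket_size)
        self.queue.extend(arrivals)
        released = []
        while self.queue and self.bucket >= 1:
            released.append(self.queue.popleft())
            self.bucket -= 1
        return released
```

A burst of five packets offered to a bucket with `rate=1`, `bucket_size=2` is released as 2, 1, 1, 1 over four slots: the burst tolerance absorbs the first two packets and the rest are paced at the token rate, which is exactly the shaping behaviour the abstract describes.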

    Driving the Network-on-Chip Revolution to Remove the Interconnect Bottleneck in Nanoscale Multi-Processor Systems-on-Chip

    The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called SYSTEM-ON-CHIP (SoC) or MULTI-PROCESSOR SYSTEM-ON-CHIP (MP-SoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With a number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how to best provide on-chip communication resources is clearly felt. NETWORKS-ON-CHIPS (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
    • The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
    • Simulation and verification infrastructure must be put in place to explore, validate and optimize the NoC performance.
    • NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
    • Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
    This dissertation performs a design space exploration of network-on-chip architectures, in order to point out the trade-offs associated with the design of each individual network building block and with the design of the network topology overall. The design space exploration is preceded by a comparative analysis of state-of-the-art interconnect fabrics with one another and with early network-on-chip prototypes. The ultimate objective is to point out the key advantages that NoC realizations provide with respect to state-of-the-art communication infrastructures, and the challenges that lie ahead in order to make this new interconnect technology a reality. Among the latter, technology-related challenges are emerging that call for dedicated design techniques at all levels of the design hierarchy; in particular, leakage power dissipation and the containment of process variations and their effects. The achievement of the above objectives was enabled by a NoC simulation environment for cycle-accurate modelling and simulation, and by a back-end facility for the study of NoC physical implementation effects. Overall, all the results provided by this work have been validated on actual silicon layout

    Designing new network adaptation and ATM adaptation layers for interactive multimedia applications

    Multimedia services, audiovisual applications composed of a combination of discrete and continuous data streams, will be a major part of the traffic flowing in the next generation of high-speed networks. The cornerstones for multimedia are Asynchronous Transfer Mode (ATM), foreseen as the technology for the future Broadband Integrated Services Digital Network (B-ISDN), and audio and video compression algorithms such as MPEG-2 that reduce applications' bandwidth requirements. Powerful desktop computers available today can seamlessly integrate the network access and the applications, and thus bring the new multimedia services to home and business users. Among these services, those based on multipoint capabilities are expected to play a major role.
    Interactive multimedia applications, unlike traditional data transfer applications, have stringent simultaneous requirements in terms of loss and delay jitter due to the nature of audiovisual information. In addition, such stream-based applications deliver data at a variable rate, in particular if a constant quality is required.
    ATM is able to integrate traffic of different natures within a single network, creating interactions of different types that translate into delay jitter and loss. Traditional protocol layers do not have the appropriate mechanisms to provide the required network quality of service (QoS) for such interactive variable bit rate (VBR) multimedia multipoint applications. This lack of functionality calls for the design of protocol layers with the appropriate functions to handle the stringent requirements of multimedia.
    This thesis contributes to the solution of this problem by proposing new Network Adaptation and ATM Adaptation Layers for interactive VBR multimedia multipoint services.
    The foundations on which these new multimedia protocol layers are built are twofold: the requirements of real-time multimedia applications and the nature of compressed audiovisual data.
    On this basis, we present a set of design principles we consider mandatory for a generic Multimedia AAL (MAAL) capable of handling interactive VBR multimedia applications in point-to-point as well as multicast environments. These design principles are then used as a foundation to derive a first set of functions for the MAAL, namely: cell loss detection via sequence numbering, packet delineation, dummy cell insertion, and cell loss correction via RSE FEC techniques.
    The proposed functions, partly based on theoretical studies, are implemented and evaluated in a simulated environment. Performance is evaluated from the network point of view using classic metrics such as cell and packet loss. We also study the behaviour of the cell loss process in order to evaluate the efficiency to be expected from the proposed cell loss correction method. We also discuss the difficulties of mapping network QoS parameters to user QoS parameters for multimedia applications, especially for video information. In order to present a complete performance evaluation that is also meaningful to the end-user, we make use of the MPQM metric to map the obtained network performance results to the user level. We evaluate the impact that cell loss has on video, as well as the improvements achieved with the MAAL.
    All performance results are compared to an equivalent implementation based on AAL5, as specified by the current ITU-T and ATM Forum standards.
    An AAL has to be, by definition, generic. But to fully exploit the functionalities of the AAL layer, it is necessary to have a protocol layer that efficiently interfaces the network and the applications. This role is devoted to the Network Adaptation Layer.
    The network adaptation layer (NAL) we propose aims to efficiently interface the applications with the underlying network in order to achieve a reliable but low-overhead transmission of video streams.
    Since this requires a priori knowledge of the information structure to be transmitted, we propose that the NAL be codec specific.
    The NAL targets interactive multimedia applications. These applications share a set of common requirements independent of the encoding scheme used. This calls for the definition of a set of design principles that should be shared by any NAL, even if the implementation of the functions themselves is codec specific. On the basis of these design principles, we derive the common functions that NALs have to perform, which are mainly two: the segmentation and reassembly of data packets, and selective data protection.
    On this basis, we develop an MPEG-2-specific NAL. It provides perceptual syntactic information protection, the PSIP, which results in an intelligent and minimum-overhead protection of video information. The PSIP takes advantage of the hierarchical organization of compressed video data, common to the majority of compression algorithms, to perform a selective data protection based on the perceptual relevance of the syntactic information.
    Transmission over the combined NAL-MAAL layers shows significant improvement in terms of CLR and perceptual quality compared to equivalent transmissions over AAL5 with the same overhead.
    The usage of the MPQM as a performance metric, which is one of the main contributions of this thesis, leads to a very interesting observation. The experimental results show that for unexpectedly high CLRs, the average perceptual quality remains close to the original value. The economic potential of such an observation is very important. Given that the data flows are VBR, it is possible to improve network utilization by means of statistical multiplexing. It is therefore possible to reduce the cost per communication by increasing the number of connections with a minimal loss in quality.
    This conclusion could not have been reached without the combined usage of perceptual and network QoS metrics, which were able to unveil the economic potential of perceptually protected streams.
    The proposed concepts are finally tested in a real environment, where a proof-of-concept implementation of the MAAL has shown behaviour close to the simulated results, thereby validating the proposed multimedia protocol layers
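The first MAAL function listed in this abstract, cell loss detection via sequence numbering, can be sketched in a few lines. The modulo value here is illustrative (AAL-style sequence counters are small fields), not the exact MAAL field width from the thesis:

```python
def detect_cell_losses(received_seq_nums, modulo=16):
    """Estimate the number of lost cells from a stream of modulo-N
    sequence numbers carried by the received cells: any gap larger
    than 1 between consecutive numbers indicates missing cells."""
    losses = 0
    for prev, cur in zip(received_seq_nums, received_seq_nums[1:]):
        gap = (cur - prev) % modulo
        losses += gap - 1  # a gap of exactly 1 means no loss
    return losses

# Cells 0,1,2,5,6 received with modulo-16 numbering: cells 3 and 4 lost
assert detect_cell_losses([0, 1, 2, 5, 6]) == 2
```

The modular subtraction makes the counter wraparound (... 14, 15, 0, 1 ...) register as loss-free; detection fails only if a full cycle of `modulo` consecutive cells is lost, which is why the counter width is a design parameter.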

    Improving Large-Scale Network Traffic Simulation with Multi-Resolution Models

    Simulating a large-scale network like the Internet is a challenging undertaking because of the sheer volume of its traffic. Packet-oriented representation provides high-fidelity details but is computationally expensive; fluid-oriented representation offers high simulation efficiency at the price of losing packet-level details. Multi-resolution modeling techniques exploit the advantages of both representations by integrating them in the same simulation framework. This dissertation presents solutions to problems regarding the efficiency, accuracy, and scalability of the traffic simulation models in this framework. The "ripple effect" is a well-known problem inherent in event-driven fluid-oriented traffic simulation, causing an explosion of fluid rate changes. Integrating multi-resolution traffic representations requires estimating arrival rates of packet-oriented traffic, calculating the queueing delay upon a packet arrival, and computing the packet loss rate under buffer overflow. Real-time simulation of a large or ultra-large network demands efficient background traffic simulation. The dissertation includes a rate smoothing technique that provably mitigates the "ripple effect", an accurate and efficient approach that integrates traffic models at multiple abstraction levels, a sequential algorithm that achieves real-time simulation of the coarse-grained traffic in a network with 3 tier-1 ISP (Internet Service Provider) backbones using an ordinary PC, and a highly scalable parallel algorithm that simulates network traffic at coarse time scales
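The ripple-effect mitigation mentioned above can be illustrated with a toy sketch: a fluid link propagates a rate change downstream only when it exceeds a relative threshold, so small fluctuations are absorbed instead of cascading as new events. The threshold semantics here are an assumption for illustration, not the dissertation's provable rate smoothing algorithm.

```python
class FluidLink:
    """Toy fluid-simulation link that suppresses insignificant rate-change
    events: changes below a relative threshold are absorbed locally rather
    than propagated downstream, curbing the event cascade."""

    def __init__(self, threshold=0.05):
        self.threshold = threshold     # relative change needed to propagate
        self.current_rate = 0.0        # last rate propagated downstream
        self.events_emitted = 0        # downstream events generated so far

    def update(self, new_rate):
        """Return the rate to propagate downstream, or None if absorbed."""
        base = max(self.current_rate, 1e-9)
        if abs(new_rate - self.current_rate) / base < self.threshold:
            return None                # change too small: absorb it
        self.current_rate = new_rate   # significant change: propagate
        self.events_emitted += 1
        return new_rate
```

With a 5% threshold, a flow jittering between 10.0 and 10.2 Mb/s generates no downstream events at all, while a jump to 11.0 Mb/s propagates; the trade-off is a bounded rate error in exchange for far fewer events.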

    A Priority-based Fair Queuing (PFQ) Model for Wireless Healthcare System

    Healthcare is a very active research area, primarily due to the increase in the elderly population, which leads to an increasing number of emergency situations that require urgent action. In recent years, wireless networked medical devices have been equipped with different sensors to measure and report on the vital signs of patients remotely. The most important are heart rate (ECG), pressure and glucose sensors. However, the strict requirements and real-time nature of medical applications dictate the extreme importance of, and need for, appropriate Quality of Service (QoS) and fast, accurate delivery of a patient's measurements in a reliable e-health ecosystem. As the older adult population (65 years and above) increases due to the advancement of medicine and medical care over the last two decades, a high-QoS, reliable e-health ecosystem has become a major challenge in healthcare, especially for patients who require continuous monitoring and attention. Moreover, predictions indicate that the elderly population in developing countries will reach approximately 2 billion by 2050, a growth, together with the emergency cases needing immediate intervention, with which the available medical staff will be unable to cope. On the other hand, limitations in communication network capacity, congestion, and the enormous increase in devices, applications and IoT traffic using the available communication networks add an extra layer of challenges to the e-health ecosystem, such as time constraints and the quality of measurements and signals reaching healthcare centres. Hence, this research has tackled the delay and jitter parameters in e-health M2M wireless communication and succeeded in reducing them in comparison to currently available models. 
    The novelty of this research lies in the development of a new priority queuing model, "Priority-Based Fair Queuing" (PFQ), in which a new priority level and the concept of a "Patient's Health Record" (PHR) have been developed and integrated with the Priority Parameter (PP) values of each sensor to add a second level of priority. The results and data analysis performed on the PFQ model under different scenarios simulating a real M2M e-health environment have revealed that PFQ outperforms the widely used current models such as First In First Out (FIFO) and Weighted Fair Queuing (WFQ). The PFQ model has improved the transmission of ECG sensor data by decreasing delay and jitter in emergency cases by 83.32% and 75.88% respectively in comparison to FIFO, and by 46.65% and 60.13% with respect to the WFQ model. Similarly, for the pressure sensor the improvements were 82.41% and 71.5%, and 68.43% and 73.36%, in comparison to FIFO and WFQ respectively. Data transmission was also improved for the glucose sensor, by 80.85% and 64.7%, and 92.1% and 83.17%, in comparison to FIFO and WFQ respectively. However, data transmission for non-emergency cases using the PFQ model was negatively impacted and exhibited higher delay and jitter than FIFO and WFQ, since PFQ tends to give higher priority to emergency cases. Thus, a derivative of the PFQ model has been developed, a new version named "Priority-Based Fair Queuing-Tolerated Delay" (PFQ-TD), to balance data transmission between emergency and non-emergency cases by taking a tolerated delay in emergency cases into account. PFQ-TD has succeeded in balancing this trade-off fairly, reducing the total average delay and jitter of emergency and non-emergency cases for all sensors and keeping them within the acceptable standards. PFQ-TD has improved the overall average delay and jitter in emergency and non-emergency cases across all sensors by 41% and 84% respectively in comparison to the PFQ model
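The two-level priority idea described above, an emergency level combined with a per-sensor PHR weight, can be sketched with a heap-based scheduler. The weighting scheme and names below are illustrative assumptions, not the exact PFQ formulation from the thesis:

```python
import heapq
import itertools

class PriorityFairQueue:
    """Two-level priority scheduler sketch: packets carry an emergency
    flag (first level) and a patient-record (PHR) weight (second level);
    emergency traffic is always served first, higher PHR weight next,
    with ties broken in arrival (FIFO) order."""

    EMERGENCY, ROUTINE = 0, 1

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # monotonically increasing FIFO tie-breaker

    def enqueue(self, packet, emergency, phr_weight):
        level = self.EMERGENCY if emergency else self.ROUTINE
        # Lower tuples sort first: emergency level, then negated PHR weight
        heapq.heappush(self._heap, (level, -phr_weight, next(self._order), packet))

    def dequeue(self):
        """Return the next packet to transmit, or None if the queue is empty."""
        return heapq.heappop(self._heap)[-1] if self._heap else None
```

Because emergency packets always sort ahead of routine ones, this sketch also shows the starvation risk for non-emergency traffic noted in the abstract, the problem the PFQ-TD variant addresses with a tolerated delay.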

    The specification and design of a prototype 2-D MPEG-4 authoring tool

    The purpose of this project was the specification, design and implementation of a prototype 2-D MPEG-4 authoring tool. A literature study was conducted of the MPEG-4 standard and multimedia authoring tools to determine the specification and design of a prototype 2-D MPEG-4 authoring tool. The specification and design were used as a basis for the implementation of a prototype 2-D MPEG-4 authoring tool that complies with the Complete 2-D Scene Graph Profile. The need for research into MPEG-4 authoring tools arose from the reported lack of knowledge of the MPEG-4 standard and the limited implementations of MPEG-4 authoring tools available to content authors. In order for MPEG-4 to reach its full potential, it will require authoring tools and content players that satisfy the needs of its users. The theoretical component of this dissertation included a literature study of the MPEG-4 standard and an investigation of relevant multimedia authoring systems. MPEG-4 was introduced as a standard that allows for the creation and streaming of interactive multimedia content at variable bit rates over high and low bandwidth connections. The requirements for the prototype 2-D MPEG-4 authoring system were documented, and a prototype system satisfying the requirements was designed, implemented and evaluated. The evaluation of the prototype system showed that it successfully satisfied all its requirements and provides the user with an easy-to-use and intuitive authoring tool. MPEG-4 has the potential to satisfy the increasing demand for innovative multimedia content on low bandwidth networks, including the Internet and mobile networks, as well as the need expressed by users to interact with multimedia content. This dissertation makes an important contribution to the understanding of the MPEG-4 standard, its functionality and the design of a 2-D MPEG-4 authoring tool. Keywords: MPEG-4; MPEG-4 authoring; Binary Format for Scenes

    An Improved Active Network Concept and Architecture for Distributed and Dynamic Streaming Multimedia Environments with Heterogeneous Bandwidths

    A problem in today's Internet infrastructure may occur when a streaming multimedia application is to take place. The information content of video and audio signals that contain moving or changing scenes may simply be too great for Internet clients with low bandwidth capacity if no adaptation is performed. In order to satisfactorily reach clients with various bandwidth capacities, works such as receiver-driven multicast and resilient overlay networks (RON) have been developed. However, these efforts mainly call for modifications at the router management level or place an additional layer on the Internet structure, which is not advisable in the near future given the high acceptance level and wide utilization of the current Internet structure, and the lengthy and tiring standardization process required for a new structure or modification to be accepted. We have developed an improved active network approach for distributed and dynamic streaming multimedia environments with heterogeneous bandwidths, such as the Internet. The Friendly Active Network System (FANS) is an instance of our approach. Adopting the application level active network (ALAN) mechanism, FANS participants and available media are referred to through their uniform resource locators (URLs). The system intercepts traffic flowing from source to destination and performs media post-processing at an intermediate peer. The process is performed at the application level instead of at the router level, which was the original approach of active networks. FANS requires no changes in router-level management, puts no additional requirements on the current Internet architecture and is, hence, instantly applicable. In comparison with ALAN, FANS possesses two significant differences. First, from the system overview, ALAN requires three minimum elements: clients, servers, and dynamic proxy servers. FANS, on the other hand, unifies the functionalities of those three elements. 
    Each peer in FANS is at once a client, an intermediate peer, and a media server. Secondly, the FANS member tracking system dynamically detects the existence of newly joined computers or mobile devices, provided their URLs are available and announced. In ALAN, the servers and the middle nodes are known a priori and, hence, static. The application-level approach and better performance characteristics also distinguish our work from another similar work in this field, which uses a router-level approach. The approach offers, in general, the following improvements:
    • FANS promotes QoS fairness, in which clients with lower bandwidth are accommodated and receive better quality of service.
    • FANS introduces a new algorithm to determine whether or not the involvement of intermediate peer(s) to perform media post-processing enhancement services is necessary. This mechanism is important and advantageous because intermediate post-processing increases the delay and should therefore only be employed selectively.
    • FANS considers the size of the media data and the capacity of the client's bandwidth as network parameters that determine the level of quality of service offered.
    By employing the above techniques, our experiments with an Internet emulator show that our approach improves the reliability of streaming media applications in such environments
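The second improvement above, deciding selectively whether an intermediate peer should post-process a stream, can be sketched as a simple admission rule using the two parameters the abstract names, media size and client bandwidth. The rule and all names here are illustrative assumptions; the actual FANS algorithm is more elaborate:

```python
def needs_intermediate_processing(media_size_bytes, client_bandwidth_bps,
                                  max_delay_s):
    """Illustrative decision rule: involve an intermediate peer (e.g. to
    transcode the media down) only when transferring the media unmodified
    would exceed the client's delay budget at its available bandwidth.
    Not the thesis' actual FANS algorithm."""
    transfer_time_s = media_size_bytes * 8 / client_bandwidth_bps
    return transfer_time_s > max_delay_s
```

For a 1 MB clip and a 1 Mb/s client with a 2 s budget, the raw transfer would take about 8 s, so the rule engages an intermediate peer; a broadband client fetching the same clip skips the extra hop, avoiding the added post-processing delay the abstract warns about.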

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports ; 7551)

    ReCoSoC is intended to be an annual meeting to expose and discuss gathered expertise as well as state-of-the-art research around SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account the emerging techniques and architectures exploring the synergy between flexible on-chip communication and system reconfigurability