21 research outputs found

    Dynamic bandwidth allocation in multi-class IP networks using utility functions.

    PhD. Abstract not available. Fujitsu Telecommunications Europe Ltd

    Quality of service technologies for multimedia applications in next generation networks

    Next Generation Networks are constantly evolving towards solutions that allow the operator to provide advanced multimedia applications with QoS guarantees in heterogeneous, multi-domain and multi-service networks. Beyond the unquestionable advantages inherent in the ability to simultaneously handle traffic flows at different QoS levels, these architectures require management systems to efficiently enforce quality guarantees and network resource utilization. These issues have been addressed in this thesis. DiffServ-aware Traffic Engineering (DS-TE) has been considered as the reference architecture for the deployment of the quality management systems; it represents the most advanced technology for accomplishing both network scalability and service granularity goals. On the basis of DS-TE features, a methodology for traffic and network resource management has been defined. It provides rules for QoS service characterization and allows Traffic Engineering policies to be implemented with a class-based approach. A set of basic parameters for quality evaluation, the Key Performance Indicators, has been defined; mathematical models to derive the statistical nature of traffic have been analyzed, and an algorithm has been developed to improve the fulfillment of quality-of-service targets and to optimize network resource utilization. It is aimed at reducing the complexity inherent in the setting of some of the key parameters in NGN architectures. Multi-domain scenarios with technologies different from DS-TE have also been evaluated, defining methodologies for network interoperability. Simulations with OPNET Modeler confirmed the efficacy of the proposed system in computing network configurations that meet QoS targets. With regard to QoS performance at the application level, video streaming applications in wireless domains have been particularly addressed.
A rate control algorithm that adjusts the rate on a per-window basis has been defined; it uses a short-term prediction of the network delay to keep the probability of playback buffer starvation below a desired threshold during each window. Finally, a framework for mutual authentication in web applications has been proposed and evaluated. It integrates an IBA password technique with a challenge-response scheme based on a shared secret key for image scrambling. The proposed system mainly addresses the wireless environment and tries to overcome the severe constraints on security, data transmission capability and user friendliness imposed by such an environment.
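The per-window, delay-predicted rate control described in the abstract can be sketched roughly as follows. The function names, the exponential smoothing weight, and the safety margin are illustrative assumptions, not the thesis's actual algorithm.

```python
# Sketch of window-based rate control driven by a short-term delay
# prediction (all names and constants are illustrative assumptions).

def predict_delay(samples, alpha=0.7):
    """Exponentially weighted short-term prediction of network delay (s)."""
    est = samples[0]
    for d in samples[1:]:
        est = alpha * est + (1 - alpha) * d
    return est

def next_window_rate(buffer_s, playout_rate, delay_samples, margin=1.5):
    """Pick a sending rate for the next window so the playback buffer is
    unlikely to starve: the predicted delivery delay, scaled by a safety
    margin, must stay below the buffered playout time."""
    d_hat = predict_delay(delay_samples)
    safe_buffer = buffer_s / margin
    if d_hat >= safe_buffer:
        # Predicted delay eats into the buffer: send faster to refill it.
        return playout_rate * (1 + (d_hat - safe_buffer) / safe_buffer)
    # Buffer is comfortable: simply track the playout rate.
    return playout_rate

# With small delays the sender just matches the playout rate (kbit/s here).
rate = next_window_rate(buffer_s=2.0, playout_rate=500.0,
                        delay_samples=[0.08, 0.10, 0.12])
```

The key property, as in the abstract, is that the decision is made once per window from a prediction, rather than reacting packet-by-packet.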

    Performance of the transmission control protocol (TCP) over wireless with quality of service.

    Thesis (M.Sc.Eng.)-University of Natal, Durban, 2001. The Transmission Control Protocol (TCP) is the most widely used transport protocol in the Internet. TCP is a reliable transport protocol that is tuned to perform well in wired networks, where packet losses are mainly due to congestion. Wireless channels are characterized by losses due to transmission errors and handoffs. TCP interprets these losses as congestion and invokes congestion control mechanisms, resulting in degraded performance. TCP is usually layered over the Internet Protocol (IP) at the network layer. IP is not reliable and does not provide for any Quality of Service (QoS). The Internet Engineering Task Force (IETF) has provided two techniques for providing QoS in the Internet: Integrated Services (IntServ) and Differentiated Services (DiffServ). IntServ provides flow-based quality of service and thus is not scalable on connections with large numbers of flows. DiffServ has grown in popularity since it is scalable. A packet in a DiffServ domain is classified into a class of service according to its contract profile and treated according to its class. To provide end-to-end QoS there is a strong interaction between the transport protocol and the network protocol. In this dissertation we consider the performance of TCP over a wireless channel. We study whether the current TCP protocols can deliver the desired quality of service given the challenges they face on a wireless channel. The dissertation discusses the methods of providing for QoS in the Internet. We derive an analytical model for the TCP protocol; it is extended to cater for the wireless channel and then further for differentiated services. The model is shown to be accurate when compared to simulation. We then conclude by deducing to what degree the desired QoS can be provided with TCP on a wireless channel.
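The abstract does not reproduce the dissertation's analytical model, but the mechanism it describes — wireless error losses being misread as congestion — is captured by the widely known Mathis approximation for steady-state TCP throughput, shown here as a point of reference rather than as the thesis's model.

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, c=math.sqrt(3.0 / 2.0)):
    """Mathis et al. approximation: throughput ~ (MSS / RTT) * C / sqrt(p).
    Treating wireless transmission errors as congestion losses inflates the
    effective loss rate p, which is why TCP throughput collapses on lossy
    links even when no real congestion exists."""
    return (mss_bytes * 8 / rtt_s) * c / math.sqrt(loss_rate)

# Same path, but error losses on the wireless hop raise p from 0.1% to 2%.
wired = tcp_throughput_bps(1460, 0.1, 0.001)
wireless = tcp_throughput_bps(1460, 0.1, 0.02)
```

Since throughput scales as 1/sqrt(p), a 20-fold increase in perceived loss cuts throughput by a factor of sqrt(20), roughly 4.5.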

    Quality of service schemes for mobile ad-hoc networks

    To achieve QoS, independently of the routing protocol, each mobile node participating in the network must implement traffic conditioning, traffic marking and buffer management (Random Early Drop with In/Out dropping) or queue scheduling (Priority Queuing) schemes. In MANETs, since the mobile nodes can play multiple roles simultaneously (ingress, interior and destination), it was found that traffic conditioning and marking must be implemented in all mobile nodes acting as source (ingress) nodes. Buffer management and queue scheduling schemes must be performed by all mobile nodes. By utilizing the Network Simulator (NS2) tool, this thesis focused on the empirical performance evaluation of the QoS schemes for different types of traffic (FTP/TCP, CBR/UDP and VBR/UDP), geographical areas of different sizes and various mobility levels. Key metrics, such as throughput, end-to-end delay and packet loss rates, were used to measure the relative improvements of QoS-enabled traffic sessions. The results indicate that in the presence of congestion, service differentiation can be achieved under different scenarios and for different types of traffic, whenever a physical connection between two nodes is realizable.
    http://archive.org/details/qualityofservice109451082
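The buffer-management scheme named above, Random Early Drop with In/Out dropping (often called RIO), can be sketched as two RED curves: a lenient one for in-profile packets and an aggressive one for out-of-profile packets. The thresholds and maximum drop probabilities below are illustrative, not values from the thesis.

```python
def rio_drop_prob(avg_queue, in_profile,
                  in_th=(40, 70), out_th=(10, 30), max_p=(0.02, 0.10)):
    """RED with In/Out dropping: out-of-profile packets see lower queue
    thresholds and a higher maximum drop probability, so they are shed
    first when congestion builds. Thresholds are illustrative."""
    min_th, max_th = in_th if in_profile else out_th
    p_max = max_p[0] if in_profile else max_p[1]
    if avg_queue < min_th:
        return 0.0          # queue short enough: never drop
    if avg_queue >= max_th:
        return 1.0          # past the hard limit: always drop
    # Linear ramp between the two thresholds, as in classic RED.
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

At the same average queue length, an out-of-profile packet can already be dropped while an in-profile packet still passes untouched, which is exactly the service differentiation the evaluation measures.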

    An Introduction to Computer Networks

    An open textbook for undergraduate and graduate courses on computer networks

    Reactive traffic control mechanisms for communication networks with self-similar bandwidth demands

    Communication network architectures are in the process of being redesigned so that many different services are integrated within the same network. Due to this integration, traffic management algorithms need to balance the requirements of the traffic which the algorithms are directly controlling with the Quality of Service (QoS) requirements of other classes of traffic which will be encountered in the network. Of particular interest is one class of traffic, termed elastic traffic, that responds to dynamic feedback from the network regarding the amount of available resources within the network. Examples of this type of traffic include the Available Bit Rate (ABR) service in Asynchronous Transfer Mode (ATM) networks and connections using the Transmission Control Protocol (TCP) in the Internet. Both examples aim to utilise available bandwidth within a network. Reactive traffic management, like that which occurs in the ABR service and TCP, depends explicitly on the dynamic bandwidth requirements of other traffic which is currently using the network. In particular, there is significant evidence that a wide range of network traffic, including Ethernet, World Wide Web, Variable Bit Rate video and signalling traffic, is self-similar. The term self-similar refers to the particular characteristic of network traffic to remain bursty over a wide range of time scales. A closely associated characteristic of self-similar traffic is its long-range dependence (LRD), which refers to the significant correlations that occur within the traffic. By utilising these correlations, greater predictability of network traffic can be achieved, and hence the performance of reactive traffic management algorithms can be enhanced. A predictive rate control algorithm, called PERC (Predictive Explicit Rate Control), is proposed in this thesis, targeted at the ABR service in ATM networks.
By incorporating the LRD stochastic structure of background traffic, measurements of the bandwidth requirements of background traffic, and the delay associated with a particular ABR connection, a predictive algorithm is defined which provides explicit rate information that is conveyed to ABR sources. An enhancement to PERC is also described. This algorithm, called PERC+, uses previous control information to correct prediction errors that occur for connections with larger round-trip delays. These algorithms have been extensively analysed with regard to their network performance, and simulation results show that queue lengths and cell loss rates are significantly reduced when these algorithms are deployed. An adaptive version of PERC has also been developed using real-time parameter estimates of self-similar traffic. It has excellent performance compared with standard ABR rate control algorithms such as ERICA. Since PERC and its enhancement PERC+ explicitly utilise the index of self-similarity, known as the Hurst parameter, the sensitivity of these algorithms to this parameter can be determined analytically. Research work described in this thesis shows that the algorithms have an asymmetric sensitivity to the Hurst parameter, with significant sensitivity in the region where the parameter is underestimated as being close to 0.5. Simulation results reveal the same bias in the performance of the algorithm with regard to the Hurst parameter. In contrast, PERC is insensitive to estimates of the mean, using the sample mean estimator, and to estimates of the traffic variance, because the algorithm primarily utilises the correlation structure of the traffic to predict future bandwidth requirements. Sensitivity analysis falls into the area of investigative research, but it naturally leads to the area of robust control, where algorithms are designed so that uncertainty in traffic parameter estimation or modelling can be accommodated.
As an alternative to the standard maximum-entropy approach, a robust design approach is proposed in this thesis that uses the maximum likelihood function to develop the predictive rate controller. The likelihood function defines the proximity of a specific traffic model to the traffic data, and hence gives a measure of the performance of a chosen model. Maximising the likelihood function leads to optimising robust performance, and it is shown through simulations that the system performance is close to optimal when compared with maximising the spectral entropy. There is still debate regarding the influence of LRD on network performance. This thesis also considers the question of the influence of LRD on traffic predictability, and demonstrates that predictive rate control algorithms that only use short-term correlations perform close to algorithms that utilise long-term correlations. It is noted that predictors based on LRD still outperform ones which use short-term correlations, but there is potential simplification in the design of predictors, since traffic predictability can be achieved using short-term correlations. This thesis forms a substantial contribution to the understanding of control in the case where self-similar processes form part of the overall system. Rather than doggedly pursuing self-similar control, a broader view has been taken in which the performance of the algorithms has been considered from a number of perspectives. A number of different research avenues lead on from this work, and these are outlined.
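The long-range dependence that PERC exploits is usually parameterised by the Hurst exponent H. For fractional Gaussian noise, the standard textbook closed form for the lag-k autocorrelation (shown below; this is general theory, not code from the thesis) makes the contrast concrete: at H = 0.5 distant samples are uncorrelated, while for H > 0.5 correlations decay slowly, which is what makes self-similar traffic predictable over long horizons.

```python
def fgn_autocorrelation(k, hurst):
    """Lag-k autocorrelation of fractional Gaussian noise:
    rho(k) = 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H).
    For H = 0.5 (no LRD) this is exactly 0 for every k >= 1; for
    H > 0.5 it stays positive and decays slowly with k, so distant
    samples still carry predictive information."""
    h2 = 2.0 * hurst
    return 0.5 * (abs(k + 1) ** h2 - 2 * abs(k) ** h2 + abs(k - 1) ** h2)

short_memory = fgn_autocorrelation(10, 0.5)   # zero: no long memory
long_memory = fgn_autocorrelation(10, 0.9)    # still clearly positive
```

The asymmetric sensitivity reported in the thesis is consistent with this form: underestimating H towards 0.5 flattens rho(k) to zero and discards exactly the correlations the predictor relies on.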

    Treatment-Based Classification in Residential Wireless Access Points

    IEEE 802.11 wireless access points (APs) act as the central communication hub inside homes, connecting all networked devices to the Internet. Home users run a variety of network applications with diverse Quality-of-Service (QoS) requirements through their APs. However, wireless APs are often the bottleneck in residential networks as broadband connection speeds keep increasing. Because of the lack of QoS support and the complicated configuration procedures in most off-the-shelf APs, users can experience QoS degradation on their wireless networks, especially when multiple applications are running concurrently. This dissertation presents CATNAP, Classification And Treatment iN an AP, to provide better QoS support for various applications over residential wireless networks, especially timely delivery for real-time applications and high throughput for download-based applications. CATNAP consists of three major components: supporting functions, classifiers, and treatment modules. The supporting functions collect the necessary flow-level statistics and feed them into the CATNAP classifiers. Then, the CATNAP classifiers categorize flows along three dimensions: response-based/non-response-based, interactive/non-interactive, and greedy/non-greedy. Each CATNAP traffic category can be directly mapped to one of the following treatments: push/delay, limited advertised window size/drop, and reserve bandwidth. Based on the classification results, the CATNAP treatment module automatically applies the treatment policy to provide better QoS support. CATNAP is implemented in the NS network simulator, and evaluated against DropTail and Strict Priority Queue (SPQ) under various network and traffic conditions. In most simulation cases, CATNAP provides better QoS support than DropTail: it lowers queuing delay for multimedia applications such as VoIP, games and video, treats FTP flows with various round-trip times fairly, and remains functional even when misbehaving UDP traffic is present.
Unlike current QoS methods, CATNAP is a plug-and-play solution, automatically classifying and treating flows without any user configuration or any modification to end hosts or applications.
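The three-axis classification and its treatment mapping could be sketched as a simple lookup. The feature thresholds and the exact axis-to-treatment pairing below are guesses for illustration, not CATNAP's actual classifiers or policy.

```python
def classify_flow(responds_to_loss, pkts_per_sec, demand_share):
    """Place a flow on the three CATNAP-style axes from simple flow
    statistics. All thresholds are illustrative placeholders."""
    return {
        "response_based": responds_to_loss,    # backs off on loss (TCP-like)?
        "interactive": pkts_per_sec < 50,      # sparse and latency-sensitive?
        "greedy": demand_share > 0.3,          # takes a large bandwidth share?
    }

def treatment(category):
    """Map a category to treatments in the spirit of the abstract's list
    (push/delay, limited advertised window/drop, reserve bandwidth)."""
    return {
        # Timely delivery for interactive flows, batching for bulk ones.
        "queue": "push" if category["interactive"] else "delay",
        # Throttle greedy responsive flows via the advertised window;
        # greedy non-responsive flows can only be policed by dropping.
        "throttle": ("limit_awnd" if category["response_based"] else "drop")
                    if category["greedy"] else None,
        "reserve_bandwidth": category["interactive"],
    }

voip = classify_flow(responds_to_loss=False, pkts_per_sec=33, demand_share=0.05)
ftp = classify_flow(responds_to_loss=True, pkts_per_sec=800, demand_share=0.6)
```

The point of the design, as the abstract stresses, is that both steps run from observed statistics alone, with no user configuration and no change to end hosts.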

    Telecommunications Networks

    This book guides readers from the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications, and it contains chapters written by leading researchers, academics and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation and quality of service. The book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering and Routing.

    INTERNAL AUDIT CHARACTERISTICS AND QUALITY OF ACCOUNTING INFORMATION IN NIGERIA

    The basic goal of accounting is to provide accounting information that enables reliable decision-making. The quality of this accounting information derives from a company's governance practices, emphasizing the importance of corporate governance in companies. Recently, following the financial crises that resulted in accounting scandals, attention has been shifting towards the Internal Audit Function as an important factor in the structure of Corporate Governance. This paper therefore examined the extent of the relationship between the internal audit function and the quality of companies' accounting information. The study adopted a survey research design. The research instrument employed was a questionnaire administered to internal auditors of the "Big Four". Linear regression analysis was employed in the analysis of the data collected, using the Statistical Package for the Social Sciences (SPSS). The results revealed that there is a significant relationship between internal audit characteristics and the quality of accounting information. It was recommended that, in order to lend credibility to financial statements, there should be a law mandating the attachment of the internal auditor's report to the financial statement.