
    FBM Model Based Network-Wide Performance Analysis with Service Differentiation

    ABSTRACT In this paper, we demonstrate that traffic modeling with the fractional Brownian motion (FBM) process is an efficient tool for end-to-end performance analysis over a network provisioning differentiated services (DiffServ). The FBM process is a parsimonious model involving only three parameters to describe Internet traffic exhibiting self-similarity, or long-range dependence (LRD). As a foundation for network-wide performance analysis, FBM modeling significantly facilitates single-hop performance analysis. While accurate FBM-based queueing analyses for an infinite/finite first-in-first-out (FIFO) buffer are available in the existing literature, we develop a generic FBM-based multiclass single-hop analysis in which both inter-buffer priority and intra-buffer priority are used for service differentiation. Moreover, we present both theoretical and simulation studies to reveal the preservation of self-similarity when the traffic process is multiplexed, randomly split, or passed through a queueing system. It is this preservation of self-similarity that enables the concatenation of FBM-based single-hop analyses into a network-wide performance analysis.
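
As a concrete illustration of the self-similarity property the abstract relies on, the sketch below (an illustrative Python check, not the paper's analysis) verifies the defining variance-scaling identity of fractional Gaussian noise, the increment process of FBM: the variance of an aggregate of n samples grows as n^(2H).

```python
def fgn_autocov(k, H, sigma2=1.0):
    # Autocovariance of fractional Gaussian noise (FBM increments) at lag k
    k = abs(k)
    return 0.5 * sigma2 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                           + abs(k - 1) ** (2 * H))

def aggregate_variance(n, H, sigma2=1.0):
    # Variance of the sum of n consecutive fGn samples, computed from the
    # autocovariance: n*gamma(0) + 2 * sum_{k=1}^{n-1} (n - k) * gamma(k)
    v = n * fgn_autocov(0, H, sigma2)
    v += 2 * sum((n - k) * fgn_autocov(k, H, sigma2) for k in range(1, n))
    return v

# Self-similarity: aggregating n steps scales the variance by n**(2H),
# which is why FBM single-hop results can be chained across aggregation levels.
for n in (1, 4, 16, 64):
    assert abs(aggregate_variance(n, H=0.8) - n ** (2 * 0.8)) < 1e-8
```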

    Resource dimensioning through buffer sampling

    Link dimensioning, i.e., selecting a (minimal) link capacity such that the users’ performance requirements are met, is a crucial component of network design. It requires insight into the interrelationship between the traffic offered (in terms of the mean offered load M, but also its fluctuation around the mean, i.e., ‘burstiness’), the envisioned performance level, and the capacity needed. We first derive, for different performance criteria, theoretical dimensioning formulae that estimate the required capacity C as a function of the input traffic and the performance target. For the special case of Gaussian input traffic these formulae reduce to C = M + αV, where α directly relates to the performance requirement (as agreed upon in a service level agreement) and V reflects the burstiness (at the timescale of interest). We also observe that Gaussianity applies for virtually all realistic scenarios; notably, already for a relatively low aggregation level the Gaussianity assumption is justified. As estimating M is relatively straightforward, the remaining open issue concerns the estimation of V. We argue that, particularly if V corresponds to small time-scales, it may be inaccurate to estimate it directly from the traffic traces. Therefore, we propose an indirect method that samples the buffer content, estimates the buffer content distribution, and ‘inverts’ this to the variance. We validate the inversion through extensive numerical experiments (using a sizeable collection of traffic traces from various representative locations); the resulting estimate of V is then inserted in the dimensioning formula. These experiments show that both the inversion and the dimensioning formula are remarkably accurate
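
The Gaussian dimensioning rule C = M + αV can be sketched in a few lines. The choice α = √(−2 ln ε) for a target exceedance probability ε is a common Gaussian quantile approximation assumed here for illustration; the paper's exact mapping from the service level agreement to α may differ.

```python
import math

def required_capacity(M, V, eps):
    # Dimensioning sketch: C = M + alpha * V, with M the mean offered load,
    # V the burstiness term at the timescale of interest, and alpha tied to
    # the target exceedance probability eps (Gaussian choice, assumed).
    alpha = math.sqrt(-2.0 * math.log(eps))
    return M + alpha * V

# Illustrative numbers (not from the paper): mean load 100, burstiness 5.
C = required_capacity(M=100.0, V=5.0, eps=0.01)
# A stricter performance target (smaller eps) demands more capacity.
assert required_capacity(100.0, 5.0, 0.001) > C > 100.0
```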

    A Self-Similar Traffic Model for Network-on-Chip Performance Analysis Using Network Calculus


    Wavelet q-Fisher Information for Scaling Signal Analysis

    This article first introduces the concept of wavelet q-Fisher information and then derives a closed-form expression of this quantifier for scaling signals of parameter α. It is shown that this information measure appropriately describes the complexities of scaling signals and provides further analysis flexibility with the parameter q. In the limit of q→1, wavelet q-Fisher information reduces to the standard wavelet Fisher information, and for q > 2 it reverses its behavior. Experimental results on synthesized fGn signals validate the level-shift detection capabilities of wavelet q-Fisher information. A comparative study also shows that wavelet q-Fisher information locates structural changes in correlated and anti-correlated fGn signals in a way comparable with standard breakpoint location techniques but at a fraction of the time. Finally, the application of this quantifier to H.263 encoded video signals is presented. Funding: Consejo Nacional de Ciencia y Tecnología and FOMIX-COQCY.
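
To make the quantifier concrete, the sketch below computes a discrete Fisher-style functional over the theoretical wavelet spectrum of a scaling signal, whose per-scale energies grow as 2^(αj). Both the spectrum and the functional are simplified stand-ins for the q→1 case, not the article's exact closed form.

```python
def wavelet_spectrum(alpha, J):
    # Normalized wavelet energies of a scaling signal across J octaves:
    # E_j proportional to 2**(alpha * j) (theoretical, assumed form)
    raw = [2.0 ** (alpha * j) for j in range(J)]
    total = sum(raw)
    return [e / total for e in raw]

def wavelet_fisher(p):
    # Discrete Fisher-information-style functional on the spectrum:
    # large when energy changes sharply across scales, zero when flat
    return sum((p[j + 1] - p[j]) ** 2 / p[j] for j in range(len(p) - 1))

flat = wavelet_fisher(wavelet_spectrum(0.0, 8))   # white-noise-like signal
lrd = wavelet_fisher(wavelet_spectrum(0.8, 8))    # long-range-dependent signal
```

A flat spectrum (α = 0) yields zero information, while strongly scaling signals yield large values, which is the intuition behind using the measure to flag level shifts.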

    DECISION SUPPORT MODEL IN FAILURE-BASED COMPUTERIZED MAINTENANCE MANAGEMENT SYSTEM FOR SMALL AND MEDIUM INDUSTRIES

    Maintenance decision support systems are crucial to ensure the maintainability and reliability of equipment in production lines. This thesis investigates several decision support models to aid maintenance management activities in small and medium industries. In order to improve the reliability of resources in production lines, this study introduces a conceptual framework to be used in failure-based maintenance. Maintenance strategies are identified using the Decision-Making Grid model, based on two important factors: the machines’ downtime and their frequency of failures. The machines are categorized into three levels of downtime and failure frequency: high, medium and low. This research derives a formula based on maintenance cost to re-position the machines prior to Decision-Making Grid analysis. Subsequently, the formula for clustering analysis in the Decision-Making Grid model is improved to solve the multiple-criteria problem. This research also introduces a formula to estimate a contractor’s response and repair time. The estimates are used as input parameters in the Analytical Hierarchy Process model. The decisions are synthesized using models based on the contractors’ technical skills, such as experience in maintenance, skill in diagnosing machines and ability to take prompt action during troubleshooting activities. Another important criterion considered in the Analytical Hierarchy Process is the business principles of the contractors, which include maintenance quality, tools, equipment and enthusiasm in problem-solving. The raw data were collected through observation, interviews and surveys in case studies to understand risk factors in small and medium food processing industries. The risk factors are analysed with the Ishikawa Fishbone diagram to reveal delay time in machinery maintenance. The experimental studies are conducted using maintenance records in food processing industries.
The Decision-Making Grid model can detect the ten worst production machines on the production lines. The Analytical Hierarchy Process model is used to rank the contractors and their best maintenance practices. This research recommends displaying the results on the production indicator boards and implementing the strategies on the production shop floor. The proposed models can be used by decision makers to identify maintenance strategies and enhance competitiveness among contractors in failure-based maintenance. The models can be programmed as decision support sub-procedures in computerized maintenance management systems
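
The Decision-Making Grid step can be illustrated as a small lookup over the two factors. The strategy labels below follow the classic DMG literature and are assumptions for illustration; the thesis refines the grid with its cost-based re-positioning formula.

```python
def dmg_strategy(downtime, frequency):
    # Decision-Making Grid lookup: both levels are 'low', 'medium' or 'high'.
    # Corner strategies use the classic DMG labels (assumed, not the
    # thesis's refined version).
    corners = {
        ("low", "low"): "operate to failure (OTF)",
        ("low", "high"): "skill level upgrade (SLU)",
        ("high", "low"): "condition-based maintenance (CBM)",
        ("high", "high"): "design out maintenance (DOM)",
    }
    # Cells in the middle band default to fixed time maintenance.
    return corners.get((downtime, frequency), "fixed time maintenance (FTM)")

# A machine that is both down for long and fails often needs redesign.
worst = dmg_strategy("high", "high")
```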

    A study of self-similar traffic generation for ATM networks

    This thesis discusses the efficient and accurate generation of self-similar traffic for ATM networks. ATM networks have been developed to carry multiple service categories. Since the traffic on a number of existing networks is bursty, much research focuses on how to capture the characteristics of traffic to reduce the impact of burstiness. Conventional traffic models do not represent the characteristics of burstiness well, but self-similar traffic models provide a closer approximation. Self-similar traffic models have two fundamental properties, long-range dependence and infinite variance, which have been found in a large number of measurements of real traffic. Therefore, the generation of self-similar traffic is vital for the accurate simulation of ATM networks. The main starting point for self-similar traffic generation is the production of fractional Brownian motion (FBM) or fractional Gaussian noise (FGN). In this thesis six algorithms are brought together so that their efficiency and accuracy can be assessed. It is shown that the discrete FGN (dFGN) algorithm and the Weierstrass-Mandelbrot (WM) function are the best in terms of accuracy, while the random midpoint displacement (RMD) algorithm, the successive random addition (SRA) algorithm, and the WM function are superior in terms of efficiency. Three hybrid approaches are suggested to overcome the inefficiency or inaccuracy of the six algorithms. The combination of the dFGN and RMD algorithms was found to be the best in that it can generate accurate samples efficiently and on-the-fly. After generating FBM sample traces, a further transformation needs to be conducted, with either the marginal distribution model or the storage model, to produce self-similar traffic. The storage model is the better transformation because it provides a more rigorous mathematical derivation and interpretation of physical meaning.
The suitability of selected Hurst estimators, namely the rescaled adjusted range (R/S) statistic, the variance-time (VT) plot, and Whittle's approximate maximum likelihood estimator (MLE), is also covered. Whittle's MLE is the best estimator; the R/S statistic can only be used as a reference, and the VT plot might misrepresent the actual Hurst value. An improved method for the generation of self-similar traces and their conversion to traffic has been proposed. This, combined with the identification of reliable methods for estimating the Hurst parameter, significantly advances the use of self-similar traffic models in ATM network simulation
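
Of the six generators surveyed, the random midpoint displacement (RMD) algorithm is the easiest to sketch. The variance-scaling constants below follow the textbook RMD recursion and are assumptions for illustration; the thesis's hybrid dFGN/RMD generator is more elaborate.

```python
import math
import random

def fbm_rmd(H, levels, seed=None):
    # Random midpoint displacement approximation of fractional Brownian
    # motion on [0, 1]; returns 2**levels + 1 equally spaced sample points.
    rng = random.Random(seed)
    n = 2 ** levels
    path = [0.0] * (n + 1)
    path[n] = rng.gauss(0.0, 1.0)        # B_H(1) ~ N(0, 1); B_H(0) = 0
    var = 1.0
    step = n
    for _ in range(levels):
        var *= 2.0 ** (-2.0 * H)         # displacement variance per level
        half = step // 2
        for mid in range(half, n, step):
            # midpoint = average of the two endpoints + Gaussian displacement
            disp = math.sqrt(var * (1.0 - 2.0 ** (2.0 * H - 2.0)))
            path[mid] = (0.5 * (path[mid - half] + path[mid + half])
                         + rng.gauss(0.0, disp))
        step = half
    return path

trace = fbm_rmd(H=0.8, levels=8, seed=42)
# Differencing the FBM trace gives FGN samples, the raw material for traffic.
increments = [b - a for a, b in zip(trace, trace[1:])]
```

For H = 0.5 the recursion reduces to the exact Brownian bridge construction; for other H it is only approximate, which is the inaccuracy the thesis's dFGN/RMD hybrid addresses.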

    ASIdE: Using Autocorrelation-Based Size Estimation for Scheduling Bursty Workloads.

    Temporal dependence in workloads creates peak congestion that can make service unavailable and reduce system performance. To improve system performability under conditions of temporal dependence, a server should quickly process bursts of requests that may need large service demands. In this paper, we propose and evaluate ASIdE, an Autocorrelation-based SIze Estimation policy that selectively delays requests which contribute to the workload's temporal dependence. ASIdE implicitly approximates the shortest job first (SJF) scheduling policy, but without any prior knowledge of job service times. Extensive experiments show that (1) ASIdE achieves good service time estimates from the temporal dependence structure of the workload to implicitly approximate the behavior of SJF; and (2) ASIdE successfully counteracts peak congestion in the workload and improves system performability under a wide variety of settings. Specifically, we show that system capacity under ASIdE is largely increased compared to the first-come first-served (FCFS) scheduling policy and is highly competitive with SJF. © 2012 IEEE
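
The FCFS-versus-SJF gap that motivates ASIdE can be reproduced with a minimal single-server simulation. This is an illustrative sketch only: ASIdE estimates sizes from the workload's autocorrelation structure, whereas this toy code reads service times directly.

```python
import heapq

def mean_response_fcfs(jobs):
    # jobs: list of (arrival, service) pairs, listed in arrival order
    t, total = 0.0, 0.0
    for arrival, service in jobs:
        t = max(t, arrival) + service        # wait for server, then run
        total += t - arrival                 # response time of this job
    return total / len(jobs)

def mean_response_sjf(jobs):
    # Non-preemptive shortest-job-first on a single server
    jobs = sorted(jobs)                      # by arrival time
    heap, t, total, i, done = [], 0.0, 0.0, 0, 0
    n = len(jobs)
    while done < n:
        while i < n and jobs[i][0] <= t:     # admit all arrived jobs
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))
            i += 1
        if not heap:                         # idle until the next arrival
            t = jobs[i][0]
            continue
        service, arrival = heapq.heappop(heap)
        t += service
        total += t - arrival
        done += 1
    return total / n

# A large job arriving just before a small one: SJF wins on mean response.
bursty = [(0.0, 5.0), (0.0, 1.0), (0.5, 1.0)]
assert mean_response_sjf(bursty) < mean_response_fcfs(bursty)
```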