
    CoMoM: Efficient Class-Oriented Evaluation of Multiclass Performance Models

    We introduce the Class-oriented Method of Moments (CoMoM), a new exact algorithm to compute performance indexes in closed multiclass queueing networks. Closed models are important for the performance evaluation of multi-tier applications, but when the number of service classes is large they become too expensive to solve with exact methods such as Mean Value Analysis (MVA). CoMoM addresses this limitation with a new recursion that scales efficiently with the number of classes. Whereas the MVA algorithm recursively computes mean queue lengths, CoMoM also carries information on higher-order moments of the queue lengths through the recursion. We show that this additional information greatly reduces the number of operations needed to solve the model and makes CoMoM the best available algorithm for networks with several classes. We conclude the paper by generalizing CoMoM to the efficient computation of marginal queue-length probabilities, which finds application in the evaluation of state-dependent attributes such as energy consumption or quality-of-service metrics.
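
    For context, the classic single-class MVA recursion that CoMoM improves on for many classes can be sketched as follows. This is illustrative only (it is not the CoMoM algorithm, and the service demands are hypothetical):

```python
def mva(service_demands, n_jobs):
    """Exact MVA for a closed, single-class, product-form queueing network.

    service_demands: per-station service demands D_k
    n_jobs: closed population size N (must be >= 1)
    Returns (throughput, mean queue lengths) at population N.
    """
    K = len(service_demands)
    q = [0.0] * K  # mean queue lengths at population 0
    for n in range(1, n_jobs + 1):
        # Residence time at each station: R_k = D_k * (1 + Q_k(n - 1))
        r = [d * (1.0 + q[k]) for k, d in enumerate(service_demands)]
        x = n / sum(r)                # throughput via Little's law
        q = [x * rk for rk in r]      # updated mean queue lengths
    return x, q
```

    CoMoM's point is precisely that, with many classes, this kind of mean-value recursion becomes too expensive, which is why it propagates higher-order moment information instead.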

    An efficient approximation to the correlated Nakagami-m sums and its application in equal gain diversity receivers

    There are several cases in wireless communications theory where the statistics of the sum of independent or correlated Nakagami-m random variables (RVs) must be known. However, a closed-form solution for the distribution of this sum does not exist when the number of constituent RVs exceeds two, even for the special case of Rayleigh fading. In this paper, we present an efficient closed-form approximation for the distribution of the sum of arbitrarily correlated Nakagami-m envelopes with identical and integer fading parameters. The approximation becomes exact for maximal correlation, and its tightness is validated statistically using the Chi-square and Kolmogorov-Smirnov goodness-of-fit tests. As an application, the approximation is used to study the performance of equal-gain combining (EGC) systems operating over arbitrarily correlated Nakagami-m fading channels, by exploiting the available analytical results for the error-rate performance of an equivalent maximal-ratio combining (MRC) system.
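
    A minimal Monte Carlo sketch of the validation idea: moment-match a single Nakagami distribution to simulated sums of (here, independent) Nakagami-m envelopes and check the fit with a two-sample Kolmogorov-Smirnov distance. This does not reproduce the paper's closed-form approximation or its correlation handling; all parameters are hypothetical:

```python
import math
import random

def nakagami(m, omega, rng):
    # Envelope R = sqrt(G), where the power G ~ Gamma(shape=m, scale=omega/m)
    return math.sqrt(rng.gammavariate(m, omega / m))

def ks_two_sample(a, b):
    # Two-sample Kolmogorov-Smirnov distance between empirical CDFs
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

rng = random.Random(42)
L, m, omega, n = 4, 2.0, 1.0, 2000
sums = [sum(nakagami(m, omega, rng) for _ in range(L)) for _ in range(n)]

# Moment-match a single Nakagami to the sum S: m_fit = E[S^2]^2 / Var(S^2)
p = [s * s for s in sums]
mp = sum(p) / n
vp = sum((x - mp) ** 2 for x in p) / n
m_fit, omega_fit = mp * mp / vp, mp

approx = [nakagami(m_fit, omega_fit, rng) for _ in range(n)]
d = ks_two_sample(sums, approx)  # small d indicates a tight fit
```

    The fitted shape parameter comes out larger than m, reflecting that summing envelopes reduces relative fluctuation.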

    Don't Just Go with the Flow: Cautionary Tales of Fluid Flow Approximation

    Fluid flow approximation allows efficient analysis of large-scale PEPA models. Given a model, the method outputs how the mean, variance, and any other moment of the model's stochastic behaviour evolve as functions of time. We investigate whether these moments are sufficient to capture the system's actual dynamics. We ran a series of experiments on a client-server model. For some parametrizations of the model, its behaviour can be accurately characterized by the fluid flow approximations of its moments. The experiments show, however, that for other parametrizations these moments are not sufficient to capture the model's behaviour, highlighting a pitfall of relying solely on the results of fluid flow analysis. The results suggest that the sufficiency of the fluid flow method for analysing a model depends on the model's concrete parametrization. They also make it clear that the existing criteria for deciding on the sufficiency of the fluid flow method are not robust.
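
    A toy illustration of the comparison involved (not the paper's PEPA model; all parameters hypothetical): a fluid ODE for a client-server loop checked against the mean of stochastic simulations. In a regime far from the min(w, S) switch the two agree well; near the switch, fluid means can become misleading:

```python
import random

def fluid_mean(N, S, lam, mu, t_end, dt=1e-3):
    # Fluid ODE: N clients think at rate lam, then queue for S servers,
    # each serving at rate mu:  dw/dt = lam*(N - w) - mu*min(w, S)
    w, t = 0.0, 0.0
    while t < t_end:
        w += dt * (lam * (N - w) - mu * min(w, S))
        t += dt
    return w

def gillespie_mean(N, S, lam, mu, t_end, runs=200, seed=1):
    # Mean of the waiting-client count at t_end over stochastic runs
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        w, t = 0, 0.0
        while t < t_end:
            a1 = lam * (N - w)      # a thinking client joins the queue
            a2 = mu * min(w, S)     # a busy server completes
            rate = a1 + a2
            t += rng.expovariate(rate)
            if t >= t_end:
                break
            if rng.random() < a1 / rate:
                w += 1
            else:
                w -= 1
        total += w
    return total / runs

f = fluid_mean(100, 20, 0.1, 1.0, 50.0)      # equilibrium near 100*0.1/1.1
s = gillespie_mean(100, 20, 0.1, 1.0, 50.0)  # simulated mean, same setup
```

    With these (non-switching) parameters the fluid mean and the simulated mean agree; the abstract's point is that no such agreement is guaranteed for other parametrizations.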

    Weak gravitational lensing with DEIMOS

    We introduce a novel method for weak-lensing measurements, which is based on a mathematically exact deconvolution of the moments of the apparent brightness distribution of galaxies from the telescope's PSF. No assumptions on the shape of the galaxy or the PSF are made. The (de)convolution equations are exact for unweighted moments only, while in practice a compact weight function needs to be applied to the noisy images to ensure that the moment measurement yields significant results. We employ a Gaussian weight function, whose centroid and ellipticity are iteratively adjusted to match the corresponding quantities of the source. The change of the moments caused by the application of the weight function can then be corrected by considering higher-order weighted moments of the same source. Because of the form of the deconvolution equations, even an incomplete weighting correction leads to an excellent shear estimation if galaxies and PSF are measured with a weight function of identical size. We demonstrate the accuracy and capabilities of this new method in the context of weak gravitational lensing measurements with a set of specialized tests and show its competitive performance on the GREAT08 challenge data. A complete C++ implementation of the method can be requested from the authors.
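
    The moment-based shape measurement underlying such methods can be sketched with unweighted second brightness moments and the standard complex ellipticity chi = (Q11 - Q22 + 2i*Q12)/(Q11 + Q22). This sketch omits the PSF deconvolution and the weighting correction that are the paper's actual contribution; the synthetic image is hypothetical:

```python
import math

def ellipticity(img):
    """Ellipticity (chi1, chi2) from unweighted second brightness moments."""
    ny, nx = len(img), len(img[0])
    F = sum(sum(row) for row in img)
    cx = sum(x * img[y][x] for y in range(ny) for x in range(nx)) / F
    cy = sum(y * img[y][x] for y in range(ny) for x in range(nx)) / F
    q11 = sum((x - cx) ** 2 * img[y][x] for y in range(ny) for x in range(nx)) / F
    q22 = sum((y - cy) ** 2 * img[y][x] for y in range(ny) for x in range(nx)) / F
    q12 = sum((x - cx) * (y - cy) * img[y][x]
              for y in range(ny) for x in range(nx)) / F
    tr = q11 + q22
    return (q11 - q22) / tr, 2 * q12 / tr

# Synthetic elliptical Gaussian, sigma_x = 3, sigma_y = 2, on a 41x41 grid:
# expected chi1 = (9 - 4)/(9 + 4) = 5/13, chi2 = 0 by symmetry
sx, sy, c = 3.0, 2.0, 20
img = [[math.exp(-((x - c) ** 2 / (2 * sx ** 2) + (y - c) ** 2 / (2 * sy ** 2)))
        for x in range(41)] for y in range(41)]
chi1, chi2 = ellipticity(img)
```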

    A new tool for the performance analysis of massively parallel computer systems

    We present a new tool, GPA, that can generate key performance measures for very large systems. Based on solving systems of ordinary differential equations (ODEs), this method of performance analysis is far more scalable than stochastic simulation. GPA is the first tool to produce higher-moment analyses from the differential-equation approximation, which is essential in many cases for obtaining an accurate performance prediction. We identify so-called switch points as the source of error in the ODE approximation. We investigate switch-point behaviour in several large models and observe that, as the scale of the model increases, the ODE performance prediction generally improves in accuracy. In the case of the variance measure, we justify theoretically that, in the limit of model scale, the ODE approximation can be expected to tend to the actual variance of the model.
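
    The idea of extending ODE analysis beyond the mean can be illustrated on an immigration-death process, whose mean and variance ODEs close exactly (a toy example, not the GPA tool or its PEPA semantics; rates hypothetical):

```python
def moment_odes(lam, mu, t_end, dt=1e-3):
    # Immigration-death: arrivals at rate lam, each individual dies at rate mu.
    # Exact moment ODEs for this linear-rate model:
    #   dM/dt = lam - mu*M
    #   dV/dt = lam + mu*M - 2*mu*V
    M = V = 0.0
    t = 0.0
    while t < t_end:
        dM = lam - mu * M
        dV = lam + mu * M - 2 * mu * V
        M += dt * dM
        V += dt * dV
        t += dt
    return M, V

# Stationary law is Poisson(lam/mu), so mean and variance both tend to lam/mu
M, V = moment_odes(5.0, 1.0, 20.0)
```

    For models with nonlinear rates (the switch points mentioned above), the moment ODEs no longer close exactly, which is where the approximation error arises.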

    Filtering and Smoothing with Score-Driven Models

    We propose a methodology for filtering, smoothing and assessing parameter and filtering uncertainty in misspecified score-driven models. Our technique is based on a general representation of the well-known Kalman filter and smoother recursions for linear Gaussian models in terms of the score of the conditional log-likelihood. We prove that, when data are generated by a nonlinear non-Gaussian state-space model, the proposed methodology results from a first-order expansion of the true observation density around the optimal filter. The error made by such an approximation is assessed analytically. As extensive Monte Carlo analyses show, our methodology performs very similarly to exact simulation-based methods while remaining computationally extremely simple. We illustrate empirically the advantages of employing score-driven models as misspecified filters rather than as purely predictive processes.
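
    The score-based representation of the Kalman update can be made concrete for the scalar local-level model, where the measurement update is literally a score step: the score of the Gaussian predictive log-density with respect to the state mean is (y - a)/F. A minimal sketch, assuming unit noise variances:

```python
def local_level_filter(ys, var_eps=1.0, var_eta=1.0):
    # Local level model: y_t = alpha_t + eps_t,  alpha_{t+1} = alpha_t + eta_t.
    # The Kalman measurement update written as a score step:
    #   a_{t|t} = a_t + P_t * score,  score = d/da log N(y_t; a_t, F_t)
    a, P = 0.0, 1e7  # diffuse-ish initialization
    out = []
    for y in ys:
        F = P + var_eps             # one-step predictive variance of y_t
        score = (y - a) / F         # score of the Gaussian log-density at a_t
        a_f = a + P * score         # filtered mean (the Kalman update)
        P_f = P - P * P / F         # filtered variance
        out.append((a_f, P_f))
        a, P = a_f, P_f + var_eta   # time update
    return out
```

    With var_eps = var_eta = 1, the filtered variance converges to (sqrt(5) - 1)/2, the steady state of the Riccati recursion.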

    A Tensor Approach to Learning Mixed Membership Community Models

    Community detection is the task of detecting hidden communities from observed interactions. Guaranteed community detection has so far been mostly limited to models with non-overlapping communities, such as the stochastic block model. In this paper, we remove this restriction and provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed the mixed membership Dirichlet model, first introduced by Airoldi et al. This model allows nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning these models via a tensor spectral decomposition method. Our estimator is based on the low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is fast and is based on simple linear algebraic operations, e.g. singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters and present a careful finite-sample analysis of our learning method. As an important special case, our results match the best-known scaling requirements for the (homogeneous) stochastic block model.
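
    The tensor power iteration at the core of such methods can be sketched on a small orthogonally decomposable 3-tensor (this is the generic primitive, not the paper's 3-star moment estimator; the test tensor is hypothetical):

```python
import random

def t_apply(T, v):
    # Contract a symmetric 3-tensor with v twice: u_i = sum_jk T[i][j][k] v_j v_k
    n = len(v)
    return [sum(T[i][j][k] * v[j] * v[k] for j in range(n) for k in range(n))
            for i in range(n)]

def tensor_power_iteration(T, iters=50, seed=0):
    # Repeated contraction + normalization converges (generically) to a
    # robust eigenvector of an orthogonally decomposable tensor.
    rng = random.Random(seed)
    n = len(T)
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(iters):
        u = t_apply(T, v)
        norm = sum(x * x for x in u) ** 0.5
        v = [x / norm for x in u]
    lam = sum(t_apply(T, v)[i] * v[i] for i in range(n))  # eigenvalue estimate
    return lam, v

# Diagonal (hence orthogonally decomposable) test tensor: T = sum_i w_i e_i^(x3)
n, weights = 3, [3.0, 2.0, 1.0]
T = [[[weights[i] if i == j == k else 0.0 for k in range(n)]
      for j in range(n)] for i in range(n)]
lam, v = tensor_power_iteration(T)
```

    In the full method, the components recovered this way correspond to community membership vectors, after whitening the 3-star moment tensor.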