
    Multiplexing regulated traffic streams: design and performance

    The main network solutions for supporting QoS rely on traffic policing (conditioning, shaping). In particular, for IP networks the IETF has developed IntServ (individual flows regulated) and DiffServ (only aggregates regulated). The proposed regulator could be based on the (dual) leaky-bucket mechanism. This explains the interest in network-element performance (loss, delay) for leaky-bucket regulated traffic. This paper describes a novel approach to the above problem. Explicitly using the correlation structure of the sources’ traffic, we derive approximations for both small and large buffers. Importantly, for small (large) buffers the short-term (long-term) correlations are dominant. The large-buffer result decomposes the traffic stream into a constant-rate stream and a periodic impulse stream, allowing direct application of the Brownian bridge approximation. Combining the small- and large-buffer results by a concave majorization, we propose a simple, fast and accurate technique to statistically multiplex homogeneous regulated sources. To address heterogeneous inputs, we present similarly efficient techniques to evaluate the performance of multiple classes of traffic, each with distinct characteristics and QoS requirements. These techniques, applicable under more general conditions, are based on optimal resource (bandwidth and buffer) partitioning. They can also be directly applied to set GPS (Generalized Processor Sharing) weights and buffer thresholds in a shared-resource system.
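    Since the analysis centers on (dual) leaky-bucket regulated sources, a minimal Python sketch of such a regulator may help fix ideas. The class and its parameters (token rate, bucket depth) are illustrative, not taken from the paper; a dual leaky bucket simply conjoins a peak-rate and a sustained-rate instance.

    ```python
    class LeakyBucket:
        """Token-bucket style regulator: traffic conforming to (rate, depth)
        never exceeds rate*t + depth bytes over any interval of length t."""

        def __init__(self, rate, depth):
            self.rate = rate        # token refill rate, e.g. bytes/second
            self.depth = depth      # bucket depth: bounds the burst size
            self.tokens = depth     # start with a full bucket
            self.last = 0.0         # timestamp of the previous arrival

        def conforms(self, t, size):
            # Refill tokens for the elapsed time, capped at the bucket depth.
            self.tokens = min(self.depth, self.tokens + self.rate * (t - self.last))
            self.last = t
            if size <= self.tokens:
                self.tokens -= size
                return True         # packet fits the (rate, depth) envelope
            return False            # a policer drops it; a shaper delays it

    # Dual leaky bucket: a packet must conform to both components
    # (illustrative parameter values).
    peak = LeakyBucket(rate=1_000_000, depth=1_500)      # peak rate, small burst
    sustained = LeakyBucket(rate=200_000, depth=64_000)  # sustained rate, larger burst
    verdicts = [peak.conforms(0.01, 1500), sustained.conforms(0.01, 1500)]
    print(all(verdicts))
    ```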

    On-the-fly adaptivity for nonlinear twoscale simulations using artificial neural networks and reduced order modeling

    A multi-fidelity surrogate model for highly nonlinear multiscale problems is proposed. It is based on the introduction of two different surrogate models and an adaptive on-the-fly switching. The two concurrent surrogates are built incrementally, starting from a moderate set of evaluations of the full order model. First, a reduced order model (ROM) is generated. Using a hybrid ROM-preconditioned FE solver, additional effective stress-strain data is simulated while the number of samples is kept to a moderate level by using a dedicated and physics-guided sampling technique. Machine learning (ML) is subsequently used to build the second surrogate by means of artificial neural networks (ANN). Different ANN architectures are explored and the features used as inputs of the ANN are fine-tuned in order to improve the overall quality of the ML model. Additional ANN surrogates for the stress errors are generated. To this end, conservative design guidelines for error surrogates are presented by adapting the loss functions of the ANN training in pure regression or pure classification settings. The error surrogates can be used as quality indicators in order to adaptively select the appropriate -- i.e. efficient yet accurate -- surrogate. Two strategies for the on-the-fly switching are investigated, and a practicable and robust algorithm is proposed that eliminates relevant technical difficulties attributed to model switching. The provided algorithms and ANN design guidelines can easily be adopted for different problem settings and thereby enable generalization of the used machine learning techniques for a wide range of applications. The resulting hybrid surrogate is employed in challenging multilevel FE simulations for a three-phase composite with pseudo-plastic micro-constituents. Numerical examples highlight the performance of the proposed approach.
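    The core switching idea, an error surrogate acting as a quality indicator that routes each evaluation to the cheap ANN or the accurate ROM, can be sketched in a few lines of Python. All names (`ann_predict`, `ann_error`, `rom_predict`) and the tolerance are hypothetical placeholders, not the paper's interface.

    ```python
    def hybrid_evaluate(strain, ann_predict, ann_error, rom_predict, tol=0.05):
        """On-the-fly model switching (sketch with a hypothetical interface).

        ann_predict : cheap ANN surrogate for the effective stress
        ann_error   : conservative ANN error surrogate (quality indicator)
        rom_predict : more expensive but more accurate ROM surrogate
        tol         : illustrative error tolerance for accepting the ANN
        """
        if ann_error(strain) <= tol:     # quality indicator accepts the ANN
            return ann_predict(strain)   # fast path
        return rom_predict(strain)       # accurate fallback
    ```

    A conservative error surrogate (biased toward over-predicting the error) makes the fast path safe to take, which is why the abstract emphasizes adapting the training loss of the error networks.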

    On Multiscale Methods in Petrov-Galerkin formulation

    In this work we investigate the advantages of multiscale methods in Petrov-Galerkin (PG) formulation in a general framework. The framework is based on a localized orthogonal decomposition of a high dimensional solution space into a low dimensional multiscale space with good approximation properties and a high dimensional remainder space, which only contains negligible fine-scale information. The multiscale space can then be used to obtain accurate Galerkin approximations. As a model problem we consider the Poisson equation. We prove that a Petrov-Galerkin formulation does not suffer from a significant loss of accuracy and still preserves the convergence order of the original multiscale method. We also prove inf-sup stability of a PG Continuous and a Discontinuous Galerkin Finite Element multiscale method. Furthermore, we demonstrate that the Petrov-Galerkin method can decrease the computational complexity significantly, allowing for more efficient solution algorithms. As another application of the framework, we show how the Petrov-Galerkin framework can be used to construct a locally mass conservative solver for two-phase flow simulation that employs the Buckley-Leverett equation. To achieve this, we couple a PG Discontinuous Galerkin Finite Element method with an upwind scheme for a hyperbolic conservation law.
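    For the Poisson-type model problem, the PG idea can be written compactly in standard LOD-style notation (a sketch under common conventions, not the paper's exact statement): with $V_H$ the coarse finite element space and $V^{\mathrm{ms}}$ the multiscale trial space produced by the localized orthogonal decomposition, one tests against coarse functions only.

    ```latex
    % Petrov-Galerkin multiscale formulation (standard LOD-style notation;
    % a sketch, not quoted from the paper). A denotes the possibly rough
    % diffusion coefficient; trial space V^ms, test space V_H.
    \[
      \text{find } u_H^{\mathrm{ms}} \in V^{\mathrm{ms}} \ \text{such that}\quad
      \int_\Omega A \,\nabla u_H^{\mathrm{ms}} \cdot \nabla v_H \, dx
      \;=\; \int_\Omega f \, v_H \, dx
      \qquad \forall\, v_H \in V_H .
    \]
    % The symmetric (Galerkin) variant instead tests with v in V^ms; taking
    % V_H as the test space avoids assembling products of two corrected
    % basis functions, which is one source of the reduced complexity.
    ```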

    Agnostic notes on regression adjustments to experimental data: Reexamining Freedman's critique

    Freedman [Adv. in Appl. Math. 40 (2008) 180-193; Ann. Appl. Stat. 2 (2008) 176-196] critiqued ordinary least squares regression adjustment of estimated treatment effects in randomized experiments, using Neyman's model for randomization inference. Contrary to conventional wisdom, he argued that adjustment can lead to worsened asymptotic precision, invalid measures of precision, and small-sample bias. This paper shows that in sufficiently large samples, those problems are either minor or easily fixed. OLS adjustment cannot hurt asymptotic precision when a full set of treatment-covariate interactions is included. Asymptotically valid confidence intervals can be constructed with the Huber-White sandwich standard error estimator. Checks on the asymptotic approximations are illustrated with data from Angrist, Lang, and Oreopoulos's [Am. Econ. J.: Appl. Econ. 1:1 (2009) 136-163] evaluation of strategies to improve college students' achievement. The strongest reasons to support Freedman's preference for unadjusted estimates are transparency and the dangers of specification search.
    Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) at http://dx.doi.org/10.1214/12-AOAS583 by the Institute of Mathematical Statistics (http://www.imstat.org).
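    The recommended adjustment is easy to carry out in practice. Below is a minimal Python sketch on simulated data, assuming statsmodels: OLS on treatment, a centered covariate, and their interaction, with a Huber-White sandwich standard error (HC2 is one common variant). The variable names and toy data are illustrative, not from the paper.

    ```python
    # Interacted OLS adjustment with a sandwich standard error (sketch).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)                  # pre-treatment covariate
    t = rng.integers(0, 2, size=n)          # randomized treatment indicator
    y = 1.0 + 0.5 * t + 0.8 * x + 0.3 * t * x + rng.normal(size=n)

    xc = x - x.mean()                       # center the covariate so the
                                            # coefficient on t estimates the ATE
    X = sm.add_constant(np.column_stack([t, xc, t * xc]))
    fit = sm.OLS(y, X).fit(cov_type="HC2")  # Huber-White sandwich SEs
    print(fit.params[1], fit.bse[1])        # ATE estimate and robust SE
    ```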