Multi-agent quality of experience control
In the framework of the Future Internet, the aim of the Quality of Experience (QoE) Control functionalities is to track the personalized desired QoE level of the applications. The paper proposes to perform this task by dynamically selecting the most appropriate Class of Service (among the ones supported by the network), with the selection driven by a novel heuristic Multi-Agent Reinforcement Learning (MARL) algorithm. The paper shows that this approach offers the opportunity to cope with some practical implementation problems: in particular, it makes it possible to address the so-called “curse of dimensionality” of MARL algorithms, achieving satisfactory performance even in the presence of several hundred Agents.
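To make the idea concrete, here is a minimal sketch of one learning agent that picks a Class of Service to track a target QoE level. It is an illustrative bandit-style learner under assumed names and reward shape, not the paper's heuristic MARL algorithm.

import random

class QoEAgent:
    """Illustrative agent: one Q-value per Class of Service (CoS)."""
    def __init__(self, n_classes, target_qoe, alpha=0.1, epsilon=0.1):
        self.q = [0.0] * n_classes   # estimated value of each CoS
        self.target = target_qoe     # personalized desired QoE level
        self.alpha = alpha           # learning rate
        self.epsilon = epsilon       # exploration probability

    def select_class(self):
        # Epsilon-greedy selection among the supported Classes of Service.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def update(self, cos, measured_qoe):
        # Assumed reward: higher the closer measured QoE is to the target.
        reward = -abs(measured_qoe - self.target)
        self.q[cos] += self.alpha * (reward - self.q[cos])

Hundreds of such agents could run independently, which is one common way to sidestep the joint-action state-space blow-up behind the “curse of dimensionality”.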
A methodical approach to performance measurement experiments: measure and measurement specification
This report describes a methodical approach to performance measurement experiments. The approach gives a blueprint for the whole trajectory, from the notion of performance measures and how to define them, via the planning, instrumentation and execution of the experiments, to the interpretation of the results. The first stage of the approach, Measurement Initialisation, has been worked out completely. It is shown that a well-defined system description allows a procedural approach to defining performance measures and to identifying the parameters that might affect them. For the second stage of the approach, Measurement Planning, concepts are defined that enable a clear experiment description or specification. It is made explicit what is actually being measured when an experiment is executed. A brief example that illustrates the value of the method and a comparison with an existing method, that of Jain, complete this report.
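A minimal sketch of what a measure specification from the Measurement Initialisation stage might look like in code; all field names here are illustrative assumptions, not the report's notation.

from dataclasses import dataclass, field

@dataclass
class MeasureSpec:
    """Ties a performance measure to the entity it is defined on
    and to the parameters that might affect it."""
    name: str                   # e.g. "mean response time"
    unit: str                   # e.g. "ms"
    observed_entity: str        # system component the measure is defined on
    parameters: list = field(default_factory=list)  # affecting factors

spec = MeasureSpec(
    name="mean response time",
    unit="ms",
    observed_entity="web server",
    parameters=["request rate", "payload size"],
)

Making the affecting parameters explicit up front is what later enables a clear experiment specification: each experiment fixes or varies exactly these factors.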
Evaluating Government's Policies on Promoting Smart Metering in Retail Electricity Markets via Agent Based Simulation
Preliminary Interdependency Analysis: An Approach to Support Critical Infrastructure Risk Assessment
We present a methodology, Preliminary Interdependency Analysis (PIA), for analysing interdependencies between critical infrastructures (CIs). Consisting of two phases, qualitative analysis followed by quantitative analysis, an application of PIA progresses from a relatively quick elicitation of CI interdependencies to the building of representative CI models, and the subsequent estimation of any resilience, risk or criticality measures an assessor might be interested in. By design, the stages of the methodology are both flexible and iterative, resulting in interacting CI models that are scalable and may vary significantly in complexity and fidelity, depending on the needs and requirements of an assessor. For model parameterisation, one relies on a combination of field data, sensitivity analysis and expert judgement. Facilitated by dedicated software tool support, we illustrate PIA by applying it to a complex case study of interacting Power (distribution and transmission) and Telecommunications networks in the Rome area. A number of studies are carried out, including: 1) an investigation of how the “strength of dependence” between the CIs’ components affects various measures of risk and uncertainty, 2) for resource allocation, an exploration of different, but related, notions of CI component importance, and 3) highlighting the impact of model fidelity on the estimated risk of cascades.
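A toy Monte Carlo sketch of how a single “strength of dependence” parameter can drive cascade risk between two interdependent infrastructures. The graph, failure probabilities and coupling rule are illustrative assumptions, not the PIA models from the paper.

import random

def simulate_cascade(p_initial, strength, n_components=100, runs=10000):
    """Mean number of telecom components lost when power components fail
    independently with probability p_initial, and each power failure knocks
    out its dependent telecom component with probability `strength`."""
    total = 0
    for _ in range(runs):
        power_failed = [random.random() < p_initial
                        for _ in range(n_components)]
        telecom_lost = sum(1 for failed in power_failed
                           if failed and random.random() < strength)
        total += telecom_lost
    return total / runs

for s in (0.2, 0.5, 0.9):
    print(f"strength={s}: mean telecom losses = {simulate_cascade(0.05, s):.2f}")

Even this crude model shows expected cascade losses scaling with the coupling strength, which is the kind of sensitivity the first study in the abstract investigates with far richer models.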
Cross-layer system reliability assessment framework for hardware faults
System reliability estimation during early design phases facilitates informed decisions on the integration of effective protection mechanisms against different classes of hardware faults. When not all system abstraction layers (technology, circuit, microarchitecture, software) are factored into such an estimation model, the delivered reliability reports are excessively pessimistic and thus lead to unacceptably expensive, over-designed systems. We propose a scalable, cross-layer methodology and a supporting suite of tools for accurate but fast estimation of computing-system reliability. The backbone of the methodology is a component-based Bayesian model, which effectively calculates system reliability based on the masking probabilities of individual hardware and software components, considering their complex interactions. Our detailed experimental evaluation for different technologies, microarchitectures, and benchmarks demonstrates that the proposed model delivers very accurate reliability estimations (FIT rates) compared to statistically significant but slow fault injection campaigns at the microarchitecture level.
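A hedged sketch of the cross-layer intuition: a raw hardware fault rate is derated by the masking probability of each abstraction layer a fault must traverse before becoming a system failure. The layers and numbers below are illustrative assumptions; the paper's actual model is a component-based Bayesian model over interacting components.

def system_fit(raw_fit, masking_per_layer):
    """Effective FIT rate after per-layer derating.

    raw_fit           -- raw fault rate of the hardware component (FIT)
    masking_per_layer -- probability that each layer masks the fault, e.g.
                         {"circuit": 0.6, "microarchitecture": 0.8,
                          "software": 0.5}
    """
    effective = raw_fit
    for p_mask in masking_per_layer.values():
        effective *= (1.0 - p_mask)   # only unmasked faults propagate upward
    return effective

fit = system_fit(1000.0, {"circuit": 0.6,
                          "microarchitecture": 0.8,
                          "software": 0.5})
print(f"effective system FIT: {fit:.1f}")   # 1000 * 0.4 * 0.2 * 0.5 = 40.0

Ignoring any one layer's masking term in this toy calculation inflates the estimate severalfold, which illustrates why single-layer analyses tend to be excessively pessimistic.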