
    Quality of experience aware adaptive hypermedia system

    The research reported in this thesis proposes, designs and tests a novel Quality of Experience Layer (QoE-layer) for the classic Adaptive Hypermedia Systems (AHS) architecture. Its goal is to improve the end-user perceived Quality of Service in different operational environments suitable for residential users. While the AHS’ main role of delivering personalised content is not altered, its functionality and performance are improved, and with them the user’s satisfaction with the service provided. The QoE layer takes into account multiple factors that affect Quality of Experience (QoE), such as Web components and the network connection. It uses a novel Perceived Performance Model (PPM) that considers a variety of performance metrics in order to learn about the characteristics of the Web user’s operational environment, about changes in the network connection, and about the consequences of these changes for the user’s quality of experience. The model also takes into account the user’s subjective opinion about his/her QoE, which increases its effectiveness, and it suggests strategies for tailoring Web content in order to improve QoE. The user-related information is modelled using a stereotype-based technique that makes use of probability and distribution theory. The QoE layer has been assessed through both simulations and a qualitative evaluation in the educational area (mainly distance learning), with users interacting with the system in a low bit rate operational environment. The simulations assessed the “learning” and “adaptability” behaviour of the proposed layer over different and variable home connections while a learning task was performed. The correctness of the PPM suggestions, the access time of the learning process and the quantity of transmitted data were analysed. The results show that the QoE layer significantly improves performance in terms of the access time of the learning process, with a reduction in the quantity of data sent achieved through image compression and/or elimination. A visual quality assessment confirmed that this reduction in image quality does not significantly affect the viewers’ perceived quality, which remained close to the “good” perceptual level. For the qualitative evaluation the QoE layer was deployed on the open-source AHA! system. The goal of this evaluation was to compare the learning outcome, system usability and user satisfaction when the AHA! and QoE-aware AHA systems were used. The assessment was performed in terms of learner achievement, learning performance and usability. The results indicate that the QoE-aware AHA system did not affect the learning outcome (the students had similar learning achievements), but the learning performance was improved in terms of study time. Most significantly, the QoE-aware AHA system provides an important improvement in system usability, as indicated by users’ opinions about their satisfaction related to QoE.
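    As a rough illustration of the content-tailoring idea described above, the sketch below picks an image quality level so that the estimated page download time stays within a target access time for the user's current connection. The quality levels, function names and thresholds are assumptions made for this example and are not taken from the thesis.

```python
# Hypothetical sketch of content tailoring driven by a perceived-performance
# estimate: choose the least aggressive image treatment whose estimated
# download time meets a target access time. All values are illustrative.

QUALITY_LEVELS = [  # (label, approximate size ratio vs. original images)
    ("original", 1.00),
    ("compressed", 0.40),
    ("strongly_compressed", 0.15),
    ("eliminated", 0.00),
]

def choose_image_quality(page_bytes, image_bytes, est_bandwidth_bps, target_seconds):
    """Return the least aggressive quality level whose estimated
    download time stays within the target access time."""
    base_bytes = page_bytes - image_bytes
    for label, ratio in QUALITY_LEVELS:
        total_bytes = base_bytes + image_bytes * ratio
        est_time = total_bytes * 8 / est_bandwidth_bps
        if est_time <= target_seconds:
            return label, est_time
    # Even with images removed the target cannot be met; report the best we can do.
    return QUALITY_LEVELS[-1][0], base_bytes * 8 / est_bandwidth_bps

if __name__ == "__main__":
    # A 300 kB page with 250 kB of images over a 56 kbit/s dial-up link,
    # aiming to keep access time under 15 seconds.
    level, t = choose_image_quality(300_000, 250_000, 56_000, 15.0)
    print(level, round(t, 1))  # -> strongly_compressed 12.5
```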

    Flexible provisioning of Web service workflows

    Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems, such as the Internet. These services are described using machine-readable meta-data, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed service descriptions are accurate and deterministic, and so have neglected to account for the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed in a more flexible manner than has so far been considered, in order to proactively deal with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
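    The notion of provisioning according to predicted performance can be illustrated with a minimal sketch that ranks candidate services for a task by a simple expected-utility score combining predicted success probability and expected duration. The names and the utility model here are assumptions made for illustration, not the paper's heuristic.

```python
# Illustrative sketch (not the paper's algorithm): score each candidate service
# by expected reward of success minus the expected time cost, and provision the
# highest-scoring one for the task.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    p_success: float      # predicted probability the invocation succeeds
    expected_time: float  # predicted completion time in seconds

def expected_utility(c: Candidate, reward: float, cost_per_second: float) -> float:
    """Expected reward of a successful completion minus the expected time cost."""
    return c.p_success * reward - cost_per_second * c.expected_time

def provision(candidates, reward=100.0, cost_per_second=1.0):
    """Pick the candidate with the highest expected utility."""
    return max(candidates, key=lambda c: expected_utility(c, reward, cost_per_second))

if __name__ == "__main__":
    pool = [
        Candidate("fast_but_flaky", p_success=0.6, expected_time=2.0),
        Candidate("slow_but_reliable", p_success=0.95, expected_time=10.0),
    ]
    # 0.6*100 - 2 = 58 versus 0.95*100 - 10 = 85, so the reliable service wins.
    print(provision(pool).name)
```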

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, which have been evolved and refined through an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14.

    An Effective Strategy for the Flexible Provisioning of Service Workflows

    Recent advances in service-oriented frameworks and semantic Web technologies have enabled software agents to discover and invoke resources over large distributed systems, in order to meet their high-level objectives. However, most work has failed to acknowledge that such systems are complex and dynamic multi-agent systems, where service providers act autonomously and follow their own decision-making procedures. Hence, the behaviour of these providers is inherently uncertain - services may fail or take uncertain amounts of time to complete. In this work, we address this uncertainty and take an agent-oriented approach to the problem of provisioning service providers for the constituent tasks of abstract workflows. Specifically, we describe an algorithm that uses redundancy to deal with unreliable providers, and we demonstrate that it achieves an 8-14% improvement in average utility over previous work, while performing up to 6 times as well as approaches that do not consider service uncertainty. We also show that our algorithm performs well in the presence of inaccurate service performance information.
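    A minimal sketch of the redundancy idea, under the assumption of independent provider failures: provision the smallest number of parallel providers per task so that the probability that at least one succeeds meets a target. This illustrates the principle only and is not the algorithm evaluated in the paper.

```python
# Hedged sketch: given each provider's estimated failure probability, find the
# smallest number of redundant providers n such that 1 - p_fail**n reaches a
# target per-task success probability.

import math

def providers_needed(p_fail: float, target_success: float) -> int:
    """Smallest n with 1 - p_fail**n >= target_success (assumes target < 1)."""
    if p_fail <= 0.0:
        return 1
    n = math.log(1.0 - target_success) / math.log(p_fail)
    return max(1, math.ceil(n))

if __name__ == "__main__":
    # A provider that fails 30% of the time, with a 99% per-task success target.
    print(providers_needed(0.3, 0.99))  # -> 4, since 1 - 0.3**4 = 0.9919
```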

    Low latency via redundancy

    Low latency is critical for interactive networked applications. But while we know how to scale systems to increase capacity, reducing latency, especially the tail of the latency distribution, can be much more difficult. In this paper, we argue that the use of redundancy is an effective way to convert extra capacity into reduced latency. By initiating redundant operations across diverse resources and using the first result which completes, redundancy improves a system's latency even under exceptional conditions. We study the tradeoff with added system utilization, characterizing the situations in which replicating all tasks reduces mean latency. We then demonstrate empirically that replicating all operations can result in significant mean and tail latency reduction in real-world systems including DNS queries, database servers, and packet forwarding within networks.
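    The core mechanism, issuing the same operation against several replicas and taking whichever response completes first, can be sketched as follows; the replica list and the query function are placeholders standing in for real DNS or database requests.

```python
# Minimal sketch of the "redundancy for latency" idea: send the same query to
# several replicas concurrently, return the first response that completes, and
# cancel the slower duplicates.

import asyncio
import random

async def query_replica(name: str) -> str:
    # Stand-in for a real network request with variable latency.
    await asyncio.sleep(random.uniform(0.01, 0.5))
    return f"answer from {name}"

async def redundant_query(replicas):
    tasks = [asyncio.create_task(query_replica(r)) for r in replicas]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:            # drop the slower duplicates
        t.cancel()
    return next(iter(done)).result()

if __name__ == "__main__":
    print(asyncio.run(redundant_query(["replica-1", "replica-2", "replica-3"])))
```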

    An Intelligent QoS Identification for Untrustworthy Web Services Via Two-phase Neural Networks

    QoS identification for untrustworthy Web services is critical for QoS management in service computing, since the performance of untrustworthy Web services may result in QoS degradation. The key issue is to intelligently learn the characteristics of trustworthy Web services at different QoS levels, and then to identify the untrustworthy ones according to the characteristics of their QoS metrics. Among intelligent identification approaches, deep neural networks have emerged as a powerful technique in recent years. In this paper, we propose a novel two-phase neural network model to identify untrustworthy Web services. In the first phase, Web services are collected from the published QoS dataset, and we design a feedforward neural network model to build a classifier for Web services with different QoS levels. In the second phase, we employ a probabilistic neural network (PNN) model to identify the untrustworthy Web services within each class. The experimental results show that the proposed approach achieves a 90.5% identification ratio, far higher than other competing approaches.
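    A hedged sketch of such a two-phase pipeline is shown below: a feedforward network first assigns services to coarse QoS levels, and a simple probabilistic neural network (a Gaussian Parzen-window classifier) then flags untrustworthy services within a level. The synthetic data, feature layout and parameters are assumptions for illustration only, not the model or dataset used in the paper.

```python
# Sketch of a two-phase classifier in the spirit described above; not the
# authors' model. Phase 1: MLP assigns QoS levels. Phase 2: a tiny PNN scores
# trustworthy vs. untrustworthy examples within a level.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic QoS features: [response_time, throughput] for three QoS levels.
X = np.vstack([rng.normal(loc=m, scale=0.1, size=(100, 2))
               for m in ([0.2, 0.9], [0.5, 0.5], [0.9, 0.1])])
qos_level = np.repeat([0, 1, 2], 100)

# Phase 1: feedforward classifier for QoS levels.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X, qos_level)

# Phase 2: PNN-style scoring with Gaussian kernels around labelled prototypes.
def pnn_predict(x, prototypes, labels, sigma=0.1):
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    scores = [k[labels == c].sum() for c in np.unique(labels)]
    return np.unique(labels)[int(np.argmax(scores))]

# Hypothetical labelled prototypes for one QoS level: 1 = untrustworthy.
protos = np.array([[0.9, 0.1], [0.95, 0.05], [1.3, 0.02]])
trust_labels = np.array([0, 0, 1])

sample = np.array([1.25, 0.03])
level = mlp.predict(sample.reshape(1, -1))[0]
print("QoS level:", level, "untrustworthy:", pnn_predict(sample, protos, trust_labels) == 1)
```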

    ASIdE: Using Autocorrelation-Based Size Estimation for Scheduling Bursty Workloads.

    Temporal dependence in workloads creates peak congestion that can make a service unavailable and reduce system performance. To improve system performability under conditions of temporal dependence, a server should quickly process bursts of requests that may have large service demands. In this paper, we propose and evaluate ASIdE, an Autocorrelation-based SIze Estimation policy that selectively delays requests which contribute to the workload's temporal dependence. ASIdE implicitly approximates the shortest job first (SJF) scheduling policy, but without any prior knowledge of job service times. Extensive experiments show that (1) ASIdE achieves good service time estimates from the temporal dependence structure of the workload and thereby implicitly approximates the behavior of SJF; and (2) ASIdE successfully counteracts peak congestion in the workload and improves system performability under a wide variety of settings. Specifically, we show that system capacity under ASIdE is largely increased compared to the first-come first-served (FCFS) scheduling policy and is highly competitive with SJF. © 2012 IEEE
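    The scheduling idea can be illustrated with a toy sketch (not the published ASIdE algorithm): when the workload is temporally dependent, the service demand last observed from a source is a usable estimate of its next demand, so the queue can be served in estimated-shortest-first order without knowing true service times. All sources and demand values below are invented for the example.

```python
# Toy illustration only: approximate SJF by estimating each request's demand
# from the most recent demand observed for the same source, which works when
# successive demands from a source are positively autocorrelated.

import heapq
from collections import defaultdict

class EstimatedSJF:
    def __init__(self):
        self.last_demand = defaultdict(lambda: 1.0)  # prior estimate per source
        self.queue = []                              # (estimate, arrival_order, source, demand)
        self.counter = 0

    def arrive(self, source, true_demand):
        estimate = self.last_demand[source]          # predict from recent history
        heapq.heappush(self.queue, (estimate, self.counter, source, true_demand))
        self.counter += 1

    def serve(self):
        _, _, source, demand = heapq.heappop(self.queue)
        self.last_demand[source] = demand            # update the size estimate
        return source, demand

if __name__ == "__main__":
    sched = EstimatedSJF()
    # Warm-up: observe one request per source to learn their typical demands.
    sched.arrive("A", 0.2); print(sched.serve())
    sched.arrive("B", 5.0); print(sched.serve())
    # A burst arrives; B's request is now predicted to be large and is served last.
    for src, d in [("B", 4.8), ("A", 0.3), ("A", 0.2)]:
        sched.arrive(src, d)
    while sched.queue:
        print(sched.serve())
```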