
    Outlier-Resilient Web Service QoS Prediction

    The proliferation of Web services makes it difficult for users to select the most appropriate one among numerous functionally identical or similar service candidates. Quality-of-Service (QoS) describes the non-functional characteristics of Web services and has become the key differentiator for service selection. However, users cannot invoke all Web services to obtain the corresponding QoS values, due to the high time cost and huge resource overhead. Thus, it is essential to predict unknown QoS values. Although various QoS prediction methods have been proposed, few of them take outliers into consideration, which may dramatically degrade prediction performance. To overcome this limitation, we propose an outlier-resilient QoS prediction method in this paper. Our method utilizes Cauchy loss to measure the discrepancy between observed QoS values and predicted ones. Owing to the robustness of Cauchy loss, our method is resilient to outliers. We further extend our method to provide time-aware QoS prediction results by taking temporal information into consideration. Finally, we conduct extensive experiments on both static and dynamic datasets. The results demonstrate that our method achieves better performance than state-of-the-art baseline methods. Comment: 12 pages, to appear at the Web Conference (WWW) 202
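
    To make the abstract's key idea concrete, here is a minimal sketch of a Cauchy loss on QoS residuals; the scale parameter gamma and its default value are illustrative assumptions, not the paper's exact parameterization. Because the loss grows only logarithmically, a single large outlier residual barely moves the objective, unlike under squared loss:

```python
import numpy as np

def cauchy_loss(observed, predicted, gamma=1.0):
    """Cauchy loss on residuals: logarithmic growth means an
    extreme outlier contributes little compared with the
    quadratic growth of squared loss. gamma is an assumed
    scale parameter, not the paper's exact setting."""
    r = observed - predicted
    return (gamma ** 2 / 2.0) * np.log1p((r / gamma) ** 2)

# Toy comparison on residuals containing one gross outlier
residuals = np.array([0.1, -0.2, 0.05, 100.0])
print(cauchy_loss(residuals, 0.0))  # outlier term is only ~4.6
print(0.5 * residuals ** 2)         # squared-loss term is 5000.0
```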

    Gaussian-based Probabilistic Deep Supervision Network for Noise-Resistant QoS Prediction

    Quality of Service (QoS) prediction is an essential task in recommendation systems, where accurately predicting unknown QoS values can improve user satisfaction. However, existing QoS prediction techniques may perform poorly in the presence of noisy data, such as fake location information or virtual gateways. In this paper, we propose the Probabilistic Deep Supervision Network (PDS-Net), a novel framework for QoS prediction that addresses this issue. PDS-Net utilizes a Gaussian-based probabilistic space to supervise intermediate layers and learns probability spaces for both known features and true labels. Moreover, PDS-Net employs a condition-based multitasking loss function to identify objects with noisy data and applies supervision directly to deep features sampled from the probability space, optimizing the Kullback-Leibler distance between the probability space of these objects and the real-label probability space. Thus, PDS-Net effectively reduces errors resulting from the propagation of corrupted data, leading to more accurate QoS predictions. Experimental evaluations on two real-world QoS datasets demonstrate that the proposed PDS-Net outperforms state-of-the-art baselines, validating the effectiveness of our approach.
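
    As a rough illustration of the supervision signal described above, the sketch below evaluates the closed-form KL divergence between two diagonal Gaussians, the kind of term a probabilistic deep-supervision loss could minimize between a feature-space distribution and the real-label distribution. The tensor shapes and batch setup are assumptions for illustration, not PDS-Net's actual implementation:

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL(N_q || N_p) between diagonal Gaussians,
    summed over the latent dimension."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q
                + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1)

# Assumed setup: 4 objects, 16-dim latent space; q comes from
# intermediate features, p from the true labels
mu_q, logvar_q = torch.randn(4, 16), torch.zeros(4, 16)
mu_p, logvar_p = torch.zeros(4, 16), torch.zeros(4, 16)
print(gaussian_kl(mu_q, logvar_q, mu_p, logvar_p))  # one KL per object
```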

    TPMCF: Temporal QoS Prediction using Multi-Source Collaborative Features

    Recently, with the rapid deployment of service APIs, personalized service recommendations have played a paramount role in the growth of the e-commerce industry. Quality-of-Service (QoS) parameters, which determine service performance and are often used for recommendation, fluctuate over time. Thus, QoS prediction is essential to identify a suitable service among functionally equivalent services over time. Contemporary temporal QoS prediction methods hardly achieve the desired accuracy due to various limitations, such as the inability to handle data sparsity and outliers or to capture higher-order temporal relationships among user-service interactions. Even though some recent recurrent neural-network-based architectures can model temporal relationships among QoS data, prediction accuracy degrades due to the absence of other features (e.g., collaborative features) that help comprehend the relationships among user-service interactions. This paper addresses the above challenges and proposes a scalable strategy for Temporal QoS Prediction using Multi-source Collaborative-Features (TPMCF), achieving high prediction accuracy and faster responsiveness. TPMCF combines the collaborative features of users/services, obtained by exploiting the user-service relationship, with spatio-temporal features auto-extracted by graph convolution and a transformer encoder with multi-head self-attention. We validated our proposed method on the WS-DREAM-2 datasets. Extensive experiments showed that TPMCF outperforms major state-of-the-art approaches in prediction accuracy while ensuring high scalability and reasonably faster responsiveness. Comment: 10 pages, 7 figures
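
    The sketch below shows one plausible wiring of the described pipeline: a graph-convolution step over the user-service graph at every timestep, followed by a transformer encoder with multi-head self-attention along the time axis. All dimensions, layer counts, the identity adjacency, and the prediction head are illustrative assumptions, not TPMCF's actual architecture:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: aggregate neighbour features
    through a normalized adjacency matrix, then map linearly."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):           # x: (nodes, in_dim)
        return torch.relu(self.lin(adj_norm @ x))

class TemporalQoSEncoder(nn.Module):
    """Per-timestep graph convolution over the user-service graph,
    then multi-head self-attention across the time axis."""
    def __init__(self, feat_dim=32, heads=4):
        super().__init__()
        self.gcn = GCNLayer(feat_dim, feat_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat_dim, 1)     # scalar QoS estimate

    def forward(self, x_seq, adj_norm):        # x_seq: (T, nodes, feat)
        h = torch.stack([self.gcn(x, adj_norm) for x in x_seq])
        h = self.temporal(h.permute(1, 0, 2))  # (nodes, T, feat)
        return self.head(h[:, -1])             # next-step QoS per node

T, n, d = 8, 50, 32
model = TemporalQoSEncoder(feat_dim=d)
x_seq, adj = torch.randn(T, n, d), torch.eye(n)   # identity adjacency
print(model(x_seq, adj).shape)                    # torch.Size([50, 1])
```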

    A Dual Latent State Learning Approach: Exploiting Regional Network Similarities for QoS Prediction

    Individual objects, whether users or services, within a specific region often exhibit similar network states due to their shared origin from the same city or autonomous system (AS). Despite this regional network similarity, many existing techniques overlook its potential, resulting in subpar performance arising from challenges such as data sparsity and label imbalance. In this paper, we introduce the region-based dual latent state learning network (R2SL), a novel deep learning framework designed to overcome the pitfalls of traditional individual-object-based prediction techniques in Quality of Service (QoS) prediction. Unlike its predecessors, R2SL captures the nuances of regional network behavior by deriving two distinct regional network latent states: the city-network latent state and the AS-network latent state. These states are constructed from aggregated data of common regions rather than individual object data. Furthermore, R2SL adopts an enhanced Huber loss function that adjusts its linear loss component, providing a remedy for prevalent label imbalance issues. To complete the prediction process, a multi-scale perception network interprets the integrated feature map, a fusion of regional network latent features and other pertinent information, ultimately accomplishing the QoS prediction. Through rigorous testing on real-world QoS datasets, R2SL demonstrates superior performance compared to prevailing state-of-the-art methods. Our R2SL approach ushers in an innovative avenue for precise QoS predictions by fully harnessing the regional network similarities inherent in objects.
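
    As a sketch of the loss idea described above, the following is a plain Huber loss whose linear branch carries a tunable slope weight; the threshold and slope values are illustrative assumptions, and R2SL's actual adjustment rule for countering label imbalance is specified in the paper:

```python
import numpy as np

def adjusted_huber(residual, delta=1.0, slope=0.5):
    """Huber loss with a re-weighted linear branch (an assumed,
    generic form of the adjustment, not R2SL's exact rule).
    The 0.5 * delta**2 offset keeps both branches continuous
    at |r| = delta; slope < 1 down-weights large residuals."""
    r = np.abs(residual)
    quad = 0.5 * r ** 2
    lin = 0.5 * delta ** 2 + slope * delta * (r - delta)
    return np.where(r <= delta, quad, lin)

print(adjusted_huber(np.array([0.3, 2.0, -5.0])))  # [0.045 1.0 2.5]
```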

    Edge Computing for Internet of Things

    The Internet of Things (IoT) is becoming an established technology, with devices being deployed in homes, workplaces, and public areas at an increasingly rapid rate. IoT devices are the core technology of smart homes, smart cities, and intelligent transport systems, and they promise to optimise travel, reduce energy usage, and improve quality of life. As IoT becomes prevalent, the problem of how to manage the vast volume, wide variety, and erratic generation patterns of the data produced is becoming increasingly clear and challenging. This Special Issue focuses on solving this problem through the use of edge computing. Edge computing offers a solution to managing IoT data by processing it close to the location where it is generated. Edge computing allows computation to be performed locally, thus reducing the volume of data that needs to be transmitted to remote data centres and Cloud storage. It also allows decisions to be made locally without having to wait for Cloud servers to respond.

    Resilient and Trustworthy Dynamic Data-driven Application Systems (DDDAS) Services for Crisis Management Environments

    Future crisis management systems need resilient and trustworthy infrastructures to quickly develop reliable applications and processes, and to ensure end-to-end security, trust, and privacy. Due to the multiplicity and diversity of involved actors, the volumes of data, and the heterogeneity of shared information, crisis management systems tend to be highly vulnerable and subject to unforeseen incidents. As a result, the dependability of crisis management systems can be at risk. This paper presents a cloud-based resilient and trustworthy infrastructure (known as rDaaS) to quickly develop secure crisis management systems. rDaaS integrates the Dynamic Data-Driven Application Systems (DDDAS) paradigm into a service-oriented architecture over cloud technology and provides a set of resilient DDDAS-as-a-Service (rDaaS) components to build secure and trusted adaptable crisis processes. rDaaS also ensures resilience and security by obfuscating the execution environment and applying Behavior Software Encryption and Moving Target Defense. A simulation environment for a nuclear plant crisis management case study illustrates how to build resilient and trusted crisis response processes.

    Trust Management for Context-Aware Composite Services

    In the areas of cloud computing, big data, and the Internet of Things, composite services are designed to effectively address complex levels of user requirements. A major challenge for composite service management is the dynamic and continuously changing run-time environment, which can raise exceptional situations: a service's execution time may greatly increase, or a service may become unavailable. Composite services in this environmental context have difficulty securing an acceptable quality of service (QoS). Triggering dynamic adaptations thus becomes urgent for service-based systems. These systems also require trust management to ensure service level agreement (SLA) compliance. To face this dynamism and volatility, context-aware composite services (i.e., run-time self-adaptable services) are designed to continue offering their functionalities without compromising their operational efficiency, boosting the added value of the composition. The literature on adaptation management for context-aware composite services mainly relies on the closed-world assumption that the boundary between the service and its run-time environment is known, which is impractical for dynamic services in the open world, where environmental contexts are unexpected. Besides, the literature relies on centralized architectures that suffer from management overhead, or on distributed architectures that suffer from communication overhead, to manage service adaptation. Moreover, the problem of encountering malicious constituent services at run time still needs further investigation toward a more efficient solution. Such services take advantage of environmental contexts for their own benefit by providing unsatisfactory QoS values or by maliciously collaborating with other services. Furthermore, the literature overlooks the fact that composite service data is relational and instead relies on propositional data (i.e., flattened data containing the information without the structure). This contradicts the fact that services are statistically dependent, since the QoS values of a service are correlated with those of other services. This thesis aims to address these gaps by capitalizing on methods from software engineering, computational intelligence, and machine learning.

    To support context-aware composite services in the open world, dynamic adaptation mechanisms are devised at design time to guide the running services. To this end, this thesis proposes an adaptation solution based on a feature model that captures the variability of the composite service and deliberates the inter-dependency relations among QoS constraints. We apply the master-slaves adaptation pattern to coordinate the self-adaptation process based on the MAPE loop (Monitor-Analyze-Plan-Execute) at run time. We model the adaptation process as a multi-objective optimization problem and solve it using a meta-heuristic search technique constrained by SLA and feature model constraints. This enables the master to resolve conflicting QoS goals of the service adaptation. On the slave side, we propose an adaptation solution that immediately substitutes failed constituent services with no need for complex and costly global adaptation. To support decision making at the different levels of adaptation, we first propose an online SLA violation prediction model that requires only small amounts of end-to-end QoS data. We then extend the model to comprehensively consider the service dependency that exists in the real business world at run time by leveraging a relational dependency network, thus enhancing prediction accuracy. In addition, we propose a trust management model for services based on the dependency network. In particular, we predict the probability of delivering a satisfactory QoS under changing environmental contexts by leveraging the cyclic dependency relations among QoS metrics and environmental context variables. Moreover, we develop a service reputation evaluation technique based on the power of mass collaboration, in which we explicitly detect collusion attacks. As another contribution, we introduce a trust bootstrapping mechanism for newcomer services that is resilient to the white-washing attack, using the concept of social adoption. The thesis reports simulation results on real datasets showing the efficiency of the proposed solutions.
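
    To make the coordination pattern concrete, here is a minimal MAPE-loop skeleton of the kind the thesis builds on; the monitored metric, SLA threshold, and substitution action are placeholder assumptions, and the thesis's master additionally solves a multi-objective optimization that this sketch omits:

```python
import random

class MapeLoop:
    """Minimal Monitor-Analyze-Plan-Execute skeleton. Service
    names, the latency metric, and the SLA threshold are all
    placeholders standing in for run-time QoS monitoring."""
    def __init__(self, sla_latency_ms=200.0):
        self.sla_latency_ms = sla_latency_ms

    def monitor(self):
        # Stand-in for run-time QoS probes of constituent services
        return {s: random.uniform(50, 400) for s in ("svc-A", "svc-B")}

    def analyze(self, metrics):
        # Flag services whose observed latency violates the SLA
        return [s for s, ms in metrics.items() if ms > self.sla_latency_ms]

    def plan(self, violating):
        # Slave-side local adaptation: substitute each failing service
        return [("substitute", s) for s in violating]

    def execute(self, actions):
        for action, svc in actions:
            print(f"{action}: swap {svc} for an equivalent candidate")

loop = MapeLoop()
for _ in range(3):  # three control-loop iterations
    loop.execute(loop.plan(loop.analyze(loop.monitor())))
```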

    A stochastic Reputation System Architecture to support the Partner Selection in Virtual Organisations

    In today's business environments, collaborations among organisations increasingly need to be established swiftly. Such collaborations are often formed without prior experience of a partner's previous performance. The STochastic REputation system (STORE) is designed to provide swift, automated decision support for selecting partner organisations. STORE is based on a stochastic trust model and is evaluated by means of multi-agent simulations in Virtual Organisation scenarios.
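
    The abstract does not detail STORE's trust model, so as a generic illustration of how a stochastic reputation score can be maintained, the sketch below uses a Beta-distribution update in which positive and negative interaction outcomes accumulate as pseudo-counts; this is an assumed textbook scheme, not STORE's actual model:

```python
class BetaReputation:
    """Beta-distribution trust score: alpha counts positive
    interaction outcomes, beta counts negative ones. A generic
    stochastic-reputation sketch, not STORE's own model."""
    def __init__(self):
        self.alpha, self.beta = 1.0, 1.0   # uniform prior

    def update(self, satisfied: bool):
        if satisfied:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def expected_trust(self) -> float:
        # Posterior mean of the Beta distribution
        return self.alpha / (self.alpha + self.beta)

rep = BetaReputation()
for outcome in (True, True, False, True):
    rep.update(outcome)
print(f"expected trust: {rep.expected_trust():.2f}")  # 0.67
```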