
    An Intelligent QoS Identification for Untrustworthy Web Services Via Two-phase Neural Networks

    QoS identification for untrustworthy Web services is critical to QoS management in service computing, since the performance of untrustworthy Web services may cause QoS degradation. The key issue is to learn the characteristics of trustworthy Web services at different QoS levels and then to identify untrustworthy services according to the characteristics of their QoS metrics. Among intelligent identification approaches, deep neural networks have emerged as a powerful technique in recent years. In this paper, we propose a novel two-phase neural network model to identify untrustworthy Web services. In the first phase, Web services are collected from a published QoS dataset, and a feedforward neural network is designed to classify Web services into different QoS levels. In the second phase, a probabilistic neural network (PNN) model is employed to identify the untrustworthy Web services within each class. The experimental results show that the proposed approach achieves an identification ratio of 90.5%, far higher than that of competing approaches.
    Comment: 8 pages, 5 figures
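    The two-phase idea can be sketched in a few lines: a feedforward classifier bins services by QoS level, then a probabilistic neural network (here a simple Parzen-window classifier) flags untrustworthy services within each bin. The sketch below is a minimal illustration under assumed features and placeholder labels, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Minimal probabilistic neural network (Parzen window): a class's
    score is the mean Gaussian kernel to its training points."""
    preds = []
    for x in X_test:
        scores = {}
        for label in np.unique(y_train):
            pts = X_train[y_train == label]
            d2 = np.sum((pts - x) ** 2, axis=1)
            scores[label] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
        preds.append(max(scores, key=scores.get))
    return np.array(preds)

# Placeholder data: QoS metric vectors, QoS-level labels, trust labels.
rng = np.random.default_rng(0)
X = rng.random((200, 6))              # e.g. response time, throughput, ...
qos_level = rng.integers(0, 3, 200)
trustworthy = rng.integers(0, 2, 200)

# Phase 1: the feedforward network classifies services into QoS levels.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
levels = clf.fit(X, qos_level).predict(X)

# Phase 2: within each predicted level, the PNN flags untrustworthy services.
for lvl in np.unique(levels):
    idx = levels == lvl
    flags = pnn_predict(X[idx], trustworthy[idx], X[idx])
```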

    A survey of QoS-aware web service composition techniques

    Web service composition can be briefly described as the process of aggregating services with disparate functionalities into a new composite service in order to meet the increasingly complex needs of users. The composition process has long handled services with disparate functionalities well; however, the number of web services that exhibit similar functionality but varying Quality of Service (QoS) has increased significantly over the years. The problem thus becomes how to select appropriate web services such that the QoS of the resulting composite service is maximized (or minimized, for attributes such as response time or cost), which constitutes an NP-hard problem. In this paper, we discuss the concepts of web service composition and present a holistic review of the composition techniques proposed in the literature. Our review spans numerous publications in the field and can serve as a road map for future research.
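    To make the selection problem concrete, the toy sketch below picks one candidate per task so that a weighted composite utility is maximized. The tasks, QoS figures, and weights are illustrative assumptions, not drawn from the survey.

```python
from itertools import product

# One candidate set per abstract task; "rt" is response time in ms.
candidates = {
    "payment":  [{"rt": 120, "avail": 0.99}, {"rt": 80, "avail": 0.95}],
    "shipping": [{"rt": 300, "avail": 0.98}, {"rt": 200, "avail": 0.90}],
}

def utility(plan):
    # Sequential composition: response times add, availabilities multiply.
    rt = sum(s["rt"] for s in plan)
    avail = 1.0
    for s in plan:
        avail *= s["avail"]
    return 0.5 * avail - 0.5 * (rt / 1000)  # toy weighted aggregate

# Exhaustive search is exponential in the number of tasks, which is why
# the surveyed techniques turn to heuristics and metaheuristics.
best_plan = max(product(*candidates.values()), key=utility)
```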

    QoS-Based Web Service Discovery and Selection Using Machine Learning

    In service computing, the same target functionality can be provided by multiple Web services from different providers. Because of these functional similarities, the client needs to consider non-functional criteria. However, the Quality of Service values supplied by developers suffer from scarcity and a lack of reliability. In addition, the reputation of a service provider is an important factor in service selection, especially for providers with little track record. Most previous studies have relied on user feedback to justify the selection. Unfortunately, users rarely provide feedback unless they have had an extremely good or bad experience with the service. In this vision paper, we propose a novel architecture for web service discovery and selection. Its core component is a machine learning based methodology that predicts QoS properties from source code metrics; a credibility value and the previous usage count are used to determine the reputation of the service.
    Comment: 8 pages, 3 figures
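    A minimal sketch of the envisioned pipeline follows: a regressor maps source code metrics to a QoS property, and a simple reputation score blends credibility with usage count. The metric names, regressor choice, and reputation formula are assumptions for illustration, not the paper's design.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical code metrics per service: [LOC, cyclomatic complexity, coupling].
X = np.array([[1200, 14, 5], [400, 6, 2], [2500, 30, 9]], dtype=float)
y = np.array([180.0, 90.0, 310.0])  # observed response times (ms)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
predicted_rt = model.predict([[800, 10, 4]])  # QoS of an unmeasured service

def reputation(credibility, usage_count, alpha=0.7):
    """Illustrative blend of provider credibility and usage history."""
    return alpha * credibility + (1 - alpha) * np.log1p(usage_count) / 10
```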

    Challenges to describe QoS requirements for web services quality prediction to support web services interoperability in electronic commerce

    Quality of service (QoS) is significant and necessary for the quality assurance of web service applications, and web service quality has contributed to the successful implementation of Electronic Commerce (EC) applications. However, QoS remains a major open issue for web services research and one of the main research questions still to be explored. We believe that QoS should not only be measured but also predicted during the development and implementation stages. There are, however, challenges and constraints in determining and choosing QoS requirements for high-quality web services. This paper therefore highlights the challenges of QoS requirements prediction, since such requirements are not easy to identify: web services serve many different perspectives and purposes, and various prediction techniques exist to describe QoS requirements. Additionally, the paper introduces a metamodel as a concept of what makes a good web service.
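    As a rough illustration of what such a metamodel might capture, the sketch below models a single QoS requirement as a typed record; its fields are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class QoSRequirement:
    attribute: str    # e.g. "response_time", "availability"
    unit: str         # e.g. "ms", "%"
    target: float     # required value
    direction: str    # "min" if smaller is better, "max" otherwise
    stage: str        # lifecycle stage: "development" or "implementation"

req = QoSRequirement("response_time", "ms", 200.0, "min", "development")
```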

    Towards personalised and adaptive QoS assessments via context awareness

    Quality of Service (QoS) properties play an important role in distinguishing between functionally equivalent services and accommodating the different expectations of users. However, the subjective nature of some properties and the dynamic, unreliable nature of service environments may result in cases where the quality values advertised by the service provider are either missing or untrustworthy. To tackle this, a number of QoS estimation approaches have been proposed that utilise the observation history available for a service to predict its performance. Although the context underlying such previous observations (corresponding to both user and service related factors) could provide an important source of information for QoS estimation, it has only been utilised to a limited extent by existing approaches. In response, we propose a context-aware quality learning model, realised via a learning-enabled service agent, that exploits the contextual characteristics of the domain to provide more personalised, accurate and relevant quality estimations for the situation at hand. The experiments conducted demonstrate the effectiveness of the proposed approach, showing promising results (in terms of prediction accuracy) in different types of changing service environments.
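    One simple way to realise context awareness in QoS estimation is to weight past observations by the similarity of their context to the current situation, as in the kernel-weighted sketch below; the context features and kernel are assumptions, not the paper's model.

```python
import numpy as np

def context_aware_estimate(contexts, qos_values, query_ctx, bandwidth=1.0):
    """Nadaraya-Watson style estimate: observations made in contexts
    similar to the query context receive higher weight."""
    d2 = np.sum((contexts - query_ctx) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return np.sum(w * qos_values) / np.sum(w)

# Context vectors, e.g. [hour of day, user load, network class].
contexts = np.array([[9.0, 0.2, 1.0], [14.0, 0.8, 2.0], [22.0, 0.5, 1.0]])
observed_rt = np.array([110.0, 260.0, 150.0])   # response times (ms)
estimate = context_aware_estimate(contexts, observed_rt,
                                  np.array([10.0, 0.3, 1.0]))
```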

    Trustee: A Trust Management System for Fog-enabled Cyber Physical Systems

    In this paper, we propose a lightweight trust management system (TMS) for fog-enabled cyber physical systems (Fog-CPS). Trust computation is based on multi-factor and multi-dimensional parameters and is formulated as a statistical regression problem, which we solve with a random forest regression model. Additionally, since Fog-CPS systems may be deployed in open and unprotected environments, CPS devices and fog nodes are vulnerable to numerous attacks, namely collusion, self-promotion, bad-mouthing, ballot-stuffing, and opportunistic service attacks. Compromised entities can affect the accuracy of the trust computation model by inflating or deflating the trust of other nodes. These challenges are addressed by designing a generic trust credibility model that counteracts the compromise of both CPS devices and fog nodes. The credibility of each newly computed trust value is evaluated and subsequently adjusted by correlating it with a standard deviation threshold; the standard deviation is quantified by computing the trust in two configurations of hostile environments and comparing it with the trust value in a legitimate/normal environment. Our results demonstrate that the credibility model successfully counteracts the malicious behaviour of all Fog-CPS entities, i.e., CPS devices and fog nodes. The multi-factor trust assessment and credibility evaluation enable accurate and precise trust computation and guarantee a dependable Fog-CPS system.
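    The two stages described above can be sketched as follows: trust computed by random forest regression over multi-factor parameters, followed by a credibility check that damps any update deviating beyond a standard-deviation threshold. Feature names and the exact adjustment rule are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Per-entity factors, e.g. [success rate, normalised latency, feedback score].
X = np.array([[0.95, 0.1, 0.9], [0.40, 0.6, 0.2], [0.85, 0.2, 0.8]])
trust = np.array([0.9, 0.3, 0.8])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, trust)

def credible_update(old_trust, new_trust, sigma_threshold):
    """Accept a new trust value only if it stays within the allowed
    deviation band; otherwise damp it toward the previous value."""
    if abs(new_trust - old_trust) <= sigma_threshold:
        return new_trust
    return old_trust + np.sign(new_trust - old_trust) * sigma_threshold

new = model.predict([[0.20, 0.90, 0.10]])[0]   # possibly manipulated inputs
updated = credible_update(0.8, new, sigma_threshold=0.15)
```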

    Outlier-Resilient Web Service QoS Prediction

    The proliferation of Web services makes it difficult for users to select the most appropriate one among numerous functionally identical or similar candidates. Quality-of-Service (QoS) describes the non-functional characteristics of Web services and has become the key differentiator for service selection. However, users cannot invoke every Web service to obtain its QoS values, owing to the high time cost and huge resource overhead, so it is essential to predict unknown QoS values. Although various QoS prediction methods have been proposed, few of them take outliers into consideration, even though outliers may dramatically degrade prediction performance. To overcome this limitation, we propose an outlier-resilient QoS prediction method. Our method uses the Cauchy loss to measure the discrepancy between observed and predicted QoS values and, owing to the robustness of the Cauchy loss, is resilient to outliers. We further extend the method to provide time-aware QoS predictions by taking temporal information into consideration. Finally, we conduct extensive experiments on both static and dynamic datasets. The results demonstrate that our method achieves better performance than state-of-the-art baselines.
    Comment: 12 pages, to appear at the Web Conference (WWW) 202
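    The key ingredient, the Cauchy loss, is easy to state: its value grows only logarithmically in the residual, so a few wildly wrong observations cannot dominate training the way they do under squared error. A minimal sketch, with an assumed scale parameter gamma:

```python
import numpy as np

def cauchy_loss(observed, predicted, gamma=1.0):
    r = observed - predicted
    return (gamma ** 2 / 2) * np.log1p((r / gamma) ** 2)

residuals = np.array([0.1, 0.3, 25.0])   # last observation is an outlier
print(cauchy_loss(residuals, 0.0))       # outlier's loss grows only ~log
print(0.5 * residuals ** 2)              # squared loss explodes instead
```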