3 research outputs found

    A Hybrid Approach to Web Service Recommendation Based on QoS-Aware Rating and Ranking

    As the number of Web services with the same or similar functionality on the Internet grows steadily, more and more service consumers pay attention to the non-functional properties of Web services, also known as quality of service (QoS), when finding and selecting appropriate Web services. Most QoS-aware Web service recommendation systems produce the list of recommended Web services from a rating-oriented prediction approach, which aims to predict, as accurately as possible, the ratings an active user would assign to the unrated services. However, in some application scenarios, highly accurate rating prediction does not necessarily lead to a satisfactory recommendation result. In this paper, we propose a ranking-oriented hybrid approach that combines item-based collaborative filtering with latent factor models to address the problem of Web service ranking. In particular, the similarity between two Web services is measured by the correlation coefficient between their rankings rather than between their raw QoS ratings. We also improve the NDCG (Normalized Discounted Cumulative Gain) measure for evaluating the accuracy of the top-K recommendations returned in ranked order. Comprehensive experiments on a QoS data set composed of real-world Web services demonstrate that our approach outperforms competing approaches.
    Comment: 23 pages, 9 figures, and 2 tables
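    The two ingredients named in the abstract — rank-correlation similarity between services and NDCG@K evaluation — can be sketched in plain Python. This is an illustrative sketch, not the paper's implementation; the function names and the simple tie-free rank computation are assumptions:

    ```python
    import math

    def spearman_similarity(qos_a, qos_b):
        """Similarity of two services as the Spearman correlation of
        their QoS-induced rankings (assumes no ties, for simplicity)."""
        def ranks(values):
            order = sorted(range(len(values)), key=lambda i: values[i])
            r = [0] * len(values)
            for rank, idx in enumerate(order):
                r[idx] = rank
            return r
        ra, rb = ranks(qos_a), ranks(qos_b)
        n = len(qos_a)
        d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
        return 1 - 6 * d2 / (n * (n * n - 1))

    def ndcg_at_k(relevances, k):
        """NDCG@K: DCG of the predicted order, normalized by the
        DCG of the ideal (descending) order."""
        def dcg(rels):
            return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
        ideal = dcg(sorted(relevances, reverse=True))
        return dcg(relevances) / ideal if ideal > 0 else 0.0
    ```

    Two services whose QoS values induce the same ordering get similarity 1, a reversed ordering gets -1, regardless of the raw rating magnitudes — which is the point of ranking over rating.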

    Characterizing Data Dependence Constraints for Dynamic Reliability Using n-Queens Attack Domains

    As data centers attempt to cope with the exponential growth of data, new techniques for intelligent, software-defined data centers (SDDCs) are being developed to confront the scale and pace of changing resources and requirements. For cost-constrained environments, like those increasingly present in scientific research labs, SDDCs may also provide better reliability and performability with no additional hardware through the use of dynamic syndrome allocation. To do so, the middleware layers of an SDDC must be able to calculate and account for complex dependence relationships to determine an optimal data layout. This challenge is exacerbated by the growth of constraints on the dependence problem when available resources are both large (due to the higher number of syndromes that can be stored) and small (due to the lack of available space for syndrome allocation). We present a quantitative method for characterizing these challenges using an analysis of attack domains for high-dimensional variants of the n-queens problem, which enables performable solutions via the SMT solver Z3. We demonstrate the correctness of our technique and provide experimental evidence of its efficacy; our implementation is publicly available.
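    As a concrete illustration of the attack-domain idea on the classical 2-D board (the paper treats high-dimensional variants and encodes the constraints for Z3), here is a minimal Python sketch; the function names and the pairwise check are assumptions for illustration:

    ```python
    from itertools import product

    def attack_domain(queen, n):
        """Squares attacked by a queen at (r, c) on an n x n board:
        same row, same column, or same diagonal."""
        r, c = queen
        return {(i, j) for i, j in product(range(n), repeat=2)
                if (i, j) != (r, c)
                and (i == r or j == c or abs(i - r) == abs(j - c))}

    def non_attacking(placement, n):
        """True if no queen lies in another queen's attack domain --
        the pairwise independence constraint a solver must satisfy."""
        return all(q2 not in attack_domain(q1, n)
                   for q1 in placement for q2 in placement if q1 != q2)
    ```

    In the SDDC setting, the analogue of a queen's attack domain is the set of placements that would violate a data-dependence constraint; an SMT solver like Z3 searches for placements outside every such domain.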

    Aleph: Efficient Atomic Broadcast in Asynchronous Networks with Byzantine Nodes

    The spectacular success of Bitcoin and blockchain technology in recent years has provided strong evidence that widespread adoption of a common cryptocurrency system is not merely a distant vision, but a scenario that might come true in the near future. However, Bitcoin's obvious shortcomings, such as excessive electricity consumption, unsatisfactory transaction throughput, and long validation time (latency), make it clear that a new, more efficient system is needed. We propose a protocol in which a set of nodes maintains and updates a linear ordering of transactions submitted by users. Virtually every cryptocurrency system has such a protocol at its core, and the efficiency of this protocol determines the overall throughput and latency of the system. We develop our protocol on the grounds of the well-established field of Asynchronous Byzantine Fault Tolerant (ABFT) systems. This allows us to formally reason about correctness, efficiency, and security in the strictest possible model, and thus convincingly prove the overall robustness of our solution. Our protocol improves upon the state-of-the-art HoneyBadgerBFT by Miller et al. by reducing the asymptotic latency while matching the optimal communication complexity. Furthermore, in contrast to the above, our protocol does not require a trusted dealer, thanks to a novel implementation of a trustless ABFT randomness beacon.
    Comment: Accepted for presentation at AFT'1
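    The end goal of such a protocol — every correct node deriving the identical linear order from the same set of transactions — can be illustrated with a toy deterministic ordering rule. This is a hypothetical sketch, not the proposed protocol: the hard part the paper addresses is agreeing on the transaction set itself despite asynchrony and Byzantine nodes, which is not modeled here:

    ```python
    import hashlib

    def linear_order(transactions):
        """Toy total-ordering rule: sort (round, sender, tx) triples
        by round, breaking ties with a hash. Any node holding the same
        set of triples computes the identical sequence, so the rule
        yields a common linear ordering once the set is agreed upon."""
        def key(triple):
            rnd, sender, tx = triple
            tiebreak = hashlib.sha256(f"{sender}:{tx}".encode()).hexdigest()
            return (rnd, tiebreak)
        return [tx for _, _, tx in sorted(transactions, key=key)]
    ```

    The hash tiebreak keeps the order independent of the arrival order at any particular node, which is why the result is the same everywhere.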