
    An Intelligent QoS Identification for Untrustworthy Web Services Via Two-phase Neural Networks

    QoS identification for untrustworthy Web services is critical to QoS management in service computing, since the performance of untrustworthy Web services may result in QoS degradation. The key issue is to intelligently learn the characteristics of trustworthy Web services at different QoS levels, and then to identify the untrustworthy ones according to the characteristics of their QoS metrics. Among intelligent identification approaches, deep neural networks have emerged as a powerful technique in recent years. In this paper, we propose a novel two-phase neural network model to identify untrustworthy Web services. In the first phase, Web services are collected from the published QoS dataset, and we design a feedforward neural network model to build a classifier for Web services with different QoS levels. In the second phase, we employ a probabilistic neural network (PNN) model to identify the untrustworthy Web services within each class. The experimental results show that the proposed approach achieves a 90.5% identification ratio, far higher than other competing approaches.
    Comment: 8 pages, 5 figures
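    A minimal Python sketch of such a two-phase pipeline, assuming a hypothetical QoS feature matrix (e.g. response time, throughput, availability) and synthetic labels; the feedforward classifier and the small Gaussian-kernel PNN below are illustrative stand-ins, not the paper's exact architecture or dataset:

```python
# Sketch: phase 1 assigns each Web service to a QoS level with a feedforward
# network; phase 2 applies a Parzen-window PNN per level to flag untrustworthy
# services. All data, sizes and thresholds are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((600, 3))                    # e.g. response time, throughput, availability
qos_level = rng.integers(0, 3, size=600)    # phase-1 labels: QoS class
trustworthy = rng.integers(0, 2, size=600)  # phase-2 labels: 1 = trustworthy

X_tr, X_te, lvl_tr, lvl_te, tr_tr, tr_te = train_test_split(
    X, qos_level, trustworthy, test_size=0.25, random_state=0)

# Phase 1: feedforward classifier over QoS levels.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, lvl_tr)
pred_lvl = clf.predict(X_te)

def pnn_predict(train_X, train_y, query, sigma=0.1):
    """Parzen-window PNN: pick the class with the largest summed Gaussian kernel."""
    scores = {c: np.exp(-((train_X[train_y == c] - query) ** 2).sum(axis=1)
                        / (2 * sigma ** 2)).sum()
              for c in np.unique(train_y)}
    return max(scores, key=scores.get)

# Phase 2: within each predicted QoS level, flag untrustworthy services using
# the training services that phase 1 assigned to the same level.
train_lvl = clf.predict(X_tr)

def phase2_flag(x, lvl):
    mask = train_lvl == lvl
    if not mask.any():                      # fall back to the full training set
        mask = np.ones(len(X_tr), dtype=bool)
    return pnn_predict(X_tr[mask], tr_tr[mask], x)

flags = np.array([phase2_flag(x, lvl) for x, lvl in zip(X_te, pred_lvl)])
print("fraction flagged untrustworthy:", np.mean(flags == 0))
```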

    Identification of a reversible quantum gate: assessing the resources

    We assess the resources needed to identify a reversible quantum gate among a finite set of alternatives, including in our analysis both deterministic and probabilistic strategies. Among the probabilistic strategies we consider unambiguous gate discrimination, where errors are not tolerated but inconclusive outcomes are allowed, and we prove that parallel strategies are sufficient to unambiguously identify the unknown gate with the minimum number of queries. This result is used to provide upper and lower bounds on the query complexity and on the minimum ancilla dimension. In addition, we introduce the notion of generalized t-designs, which includes unitary t-designs and group representations as special cases. For gates forming a generalized t-design we give an explicit expression for the maximum probability of correct gate identification, and we prove that there is no gap between the performances of deterministic strategies and those of probabilistic strategies. Hence, evaluating the query complexity of perfect deterministic discrimination reduces to the easier problem of evaluating the query complexity of unambiguous discrimination. Finally, we consider discrimination strategies where the use of ancillas is forbidden, providing upper bounds on the number of additional queries needed to make up for the lack of entanglement with the ancillas.
    Comment: 24 + 8 pages, published version
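    As a reference point for the t-design terminology, the standard unitary t-design condition (a special case of the generalized t-designs introduced in the paper; the weighted and group-representation generalizations are the paper's own and are not reproduced here) reads:

```latex
% A finite set {U_x}_{x in X} of d-dimensional unitaries is a unitary t-design
% if averaging t parallel uses of a gate over the set reproduces the Haar average:
\frac{1}{|X|} \sum_{x \in X} U_x^{\otimes t}\, \rho\, (U_x^\dagger)^{\otimes t}
  \;=\; \int_{\mathrm{U}(d)} U^{\otimes t}\, \rho\, (U^\dagger)^{\otimes t}\, \mathrm{d}U
  \qquad \text{for every state } \rho .
```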

    Group Testing with Probabilistic Tests: Theory, Design and Application

    Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool. In a classical noiseless group testing setup, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in the sense that the existence of a defective member in a pool results in the test outcome of that pool being positive. However, this may not always be a valid assumption in some cases of interest. In particular, we consider the case where the defective items in a pool can become independently inactive with a certain probability. Hence, one may obtain a negative test result for a pool despite it containing some defective items. As a result, any sampling and reconstruction method should be able to cope with two different types of uncertainty, i.e., the unknown set of defective items and the partially unknown, probabilistic testing procedure. In this work, motivated by the application of detecting infected people in viral epidemics, we design non-adaptive sampling procedures that allow successful identification of the defective items through a set of probabilistic tests. Our design requires only a small number of tests to single out the defective items. In particular, for a population of size $N$ and at most $K$ defective items with activation probability $p$, our results show that $M = O(K^2 \log(N/K)/p^3)$ tests are sufficient if the sampling procedure should work for all possible sets of defective items, while $M = O(K \log(N)/p^3)$ tests are enough to be successful for any single set of defective items. Moreover, we show that the defective members can be recovered using a simple reconstruction algorithm with complexity $O(MN)$.
    Comment: Full version of the conference paper "Compressed Sensing with Probabilistic Measurements: A Group Testing Solution" appearing in proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing, 2009 (arXiv:0909.3508). To appear in IEEE Transactions on Information Theory
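    A minimal simulation sketch of group testing with probabilistic (dilution-type) tests in the spirit of the abstract: each defective item in a pool is independently active with probability $p$, and a pool tests positive iff it contains at least one active defective. The Bernoulli design and the simple $O(MN)$ thresholding decoder below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, p = 1000, 5, 0.6                       # population size, defectives, activation prob.
M = 400                                      # number of pools (tests)

defective = rng.choice(N, size=K, replace=False)
A = rng.random((M, N)) < 1.0 / (2 * K)       # Bernoulli pooling matrix (illustrative density)

# Probabilistic tests: a defective item in a pool is active with probability p;
# the pool is positive iff it contains at least one active defective.
active = A[:, defective] & (rng.random((M, K)) < p)
y = active.any(axis=1)

# O(MN) decoder: score each item by the fraction of its pools that test positive;
# defectives tend to have a higher positive fraction than non-defectives.
tests_per_item = A.sum(axis=0)
pos_per_item = (A & y[:, None]).sum(axis=0)
score = np.where(tests_per_item > 0,
                 pos_per_item / np.maximum(tests_per_item, 1), 0.0)
estimate = np.argsort(score)[-K:]            # assume K is known, keep the top-K scores

print("true defectives:", sorted(defective))
print("estimated      :", sorted(estimate))
```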

    Efficiently Decodable Non-Adaptive Threshold Group Testing

    We consider non-adaptive threshold group testing for identification of up to $d$ defective items in a set of $n$ items, where a test is positive if it contains at least $u$ defective items ($2 \leq u \leq d$), and negative otherwise. The defective items can be identified using $t = O\left( \left( \frac{d}{u} \right)^u \left( \frac{d}{d - u} \right)^{d-u} \left( u \log{\frac{d}{u}} + \log{\frac{1}{\epsilon}} \right) \cdot d^2 \log{n} \right)$ tests with probability at least $1 - \epsilon$ for any $\epsilon > 0$, or $t = O\left( \left( \frac{d}{u} \right)^u \left( \frac{d}{d - u} \right)^{d - u} d^3 \log{n} \cdot \log{\frac{n}{d}} \right)$ tests with probability 1. The decoding time is $t \times \mathrm{poly}(d^2 \log{n})$. This result significantly improves the best known results for decoding non-adaptive threshold group testing: $O(n \log{n} + n \log{\frac{1}{\epsilon}})$ for probabilistic decoding, where $\epsilon > 0$, and $O(n^u \log{n})$ for deterministic decoding.
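    A small illustrative sketch of the threshold test model: a pool is positive iff it contains at least $u$ defective items. The random design and the brute-force decoder below (feasible only for tiny $n$) are assumptions for illustration; the paper's contribution is an efficiently decodable construction, which is not reproduced here:

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)
n, d, u = 12, 3, 2                    # items, defectives, threshold
t = 60                                # number of tests

defective = set(rng.choice(n, size=d, replace=False).tolist())
pools = [set(np.flatnonzero(rng.random(n) < 0.4).tolist()) for _ in range(t)]
outcome = [len(pool & defective) >= u for pool in pools]   # threshold test model

def consistent(candidate):
    """Does a candidate defective set reproduce every observed outcome?"""
    return all((len(pool & candidate) >= u) == obs
               for pool, obs in zip(pools, outcome))

# Brute-force identification: enumerate all size-d candidate sets.
solutions = [set(c) for c in combinations(range(n), d) if consistent(set(c))]
print("true defectives:", sorted(defective))
print("consistent sets:", [sorted(s) for s in solutions])
```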

    The supersingular Endomorphism Ring and One Endomorphism problems are equivalent

    Full text link
    The supersingular Endomorphism Ring problem is the following: given a supersingular elliptic curve, compute all of its endomorphisms. The presumed hardness of this problem is foundational for isogeny-based cryptography. The One Endomorphism problem only asks to find a single non-scalar endomorphism. We prove that these two problems are equivalent under probabilistic polynomial-time reductions. We prove a number of consequences. First, assuming the hardness of the endomorphism ring problem, the Charles--Goren--Lauter hash function is collision resistant, and the SQIsign identification protocol is sound. Second, the endomorphism ring problem is equivalent to the problem of computing arbitrary isogenies between supersingular elliptic curves, a result previously known only for isogenies of smooth degree. Third, there exists an unconditional probabilistic algorithm to solve the endomorphism ring problem in time $\tilde{O}(\sqrt{p})$, a result that previously required assuming the generalized Riemann hypothesis. To prove our main result, we introduce a flexible framework for the study of isogeny graphs with additional information. We prove a general and easy-to-use rapid mixing theorem.