
    Fast Decoder for Overloaded Uniquely Decodable Synchronous Optical CDMA

    In this paper, we propose a fast decoder algorithm for uniquely decodable (errorless) code sets for overloaded synchronous optical code-division multiple-access (O-CDMA) systems. The proposed decoder is designed in such a way that each user can uniquely recover its information bits with a very simple procedure that uses only a few comparisons. Compared to the maximum-likelihood (ML) decoder, which has high computational complexity even for moderate code lengths, the proposed decoder has much lower computational complexity. Simulation results in terms of bit error rate (BER) demonstrate that, for a given BER, the proposed decoder requires only 1-2 dB higher signal-to-noise ratio (SNR) than the ML decoder.
    Comment: arXiv admin note: substantial text overlap with arXiv:1806.0395
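    To illustrate the complexity gap the abstract refers to, the sketch below implements a brute-force ML decoder for an overloaded synchronous system modelled as y = Sb + n: it searches all 2^K candidate bit patterns, which is exactly the exponential cost a few-comparison decoder is meant to avoid. The signature matrix S, the unipolar signalling, and the noise level are illustrative assumptions, not the code set or decoder from the paper.

        # Minimal sketch, assuming a toy uniquely decodable unipolar code set.
        # Brute-force ML decoding: try every bit vector and keep the closest fit.
        import itertools
        import numpy as np

        def ml_decode(y, S):
            """Search all b in {0,1}^K for the one whose image S @ b is
            closest (in Euclidean distance) to the received vector y."""
            L, K = S.shape
            best_b, best_dist = None, np.inf
            for bits in itertools.product([0, 1], repeat=K):   # 2**K candidates
                b = np.array(bits)
                dist = np.linalg.norm(y - S @ b)
                if dist < best_dist:
                    best_b, best_dist = b, dist
            return best_b

        # Toy example: K = 4 users share L = 3 chips (overloaded, K > L).
        S = np.array([[1, 0, 1, 1],
                      [1, 1, 0, 1],
                      [0, 1, 1, 1]])
        b_true = np.array([1, 0, 1, 0])
        y = S @ b_true + 0.1 * np.random.randn(3)
        print(ml_decode(y, S))

    Even at K = 4 the search already visits 16 candidates; at realistic code lengths the 2^K term dominates, which is the motivation for a decoder that needs only a few comparisons per user.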

    Robust 3-Dimensional Object Recognition using Stereo Vision and Geometric Hashing

    We propose a technique that combines geometric hashing with stereo vision. The idea is to exploit the robustness of geometric hashing to spurious data to overcome the correspondence problem, while the stereo vision setup enables direct model matching against the 3-D object models. Furthermore, because the matching technique relies on the relative positions of local features, we should be able to perform robust recognition even with partially occluded objects. We tested this approach on simple geometric objects using a corner-point detector and successfully recognized objects even in scenes where they were partially occluded by other objects. For complicated scenes, however, the limited set of model features and the required amount of computing time sometimes became a problem.
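    The abstract leans on the standard geometric hashing pipeline: an offline hash table of model features expressed in local coordinate bases, and an online voting stage that is tolerant of missing or spurious points. The sketch below shows that pipeline in simplified 2-D form; the function names, quantization step, and 2-D basis construction are illustrative assumptions, since the paper itself works with 3-D features recovered from stereo.

        # Minimal sketch of 2-D geometric hashing (offline table + online voting).
        from collections import defaultdict
        import numpy as np

        def basis_coords(points, i, j):
            """Express all points in the frame defined by the basis pair (i, j),
            which is mapped to (0,0)-(1,0); coordinates are pose-invariant."""
            origin, axis = points[i], points[j] - points[i]
            rot = np.array([[axis[0], axis[1]],
                            [-axis[1], axis[0]]]) / (axis @ axis)
            return (points - origin) @ rot.T

        def build_table(models, step=0.1):
            """Offline stage: hash the quantized invariant coordinates of every
            model point under every basis pair, keyed by (model, basis)."""
            table = defaultdict(list)
            for name, pts in models.items():
                for i in range(len(pts)):
                    for j in range(len(pts)):
                        if i == j:
                            continue
                        for c in basis_coords(pts, i, j):
                            key = tuple(np.round(c / step).astype(int))
                            table[key].append((name, i, j))
            return table

        def recognize(table, scene_pts, i, j, step=0.1):
            """Online stage: vote for (model, basis) entries consistent with the
            chosen scene basis; occluded points simply lose votes, spurious
            points add only uncorrelated ones."""
            votes = defaultdict(int)
            for c in basis_coords(scene_pts, i, j):
                key = tuple(np.round(c / step).astype(int))
                for entry in table[key]:
                    votes[entry] += 1
            return max(votes, key=votes.get) if votes else None

        # Usage sketch: table = build_table({"cube": cube_corners_2d})
        #               match = recognize(table, scene_corners_2d, 0, 1)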

    Too good to be true: when overwhelming evidence fails to convince

    Is it possible for a large sequence of measurements or observations that support a hypothesis to counterintuitively decrease our confidence? Can unanimous support be too good to be true? The assumption of independence is often made in good faith, but rarely is consideration given to whether a systemic failure has occurred. Taking this into account can cause certainty in a hypothesis to decrease as the evidence for it apparently becomes stronger. We perform a probabilistic Bayesian analysis of this effect with examples based on (i) archaeological evidence, (ii) weighing of legal evidence, and (iii) cryptographic primality testing. We find that even with surprisingly low systemic failure rates, high confidence is very difficult to achieve; in particular, we find that certain analyses of cryptographically important numerical tests are highly optimistic, underestimating their false-negative rate by as much as a factor of 2^{80}.
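    The effect can be reproduced with a very small Bayesian mixture model: assume that with probability f the measurement procedure fails systemically and reports a positive result regardless of the truth, and otherwise the results are independent with per-test accuracy a. The sketch below uses this toy model (the paper's actual case studies are more detailed) to show the posterior first rising and then falling back toward the prior as the number of unanimous positives grows.

        # Minimal sketch, assuming a simple systemic-failure mixture model.
        def posterior(n, accuracy=0.95, f=0.01, prior=0.5):
            """P(hypothesis | n unanimous positive results): with probability f
            the procedure fails systemically and reports positive regardless of
            the truth; otherwise the n results are independent."""
            like_h = f + (1 - f) * accuracy ** n          # P(data | H)
            like_not = f + (1 - f) * (1 - accuracy) ** n  # P(data | not H)
            return prior * like_h / (prior * like_h + (1 - prior) * like_not)

        for n in (1, 3, 10, 30, 100):
            print(n, round(posterior(n), 4))

        # Confidence peaks after a handful of results and then drops back toward
        # the prior: as n grows, both likelihoods approach f, so the unanimity
        # itself becomes evidence of a systemic failure rather than of H.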

    Probabilistic learning for selective dissemination of information

    New methods and new systems are needed to filter or to selectively distribute the ever-increasing volume of electronic information being produced. An effective information filtering system is one that provides exactly the information that fulfills a user's interests, with minimum effort from the user to describe them; such a system must also adapt to the user's changing interests. In this paper we describe and evaluate a learning model for information filtering that is an adaptation of the generalized probabilistic model of information retrieval. The model is based on the concept of 'uncertainty sampling', a technique that allows relevance feedback on both relevant and non-relevant documents. The proposed learning model is the core of a prototype information filtering system called ProFile.
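    For a concrete picture of uncertainty sampling in a filtering loop, the sketch below asks a (simulated) user for feedback on the document whose predicted probability of relevance is closest to 0.5 and refits after each judgment. The TF-IDF/logistic-regression model, the document set, and the function names are illustrative assumptions; ProFile's generalized probabilistic model is not reproduced here.

        # Minimal sketch of uncertainty sampling for relevance feedback,
        # using scikit-learn stand-ins for the underlying relevance model.
        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        def most_uncertain(model, X, k=1):
            """Indices of the k documents whose predicted probability of
            relevance is closest to 0.5, i.e. where feedback helps most."""
            p = model.predict_proba(X)[:, 1]
            return np.argsort(np.abs(p - 0.5))[:k]

        docs = ["neural retrieval models", "football results",
                "probabilistic IR", "cooking recipes", "text filtering systems"]
        true_relevance = {0: 1, 1: 0, 2: 1, 3: 0, 4: 1}   # simulated user
        labels = {0: 1, 1: 0}                              # judged so far
        X = TfidfVectorizer().fit_transform(docs)

        for _ in range(2):
            judged = sorted(labels)
            clf = LogisticRegression().fit(X[judged], [labels[i] for i in judged])
            unjudged = [i for i in range(len(docs)) if i not in labels]
            pick = unjudged[most_uncertain(clf, X[unjudged])[0]]
            labels[pick] = true_relevance[pick]   # feedback, relevant or not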