
    Historical Handwritten Text Images Word Spotting through Sliding Window HOG Features

    In this paper we present an innovative technique to semi-automatically index handwritten word images. The proposed method is based on HOG descriptors and exploits the Dynamic Time Warping technique to compare feature vectors computed from single handwritten words. Our strategy is applied to a new, challenging dataset extracted from Italian civil registries of the 19th century. Experimental results, compared against several previously developed word spotting strategies, confirm that our method outperforms its competitors.
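
    The abstract does not give implementation details, so the following is a minimal sketch of the general pipeline it describes: HOG descriptors extracted from a sliding window, compared with a classic dynamic-programming DTW. The window width, step, HOG parameters, and the assumption of height-normalised grayscale word crops are illustrative, not the authors' settings.

```python
import numpy as np
from skimage.feature import hog

def sliding_window_hog(word_img, win_w=16, step=8):
    """Extract a left-to-right sequence of HOG descriptors from a word image.

    Assumes the word image is a height-normalised (e.g. 32 px tall) grayscale crop.
    """
    h, w = word_img.shape
    feats = []
    for x in range(0, w - win_w + 1, step):
        window = word_img[:, x:x + win_w]
        feats.append(hog(window, orientations=9,
                         pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return feats

def dtw_distance(seq_a, seq_b):
    """Classic dynamic-programming DTW between two descriptor sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Word spotting then amounts to ranking candidate word images by DTW distance to a query.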

    Multi-resolution texture classification based on local image orientation

    The aim of this paper is to quantitatively evaluate the discriminative power of image orientation in the texture classification process. In this regard, we have evaluated the performance of two texture classification schemes in which the image orientation is extracted using the partial derivatives of the Gaussian function. Since texture descriptors depend on the observation scale, the main emphasis in this study is placed on the implementation of multi-resolution texture analysis schemes. The experimental results were obtained by applying the analysed texture descriptors to standard texture databases.
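
    The exact descriptor is not specified in the abstract; the sketch below only illustrates the core idea of extracting image orientation with Gaussian partial derivatives at several scales and pooling it into a multi-resolution descriptor. The scales, bin count, and magnitude weighting are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_histogram(image, sigma, bins=36):
    """Gradient-orientation histogram using Gaussian partial derivatives at scale sigma."""
    image = image.astype(float)
    gy = gaussian_filter(image, sigma, order=(1, 0))   # d/dy of the Gaussian-smoothed image
    gx = gaussian_filter(image, sigma, order=(0, 1))   # d/dx of the Gaussian-smoothed image
    theta = np.arctan2(gy, gx)                         # local orientation in [-pi, pi]
    mag = np.hypot(gx, gy)                             # gradient magnitude used as a weight
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)

def multiresolution_descriptor(image, sigmas=(1.0, 2.0, 4.0)):
    """Concatenate orientation histograms computed at several observation scales."""
    return np.concatenate([orientation_histogram(image, s) for s in sigmas])
```

    A texture class can then be modelled by the descriptors of its training images, with classification done by nearest neighbour or any standard classifier.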

    An edge-based approach for robust foreground detection

    Foreground segmentation, a commonly used approach to separate foreground objects from the background, is an essential task in many image processing applications. Many techniques exist, but shadows and changes in illumination make the segmentation of foreground objects from the background challenging. In this paper, we present a powerful framework for the detection of moving objects in real-time video processing applications under various lighting changes. The novel approach is based on a combination of edge detection and recursive smoothing techniques. We use edge dependencies as statistical features of foreground and background regions and define the foreground as regions containing moving edges. The background is described by short- and long-term estimates. Experiments demonstrate the robustness of our method in the presence of lighting changes, compared to other widely used background subtraction techniques.
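
    As a rough illustration of the idea of combining edge detection with recursively smoothed short- and long-term background estimates, here is a minimal per-frame sketch. The Canny thresholds, smoothing factors, and the final decision rule are assumptions and do not reproduce the authors' statistical edge-dependency model.

```python
import cv2
import numpy as np

def make_edge_foreground_detector(alpha_short=0.5, alpha_long=0.02,
                                  canny_lo=100, canny_hi=200):
    """Return a per-frame detector that maintains short- and long-term edge estimates."""
    state = {"short": None, "long": None}

    def detect(gray_frame):
        # gray_frame is assumed to be an 8-bit grayscale image.
        edges = cv2.Canny(gray_frame, canny_lo, canny_hi).astype(np.float32) / 255.0
        if state["short"] is None:
            state["short"] = edges.copy()
            state["long"] = edges.copy()
            return np.zeros_like(edges, dtype=np.uint8)
        # Recursive (exponential) smoothing of the background edge maps.
        state["short"] = alpha_short * edges + (1 - alpha_short) * state["short"]
        state["long"] = alpha_long * edges + (1 - alpha_long) * state["long"]
        # Moving edges: present in the current frame but weak in the long-term estimate.
        moving = (edges > 0.5) & (state["long"] < 0.2)
        return moving.astype(np.uint8) * 255

    return detect
```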

    Searching for Signatures of Cosmic Superstrings in the CMB

    Because cosmic superstrings generically form junctions and gauge theoretic strings typically do not, junctions may provide a signature to distinguish between cosmic superstrings and gauge theoretic cosmic strings. In cosmic microwave background anisotropy maps, cosmic strings lead to distinctive line discontinuities, and string junctions lead to junctions in these line discontinuities. Edge detection algorithms such as the Canny algorithm can therefore be used to search for signatures of strings in anisotropy maps. We apply the Canny algorithm to simulated maps which contain the effects of cosmic strings with and without string junctions. The Canny algorithm produces edge maps; to distinguish between edge maps from string simulations with and without junctions, we examine the density distribution of edges and of pixels crossed by edges. We find that in string simulations without Gaussian noise (such as that produced by the dominant inflationary fluctuations) our analysis of the output of the Canny algorithm can clearly distinguish between simulations with and without string junctions. In the presence of Gaussian noise at the level expected from the current bounds on the contribution of cosmic strings to the total power spectrum of density fluctuations, the distinction between models with and without junctions is more difficult; however, a careful analysis of the data can still differentiate the models. Comment: 15 pages
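
    A minimal sketch of the kind of analysis described, applying the Canny algorithm to a simulated anisotropy map and summarising the edge density. The smoothing scale, the 64-pixel patch size, and the histogram binning are illustrative assumptions; the statistics the authors actually compare are not reproduced here.

```python
import numpy as np
from skimage.feature import canny

def edge_statistics(temperature_map, sigma=2.0):
    """Apply the Canny algorithm to a simulated anisotropy map and summarise its edges."""
    # Standardise the map so the detector thresholds act on a comparable scale.
    t = (temperature_map - temperature_map.mean()) / temperature_map.std()
    edge_map = canny(t, sigma=sigma)
    edge_fraction = edge_map.mean()           # fraction of pixels crossed by edges
    # Edge density in non-overlapping sub-patches (map assumed at least 64x64 pixels),
    # whose distribution can be compared between simulations with and without junctions.
    patches = [edge_map[i:i + 64, j:j + 64].mean()
               for i in range(0, edge_map.shape[0] - 63, 64)
               for j in range(0, edge_map.shape[1] - 63, 64)]
    density_hist, _ = np.histogram(patches, bins=20, range=(0, 1))
    return edge_fraction, density_hist
```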

    The Complexity of Nash Equilibria in Simple Stochastic Multiplayer Games

    We analyse the computational complexity of finding Nash equilibria in simple stochastic multiplayer games. We show that restricting the search space to equilibria whose payoffs fall into a certain interval may lead to undecidability. In particular, we prove that the following problem is undecidable: given a game G, does there exist a pure-strategy Nash equilibrium of G in which player 0 wins with probability 1? Moreover, this problem remains undecidable when restricted to strategies with (unbounded) finite memory. However, if mixed strategies are allowed, decidability remains an open problem. One way to obtain a provably decidable variant of the problem is to restrict the strategies to be positional or stationary. For these two restrictions we obtain NP-hardness as a common lower bound, and NP and PSPACE as upper bounds, respectively. Comment: 23 pages; revised version

    Automatic summarization of rushes video using bipartite graphs

    In this paper we present a new approach for the automatic summarization of rushes, or unstructured video. Our approach is composed of three major steps. First, based on shot and sub-shot segmentations, we filter out sub-shots with low information content that are not likely to be useful in a summary. Second, a method using maximal matching in a bipartite graph is adapted to measure similarity between the remaining shots and to minimize inter-shot redundancy by removing the repetitive retake shots common in rushes video. Finally, the presence of faces and the motion intensity are characterised in each sub-shot, and a measure of how representative the sub-shot is in the context of the overall video is proposed. Video summaries composed of keyframe slideshows are then generated. In order to evaluate the effectiveness of this approach we re-ran the evaluation carried out by TRECVid, using the same dataset and evaluation metrics used in the TRECVid video summarization task in 2007 but with our own assessors. Results show that our approach leads to a significant improvement over our own previous work in terms of the fraction of the TRECVid summary ground truth included, and is competitive with the best of the other approaches in TRECVid 2007.
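
    To make the bipartite-matching step concrete, here is a hedged sketch: keyframe descriptors of two shots form the two sides of a bipartite graph, a maximum-weight matching (computed here with the Hungarian assignment solver as a stand-in for the maximal matching the paper adapts) pairs up keyframes, and the matched similarities score the shot pair. The cosine features, the retake threshold, and the drop_retakes helper are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def shot_similarity(feats_a, feats_b):
    """Similarity of two shots via a maximum-weight bipartite matching of their keyframes.

    feats_a, feats_b: 2D arrays, one row of descriptor values per keyframe.
    """
    a = feats_a / (np.linalg.norm(feats_a, axis=1, keepdims=True) + 1e-12)
    b = feats_b / (np.linalg.norm(feats_b, axis=1, keepdims=True) + 1e-12)
    sim = a @ b.T                                # cosine similarity of every keyframe pair
    rows, cols = linear_sum_assignment(-sim)     # solver minimises cost, so negate
    return sim[rows, cols].mean()

def drop_retakes(shots, threshold=0.9):
    """Keep only the last of any group of near-duplicate (retake) shots."""
    kept = []
    for shot in shots:
        kept = [k for k in kept if shot_similarity(k, shot) < threshold]
        kept.append(shot)
    return kept
```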

    Peer-to-Peer Secure Multi-Party Numerical Computation Facing Malicious Adversaries

    We propose an efficient framework for enabling secure multi-party numerical computation in a Peer-to-Peer network. This problem arises in a range of applications such as collaborative filtering, distributed computation of trust and reputation, monitoring, and other tasks, where the computing nodes are expected to preserve the privacy of their inputs while performing a joint computation of a certain function. Although there is a rich literature on secure multi-party computation in the field of distributed systems security, in practice it is hard to deploy those methods in very large scale Peer-to-Peer networks. In this work, we try to bridge the gap between theoretical algorithms in the security domain and a practical Peer-to-Peer deployment. We consider two security models. The first is the semi-honest model, where peers correctly follow the protocol but try to reveal private information. We provide three possible schemes for secure multi-party numerical computation in this model and identify a single light-weight scheme which outperforms the others. Using extensive simulation results over real Internet topologies, we demonstrate that our scheme is scalable to very large networks, with up to millions of nodes. The second model we consider is the malicious peers model, where peers can behave arbitrarily, deliberately trying to affect the results of the computation as well as to compromise the privacy of other peers. For this model we provide a fourth scheme that defends the execution of the computation against malicious peers. The proposed scheme has a higher complexity relative to the semi-honest model. Overall, we provide the Peer-to-Peer network designer with a set of tools to choose from, based on the desired level of security. Comment: Submitted to Peer-to-Peer Networking and Applications Journal (PPNA) 200
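
    For intuition about the semi-honest setting, here is a minimal sketch of additive secret sharing for a joint sum, a standard light-weight building block for secure numerical computation. It is only an illustration of the general technique; the paper's specific schemes and its defence against malicious peers are not reproduced, and the field size is an arbitrary choice.

```python
import random

PRIME = 2 ** 61 - 1  # illustrative field size for the shares

def make_shares(value, n_peers):
    """Split a private value into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_peers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(private_values):
    """Each peer distributes shares; combining the published partial sums reveals only the total."""
    n = len(private_values)
    received = [[] for _ in range(n)]
    for v in private_values:
        for peer, share in enumerate(make_shares(v, n)):
            received[peer].append(share)
    partial_sums = [sum(r) % PRIME for r in received]   # each peer publishes only this
    return sum(partial_sums) % PRIME

# Example: three peers jointly compute 5 + 7 + 11 without revealing their individual inputs.
assert secure_sum([5, 7, 11]) == 23
```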