3,780 research outputs found

    Face Detection with the Faster R-CNN

    The Faster R-CNN has recently demonstrated impressive results on various object detection benchmarks. By training a Faster R-CNN model on the large-scale WIDER face dataset, we report state-of-the-art results on two widely used face detection benchmarks, FDDB and the recently released IJB-A. Comment: technical report
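
    As an illustration of the kind of pipeline described above, here is a minimal sketch of fine-tuning a detector for single-class face detection. It assumes torchvision's Faster R-CNN implementation (not the one used in the paper) and a hypothetical WIDER FACE-style DataLoader yielding (image, target) pairs; the training schedule is likewise an assumption.

        import torch
        import torchvision
        from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

        def build_face_detector(num_classes: int = 2):  # background + face
            # Start from a COCO-pretrained Faster R-CNN and swap in a new box
            # predictor so it classifies only background vs. face.
            model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
            in_features = model.roi_heads.box_predictor.cls_score.in_features
            model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
            return model

        def train_one_epoch(model, loader, optimizer, device="cuda"):
            # `loader` is a hypothetical WIDER FACE DataLoader; each target dict
            # holds "boxes" (N x 4 tensor) and "labels" (N tensor of ones).
            model.train()
            for images, targets in loader:
                images = [img.to(device) for img in images]
                targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
                loss_dict = model(images, targets)  # RPN + ROI-head losses
                loss = sum(loss_dict.values())
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()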

    Monte Carlo Study of the Phase Structure of Compact Polymer Chains

    We study the phase behavior of single homopolymers in a simple hydrophobic/hydrophilic off-lattice model with sequence-independent local interactions. The specific heat is, not unexpectedly, found to exhibit a pronounced peak well below the collapse temperature, signalling a possible low-temperature phase transition. The system-size dependence at this maximum is investigated both with and without the local interactions, using chains with up to 50 monomers. The size dependence is found to be weak, and the specific heat itself does not appear to diverge. The homopolymer results are compared with those for two non-uniform sequences. Our calculations are performed using the methods of simulated and parallel tempering. The performance of these algorithms is discussed, based on careful tests for a small system. Comment: 13 pages LaTeX, 6 PostScript figures, references added
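
    Since the calculations rely on parallel tempering (replica exchange), here is a minimal, model-agnostic sketch of that algorithm. The energy(x) and propose(x) functions and the choice of inverse temperatures are placeholders, not taken from the paper, and simulated tempering is not shown.

        import copy
        import math
        import random

        def metropolis_step(x, beta, energy, propose):
            # One local Metropolis update of configuration x at inverse temperature beta.
            x_new = propose(x)
            dE = energy(x_new) - energy(x)
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                return x_new
            return x

        def parallel_tempering(x0, betas, energy, propose, n_sweeps=10000, swap_every=10):
            # One replica per inverse temperature, all starting from x0.
            replicas = [copy.deepcopy(x0) for _ in betas]
            for sweep in range(n_sweeps):
                replicas = [metropolis_step(x, b, energy, propose)
                            for x, b in zip(replicas, betas)]
                if sweep % swap_every == 0:
                    # Attempt to exchange neighbouring replicas with probability
                    # min(1, exp((beta_i - beta_j) * (E_i - E_j))).
                    for i in range(len(betas) - 1):
                        dB = betas[i] - betas[i + 1]
                        dE = energy(replicas[i]) - energy(replicas[i + 1])
                        if random.random() < math.exp(min(0.0, dB * dE)):
                            replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
            return replicas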

    Bounding the Probability of Error for High Precision Recognition

    We consider models for which it is important, early in processing, to estimate some variables with high precision, but perhaps at relatively low rates of recall. If some variables can be identified with near certainty, then they can be conditioned upon, allowing further inference to be done efficiently. Specifically, we consider optical character recognition (OCR) systems that can be bootstrapped by identifying a subset of correctly translated document words with very high precision. This "clean set" is subsequently used as document-specific training data. While many current OCR systems produce measures of confidence for the identity of each letter or word, thresholding these confidence values, even at very high values, still produces some errors. We introduce a novel technique for identifying a set of correct words with very high precision. Rather than estimating posterior probabilities, we bound the probability that any given word is incorrect under very general assumptions, using an approximate worst-case analysis. As a result, the parameters of the model are nearly irrelevant, and we are able to identify a subset of words, even in noisy documents, of which we are highly confident. On our set of 10 documents, we are able to identify about 6% of the words on average without making a single error. This ability to produce word lists with very high precision allows us to use a family of models which depends upon such clean word lists.
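
    A toy sketch of the clean-set selection step only: the worst-case bound on the per-word error probability is the paper's contribution and is not reproduced here, so error_bound below is a hypothetical stand-in that would have to supply such an upper bound.

        from typing import Callable, Iterable, List

        def build_clean_set(words: Iterable[str],
                            error_bound: Callable[[str], float],
                            tolerance: float = 1e-3) -> List[str]:
            # Keep only the words whose (assumed) upper bound on the probability
            # of being misrecognized falls below the tolerance; these become the
            # document-specific training data.
            return [w for w in words if error_bound(w) <= tolerance]

        def clean_set_precision(clean: List[str],
                                is_correct: Callable[[str], bool]) -> float:
            # Evaluation helper: fraction of the selected words that are actually
            # correct (the paper reports no errors on about 6% of words selected).
            if not clean:
                return 1.0
            return sum(is_correct(w) for w in clean) / len(clean)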