
    IOD-CNN: Integrating Object Detection Networks for Event Recognition

    Many previous methods have shown the importance of considering semantically relevant objects for event recognition, yet none have exploited the power of deep convolutional neural networks to directly integrate relevant object information into a unified network. We present a novel unified deep CNN architecture that integrates architecturally different, yet semantically related, object detection networks to enhance performance on the event recognition task. Our architecture shares the convolutional layers and a fully connected layer, which effectively integrates event recognition, rigid object detection, and non-rigid object detection.
    Comment: submitted to IEEE International Conference on Image Processing 201
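    The shared-trunk, multi-head design described in the abstract can be sketched in plain Python. This is a toy stand-in, not the paper's model: dense layers replace the convolutional layers, and all sizes and weights are illustrative.

    ```python
    import random

    random.seed(0)

    def linear(x, w, b):
        # y = W x + b with a plain list-of-lists weight matrix
        return [sum(wi * xi for wi, xi in zip(row, x)) + bi
                for row, bi in zip(w, b)]

    def relu(x):
        return [max(0.0, v) for v in x]

    def init(n_out, n_in):
        w = [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
        b = [0.0] * n_out
        return w, b

    # Shared trunk (stands in for the shared convolutional + fully connected layers).
    w_shared, b_shared = init(8, 16)

    # Three task-specific heads branching off the shared representation.
    w_event, b_event = init(4, 8)        # event recognition scores
    w_rigid, b_rigid = init(2, 8)        # rigid object detection scores
    w_nonrigid, b_nonrigid = init(2, 8)  # non-rigid object detection scores

    def forward(x):
        h = relu(linear(x, w_shared, b_shared))  # computed once, reused by all heads
        return (linear(h, w_event, b_event),
                linear(h, w_rigid, b_rigid),
                linear(h, w_nonrigid, b_nonrigid))

    event, rigid, nonrigid = forward([0.5] * 16)
    print(len(event), len(rigid), len(nonrigid))  # 4 2 2
    ```

    The point of the sketch is that the expensive trunk computation is performed once and reused by all three tasks, which is what makes the unified network cheaper than running three separate detectors.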

    A random forest system combination approach for error detection in digital dictionaries

    When digitizing a print bilingual dictionary, whether via optical character recognition or manual entry, it is inevitable that errors are introduced into the electronic version that is created. We investigate automating the process of detecting errors in an XML representation of a digitized print dictionary using a hybrid approach that combines rule-based, feature-based, and language model-based methods. We investigate combining methods and show that using random forests is a promising approach. We find that in isolation, unsupervised methods rival the performance of supervised methods. Random forests typically require training data, so we investigate how we can apply random forests to combine individual base methods that are themselves unsupervised without requiring large amounts of training data. Experiments reveal empirically that a relatively small amount of data is sufficient and can potentially be further reduced through specific selection criteria.
    Comment: 9 pages, 7 figures, 10 tables; appeared in Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data, April 201
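    The combination step can be sketched as a hand-rolled forest of decision stumps voting over the base methods' scores. Everything here is an assumption for illustration: the three score names, the tiny labeled sample, and the candidate thresholds are invented, not the paper's data.

    ```python
    import random

    random.seed(1)

    # Each dictionary entry is scored by three hypothetical unsupervised base
    # methods (rule-based, feature-based, language-model-based); higher = more
    # error-like.
    def base_scores(entry):
        return [entry["rule"], entry["feat"], entry["lm"]]

    # Tiny labeled sample: 1 = entry contains an error, 0 = clean.
    train = [({"rule": .9, "feat": .8, "lm": .7}, 1),
             ({"rule": .1, "feat": .2, "lm": .1}, 0),
             ({"rule": .8, "feat": .3, "lm": .9}, 1),
             ({"rule": .2, "feat": .1, "lm": .3}, 0)]

    def fit_stump(sample):
        # Pick the (feature, threshold) pair with the fewest misclassifications.
        best = None
        for f in range(3):
            for thr in (0.25, 0.5, 0.75):
                err = sum(1 for x, y in sample
                          if (base_scores(x)[f] > thr) != (y == 1))
                if best is None or err < best[0]:
                    best = (err, f, thr)
        return best[1], best[2]

    def fit_forest(data, n_trees=7):
        forest = []
        for _ in range(n_trees):
            boot = [random.choice(data) for _ in data]  # bootstrap resample
            forest.append(fit_stump(boot))
        return forest

    def predict(forest, entry):
        votes = sum(1 for f, thr in forest if base_scores(entry)[f] > thr)
        return 1 if votes > len(forest) / 2 else 0

    forest = fit_forest(train)
    print(predict(forest, {"rule": .85, "feat": .7, "lm": .8}))  # 1 (error-like)
    print(predict(forest, {"rule": .05, "feat": .1, "lm": .2}))  # 0 (clean)
    ```

    The sketch mirrors the abstract's claim: since each tree only needs to learn thresholds over a handful of base-method scores, a very small labeled sample is enough to fit the combiner.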

    Bacteriophage T4 and our present concept of the gene

    A. H. Doermann, University of Washington, Seattle, Washington

    The Intracellular Growth of Bacteriophages: II. The Growth of T3 Studied by Sonic Disintegration and by T6-Cyanide Lysis of Infected Cells

    The growth of the virus T3 has been followed by breaking up the complexes it forms with host cells at various stages in their development and then assaying the debris for active virus particles. Two independent methods for breaking up cells were used: sonic vibration and lysis by the T6-cyanide method previously used for the study of the growth of T4. During the first half of the latent period both treatments, as well as cyanide alone, destroyed the capacity of the complexes for producing daughter virus particles. Furthermore, the infecting particles could not be recovered from them during the first half of the latent period. After the complexes had had 12 minutes of incubation at 30°C, both methods freed daughter virus particles from them in numbers which increased steadily with time until, near the end of the rise period, the normal burst size was reached. In general, the agreement between the two yields is so good that one may conclude that both methods liberate quantitatively the mature daughter T3 particles which exist in the complexes before normal lysis occurs.

    Semantic Text-to-Face GAN (ST^2FG)

    Faces generated using generative adversarial networks (GANs) have reached unprecedented realism. These faces, also known as "Deep Fakes", appear as realistic photographs with very little pixel-level distortion. While some work has enabled the training of models that generate specific properties of the subject, generating a facial image from a natural language description has not been fully explored. For security and criminal identification, a GAN-based system that works like a sketch artist would be incredibly useful. In this paper, we present a novel approach to generating facial images from semantic text descriptions. The learned model is provided with a text description and an outline of the type of face, which the model uses to sketch the features. Our models are trained using an Affine Combination Module (ACM) mechanism to combine the text embedding from BERT and the GAN latent space using a self-attention matrix. This avoids the loss of features due to inadequate "attention", which may happen if the text embedding and latent vector are simply concatenated. Our approach generates images that are accurately aligned with exhaustive textual descriptions of faces, capturing many fine facial details. The proposed method can also make incremental changes to a previously generated image when provided with additional textual descriptions or sentences.
    Comment: Experiments need to be redone
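    The attention-then-affine conditioning idea in the abstract can be sketched in plain Python. This is a minimal stand-in, not the paper's ACM: the embedding width, the pooling, and the near-identity scale/shift mappings are all illustrative assumptions.

    ```python
    import math
    import random

    random.seed(2)

    D = 8  # shared embedding width (illustrative)

    def softmax(xs):
        m = max(xs)
        e = [math.exp(x - m) for x in xs]
        s = sum(e)
        return [v / s for v in e]

    # Hypothetical inputs: one embedding per token and a GAN latent vector.
    text = [[random.gauss(0, 1) for _ in range(D)] for _ in range(4)]  # 4 tokens
    z = [random.gauss(0, 1) for _ in range(D)]

    # Attention of the latent vector over the token embeddings:
    # weight each token by its similarity to z, then pool.
    scores = [sum(zi * ti for zi, ti in zip(z, tok)) / math.sqrt(D) for tok in text]
    attn = softmax(scores)
    pooled = [sum(a * tok[i] for a, tok in zip(attn, text)) for i in range(D)]

    # Affine combination: the pooled text conditions z multiplicatively and
    # additively instead of being concatenated, so the text can reshape every
    # latent dimension rather than occupying extra ones.
    gamma = [1.0 + 0.1 * p for p in pooled]  # scale, near identity
    beta = [0.1 * p for p in pooled]         # shift
    combined = [g * zi + b for g, zi, b in zip(gamma, z, beta)]

    print(len(combined))  # 8
    ```

    The contrast with concatenation is the design point: concatenating would give a vector of length D plus the text size that downstream layers must re-mix, whereas the affine form keeps the latent dimensionality fixed while letting every coordinate be modulated by the text.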