
    Learning scale-variant and scale-invariant features for deep image classification

    Convolutional Neural Networks (CNNs) require large image corpora to be trained on classification tasks. The variation in image resolutions, sizes of depicted objects and patterns, and image scales hampers CNN training and performance, because the task-relevant information varies over spatial scales. Previous work attempting to deal with such scale variations focused on encouraging scale-invariant CNN representations. However, scale-invariant representations are incomplete representations of images, because images contain scale-variant information as well. This paper addresses the combined development of scale-invariant and scale-variant representations. We propose a multi-scale CNN method to encourage the recognition of both types of features and evaluate it on a challenging image classification task involving task-relevant characteristics at multiple scales. The results show that our multi-scale CNN outperforms a single-scale CNN. This leads to the conclusion that encouraging the combined development of scale-invariant and scale-variant representations in CNNs is beneficial to image recognition performance.
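    As a rough illustration of the multi-scale idea, the sketch below runs a small CNN branch per input scale and classifies the concatenated branch features. It is a minimal PyTorch sketch under assumed layer widths, scale factors, and concatenation-based fusion; it is not the architecture evaluated in the paper.

```python
# Minimal multi-scale CNN sketch (PyTorch). Layer widths, scale factors,
# and the fusion strategy are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleBranch(nn.Module):
    """One CNN column that sees the input at a single spatial scale."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, out_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global pooling -> (B, out_dim, 1, 1)
        )

    def forward(self, x):
        return self.features(x).flatten(1)      # (B, out_dim)

class MultiScaleCNN(nn.Module):
    """Processes the image at several scales and classifies the fused features."""
    def __init__(self, num_classes, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList([ScaleBranch() for _ in scales])
        self.classifier = nn.Linear(128 * len(scales), num_classes)

    def forward(self, x):
        feats = []
        for scale, branch in zip(self.scales, self.branches):
            xs = x if scale == 1.0 else F.interpolate(
                x, scale_factor=scale, mode="bilinear", align_corners=False)
            feats.append(branch(xs))
        return self.classifier(torch.cat(feats, dim=1))

model = MultiScaleCNN(num_classes=10)
logits = model(torch.randn(4, 3, 224, 224))     # -> shape (4, 10)
```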

    FaceShop: Deep Sketch-based Face Image Editing

    We present a novel system for sketch-based face image editing, enabling users to edit images intuitively by sketching a few strokes on a region of interest. Our interface features tools to express a desired image manipulation by providing both geometry and color constraints as user-drawn strokes. As an alternative to direct user input, the proposed system naturally supports a copy-paste mode, which allows users to edit a given image region by using parts of another exemplar image without the need for hand-drawn sketching at all. The proposed interface runs in real time and facilitates an interactive and iterative workflow to quickly express the intended edits. Our system is based on a novel sketch domain and a convolutional neural network trained end-to-end to automatically learn to render image regions corresponding to the input strokes. To achieve high-quality and semantically consistent results, we train our neural network on two simultaneous tasks, namely image completion and image translation. To the best of our knowledge, we are the first to combine these two tasks in a unified framework for interactive image editing. Our results show that the proposed sketch domain, network architecture, and training procedure generalize well to real user input and enable high-quality synthesis results without additional post-processing. Comment: 13 pages, 20 figures
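    To make the two-task training concrete, the sketch below shows one assumed way a single conditional generator could be optimized simultaneously for image completion (filling a hole from image context alone) and image translation (rendering the hole from sketch and color strokes). The network, channel layout, and loss weighting are illustrative assumptions, not the FaceShop architecture or training recipe.

```python
# Hedged sketch of joint completion/translation training (PyTorch).
# Everything here is an assumed toy setup, not the paper's network or losses.
import torch
import torch.nn as nn

class SketchConditionedGenerator(nn.Module):
    """Fills a masked region conditioned on a sketch map and color strokes."""
    def __init__(self):
        super().__init__()
        # assumed input: RGB with hole (3) + mask (1) + sketch (1) + color strokes (3) = 8 channels
        self.net = nn.Sequential(
            nn.Conv2d(8, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),   # images assumed in [-1, 1]
        )

    def forward(self, image, mask, sketch, color):
        x = torch.cat([image * (1 - mask), mask, sketch, color], dim=1)
        return self.net(x)

def training_step(gen, image, mask, sketch, color):
    """Combine the two tasks in one objective (equal weights assumed)."""
    # Task 1, completion: reconstruct the hole from surrounding context only.
    completed = gen(image, mask, torch.zeros_like(sketch), torch.zeros_like(color))
    loss_completion = ((completed - image) * mask).abs().mean()
    # Task 2, translation: render the hole from the user's strokes.
    translated = gen(image, mask, sketch, color)
    loss_translation = ((translated - image) * mask).abs().mean()
    return loss_completion + loss_translation
```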

    Comparison of continuous and discontinuous collisional bumpers: Dimensionally scaled impact experiments into single wire meshes

    An experimental inquiry into the utility of discontinuous bumpers was conducted to investigate the collisional outcomes of impacts into single grid-like targets and to compare the results with more traditional bumper designs that employ continuous sheet stock. We performed some 35 experiments using 6.3 and 3.2 mm diameter spherical soda-lime glass projectiles at low velocities (less than 2.5 km/s), and 13 experiments at velocities between 5 and 6 km/s using 3.2 mm spheres only. The thrust of the experiments was the characterization of collisional fragments as a function of target thickness or areal shield mass for both bumper designs. The primary product of these experiments was witness plates that record the resulting population of collisional fragments. Substantial interpretive and predictive insights into bumper performance were obtained. All qualitative observations (on the witness plates) and detailed measurements of displaced masses appear to be simply and consistently related only to the bumper mass available for interaction with the impactor. This makes the grid bumper the superior shield design. These findings present evidence that discontinuous bumpers are a viable concept for collisional shields, possibly superior to continuous geometries.
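    Since the reported outcomes scale with the bumper mass available for interaction, a simple way to compare the two designs is by areal shield mass. The sketch below computes mass per unit area for a continuous sheet and for a single square wire mesh; the formulas are generic geometric approximations and the material and dimension values are example assumptions, not figures taken from the paper.

```python
# Hedged illustration of areal shield mass: continuous sheet vs. square wire mesh.
# Formulas and example numbers are generic assumptions, not values from the paper.
import math

def sheet_areal_mass(density, thickness):
    """kg/m^2 of a continuous sheet: rho * t."""
    return density * thickness

def mesh_areal_mass(density, wire_diameter, pitch):
    """kg/m^2 of a square mesh: two orthogonal wire sets, 1/pitch wires per
    metre each; crimp at the weave crossings is ignored."""
    wire_cross_section = math.pi * wire_diameter**2 / 4.0
    return density * 2.0 * wire_cross_section / pitch

rho_al = 2700.0                                  # aluminium, kg/m^3 (assumed material)
print(sheet_areal_mass(rho_al, 0.5e-3))          # 0.5 mm sheet            -> ~1.35 kg/m^2
print(mesh_areal_mass(rho_al, 0.5e-3, 1.5e-3))   # 0.5 mm wire, 1.5 mm pitch -> ~0.71 kg/m^2
```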