5,994 research outputs found

    Unsupervised learning of object landmarks by factorized spatial embeddings

    Automatically learning the structure of object categories remains an important open problem in computer vision. In this paper, we propose a novel unsupervised approach that can discover and learn landmarks in object categories, thus characterizing their structure. Our approach is based on factorizing image deformations, as induced by a viewpoint change or an object deformation, by learning a deep neural network that detects landmarks consistently with such visual effects. Furthermore, we show that the learned landmarks establish meaningful correspondences between different object instances in a category without having to impose this requirement explicitly. We assess the method qualitatively on a variety of object types, natural and man-made. We also show that our unsupervised landmarks are highly predictive of manually-annotated landmarks in face benchmark datasets, and can be used to regress these with a high degree of accuracy. Comment: To be published in ICCV 2017.
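    The constraint described in this abstract can be read as an equivariance requirement: a landmark detector should respond to a deformed image with correspondingly deformed landmarks. Below is a minimal sketch of such a loss in Python/PyTorch; all names are illustrative, not the authors' code, and it assumes a detector returning K landmark coordinates in [-1, 1] plus a synthetic warp applied both to pixels and to coordinates.

```python
import torch
import torch.nn.functional as F

def equivariance_loss(detector, image, warp_image, warp_points):
    """Penalize landmarks that do not follow a known image deformation.

    detector(image)      -> (B, K, 2) landmark coordinates in [-1, 1]
    warp_image(image)    -> the deformed image g(x)
    warp_points(points)  -> the same deformation g applied to coordinates
    All names are illustrative; this is not the paper's implementation.
    """
    points_on_original = detector(image)             # landmarks on x
    points_on_warped = detector(warp_image(image))   # landmarks on g(x)
    # Consistency with the deformation: detector(g(x)) ~= g(detector(x)).
    return F.mse_loss(points_on_warped, warp_points(points_on_original))

# Example deformation: a horizontal flip, applied to the pixels and, by
# negating the x-coordinate, to landmark coordinates in [-1, 1].
flip_image = lambda img: torch.flip(img, dims=[-1])
flip_points = lambda pts: pts * torch.tensor([-1.0, 1.0])
```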

    Transducer applications, a compilation

    The characteristics and applications of transducers are discussed. Subjects presented are: (1) thermal measurements, (2) liquid level and fluid flow measurements, (3) pressure transducers, (4) stress-strain measurements, (5) acceleration and velocity measurements, (6) displacement and angular rotation, and (7) transducer test and calibration methods.

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.
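    As one concrete reading of the "simple to start with" claim, a common entry point is fine-tuning an ImageNet-pretrained CNN on a remote sensing scene-classification dataset. A minimal sketch using PyTorch/torchvision follows; the class count and the choice of backbone are placeholders, not taken from the article.

```python
import torch
from torchvision import models

# Placeholder: number of scene classes in the target remote sensing dataset.
num_classes = 10

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

# Freeze the backbone and train only the new head first; the backbone can be
# unfrozen later for full fine-tuning if the target dataset is large enough.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()
```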

    Measurement technology: A compilation

    Technical information is presented on measurement techniques and instruments, measurement applications for inspection activities, measurement sensors, and data conversion methods. Photographs or diagrams are included for each instrument or method described, and, where applicable, patent information is given.

    Preliminary evaluation of adhesion strength measurement devices for ceramic/titanium matrix composite bonds

    The adhesive bond between ceramic cement and a titanium matrix composite substrate to be used in the National Aerospace Plane program is evaluated. Two commercially available adhesion testers, the Sebastian Adherence Tester and the CSEM REVETEST Scratch Tester, are evaluated to determine their suitability for quantitatively measuring adhesion strength. Various thicknesses of cements are applied to several substrates, and bond strengths are determined with both testers. The Sebastian Adherence Tester has provided limited data due to interference from the sample mounting procedure, and has been shown to be incapable of distinguishing adhesion strength from the tensile and shear properties of the cement itself. The data from the scratch tester have been found to be difficult to interpret due to the porosity and hardness of the cement. Recommendations are proposed for a more reliable adhesion test method.

    Self-Supervised Relative Depth Learning for Urban Scene Understanding

    As an agent moves through the world, the apparent motion of scene elements is (usually) inversely proportional to their depth. It is natural for a learning agent to associate image patterns with the magnitude of their displacement over time: as the agent moves, faraway mountains don't move much; nearby trees move a lot. This natural relationship between the appearance of objects and their motion is a rich source of information about the world. In this work, we start by training a deep network, using fully automatic supervision, to predict relative scene depth from single images. The relative depth training images are automatically derived from simple videos of cars moving through a scene, using recent motion segmentation techniques, and no human-provided labels. This proxy task of predicting relative depth from a single image induces features in the network that result in large improvements in a set of downstream tasks, including semantic segmentation, joint road segmentation and car detection, and monocular (absolute) depth estimation, over a network trained from scratch. The improvement on the semantic segmentation task is greater than that produced by any other automatically supervised method. Moreover, for monocular depth estimation, our unsupervised pre-training method even outperforms supervised pre-training with ImageNet. In addition, we demonstrate benefits from learning to predict (unsupervised) relative depth in the specific videos associated with various downstream tasks. We adapt to the specific scenes in those tasks in an unsupervised manner to improve performance. In summary, for semantic segmentation, we present state-of-the-art results among methods that do not use supervised pre-training, and we even exceed the performance of supervised ImageNet pre-trained models for monocular depth estimation, achieving results that are comparable with state-of-the-art methods.
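    The proxy-supervision idea in this abstract, treating apparent motion as an inverse-depth signal, can be illustrated with a few lines of Python/PyTorch. This is a simplified reading only: it omits the motion segmentation and the exact training objective the paper relies on, and all names are placeholders rather than the authors' API.

```python
import torch

def relative_depth_target(flow):
    """Turn optical flow (B, 2, H, W) into a relative inverse-depth target.

    For a translating camera, nearby pixels show larger apparent motion than
    distant ones, so flow magnitude acts as a scale-ambiguous inverse-depth
    signal. The paper additionally masks out independently moving objects
    with motion segmentation, which is omitted here.
    """
    magnitude = flow.norm(dim=1, keepdim=True)  # (B, 1, H, W)
    # Normalize per image so only the relative ordering of depths matters.
    return magnitude / (magnitude.amax(dim=(2, 3), keepdim=True) + 1e-6)

def scale_invariant_loss(pred, target):
    """One common choice for depth regression that ignores global scale."""
    d = torch.log(pred.clamp(min=1e-6)) - torch.log(target.clamp(min=1e-6))
    return (d ** 2).mean() - 0.5 * d.mean() ** 2
```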

    Feature Based Control of Compact Disc Players
