
    A novel fusion approach in the extraction of kernel descriptor with improved effectiveness and efficiency

    Representing images with feature descriptors is a crucial step in image analysis, and a number of histogram-based descriptors are widely used for this purpose. However, histogram-based descriptors have certain limitations, and kernel descriptors (KDES) have been shown to overcome them. Moreover, a combination of several KDES performs better than any individual KDES. Conventionally, KDES fusion is performed by concatenating the gradient, colour and shape descriptors after they have been extracted, an approach that is limited in both efficiency and effectiveness. In this paper, we propose a novel approach that fuses the different image features before descriptor extraction, resulting in a compact descriptor that is both efficient and effective. In addition, we investigate the effect on the proposed descriptor of fusing texture-based features alongside the conventionally used ones. The proposed descriptor is evaluated on two publicly available image databases and shown to provide outstanding performance.
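    As a rough illustration of the difference between the two fusion strategies, the sketch below contrasts late fusion (concatenating descriptors extracted separately per feature type) with early fusion (combining per-pixel feature maps first and extracting a single, more compact descriptor). The per-pixel features and the histogram pooling are simple stand-ins, not the paper's kernel-descriptor machinery.

    # Illustrative contrast between late fusion (concatenate per-feature descriptors)
    # and early fusion (combine per-pixel features, then extract one descriptor).
    # All feature maps and the pooling step are placeholders for this sketch.
    import numpy as np

    def per_pixel_features(image):
        # Hypothetical per-pixel feature maps, each of shape (H, W).
        gray = image.mean(axis=2)
        gy, gx = np.gradient(gray)
        gradient = np.hypot(gx, gy)                         # crude gradient cue
        colour = image.std(axis=2)                          # crude colour-variation cue
        texture = np.abs(gray - np.roll(gray, 1, axis=0))   # crude texture cue
        return gradient, colour, texture

    def pool(feature_map, bins=16):
        # Stand-in for descriptor extraction: a normalised histogram per map.
        hist, _ = np.histogram(feature_map, bins=bins, range=(0.0, feature_map.max() + 1e-8))
        return hist / (hist.sum() + 1e-8)

    def late_fusion(image):
        # Conventional route: one descriptor per feature type, then concatenation.
        return np.concatenate([pool(f) for f in per_pixel_features(image)])

    def early_fusion(image):
        # Sketched alternative: fuse per-pixel features first, extract one descriptor.
        fused = np.stack(per_pixel_features(image), axis=-1).mean(axis=-1)
        return pool(fused)

    image = np.random.rand(64, 64, 3)
    print(late_fusion(image).shape, early_fusion(image).shape)  # (48,) vs (16,)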

    Describing Textures in the Wild

    Patterns and textures are defining characteristics of many natural objects: a shirt can be striped, the wings of a butterfly can be veined, and the skin of an animal can be scaly. Aiming to support this analytical dimension in image understanding, we address the challenging problem of describing textures with semantic attributes. We identify a rich vocabulary of forty-seven texture terms and use them to describe a large dataset of patterns collected in the wild. The resulting Describable Textures Dataset (DTD) is the basis for seeking the best texture representation for recognizing describable texture attributes in images. We port the Improved Fisher Vector (IFV) from object recognition to texture recognition and show that, surprisingly, it outperforms specialized texture descriptors not only on our problem but also on established material recognition datasets. We also show that the describable attributes are excellent texture descriptors, transferring between datasets and tasks; in particular, combined with IFV, they significantly outperform the state of the art by more than 8 percent on both the FMD and KTH-TIPS2-b benchmarks. We also demonstrate that they produce intuitive descriptions of materials and Internet images.
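    The Improved Fisher Vector encoding mentioned above can be sketched as follows, assuming local descriptors have already been extracted (random arrays stand in for dense SIFT here) and a diagonal-covariance Gaussian mixture has been fitted to a training pool; the "improved" part is the signed square root followed by L2 normalisation at the end. This is a generic IFV sketch, not the authors' exact pipeline.

    # Minimal Improved Fisher Vector (IFV) encoding over local descriptors.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_vector(descriptors, gmm):
        # descriptors: (N, D) local features; gmm: fitted diagonal-covariance GMM.
        N, D = descriptors.shape
        gamma = gmm.predict_proba(descriptors)                       # (N, K) soft assignments
        w, mu, var = gmm.weights_, gmm.means_, gmm.covariances_
        sigma = np.sqrt(var)
        diff = (descriptors[:, None, :] - mu[None]) / sigma[None]    # (N, K, D)
        g_mu = (gamma[..., None] * diff).sum(0) / (N * np.sqrt(w)[:, None])
        g_sigma = (gamma[..., None] * (diff**2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
        fv = np.concatenate([g_mu.ravel(), g_sigma.ravel()])
        fv = np.sign(fv) * np.sqrt(np.abs(fv))                       # power normalisation
        return fv / (np.linalg.norm(fv) + 1e-12)                     # L2 normalisation

    # Stand-in local descriptors (dense SIFT or similar would go here in practice).
    rng = np.random.default_rng(0)
    train = rng.normal(size=(5000, 64))
    gmm = GaussianMixture(n_components=16, covariance_type="diag", random_state=0).fit(train)
    print(fisher_vector(rng.normal(size=(800, 64)), gmm).shape)      # (2 * 16 * 64,)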

    Coastal fog detection using visual sensing

    Use of visual sensing techniques to detect low-visibility conditions may have a number of advantages when combined with other methods, such as satellite-based remote sensing, because data can be collected and processed in real or near-real time. Camera-enabled visual sensing can provide direct confirmation of modelling and forecasting results. Fog detection, modelling and prediction are a priority for maritime communities and coastal cities because of the economic impacts of fog on aviation, marine and land transportation. The Canadian and Irish coasts are particularly vulnerable to dense fog under certain environmental conditions. Offshore oil and gas production on Grand Bank (off the Canadian East Coast) can be adversely affected by weather and sea-state conditions; in particular, fog can disrupt the transfer of equipment and people to and from the production platforms by helicopter, and the resulting delays are costly. According to offshore oil and gas industry representatives at a recent workshop on metocean monitoring and forecasting for the NL offshore, there is a real need for improved forecasting of visibility (fog) out to 3 days. The ability to accurately forecast fog conditions would improve the industry's ability to adjust its schedule of operations accordingly. Workshop participants also recognized that the physics of Grand Banks fog formation is not well understood, and that more and better data are needed.
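    Purely as an illustration of how a camera frame might flag low visibility, the toy indicator below exploits the drop in local image contrast that fog typically causes; the threshold and the synthetic frames are arbitrary placeholders, and this is not the detection approach used in the work described above.

    # Toy visibility proxy: fog suppresses local contrast, so very low gradient
    # energy in a frame can flag possible low-visibility conditions.
    import numpy as np

    def contrast_score(frame):
        # frame: (H, W, 3) RGB array in [0, 1]; returns mean gradient magnitude.
        gray = frame.mean(axis=2)
        gy, gx = np.gradient(gray)
        return float(np.hypot(gx, gy).mean())

    def looks_foggy(frame, threshold=0.02):
        # threshold is an arbitrary placeholder; a real system would calibrate it.
        return contrast_score(frame) < threshold

    clear = np.random.rand(480, 640, 3)                               # high-contrast stand-in scene
    foggy = np.full((480, 640, 3), 0.8) + 0.005 * np.random.rand(480, 640, 3)
    print(looks_foggy(clear), looks_foggy(foggy))                     # False, True for these toy inputs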

    Deep filter banks for texture recognition, description, and segmentation

    Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable-texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, and develop corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture representations, including bag-of-visual-words and Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. In this manner we obtain state-of-the-art performance on numerous datasets well beyond textures, an efficient method for applying deep features to image regions, and benefits when transferring features from one domain to another.
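    The "convolutional layers as filter banks" idea can be sketched as follows: local activations from the convolutional part of a network are treated as a bank of filter responses and pooled without regard to their spatial layout. Plain averaging stands in here for the Fisher-vector pooling used in the paper, and the untrained VGG-16 is only a placeholder for a properly pretrained model.

    # Convolutional layers as a filter bank, with orderless pooling of local responses.
    import torch
    import torchvision

    # In practice pretrained ImageNet weights would be loaded; weights=None keeps
    # this sketch self-contained (no download).
    cnn = torchvision.models.vgg16(weights=None).features.eval()

    def orderless_descriptor(image):
        # image: (3, H, W) tensor; returns a per-image descriptor that discards
        # the spatial layout of the convolutional responses.
        with torch.no_grad():
            fmap = cnn(image.unsqueeze(0))          # (1, C, h, w) local responses
        local = fmap.flatten(2).squeeze(0).t()      # (h * w, C) filter-bank outputs
        return local.mean(dim=0)                    # orderless pooling (Fisher vectors in the paper)

    print(orderless_descriptor(torch.rand(3, 224, 224)).shape)        # torch.Size([512])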

    Automatic quantitative morphological analysis of interacting galaxies

    The large number of galaxies imaged by digital sky surveys reinforces the need for computational methods for analyzing galaxy morphology. While the morphology of most galaxies can be associated with a stage on the Hubble sequence, the morphology of galaxy mergers is far more complex due to the combination of two or more galaxies with different morphologies and the interaction between them. Here we propose a computational method, based on unsupervised machine learning, that can quantitatively analyze the morphologies of galaxy mergers and associate galaxies by their morphology. The method works by first generating multiple synthetic galaxy models for each galaxy merger and then extracting a large set of numerical image content descriptors for each galaxy model. These descriptors are weighted using Fisher discriminant scores, and the similarities between the galaxy mergers are then deduced using a variation of Weighted Nearest Neighbor analysis in which the Fisher scores serve as weights. The similarities are visualized as phylogenies, providing a graph that reflects the morphological similarities between the different galaxy mergers and thus quantitatively profiles the morphology of galaxy mergers. Accepted to Astronomy & Computing.
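    The weighting scheme described above can be sketched as follows, with a random matrix standing in for the extracted image content descriptors: each feature receives a Fisher discriminant score (between-class variance over within-class variance), and pairwise distances are computed with those scores as feature weights. The score formula and the weighted distance are generic choices, not necessarily the exact variant used by the authors.

    # Fisher-score feature weighting plus weighted pairwise distances.
    import numpy as np

    def fisher_scores(X, y):
        # X: (n_samples, n_features); y: class labels. Higher score = more discriminative feature.
        classes = np.unique(y)
        overall = X.mean(axis=0)
        between = sum((y == c).sum() * (X[y == c].mean(0) - overall) ** 2 for c in classes)
        within = sum(((X[y == c] - X[y == c].mean(0)) ** 2).sum(0) for c in classes)
        return between / (within + 1e-12)

    def weighted_distances(X, weights):
        # Pairwise weighted Euclidean distances; the weights emphasise informative features.
        diff = X[:, None, :] - X[None, :, :]
        return np.sqrt((weights * diff ** 2).sum(axis=-1))

    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 10))                  # stand-in image content descriptors
    y = np.repeat([0, 1], 20)                      # stand-in morphology groups
    X[y == 1, 0] += 3.0                            # make feature 0 discriminative
    D = weighted_distances(X, fisher_scores(X, y))
    print(D.shape)                                 # (40, 40); feature 0 gets the largest Fisher score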