
    On Using Physical Analogies for Feature and Shape Extraction in Computer Vision

    There is a rich literature of approaches to image feature extraction in computer vision. Many sophisticated approaches exist for low- and for high-level feature extraction, but they can be complex to implement, with parameter choice guided by experimentation and with performance analysis and optimisation impeded by speed of computation. We have developed new feature extraction techniques based on notional use of physical paradigms, with parameterisation intended to be more familiar to a scientifically trained user, aiming to make best use of computational resources. This paper is the first unified description of these new approaches, outlining the basis of each and the results that can be achieved. We describe how gravitational force can be used for low-level analysis, while analogies of water flow and heat can be deployed to achieve high-level smooth shape detection, determining features and shapes in a selection of images and comparing the results with those of stock approaches from the literature. We also aim to show that the implementations are consistent with the original motivations for these techniques, and so contend that the exploration of physical paradigms offers a promising avenue for new approaches to feature extraction in computer vision.
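
    As a rough illustration of the low-level gravitational analogy, the sketch below treats every pixel as a point mass equal to its intensity and accumulates the inverse-square pull its neighbours exert on it; the net force magnitude then peaks near intensity edges. This is a minimal sketch of the general idea only, not the authors' operator: the function name gravity_force_field, the kernel radius, and the boundary handling are all illustrative assumptions.

    import numpy as np
    from scipy.ndimage import correlate

    def gravity_force_field(image, radius=3):
        # Offsets of every neighbour within the window.
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        r = np.sqrt(xs**2 + ys**2)
        r[radius, radius] = 1.0          # avoid division by zero at the centre
        # Force from a unit mass at offset (dx, dy): magnitude 1/r^2 along the
        # unit vector (dx/r, dy/r), so the component kernels are dx/r^3, dy/r^3.
        kx = xs / r**3
        ky = ys / r**3
        kx[radius, radius] = ky[radius, radius] = 0.0   # a pixel exerts no self-force
        img = image.astype(float)
        fx = correlate(img, kx, mode="reflect")         # net pull, x component
        fy = correlate(img, ky, mode="reflect")         # net pull, y component
        return np.hypot(fx, fy)   # large where pulls fail to cancel, i.e. near edges

    In uniform regions the pulls cancel and the magnitude is near zero; across an intensity edge the net pull points towards the brighter side, giving an edge-strength map that can be thresholded much like a conventional gradient operator.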

    Background derivation and image flattening: getimages

    Modern high-resolution images obtained with space observatories display extremely strong intensity variations on all spatial scales. Source extraction in such images with methods based on global thresholding may bring unacceptably large numbers of spurious sources in bright areas while failing to detect sources in low-background or low-noise areas. It would be highly beneficial to subtract background and equalize the levels of small-scale fluctuations in the images before extracting sources or filaments. This paper describes getimages, a new method of background derivation and image flattening. It is based on median filtering with sliding windows that correspond to a range of spatial scales, from the observational beam size up to a maximum structure width $X_{\lambda}$. The latter is a single free parameter of getimages that can be evaluated manually from the observed image $\mathcal{I}_{\lambda}$. The median filtering algorithm provides a background image $\tilde{\mathcal{B}}_{\lambda}$ for structures of all widths below $X_{\lambda}$. The same median filtering procedure applied to an image of standard deviations $\mathcal{D}_{\lambda}$, derived from the background-subtracted image $\tilde{\mathcal{S}}_{\lambda}$, results in a flattening image $\tilde{\mathcal{F}}_{\lambda}$. Finally, a flattened detection image $\mathcal{I}_{\lambda\mathrm{D}} = \tilde{\mathcal{S}}_{\lambda} / \tilde{\mathcal{F}}_{\lambda}$ is computed, whose standard deviations are uniform outside sources and filaments. Detecting sources in such greatly simplified images results in much cleaner extractions that are more complete and reliable. As a bonus, getimages reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images.
    Comment: 14 pages, 11 figures (main text + 3 appendices), accepted by Astronomy & Astrophysics
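
    A rough sketch of that pipeline is given below, under stated assumptions: repeated median filtering with window sizes growing from the beam size to $X_{\lambda}$ stands in for the paper's full multi-scale procedure, and the local standard deviation is a simple sliding-window estimate at the beam scale. The function names, the scale schedule, and the number of scales are illustrative, not those of getimages itself.

    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    def local_std(image, size):
        # Sliding-window standard deviation via E[x^2] - E[x]^2.
        m = uniform_filter(image, size)
        m2 = uniform_filter(image * image, size)
        return np.sqrt(np.maximum(m2 - m * m, 0.0))

    def flatten_image(image, beam_px, max_width_px, nscales=5):
        # Window sizes from the observational beam up to X_lambda (in pixels).
        sizes = np.unique(np.geomspace(beam_px, max_width_px, nscales).astype(int))
        # Background B: successive median filters remove structures narrower
        # than each window, leaving the large-scale background.
        background = image.astype(float)
        for s in sizes:
            background = median_filter(background, size=2 * s + 1)
        subtracted = image - background                     # S = I - B
        # D: standard deviations of the background-subtracted image
        # (beam-sized window is an assumption of this sketch).
        dev = local_std(subtracted, 2 * int(beam_px) + 1)
        # F: the same median filtering applied to D gives the flattening image.
        flattening = dev
        for s in sizes:
            flattening = median_filter(flattening, size=2 * s + 1)
        # I_D = S / F: fluctuations become uniform outside sources and filaments.
        detection = subtracted / np.maximum(flattening, 1e-10)
        return background, detection

    Dividing $\tilde{\mathcal{S}}_{\lambda}$ by $\tilde{\mathcal{F}}_{\lambda}$ equalizes the local fluctuation level, which is why a single global threshold on the detection image behaves sensibly in both bright and faint regions.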

    STV-based Video Feature Processing for Action Recognition

    In comparison to still-image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made on image processing in the last decade, with successful applications in face matching and object recognition, video-based event detection remains one of the most difficult challenges in computer vision research, owing to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and the performance gain of the devised approach stem from a coefficient factor-boosted 3D region intersection and matching mechanism developed in this research. This paper also reports an investigation into techniques for efficient STV data filtering to reduce the number of voxels (volumetric pixels) that must be processed in each operational cycle of the implemented system. The encouraging features and the improvements in operational performance registered in the experiments are discussed at the end.
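
    A minimal sketch of the two core ingredients, stacking frames into an STV and scoring a region intersection, might look as follows. The Jaccard-style overlap used here is only a stand-in for the paper's coefficient factor-boosted intersection mechanism, and build_stv, the binarisation threshold, and the stride-based voxel reduction are illustrative assumptions.

    import numpy as np

    def build_stv(frames, threshold=0.5, stride=2):
        # Stack binarised foreground masks into a 3D (y, x, t) boolean volume;
        # the stride crudely subsamples space, standing in for the voxel
        # filtering used to cut the data processed per operational cycle.
        vol = np.stack([frame > threshold for frame in frames], axis=-1)
        return vol[::stride, ::stride, :]

    def region_intersection_score(stv_a, stv_b):
        # Overlap of two equally shaped volumes: |A AND B| / |A OR B|.
        inter = np.logical_and(stv_a, stv_b).sum()
        union = np.logical_or(stv_a, stv_b).sum()
        return inter / union if union else 0.0

    A query action would then be labelled by the best-matching template volume, e.g. max(templates, key=lambda name: region_intersection_score(query_stv, templates[name])), with the understanding that the published method weights the intersection by learned coefficient factors rather than scoring raw overlap.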