5 research outputs found

    A Multi-scale Bilateral Structure Tensor Based Corner Detector

    9th Asian Conference on Computer Vision, ACCV 2009, Xi'an, 23-27 September 2009. In this paper, a novel multi-scale nonlinear structure tensor based corner detection algorithm is proposed to effectively improve the classical Harris corner detector. By considering both the spatial and gradient distances of neighboring pixels, a nonlinear bilateral structure tensor is constructed to examine the local image pattern. The linear structure tensor used in the original Harris corner detector is a special case of the proposed bilateral one in which only the spatial distance is considered. Moreover, a multi-scale filtering scheme is developed to distinguish trivial structures from true corners based on their different characteristics across scales. A comparison between the proposed approach and four representative, state-of-the-art corner detectors shows that our method performs much better in terms of both detection rate and localization accuracy. Department of Computing. Refereed conference paper.
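For reference, the linear structure tensor that this paper generalizes can be sketched as follows. This is a minimal NumPy/SciPy sketch of the standard Harris response only, not the paper's bilateral or multi-scale variant; the function name and parameter defaults are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, k=0.04):
    """Classical (linear) Harris corner response.

    The bilateral variant in the paper would replace the Gaussian
    smoothing below with weights that also depend on the gradient
    difference between neighboring pixels; this sketch keeps the
    standard spatial-only (linear) structure tensor.
    """
    Iy, Ix = np.gradient(img.astype(float))
    # Linear structure tensor: Gaussian-weighted products of gradients.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    # Corner response: large positive where both eigenvalues are large.
    return det - k * trace ** 2
```

A positive response marks corner-like regions, a negative response marks edges, and near-zero values mark flat regions, which is the behavior the bilateral tensor is designed to sharpen.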

    Harris Corners in the Real World: A Principled Selection Criterion for Interest Points Based on Ecological Statistics

    In this report, we consider whether statistical regularities in natural images might be exploited to provide an improved selection criterion for interest points. One approach that has been particularly influential in this domain is the Harris corner detector. The selection criterion for Harris corners, proposed in early work and still in use today, is based on an intuitive mathematical definition constrained by the need for computational parsimony. In this report, we revisit this selection criterion free of the computational constraints that existed 20 years ago and, importantly, take advantage of the regularities observed in natural image statistics. Motivated by stability and richness of structure, a selection threshold for Harris corners is proposed that is optimal with respect to the structure observed in natural images. Following the protocol proposed by Mikolajczyk et al. \cite{miko2005}, we demonstrate that the proposed approach produces interest points that are more stable across various image deformations and more distinctive, resulting in improved matching scores. Finally, the proposal may be shown to generalize to an improved selection criterion for other types of interest points. As a whole, the report affords an improved selection criterion for Harris corners that might foreseeably benefit any system employing Harris corners as a constituent component, and additionally presents a general strategy for the selection of interest points based on any measure of local image structure.
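As a purely illustrative contrast with the classical fixed-fraction-of-maximum criterion, a data-driven cutoff might be sketched as below. The quantile value here is a hypothetical stand-in for a threshold derived from natural-image statistics, not the criterion actually proposed in the report:

```python
import numpy as np

def select_corners(response, quantile=0.999):
    # Classical practice thresholds at a fixed fraction of the maximum
    # response; here the cutoff is instead taken from the empirical
    # distribution of the response values (a hypothetical stand-in for
    # a statistically derived threshold).
    thresh = np.quantile(response, quantile)
    ys, xs = np.nonzero(response > thresh)
    return list(zip(ys.tolist(), xs.tolist()))
```

The design point is that the threshold adapts to the response distribution of the image at hand rather than to its single maximum, which is fragile under deformation.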

    Action Recognition Using Visual-Neuron Feature of Motion-Salience Region

    This paper proposes a shape-based neurobiological approach for action recognition. Our work is motivated by the successful quantitative model of the organization of the shape pathways in primate visual cortex. In our approach, the motion-salience region (MSR) is first extracted from the sequential silhouettes of an action. Then, the MSR is represented by simulating the static object representation in the ventral stream of primate visual cortex. Finally, a linear multi-class classifier is used to classify the action. Experiments on publicly available action datasets demonstrate that the proposed approach is robust to partial occlusion and deformation of actors, and has lower computational cost than neurobiological models that simulate the motion representation in the primate dorsal stream.
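The first step of the pipeline, extracting a motion-salience region from sequential silhouettes, is not specified in detail in the abstract. A naive sketch, under the assumption that the MSR can be approximated by the bounding box of accumulated silhouette change, might look like:

```python
import numpy as np

def motion_salience_region(silhouettes):
    """Hypothetical MSR sketch: accumulate frame-to-frame silhouette
    change and return the bounding box (ymin, ymax, xmin, xmax) of the
    changing region. The paper's actual extraction may differ."""
    diffs = np.abs(np.diff(silhouettes.astype(float), axis=0))
    energy = diffs.sum(axis=0)          # per-pixel motion energy
    ys, xs = np.nonzero(energy > 0)
    if ys.size == 0:
        return None                     # static sequence: no MSR
    return (ys.min(), ys.max(), xs.min(), xs.max())
```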

    Improved Harris’ algorithm for corner and edge detections

    No full text

    Design and Development of Robotic Part Assembly System under Vision Guidance

    Robots are widely used for part assembly across manufacturing industries to attain high productivity through automation. The automated mechanical part assembly system contributes a major share of the production process. An appropriate vision-guided robotic assembly system further minimizes the lead time and improves the quality of the end product through suitable object detection methods and robot control strategies. An approach is presented for the development of a robotic part assembly system with the aid of an industrial vision system. This approach is accomplished in three phases. The first phase of the research focuses on feature extraction and object detection techniques. A hybrid edge detection method is developed by combining a fuzzy inference rule with the wavelet transformation. The performance of this edge detector is quantitatively analysed and compared with widely used edge detectors such as Canny, Sobel, Prewitt, Roberts, Laplacian of Gaussian, and mathematical morphology based and wavelet transformation based methods. A comparative study is performed to choose a suitable corner detection method; the corner detection techniques considered are curvature scale space, Wang-Brady, and the Harris method. The successful implementation of a vision-guided robotic system depends on the system configuration, such as eye-in-hand or eye-to-hand. In these configurations, the captured images of the parts may be corrupted by geometric transformations such as scaling, rotation, and translation, or by blurring due to camera or robot motion. To address this issue, an image reconstruction method is proposed using orthogonal Zernike moment invariants. The suggested method uses a selection process for the moment order to reconstruct the affected image, which makes the object detection method efficient. In the second phase, the proposed system is developed by integrating the vision system and the robot system.
The proposed feature extraction and object detection methods are tested and found efficient for the purpose. In the third phase, robot navigation based on visual feedback is proposed. In the control scheme, general moment invariants, Legendre moments, and Zernike moment invariants are used. The selection of the best combination of visual features is performed by measuring the Hamming distance between all possible combinations of visual features; this yields the combination that makes the image-based visual servoing control most efficient. An indirect method is employed to determine the Legendre and Zernike moment invariants. These moments are used because they are robust to noise. The control laws, based on these three global image features, perform efficiently in navigating the robot in the desired environment.
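The global image moments used as visual-servoing features can be illustrated with plain geometric moments, as a simplified stand-in for the orthogonal Legendre and Zernike invariants actually used in the thesis; the function name and feature ordering are illustrative:

```python
import numpy as np

def moment_features(img, max_order=2):
    # Raw geometric moments m_pq = sum_y sum_x x^p * y^q * I(y, x),
    # collected for all p + q <= max_order. A simplified stand-in for
    # the orthogonal Legendre/Zernike moment invariants in the thesis,
    # which are preferred there for their robustness to noise.
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            feats.append(float((xs ** p * ys ** q * img).sum()))
    return np.array(feats)
```

A feature vector like this, computed on the current and desired camera views, is the kind of global descriptor an image-based visual servoing control law drives to zero error.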