
    Object class recognition using combination of colour dense SIFT and texture descriptors

    Object class recognition has recently become one of the most popular research fields, owing to its importance in many applications such as image classification, retrieval, indexing, and searching. The main aim of object class recognition is to determine how computers can automatically understand and identify which object or scene is displayed in an image. Despite considerable effort, it is still considered one of the most challenging tasks, mainly due to inter-class variations and intra-class variations such as occlusion, background clutter, viewpoint changes, pose, scale, and illumination. Feature extraction is one of the important steps in any object class recognition system. Different image features, such as appearance, texture, and shape descriptors, have been proposed in the literature to increase categorisation accuracy. In this paper, we propose to combine dense colour scale-invariant feature transform (dense colour SIFT) as an appearance descriptor with different texture descriptors. The colour completed local binary pattern (CCLBP) and the completed local ternary pattern (CLTP) are integrated with dense colour SIFT because of the importance of texture information in the image. Using different pattern sizes to extract the CLTP and CCLBP texture descriptors helps to capture dense texture information from the image. Bag of features is used in the proposed system with each descriptor, while a late fusion strategy is used in the classification stage. The proposed system achieved high recognition accuracy on several datasets, namely the SUN-397, OT4N, OT8, and Event sport datasets, reaching 38.9%, 95.9%, 89.02%, and 88.167%, respectively
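    The bag-of-features quantisation and late-fusion steps described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the codebook size, descriptor dimensionality, and uniform fusion weights are all illustrative assumptions.

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return an L1-normalised bag-of-features histogram."""
    # pairwise squared distances, shape (n_descriptors, n_words)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def late_fusion(score_lists, weights=None):
    """Combine per-channel classifier scores (e.g. one score vector each
    for dense colour SIFT, CLTP, and CCLBP) by weighted averaging."""
    scores = np.stack(score_lists)
    if weights is None:
        weights = np.ones(len(score_lists)) / len(score_lists)
    return (weights[:, None] * scores).sum(axis=0)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))   # 8 visual words, 16-D descriptors (toy sizes)
desc = rng.normal(size=(50, 16))      # 50 local descriptors from one image
h = bof_histogram(desc, codebook)
fused = late_fusion([np.array([0.2, 0.8]), np.array([0.6, 0.4])])
```

    In the actual system each descriptor channel would have its own codebook and classifier; only the final per-class scores are averaged at the fusion stage.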

    OBJECT RECOGNITION USING SIFT ON DM3730 PROCESSOR

    Stable local feature detection and representation is a fundamental component of many image registration and object recognition algorithms. This paper examines the local image descriptor used by SIFT. The SIFT algorithm (Scale Invariant Feature Transform) is a method for extracting distinctive invariant features from images. It has been applied successfully to a number of computer vision problems based on feature matching, including object recognition, pose estimation, image retrieval, and many more. Like SIFT, our descriptors encode the salient aspects of the image gradient in the feature point's neighbourhood. Optical object recognition and pose estimation are very challenging tasks in automotive settings, since they suffer from problems such as different views of the object, varying lighting conditions, surface glare, and noise introduced by image sensors. Currently available algorithms such as SIFT can to some degree solve these problems, because they compute so-called point features that are invariant to scaling and rotation. However, these algorithms are computationally complex and require powerful hardware in order to operate in real time. In automotive applications, and generally in the field of mobile devices, limited processing power and the demand for low battery consumption play a major role. Hence, adapting these sophisticated point feature algorithms to mobile hardware is an ambitious, but also necessary, computer engineering task. However, in real-world applications there is still a need to improve the algorithm's robustness with respect to the correct matching of SIFT features. In this work, we propose to use the original SIFT algorithm to provide more reliable feature matching for object recognition
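    The matching step that this abstract aims to make more reliable is usually Lowe's ratio test: a SIFT descriptor is matched only when its nearest neighbour is clearly closer than the second nearest. A minimal sketch with toy 2-D "descriptors" (the 0.8 ratio and the data are illustrative, not taken from the paper):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Lowe-style ratio test: keep a match only when the nearest
    neighbour in desc_b is clearly closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Only the first probe descriptor has an unambiguous nearest neighbour;
# the second is equidistant from two candidates and is rejected.
a = np.array([[0.1, 0.0], [5.0, 5.0]])
b = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 0.0]])
matches = ratio_test_matches(a, b)   # [(0, 0)]
```

    Real SIFT descriptors are 128-dimensional, and on embedded hardware such as the DM3730 the brute-force search above would typically be replaced by an approximate nearest-neighbour structure.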

    Analysis of SURF and SIFT representations to recognize food objects

    Social media services such as Facebook, Instagram, and Twitter have attracted millions of food photos uploaded every day since their inception. Automatic analysis of food images is beneficial from health, cultural, and marketing perspectives. Hence, recognizing food objects using image processing and machine learning techniques has become an emerging research topic. However, representing the key features of foods is difficult, owing to the immaturity of current feature representation techniques in handling the complex appearances, high deformation, and large variation of foods. Employing many kinds of feature types is also infeasible, as it requires substantial pre-processing and computational resources for segmentation, feature representation, and classification. Motivated by these drawbacks, we propose the integration of two kinds of local features, namely Speeded-Up Robust Features (SURF) and the Scale Invariant Feature Transform (SIFT), to represent the features of highly varied food objects. Local invariant features have been shown to be successful in describing object appearances for image classification tasks. Such features are robust to occlusion and clutter and are also invariant to scale and orientation changes. This makes them suitable for classification tasks with little inter-class similarity and large intra-class difference. The Bag of Features (BOF) approach is employed to enhance the discriminative ability of the local features. Experimental results demonstrate impressive overall recognition, with 82.38% classification accuracy from the local feature integration on the challenging UEC-Food100 dataset. We then provide an in-depth analysis of the SURF and SIFT implementations to highlight the problems in recognizing foods that need to be rectified in future research
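    One common way to integrate two local feature channels under a BOF pipeline is to build a separate visual-word histogram per channel and concatenate them into a single vector for the classifier. The sketch below assumes this concatenation scheme with an optional channel weight; the abstract does not specify the paper's exact integration, so treat this as one plausible reading.

```python
import numpy as np

def integrate_bof(hist_surf, hist_sift, w_surf=0.5):
    """Concatenate per-channel L1-normalised BOF histograms (SURF and
    SIFT) with a channel weight, then renormalise the combined vector."""
    v = np.concatenate([w_surf * hist_surf, (1.0 - w_surf) * hist_sift])
    return v / v.sum()

# Toy histograms: a 2-word SURF vocabulary and a 1-word SIFT vocabulary.
hist_surf = np.array([0.5, 0.5])
hist_sift = np.array([1.0])
v = integrate_bof(hist_surf, hist_sift)   # [0.25, 0.25, 0.5]
```

    In practice each vocabulary would hold hundreds or thousands of visual words, and the weight `w_surf` would be tuned on a validation split.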

    Place recognition: An Overview of Vision Perspective

    Place recognition is one of the most fundamental topics in the computer vision and robotics communities, where the task is to accurately and efficiently recognize the location of a given query image. Despite years of wisdom accumulated in this field, place recognition still remains an open problem due to the various ways in which the appearance of real-world places may differ. This paper presents an overview of the place recognition literature. Since condition-invariant and viewpoint-invariant features are essential for a long-term robust visual place recognition system, we start with the traditional image description methodology developed in the past, which exploits techniques from the image retrieval field. Recently, rapid advances in related fields such as object detection and image classification have inspired a new technique for improving visual place recognition systems, namely convolutional neural networks (CNNs). We then introduce recent progress in visual place recognition systems based on CNNs, which automatically learn better image representations for places. Eventually, we close with discussions and future work on place recognition. Comment: Applied Sciences (2018)
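    Whether the global image descriptor comes from a traditional retrieval pipeline or a CNN embedding, the recognition step itself usually reduces to nearest-neighbour search over the database of known places. A minimal sketch using cosine similarity (the descriptors here are toy 2-D vectors standing in for CNN features):

```python
import numpy as np

def recognise_place(query_desc, db_descs):
    """Return the index of the database image whose global descriptor has
    the highest cosine similarity to the query, plus that similarity."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q
    return int(sims.argmax()), float(sims.max())

db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # three known places
query = np.array([0.9, 0.1])
idx, sim = recognise_place(query, db)                # matches place 0
```

    Long-term systems add the condition- and viewpoint-invariance discussed above on top of this retrieval core, e.g. via learned embeddings or sequence matching.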

    Histogram of Oriented Principal Components for Cross-View Action Recognition

    Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images, which are viewpoint dependent. In contrast, we directly process pointclouds for cross-view action recognition from unknown and unseen views. We propose the Histogram of Oriented Principal Components (HOPC) descriptor, which is robust to noise, viewpoint, scale, and action speed variations. At a 3D point, HOPC is computed by projecting the three scaled eigenvectors of the pointcloud within its local spatio-temporal support volume onto the vertices of a regular dodecahedron. HOPC is also used for the detection of Spatio-Temporal Keypoints (STK) in 3D pointcloud sequences, so that view-invariant STK descriptors (or Local HOPC descriptors) at these key locations only are used for action recognition. We also propose a global descriptor computed from the normalized spatio-temporal distribution of STKs in 4-D, which we refer to as STK-D. We have evaluated the performance of our proposed descriptors against nine existing techniques on two cross-view and three single-view human action recognition datasets. Experimental results show that our techniques provide significant improvement over state-of-the-art methods
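    The core HOPC computation at a single point can be sketched as follows: eigen-decompose the covariance of the local point neighbourhood, scale each eigenvector by its eigenvalue, and project onto the 20 vertex directions of a regular dodecahedron. This is a simplified spatial-only sketch; the actual descriptor works on a spatio-temporal support volume, handles eigenvector sign ambiguity, and uses a particular binning scheme not reproduced here.

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def dodecahedron_vertices():
    """The 20 vertices of a regular dodecahedron, unit-normalised."""
    v = [(sx, sy, sz) for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    for s1 in (-1, 1):
        for s2 in (-1, 1):
            v.append((0, s1 / PHI, s2 * PHI))
            v.append((s1 / PHI, s2 * PHI, 0))
            v.append((s1 * PHI, 0, s2 / PHI))
    v = np.array(v, float)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def hopc(points):
    """Simplified HOPC at one 3D point: project the eigenvalue-scaled
    eigenvectors of the local covariance onto the dodecahedron vertex
    directions, keeping only positive projections, then normalise."""
    centred = points - points.mean(axis=0)
    cov = centred.T @ centred / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    verts = dodecahedron_vertices()
    hist = np.zeros(len(verts))
    for lam, vec in zip(eigvals, eigvecs.T):
        hist += np.clip(verts @ (lam * vec), 0, None)
    return hist / (hist.sum() + 1e-12)

rng = np.random.default_rng(1)
h = hopc(rng.normal(size=(100, 3)))   # one 20-bin descriptor
```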

    On the use of SIFT features for face authentication

    Several pattern recognition and classification techniques have been applied to the biometrics domain. Among them, an interesting technique is the Scale Invariant Feature Transform (SIFT), originally devised for object recognition. Even though SIFT features have emerged as very powerful image descriptors, their employment in the face analysis context has never been systematically investigated. This paper investigates the application of the SIFT approach in the context of face authentication. In order to determine the real potential and applicability of the method, different matching schemes are proposed and tested using the BANCA database and protocol, showing promising results
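    One simple matching scheme of the kind tested here is to count how many probe descriptors find a close match among the enrolled gallery descriptors and accept the claimed identity when the count clears a threshold. The distance threshold and minimum match count below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def authenticate(probe_desc, gallery_desc, dist_thresh=0.6, min_matches=3):
    """Accept the claimed identity when enough probe descriptors have a
    nearest gallery descriptor within dist_thresh."""
    n_matches = 0
    for d in probe_desc:
        if np.linalg.norm(gallery_desc - d, axis=1).min() < dist_thresh:
            n_matches += 1
    return n_matches >= min_matches

rng = np.random.default_rng(2)
gallery = rng.normal(size=(10, 8))                     # enrolled descriptors (toy 8-D)
genuine = gallery[:5] + 0.01 * rng.normal(size=(5, 8)) # near-duplicates -> accept
impostor = gallery[:5] + 10.0                          # far from gallery -> reject
```

    More elaborate schemes restrict matching to corresponding facial sub-regions or apply a ratio test before counting, trading some recall for robustness to impostor attempts.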