
    Feature fusion, feature selection and local n-ary patterns for object recognition and image classification

    University of Technology Sydney, Faculty of Engineering and Information Technology. Object recognition is one of the most fundamental topics in computer vision. Over the past years it has been of interest both to academics working in computer science and to professionals in the information technology (IT) industry. Its popularity is evident in the sophisticated theories it has motivated in science and its widespread applications in industry. Nowadays, with more powerful machine learning tools (both hardware and software) and huge amounts of readily available data, higher expectations are imposed on object recognition. At its early stage in the 1990s, the task could be as simple as differentiating objects of interest from non-objects in a single still image. Currently, the task may also include segmenting and labeling different image regions (i.e., assigning each segmented region a meaningful label based on the objects appearing in it) and then using computer programs to infer the scene of the overall image from those segmented regions. The original two-class classification problem has thus evolved into a more complex multi-class classification problem. This thesis contributes to object recognition in two respects: improvements using feature fusion and improvements using feature selection. Three examples illustrate three different feature fusion methods: descriptor concatenation (low-level fusion), confidence value escalation (mid-level fusion) and the coarse-to-fine framework (high-level fusion). Two examples demonstrate feature selection: optimal descriptor selection and improved classifier selection. Feature extraction plays a key role in object recognition because it is the first and also the most important step. Considered within the overall recognition process, machine learning tools serve to find distinctive features in the visual data. Given distinctive features, object recognition is readily achievable (e.g., a simple threshold function can classify feature descriptors). The proposed Local N-ary Pattern (LNP) texture features contribute to both feature extraction and texture classification. The distinctive LNP feature generalizes the texture feature extraction process and improves texture classification. Concretely, the local binary pattern (LBP) is the special case of LNP with n = 2 and the texture spectrum is the special case with n = 3. The proposed LNP representation has been shown to outperform the popular LBP and one of LBP's most successful extensions, the local ternary pattern (LTP), for texture classification.
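    To make the LNP-LBP relationship concrete, the following is a minimal Python sketch of the n = 2 special case (the classic LBP) over a grayscale image: each pixel is compared against its 8 neighbours, the comparison bits form an 8-bit code, and the histogram of codes serves as the texture descriptor. The thresholding rule (neighbour >= centre) and the skipped border are simplifying assumptions, not details taken from the thesis.

        import numpy as np

        def lbp_codes(img):
            # 8-neighbour local binary pattern codes for a 2-D grayscale
            # image, i.e. the n = 2 special case of LNP. Borders skipped.
            img = np.asarray(img, dtype=np.float64)
            h, w = img.shape
            center = img[1:-1, 1:-1]
            # Neighbour offsets, enumerated clockwise from the top-left.
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            codes = np.zeros_like(center, dtype=np.uint8)
            for bit, (dy, dx) in enumerate(offsets):
                neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                # Each neighbour contributes one bit: 1 if it is >= centre.
                codes |= (neighbour >= center).astype(np.uint8) << bit
            return codes

        def lbp_histogram(img):
            # The texture descriptor is the normalised 256-bin code histogram.
            hist, _ = np.histogram(lbp_codes(img), bins=256, range=(0, 256))
            return hist / hist.sum()

    Under an n-ary scheme, the single threshold comparison above would instead map each neighbour-centre difference to one of n levels, which is the generalization the thesis develops.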

    The application of remote sensing to identify and measure sealed soil and vegetated surfaces in urban environments

    Soil is an important non-renewable resource. Its protection and allocation are critical to sustainable development goals. Urban development is an important driver of soil loss due to sealing over by buildings, pavements and transport infrastructure. Monitoring sealed soil surfaces in urban environments is of growing interest not only for scientific research but also for local planning and national authorities. The aim of this research was to investigate the extent to which automated classification methods can detect soil sealing in UK urban environments by remote sensing. The objectives included the development of object-based classification methods using two types of earth observation data, and evaluation by comparison with manual aerial photo interpretation. Four sample areas within the city of Cambridge were used to develop an object-based classification model. The acquired data were true-colour aerial photography (0.125 m resolution) and QuickBird satellite imagery (2.8 m multi-spectral resolution). The classification scheme included the following land cover classes: sealed surfaces, vegetated surfaces, trees, bare soil and rail tracks. Shadowed areas were also identified as an initial class, and attempts were made to reclassify them into the actual land cover type. The accuracy of the thematic maps was determined by comparison with polygons derived from manual air-photo interpretation; the average overall accuracy was 84%. Creating simple binary maps of sealed vs. vegetated surfaces resulted in a statistically significant accuracy increase to 92%. Integrating ancillary data (OS MasterMap) into the object-based model did not improve its performance (overall accuracy of 91%). Using satellite data in the object-based model gave an overall accuracy of 80%, a 7% decrease compared to the aerial photography. Future investigation will explore whether integrating elevation data helps discriminate features such as trees from other vegetation types. The use of colour infrared aerial photography should also be tested. Finally, applying the object-based classification model to a different study area would test its transferability.
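    The accuracy figures quoted above are overall accuracies, i.e. the proportion of validation samples whose mapped class matches the reference class from manual air-photo interpretation. Below is a generic sketch of that computation; the class labels and sample vectors are hypothetical, since the study's exact sampling protocol is not described here.

        import numpy as np

        def overall_accuracy(reference, predicted, labels):
            # Confusion matrix: rows = reference class, columns = mapped class.
            index = {lab: i for i, lab in enumerate(labels)}
            cm = np.zeros((len(labels), len(labels)), dtype=int)
            for r, p in zip(reference, predicted):
                cm[index[r], index[p]] += 1
            # Overall accuracy = diagonal (agreement) over all samples.
            return np.trace(cm) / cm.sum(), cm

        # Hypothetical five-class scheme matching the abstract.
        labels = ["sealed", "vegetated", "trees", "bare_soil", "rail"]
        acc, cm = overall_accuracy(["sealed", "vegetated", "trees"],
                                   ["sealed", "vegetated", "vegetated"],
                                   labels)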

    Physical Representation-based Predicate Optimization for a Visual Analytics Database

    Querying the content of images, video, and other non-textual data sources requires expensive content extraction methods. Modern extraction techniques are based on deep convolutional neural networks (CNNs) and can classify objects within images with astounding accuracy. Unfortunately, these methods are slow: processing a single image can take about 10 milliseconds on modern GPU-based hardware. As massive video libraries become ubiquitous, running a content-based query over millions of video frames is prohibitive. One promising approach to reduce the runtime cost of queries over visual content is to use a hierarchical model, such as a cascade, where simple cases are handled by an inexpensive classifier. Prior work has sought to design cascades that optimize the computational cost of inference by, for example, using smaller CNNs. However, we observe that there are critical factors besides the inference time that dramatically impact the overall query time. Notably, by treating the physical representation of the input image as part of our query optimization---that is, by including image transforms, such as resolution scaling or color-depth reduction, within the cascade---we can optimize data handling costs and enable drastically more efficient classifier cascades. In this paper, we propose Tahoma, which generates and evaluates many potential classifier cascades that jointly optimize the CNN architecture and input data representation. Our experiments on a subset of ImageNet show that Tahoma's input transformations speed up cascades by up to 35 times. We also find up to a 98x speedup over the ResNet50 classifier with no loss in accuracy, and a 280x speedup if some accuracy is sacrificed.
    Comment: Camera-ready version of the paper submitted to ICDE 2019, in Proceedings of the 35th IEEE International Conference on Data Engineering (ICDE 2019).
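    The key idea, treating the input's physical representation as part of the optimization, can be illustrated with a two-stage sketch: a cheap classifier operates on a downscaled, colour-reduced copy of the image and answers confident cases itself, so only ambiguous inputs pay the data handling and inference cost of the full CNN. The models, thresholds, and transforms below are placeholders, not Tahoma's actual components.

        import numpy as np

        def downsample(img, factor=4):
            # Resolution scaling by block-averaging: one of the physical-
            # representation transforms the paper folds into the cascade.
            h = img.shape[0] // factor * factor
            w = img.shape[1] // factor * factor
            img = img[:h, :w]
            return img.reshape(h // factor, factor,
                               w // factor, factor, -1).mean(axis=(1, 3))

        def to_gray(img):
            # Colour-depth reduction: average the channels away.
            return img.mean(axis=-1, keepdims=True)

        def cascade_predict(image, cheap_prob, expensive_prob, hi=0.9, lo=0.1):
            # cheap_prob / expensive_prob are placeholder callables that
            # return P(target class present) for an image array.
            p = cheap_prob(to_gray(downsample(image)))
            if p >= hi:
                return True   # confident accept: expensive CNN never runs
            if p <= lo:
                return False  # confident reject: expensive CNN never runs
            return expensive_prob(image) >= 0.5  # fall back to full cost

    Tahoma's contribution, per the abstract, is to search over many such cascades, jointly choosing the classifier architectures and the input representation each stage consumes.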

    Boosted Random Ferns for object detection

    In this paper we introduce Boosted Random Ferns (BRFs) to rapidly build discriminative classifiers for learning and detecting object categories. At the core of our approach we use standard random ferns, but we introduce four main innovations that let us bring ferns from an instance to a category level while retaining efficiency. First, we define binary features in the histogram-of-oriented-gradients domain (as opposed to the intensity domain), allowing for a better representation of intra-class variability. Second, both the positions where ferns are evaluated within the sliding window and the locations of the binary features within each fern are not chosen completely at random; instead, we use a boosting strategy to pick the most discriminative combination of them. This is further enhanced by our third contribution: adapting the boosting strategy to enable sharing of binary features among different ferns, yielding high recognition rates at a low computational cost. Finally, we show that training can be performed online, for sequentially arriving images. Overall, the resulting classifier can be trained very efficiently, densely evaluated at all image locations in about 0.1 seconds, and provides detection rates similar to competing approaches that require expensive and significantly slower processing. We demonstrate the effectiveness of our approach through thorough experimentation on publicly available datasets, comparing against the state of the art on tasks of both 2D detection and 3D multi-view estimation.
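    For readers unfamiliar with ferns, the sketch below shows the evaluation path of a fern-based classifier, assuming a precomputed HOG-like feature map per sliding window: each fern performs a fixed set of binary comparisons between cells, the resulting bits index a weight table, and the window's score is the sum over ferns. In BRFs the test locations and weights are selected by the boosting procedure described above, which is omitted here; the random placement is only a stand-in.

        import numpy as np

        class Fern:
            # One fern: s binary comparisons between pairs of cells of a
            # feature map (HOG bins in BRFs rather than raw intensities).
            def __init__(self, s, shape, rng):
                self.pairs = [(tuple(int(rng.integers(0, d)) for d in shape),
                               tuple(int(rng.integers(0, d)) for d in shape))
                              for _ in range(s)]
                self.weights = np.zeros(2 ** s)  # learned, e.g. by boosting

            def index(self, fmap):
                # The s test outcomes form an s-bit index into the table.
                idx = 0
                for bit, (a, b) in enumerate(self.pairs):
                    if fmap[a] > fmap[b]:
                        idx |= 1 << bit
                return idx

        def window_score(ferns, fmap):
            # Detector response at one sliding-window position.
            return sum(f.weights[f.index(fmap)] for f in ferns)

        rng = np.random.default_rng(0)
        ferns = [Fern(s=8, shape=(8, 8, 9), rng=rng) for _ in range(50)]
        score = window_score(ferns, rng.random((8, 8, 9)))

    Because evaluating a fern is just a handful of comparisons and one table lookup, dense evaluation over all window positions stays cheap, which is consistent with the roughly 0.1-second per-image figure reported above.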