222 research outputs found

    Methods for iris classification and macro feature detection

    Get PDF
    This work deals with two distinct aspects of iris-based biometric systems: iris classification and macro-feature detection. Iris classification benefits identification systems, where the query image has to be compared against all identities in the database. By pre-classifying the query image based on its texture, this comparison is executed only against those irises that belong to the same class as the query. In the proposed classification method, the normalized iris is tessellated into overlapping rectangular blocks and textural features are extracted from each block. A clustering scheme generates multiple classes of irises from the extracted features, and a minimum-distance classifier then assigns the query iris to a particular class. The use of multiple blocks with decision-level fusion in the classification process is observed to enhance the accuracy of the method.
    Most iris-based systems use the global and local texture information of the iris to perform matching. To exploit the anatomical structures within the iris during the matching stage, two methods for detecting the macro-features of the iris in multi-spectral images are proposed. These macro-features typically correspond to anomalies in pigmentation and structure within the iris. The first method uses the edge-flow technique to localize these features; the second uses the SIFT (Scale Invariant Feature Transform) operator to detect discontinuities in the image. Preliminary results show that detecting these macro-features is a difficult problem owing to the richness and variability of iris color and texture: both methods detect a large number of spurious features, suggesting the need for more sophisticated algorithms. However, the ability of the SIFT operator to match partial iris images is demonstrated, indicating the potential of this scheme for macro-feature detection.
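    The block tessellation plus minimum-distance classification pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the block size, step, and the simple mean/std block statistics are assumptions standing in for the textural features used in the work.

```python
import numpy as np

def block_features(norm_iris, block=16, step=8):
    """Tessellate a normalized iris image into overlapping rectangular
    blocks and extract simple textural features (mean, std) per block."""
    h, w = norm_iris.shape
    feats = []
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            patch = norm_iris[r:r + block, c:c + block]
            feats.extend([patch.mean(), patch.std()])
    return np.array(feats)

def assign_class(query_feat, class_centroids):
    """Minimum-distance classifier: assign the query to the nearest
    class centroid (Euclidean distance). Centroids would come from a
    clustering step (e.g. k-means) over the gallery features."""
    d = np.linalg.norm(class_centroids - query_feat, axis=1)
    return int(np.argmin(d))
```

    At identification time, only gallery irises in the class returned by `assign_class` need to be matched against the query, which is the pruning benefit the abstract describes.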

    An Intelligent Multi-Resolutional and Rotational Invariant Texture Descriptor for Image Retrieval Systems

    Get PDF
    Finding identical or comparable images in large rotated databases with higher retrieval accuracy and less time is a challenging task in Content-Based Image Retrieval (CBIR) systems. Considering this problem, an intelligent and efficient technique is proposed for texture-based images. In this method, a new joint feature vector is first created that inherits the properties of the Local Binary Pattern (LBP), which is stable under changes in illumination and rotation, and of the Discrete Wavelet Transform (DWT), which is multi-resolutional and multi-oriented with high directionality. Secondly, after the creation of this hybrid feature vector, classifiers are employed on the combination of LBP and DWT to increase the accuracy of the system. The performance of two machine learning classifiers is evaluated: Support Vector Machine (SVM) and Extreme Learning Machine (ELM). Both proposed methods, P1 (LBP+DWT+SVM) and P2 (LBP+DWT+ELM), are tested on the rotated Brodatz dataset of 1456 texture images and the MIT VisTex dataset of 640 images. In both experiments, the results of the proposed methods are much better than the simple combination of DWT+LBP and many other state-of-the-art methods in terms of precision and accuracy when different numbers of images are retrieved. The ELM algorithm shows a further improvement over SVM: when the top 25 images are retrieved, precision reaches 94% on the Brodatz database and 96% on the MIT VisTex database with the ELM classifier, which is far superior to other existing texture retrieval methods.
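    The joint LBP+DWT feature vector can be illustrated with a compact sketch. This is not the paper's implementation: it uses a basic 8-neighbour LBP and a single-level Haar DWT with sub-band energies as stand-ins for the descriptors the paper combines.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour LBP: compare each pixel's 3x3 neighbours with
    the centre and pack the comparison bits into a code in [0, 255]."""
    c = img[1:-1, 1:-1]
    nbrs = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
            img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
            img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=int)
    for bit, n in enumerate(nbrs):
        codes += (n >= c).astype(int) << bit
    return codes

def haar_dwt(img):
    """One-level 2-D Haar DWT: approximation plus the horizontal,
    vertical and diagonal detail sub-bands."""
    a = img[::2, ::2]; b = img[::2, 1::2]
    c = img[1::2, ::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def joint_feature(img):
    """Concatenate a 256-bin LBP histogram with DWT sub-band energies."""
    hist, _ = np.histogram(lbp_3x3(img), bins=256, range=(0, 256), density=True)
    _, lh, hl, hh = haar_dwt(img.astype(float))
    energies = [np.mean(s ** 2) for s in (lh, hl, hh)]
    return np.concatenate([hist, energies])
```

    A classifier (SVM or ELM in the paper) would then be trained on these hybrid vectors rather than on either descriptor alone.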

    Scene Image Classification and Retrieval

    Get PDF
    Scene image classification and retrieval not only have a great impact on scene image management, but they can also offer considerable assistance to other computer vision problems, such as image completion, human activity analysis, and object recognition. Intuitively, scene identification is correlated with the recognition of objects or image regions, which prompts the notion of applying local features to scene categorization. Even though the adoption of local features in these tasks has yielded promising results, a global perception of scene images is also well supported by cognitive science studies. Since the global description of a scene imposes less computational burden, it is favoured by some researchers despite its lower discriminative capacity. Recent studies on global scene descriptors have even yielded classification performance that rivals results obtained by local approaches. The primary objective of this work is to tackle two limitations of existing global scene features: representation ineffectiveness and computational complexity. The thesis proposes two global scene features that seek to represent finer scene structures and reduce the dimensionality of feature vectors. Experimental results show that the proposed scene features exceed the performance of existing methods. The thesis is roughly divided into two parts. The first three chapters give an overview of scene image classification and retrieval methods, with special attention to the most effective global scene features. In chapter 4, a novel scene descriptor, called ARP-GIST, is proposed and evaluated against existing methods to show its ability to detect finer scene structures. In chapter 5, a low-dimensional scene feature, GIST-LBP, is proposed. In conjunction with a block ranking approach, the GIST-LBP feature is tested on a standard scene dataset to demonstrate its state-of-the-art performance.
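    The block-ranking idea for dimensionality reduction can be sketched roughly as follows. This is a simplification under stated assumptions: real GIST blocks are described by Gabor filter energies and the thesis pairs them with LBP, whereas here each block gets simple stand-in statistics; only the rank-and-keep mechanism is illustrated.

```python
import numpy as np

def block_ranked_feature(img, grid=4, keep=8):
    """Split the image into a grid x grid layout, describe each block
    with simple statistics (a stand-in for GIST/LBP block descriptors),
    rank blocks by energy, and keep only the top `keep` blocks to
    reduce the final feature dimension."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    descs, energies = [], []
    for r in range(grid):
        for c in range(grid):
            blk = img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].astype(float)
            descs.append([blk.mean(), blk.std(), np.abs(np.diff(blk, axis=1)).mean()])
            energies.append(float((blk ** 2).mean()))
    order = np.argsort(energies)[::-1][:keep]
    return np.concatenate([descs[i] for i in order])
```

    Keeping 8 of 16 blocks halves the descriptor length while retaining the most energetic (and presumably most informative) regions, which is the trade-off block ranking exploits.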

    Multi Voxel Descriptor for 3D Texture Retrieval

    Get PDF
    In this paper, we present a new feature descriptor that exploits voxels for a 3D textured retrieval system in which models vary by geometric shape, by texture, or by both. First, we perform pose normalisation so that arbitrary 3D models share the same orientation. We then map the structure of each 3D model into voxels, which ensures that all models have the same dimensions. From these voxels we can capture information in a number of ways. First, we build a binary voxel histogram and a colour voxel histogram. Second, we compute the distance from the centre voxel to the other voxels and generate a histogram; we also compute the Fourier transform in spectral space. To capture texture features, we apply the voxel tetra pattern. Finally, we merge all features by linear combination. For the experiments, we use standard evaluation measures such as Nearest Neighbor (NN), First Tier (FT), Second Tier (ST), and Average Dynamic Recall (ADR). The SHREC 2014 dataset and its evaluation program are used to verify the proposed method. Experimental results show that the proposed method is more accurate than several state-of-the-art methods.
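    The voxelisation and distance-from-centre histogram steps can be sketched as below. This is a minimal illustration assuming a point-cloud input; the paper's pose normalisation, colour histogram, Fourier and tetra-pattern cues are omitted.

```python
import numpy as np

def voxelize(points, dim=16):
    """Map an (already pose-normalised) point cloud into a dim^3 binary
    occupancy grid, so every model shares the same dimensions."""
    p = points - points.min(axis=0)
    p = p / (p.max() + 1e-12) * (dim - 1)
    grid = np.zeros((dim, dim, dim), dtype=bool)
    idx = np.rint(p).astype(int)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def distance_histogram(grid, bins=8):
    """Histogram of distances from the grid centre to every occupied
    voxel -- one of the voxel-based cues the abstract describes."""
    dim = grid.shape[0]
    centre = (dim - 1) / 2.0
    occ = np.argwhere(grid)
    d = np.linalg.norm(occ - centre, axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0, dim * np.sqrt(3) / 2))
    return hist / max(1, len(occ))
```

    Each cue (binary histogram, distance histogram, colour histogram, spectral features) yields one vector; the final descriptor is their weighted linear combination.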

    Giving eyes to ICT!, or How does a computer recognize a cow?

    Get PDF
    The system developed by Schouten and other researchers at CWI is based on describing images using fractal geometry. Human perception turns out to be so efficient partly because it relies heavily on similarities, so it makes sense to look for mathematical methods that do the same. Schouten therefore investigated image coding using fractals. Fractals are self-similar geometric figures, built up by repeated transformation (iteration) of a simple base pattern, which thereby branches at ever smaller scales. At every level of detail a fractal resembles itself (the Droste effect). With fractals one can fairly easily create deceptively realistic depictions of nature. Fractal image coding assumes that the reverse also holds: an image can be stored efficiently in the form of the base patterns of a small number of fractals, together with the rule for reconstructing the original image from them. The system developed at CWI, in cooperation with researchers from Leuven, is partly based on this method. ISBN 906196502
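    The "simple base pattern plus iteration" idea can be illustrated with the classic chaos game, which generates the Sierpinski triangle by repeatedly applying one of three contraction maps. This is only a textbook illustration of iterated self-similar transforms, not the CWI coding system.

```python
import random

def chaos_game(n=10000, seed=1):
    """Generate Sierpinski-triangle points: at each step, move halfway
    toward a randomly chosen vertex. The attractor of these three
    contraction maps is the fractal itself."""
    random.seed(seed)
    verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.25, 0.25
    pts = []
    for _ in range(n):
        vx, vy = random.choice(verts)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        pts.append((x, y))
    return pts
```

    Fractal image coding runs this logic in reverse: it searches for contraction maps whose attractor approximates the given image, then stores only the maps.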

    Morphological quantitation software in breast MRI: application to neoadjuvant chemotherapy patients

    Get PDF
    The work in this thesis examines the use of texture analysis techniques and shape descriptors to analyse MR images of the breast, and their application as a potential quantitative tool for prognostic indication. Textural information is undoubtedly very heavily used in a radiologist's decision-making process. However, subtle variations in texture are often missed; by quantitatively analysing MR images, textural properties that would otherwise be impossible to discern by simply visually inspecting the image can be obtained. Texture analysis is commonly used in image classification of aerial and satellite photography, and studies have also focussed on utilising texture in MRI, especially in the brain. Recent research has turned to other organs such as the breast, where lesion morphology is known to be an important diagnostic and prognostic indicator. Recent work suggests benefits in assessing lesion texture in dynamic contrast-enhanced (DCE) images, especially with regard to changes during the initial enhancement and subsequent washout phases. The commonest form of analysis is the spatial grey-level dependence matrix method, but there is no direct evidence concerning the most appropriate pixel separation and number of grey levels to use in the required co-occurrence matrix calculations. The aim of this work is to systematically assess the efficacy of DCE-MRI based textural analysis in predicting response to chemotherapy in a cohort of breast cancer patients.
In addition, an attempt was made to use shape parameters to assess tumour surface irregularity as a predictor of response to chemotherapy. This study further aimed to texture-map DCE MR images of breast patients using the co-occurrence method on a pixel-by-pixel basis, in order to determine threshold values for normal, benign and malignant tissue, and ultimately to add functionality to the in-house software to highlight hotspots outlining areas of interest (possible lesions). Benign and normal data were taken from MRI screening data, and malignant data from patients referred with known malignancies. This work has highlighted that textural differences between groups (based on response, nodal status, triple-negative and biopsy grade groupings) are apparent and appear to be most evident 1-3 minutes post-contrast administration. Whilst the large number of statistical tests undertaken necessitates a degree of caution in interpreting the results, the fact that significant differences for certain texture parameters and groupings are consistently observed is encouraging. With regard to shape analysis, some differences between groups were seen in shape descriptors, but shape may be limited as a prognostic indicator: textural analysis gave a higher proportion of significant differences, whilst shape analysis results were inconsistent across time points. With regard to the mapping, this work successfully analysed the texture maps for each case and established that lesion detection is possible. The study successfully highlighted hotspots in the breast patient data post texture mapping, and has demonstrated the relationship between sensitivity and false-positive rate via hotspot thresholding.
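    The spatial grey-level dependence (co-occurrence) matrix, with its two free parameters of pixel separation and number of grey levels, can be sketched as follows. This is a minimal illustration with one Haralick feature (contrast); the separation, level count and quantisation scheme are the tunable choices the thesis investigates, not values it prescribes.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for a given pixel
    separation (dx, dy), after quantising the image to `levels` grey
    levels."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    # paired pixel positions offset by (dx, dy)
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

def contrast(p):
    """Haralick contrast of a normalised co-occurrence matrix."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

    Computing such features per pixel neighbourhood, rather than per lesion, is what the texture-mapping part of the thesis uses to threshold normal, benign and malignant tissue.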

    Remote Sensing Image Scene Classification: Benchmark and State of the Art

    Full text link
    Remote sensing image scene classification plays an important role in a wide range of applications and has therefore been receiving remarkable attention. Over the past years, significant efforts have been made to develop various datasets and to present a variety of approaches for scene classification from remote sensing images. However, a systematic review of the literature concerning datasets and methods for scene classification is still lacking. In addition, almost all existing datasets have a number of limitations, including the small scale of the scene classes and image numbers, the lack of image variation and diversity, and the saturation of accuracy. These limitations severely restrict the development of new approaches, especially deep learning-based methods. This paper first provides a comprehensive review of the recent progress. Then, we propose a large-scale dataset, termed "NWPU-RESISC45", which is a publicly available benchmark for REmote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images, covering 45 scene classes with 700 images in each class. The proposed NWPU-RESISC45 (i) is large-scale in the number of scene classes and total images, (ii) holds big variations in translation, spatial resolution, viewpoint, object pose, illumination, background, and occlusion, and (iii) has high within-class diversity and between-class similarity. The creation of this dataset will enable the community to develop and evaluate various data-driven algorithms. Finally, several representative methods are evaluated using the proposed dataset and the results are reported as a useful baseline for future research. Comment: This manuscript is the accepted version for Proceedings of the IEEE
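    Benchmarks with a fixed per-class image count (here 45 classes of 700 images each) are typically evaluated with a per-class train/test split at a fixed ratio. The sketch below shows that protocol; the dictionary layout and the ratio are assumptions for illustration, not specifics of the paper.

```python
import random

def per_class_split(files_by_class, train_ratio=0.2, seed=0):
    """Split a {class_name: [filenames]} mapping into train/test lists,
    taking the same fraction of every class so the split stays
    class-balanced."""
    rng = random.Random(seed)
    train, test = [], []
    for cls, files in sorted(files_by_class.items()):
        files = sorted(files)
        rng.shuffle(files)
        k = int(len(files) * train_ratio)
        train += [(cls, f) for f in files[:k]]
        test += [(cls, f) for f in files[k:]]
    return train, test
```

    Fixing the seed makes the split reproducible, which matters when reporting baseline accuracies on a shared benchmark.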

    Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach

    Get PDF
    In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail, and their applications in medical image and shape analysis are investigated. In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which results in a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence facilitates employing well-studied mono-modal registration techniques. The method can be used for registering multi-modal images with full and partial data. Next, a manifold learning-based scale-invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of the Laplacian Eigenmap to deal with high-dimensional data by introducing an exponential weighting scheme. It eliminates the limitations tied to the well-known cotangent weighting scheme, namely the dependency on triangular mesh representation and on high intra-class quality of 3D models. Finally, a novel descriptive model for diagnostic classification of pulmonary nodules is presented. The descriptive model exploits structural differences between benign and malignant nodules for automatic and accurate prediction of a candidate nodule. It extracts concise and discriminative features automatically from the 3D surface structure of a nodule, using the spectral features studied in the previous work combined with a point cloud-based deep learning network. Extensive experiments have been conducted and have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods.
Advanced computational techniques combining manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely registration, classification, and detection of features of interest.
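    A bare-bones Laplacian Eigenmap can be written in a few lines. This sketch uses heat-kernel weights on a fully connected graph and the symmetric normalised Laplacian; the thesis's exponential weighting scheme on mesh/point-cloud data is more specialised, so treat this as an assumption-laden illustration of the general technique.

```python
import numpy as np

def laplacian_eigenmap(X, n_components=2, sigma=1.0):
    """Embed rows of X into n_components dimensions via the smallest
    non-trivial eigenvectors of the symmetric normalised graph
    Laplacian, with heat-kernel edge weights."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    deg = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L_sym = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)  # eigenvalues in ascending order
    # skip the trivial eigenvector (eigenvalue ~0)
    return vecs[:, 1:1 + n_components]
```

    Unlike PCA, which keeps directions of maximal variance, this embedding preserves local neighbourhood structure, which is why it suits the non-linear manifolds of shapes and intensities discussed above.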

    Data mining based learning algorithms for semi-supervised object identification and tracking

    Get PDF
    Sensor exploitation (SE) is a crucial step in surveillance applications such as airport security and search-and-rescue operations. It allows localization and identification of movement in urban settings and can significantly boost knowledge gathering, interpretation and action. Data mining techniques offer the promise of precise and accurate knowledge acquisition in high-dimensional data domains (diminishing the "curse of dimensionality" prevalent in such datasets), coupled with algorithmic design in feature extraction, discriminative ranking, feature fusion and supervised learning (classification). Consequently, data mining techniques and algorithms can be used to refine and process captured data and to detect, recognize, classify, and track objects with predictably high degrees of specificity and sensitivity. Automatic object detection and tracking algorithms face several obstacles, such as large and incomplete datasets, ill-defined regions of interest (ROIs), variable scalability, lack of compactness, angular regions, partial occlusions, environmental variables, and unknown potential object classes, all of which work against their ability to achieve accurate real-time results. Methods must produce fast and accurate results by streamlining image processing, data compression and reduction, feature extraction, classification, and tracking algorithms. Data mining techniques can address these challenges by implementing efficient and accurate dimensionality reduction with feature extraction to refine an incomplete (ill-partitioned) data space, and by addressing challenges related to object classification, intra-class variability, and inter-class dependencies. A series of methods has been developed to combat many of these challenges, with the purpose of creating a sensor exploitation and tracking framework for real-time image sensor inputs.
The framework has been broken down into a series of sub-routines, which work both in series and in parallel to accomplish tasks such as image pre-processing, data reduction, segmentation, object detection, tracking, and classification. These methods can be implemented either independently or together to form a synergistic solution to object detection and tracking. The main contributions to the SE field include novel feature extraction methods for highly discriminative object detection, classification, and tracking. Also, a new supervised classification scheme is presented for detecting objects in urban environments. This scheme incorporates both novel features and non-maximal suppression to reduce false alarms, which can be abundant in cluttered environments such as cities. Lastly, a performance evaluation of Graphics Processing Unit (GPU) implementations of the subtask algorithms is presented, which provides insight into speed-up gains throughout the SE framework and informs the design of real-time applications. The overall framework provides a comprehensive SE system, which can be tailored for integration into a layered sensing scheme to provide the war fighter with automated assistance and support. As sensor technology and integration continue to advance, this SE framework can provide faster and more accurate decision support for both intelligence and civilian applications.
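    The non-maximal suppression step mentioned above, used to prune overlapping detections and cut false alarms in cluttered scenes, is commonly implemented as the greedy IoU-based procedure sketched here. The box format and threshold are conventional choices, not details taken from the thesis.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximal suppression: keep the highest-scoring box,
    drop every remaining box whose IoU with it exceeds `iou_thresh`,
    and repeat. Boxes are (x1, y1, x2, y2); returns kept indices."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection rectangle of box i with every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]
    return keep
```

    In a detection pipeline this runs after the classifier scores candidate windows, so each object contributes one detection instead of a cluster of overlapping ones.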