Rotation and scale invariant texture classification using log polar wavelet energy signatures
Classification of texture images, especially those with different orientations and scale changes, is a challenging and important problem in image analysis and classification. This thesis proposes an effective scheme for rotation and scale invariant texture classification using log-polar wavelet signatures. The rotation and scale invariant feature extraction for a given image involves applying a log-polar transform to eliminate the rotation and scale effects, which at the same time produces a row-shifted log-polar image; this image is then passed to an adaptive row shift invariant wavelet packet transform to eliminate the row shift effects. The output wavelet coefficients are therefore rotation and scale invariant. The adaptive row shift invariant wavelet packet transform is quite efficient, with only O(n log n) complexity. A feature vector of the most dominant log-polar wavelet energy signatures, extracted from each subband of wavelet coefficients, is constructed for rotation and scale invariant texture classification. In the experiments, we employed a modified Mahalanobis classifier to classify a set of 12 distinct natural textures selected from the Brodatz album. The experimental results, based on different testing data sets for images with different orientations and scales, show that the implemented classification scheme using log-polar wavelet signatures outperforms other texture classification methods, its overall accuracy rate for joint rotation and scale invariance being 87.59 percent, demonstrating that the extracted energy signatures are effective rotation and scale invariant features.
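The central trick — rotation and uniform scaling of the image become circular shifts along the angular and radial axes of a log-polar resampling — can be sketched as follows. This is a minimal nearest-neighbour illustration, not the thesis's implementation:

```python
import numpy as np

def log_polar(img, n_rho=64, n_theta=64):
    """Resample a square grayscale image onto a log-polar grid.

    Rotation about the centre becomes a circular shift along the
    theta (row) axis; uniform scaling becomes a shift along rho.
    Nearest-neighbour sampling keeps the sketch short.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rhos = np.exp(np.linspace(0, np.log(r_max), n_rho))  # log-spaced radii
    out = np.zeros((n_theta, n_rho))
    for i, t in enumerate(thetas):
        ys = np.clip(np.round(cy + rhos * np.sin(t)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + rhos * np.cos(t)).astype(int), 0, w - 1)
        out[i] = img[ys, xs]
    return out

# A 90-degree rotation of the input shows up as a circular shift of a
# quarter of the theta (row) axis in the log-polar image.
img = np.random.default_rng(0).random((65, 65))
lp = log_polar(img)
lp_rot = log_polar(np.rot90(img))
assert np.allclose(np.roll(lp, -64 // 4, axis=0), lp_rot)
```

The remaining row shift is what the thesis's adaptive row shift invariant wavelet packet transform then removes.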
Rotation and Scale Invariant Texture Classification
Texture classification is very important in image analysis. Content-based image retrieval, inspection of surfaces, object recognition by texture, and document segmentation are a few examples where texture classification plays a major role. Classification of texture images, especially those with different orientations and scale changes, is a challenging and important problem in image analysis and classification. This thesis proposes an effective scheme for rotation and scale invariant texture classification. The rotation and scale invariant feature extraction for a given image involves applying a log-polar transform to eliminate the rotation and scale effects, which at the same time produces a row-shifted log-polar image; this image is then passed to an adaptive row shift invariant wavelet packet transform to eliminate the row shift effects. The output wavelet coefficients are therefore rotation and scale invariant. The adaptive row shift invariant wavelet packet transform is quite efficient, with only O(n log n) complexity. The experimental results, based on different testing data sets for images from the Brodatz album with different orientations and scales, show that the implemented classification scheme outperforms other texture classification methods, its overall accuracy rate for joint rotation and scale invariance being 87.09 percent.
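The wavelet energy signature idea can be illustrated with a plain (non-adaptive) Haar packet decomposition along the rows: the normalised energy of each subband forms the texture descriptor. This is a simplified stand-in for the adaptive transform described above:

```python
import numpy as np

def haar_step(x):
    """One level of a 1D Haar transform along the last axis."""
    a = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)  # approximation band
    d = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)  # detail band
    return a, d

def energy_signatures(img, levels=2):
    """Normalised energy of each subband of a row-wise Haar packet
    decomposition -- a stand-in for the thesis's adaptive row shift
    invariant wavelet packet transform."""
    bands = [img.astype(float)]
    for _ in range(levels):
        bands = [b for band in bands for b in haar_step(band)]
    e = np.array([np.sum(b * b) for b in bands])
    return e / e.sum()  # energies sum to 1 (Haar is orthonormal)

sig = energy_signatures(np.random.default_rng(0).random((64, 64)))
print(sig.shape)  # (4,) -- four subbands after two packet levels
```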
A computer vision approach to classification of birds in flight from video sequences
Bird populations are an important bio-indicator, so collecting reliable data helps ecologists conserve and manage fragile ecosystems. However, existing manual monitoring methods are labour-intensive, time-consuming, and error-prone. The aim of our work is to develop a reliable system capable of automatically classifying individual bird species in flight from videos. This is challenging, but appropriate for use in the field, since there is often a requirement to identify birds in flight rather than when stationary. We present our work in progress, which uses combined appearance and motion features for classification, and report experimental results across seven species using a Normal Bayes classifier with majority voting, achieving a classification rate of 86%.
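The majority-voting step — per-frame predictions combined into one label for a tracked flight sequence — reduces to taking the mode over frame labels. A minimal sketch (the species names are hypothetical, not from the paper):

```python
from collections import Counter

def majority_vote(frame_predictions):
    """Combine per-frame species predictions for one video track.

    Each frame of a flight sequence is classified independently; the
    track-level label is the most frequent per-frame label. Ties are
    broken by first occurrence (Counter preserves insertion order).
    """
    if not frame_predictions:
        raise ValueError("need at least one frame prediction")
    return Counter(frame_predictions).most_common(1)[0][0]

# hypothetical per-frame classifier outputs for a single tracked bird
print(majority_vote(["gull", "gull", "tern", "gull", "tern"]))  # gull
```

Voting over many frames smooths out the per-frame errors that motion blur and pose variation cause.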
Spread spectrum-based video watermarking algorithms for copyright protection
Merged with duplicate record 10026.1/2263 on 14.03.2017 by CS (TIS).
Digital technologies have seen an unprecedented expansion in recent years. Consumers can now benefit from hardware and software that was considered state-of-the-art only a few years ago. The advantages offered by digital technologies are major, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but due to the analogue environment, each subsequent copy suffered an inherent loss in quality. This was a natural way of limiting the multiple copying of video material. With digital technology this barrier disappears: as many copies as desired can be made without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat.
The aim of the present work was to develop a digital watermarking system compliant with the recommendations drawn up by the EBU for video broadcast monitoring. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Since watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust mark while ensuring its invisibility. The combination of these methods led to a major improvement, but the system was still not robust to several important geometrical attacks. To reach this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain. Using this reference watermark and techniques specific to image registration, the system is able to determine the parameters of the attack and revert it. Once the attack has been reverted, the main watermark is recovered. The final result is a high-capacity, blind DWT-based video watermarking system, robust to a wide range of attacks. (BBC Research & Development)
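The spread-spectrum casting step can be illustrated in isolation: a key-seeded pseudo-random pattern is added to (or subtracted from) the host coefficients, and the bit is later recovered by correlating with the same pattern. This is a generic sketch of the technique, not the thesis's EBU-compliant wavelet-domain system:

```python
import numpy as np

def embed(coeffs, bit, key, strength=2.0):
    """Add a key-derived pseudo-random pattern to host coefficients.

    bit=1 adds the pattern, bit=0 subtracts it; `strength` trades
    robustness against visibility of the mark.
    """
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(coeffs.shape)
    sign = 1.0 if bit else -1.0
    return coeffs + sign * strength * pattern

def detect(coeffs, key):
    """Correlate with the same key-derived pattern; the sign of the
    correlation recovers the embedded bit (blind detection: no host
    image needed)."""
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(coeffs.shape)
    return 1 if np.sum(coeffs * pattern) > 0 else 0

host = np.random.default_rng(7).standard_normal((64, 64)) * 10
marked = embed(host, bit=1, key=42)
# mild additive-noise "attack": the correlation still dominates
noisy = marked + np.random.default_rng(1).standard_normal((64, 64))
assert detect(noisy, key=42) == 1
```

The detection statistic concentrates around ±strength·N (N coefficients), while host and noise contribute only zero-mean fluctuations, which is why the mark survives moderate distortion.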
Iris Recognition Using Scattering Transform and Textural Features
Iris recognition has drawn a lot of attention since the mid-twentieth century. Among all biometric traits, the iris is known to possess a rich set of features. Different features have been used to perform iris recognition in the past. In this paper, two powerful sets of features are introduced for iris recognition: scattering transform-based features and textural features. PCA is also applied to the extracted features to reduce the dimensionality of the feature vector while preserving most of the information. A minimum distance classifier is used to perform template matching for each new test sample. The proposed scheme is tested on a well-known iris database and shows promising results, with a best accuracy rate of 99.2%.
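The PCA-plus-minimum-distance pipeline can be sketched generically. Here toy Gaussian vectors stand in for the scattering/textural features, and the class templates are per-class means in the reduced space:

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD of the centred data matrix (rows = samples)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, components):
    """Project samples onto the retained principal components."""
    return (X - mean) @ components.T

def min_distance_classify(x, class_means):
    """Minimum distance classifier: assign the class whose mean
    template is closest to x in Euclidean distance."""
    labels = list(class_means)
    d = [np.linalg.norm(x - class_means[k]) for k in labels]
    return labels[int(np.argmin(d))]

# toy stand-in for iris feature vectors (not real scattering features)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(3, 1, (20, 50))])
y = [0] * 20 + [1] * 20
mean, comps = pca_fit(X, n_components=5)
Z = project(X, mean, comps)
means = {c: Z[[i for i, t in enumerate(y) if t == c]].mean(axis=0)
         for c in (0, 1)}
test_sample = project(rng.normal(3, 1, (1, 50)), mean, comps)[0]
assert min_distance_classify(test_sample, means) == 1
```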
Pigment Melanin: Pattern for Iris Recognition
Recognition of the iris based on Visible Light (VL) imaging is a difficult problem because of light reflection from the cornea. Nonetheless, pigment melanin provides a rich feature source in VL that is unavailable in Near-Infrared (NIR) imaging. This is due to the biological spectroscopy of eumelanin, a chemical not stimulated in NIR. In this case, a plausible way to observe such patterns is an adaptive procedure using a variational technique on the image histogram. To describe the patterns, a shape analysis method is used to derive a feature code for each subject. An important question is how far the melanin patterns extracted from VL are independent of the iris texture in NIR. With this question in mind, the present investigation proposes fusion of features extracted from NIR and VL to boost the recognition performance. We have collected our own database (UTIRIS), consisting of both NIR and VL images of 158 eyes of 79 individuals. This investigation demonstrates that the proposed algorithm is highly sensitive to the patterns of chromophores and improves the iris recognition rate.
Comment: To be published in the Special Issue on Biometrics, IEEE Transactions on Instrumentation and Measurement, Volume 59, Issue 4, April 2010.
Automatic Alignment of 3D Multi-Sensor Point Clouds
Automatic 3D point cloud alignment is a major research topic in photogrammetry, computer vision and computer graphics. In this research, two keypoint feature matching approaches have been developed and proposed for the automatic alignment of 3D point clouds, which have been acquired from different sensor platforms and are in different 3D conformal coordinate systems.
The first proposed approach is based on 3D keypoint feature matching. First, surface curvature information is utilized for scale-invariant 3D keypoint extraction. Adaptive non-maxima suppression (ANMS) is then applied to retain the most distinct and well-distributed set of keypoints. Afterwards, every keypoint is characterized by a scale, rotation and translation invariant 3D surface descriptor, called the radial geodesic distance-slope histogram. Similar keypoint descriptors on the source and target datasets are then matched using bipartite graph matching, followed by a modified-RANSAC for outlier removal.
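The ANMS step can be sketched with the common suppression-radius formulation (an assumption about the exact variant used here): each keypoint gets a radius equal to its distance to the nearest stronger keypoint, and the keypoints with the largest radii are kept — they are both strong and spatially well distributed.

```python
import numpy as np

def anms(points, scores, k):
    """Adaptive non-maxima suppression via suppression radii.

    radius[i] = distance from keypoint i to the nearest keypoint with
    a strictly higher score; keeping the k largest radii yields a
    strong, well-spread subset. O(n^2) brute force for clarity.
    """
    n = len(points)
    radii = np.full(n, np.inf)  # the global maximum keeps radius inf
    for i in range(n):
        stronger = scores > scores[i]
        if stronger.any():
            d = np.linalg.norm(points[stronger] - points[i], axis=1)
            radii[i] = d.min()
    return np.argsort(-radii)[:k]

rng = np.random.default_rng(1)
pts = rng.random((200, 2)) * 100   # toy keypoint locations
sc = rng.random(200)               # toy detector responses
keep = anms(pts, sc, k=20)
# the globally strongest keypoint always survives (infinite radius)
assert sc.argmax() in keep
```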
The second proposed method is based on 2D keypoint matching performed on height map images of the 3D point clouds. Height map images are generated by projecting the 3D point clouds onto a planimetric plane. Afterwards, a multi-scale wavelet 2D keypoint detector with ANMS is proposed to extract keypoints on the height maps. Then, a scale, rotation and translation-invariant 2D descriptor referred to as the Gabor, Log-Polar-Rapid Transform descriptor is computed for all keypoints. Finally, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour matching, together with the modified-RANSAC for outlier removal.
Each method is assessed on multi-sensor, urban and non-urban 3D point cloud datasets. Results show that, unlike the 3D-based method, the height map-based approach is able to align source and target datasets with differences in point density, point distribution and missing point data. Findings also show that the 3D-based method obtained lower transformation errors and a greater number of correspondences when the source and target have similar point characteristics. The 3D-based approach attained absolute mean alignment differences in the range of 0.23m to 2.81m, whereas the height map approach had a range from 0.17m to 1.21m. These differences meet the proximity requirements of the data characteristics and allow the further application of fine co-registration approaches.
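The RANSAC outlier-removal step shared by both pipelines can be illustrated with a plain RANSAC over putative 2D correspondences (the thesis's specific modification is not reproduced here): fit a rigid transform on minimal samples, keep the largest consensus set, then refit on all inliers.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=200, tol=0.1, seed=0):
    """Plain RANSAC over putative correspondences P[i] <-> Q[i]."""
    rng = np.random.default_rng(seed)
    n, best = len(P), np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)  # minimal-ish sample
        R, t = rigid_fit(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    R, t = rigid_fit(P[best], Q[best])              # refit on consensus set
    return R, t, best

# synthetic 2D correspondences: 40 true matches + 10 gross outliers
rng = np.random.default_rng(3)
P = rng.random((50, 2)) * 10
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = P @ R_true.T + np.array([2.0, -1.0])
Q[40:] += rng.random((10, 2)) * 5 + 1               # corrupt last 10 matches
R, t, inliers = ransac_rigid(P, Q)
assert inliers[:40].all() and np.allclose(R, R_true, atol=1e-6)
```

The same scheme extends directly to 3D correspondences (3×3 rotation, 3-vector translation), which is the setting of the thesis.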