Novel algorithms for 3D human face recognition
Automated human face recognition is a computer vision problem of considerable practical significance. Existing two-dimensional (2D) face recognition techniques perform poorly for faces with uncontrolled poses, lighting and facial expressions. Face recognition technology based on three-dimensional (3D) facial models is now emerging. Geometric facial models can be easily corrected for pose variations. They are illumination invariant, and provide structural information about the facial surface. Algorithms for 3D face recognition exist; however, the area is far from being a mature technology. In this dissertation we address a number of open questions in the area of 3D human face recognition. Firstly, we make available to qualified researchers in the field, at no cost, the large Texas 3D Face Recognition Database, which was acquired as a part of this research work. This database contains 1149 2D and 3D images of 118 subjects. We also provide 25 manually located facial fiducial points for each face in this database. Our next contribution is the development of a completely automatic novel 3D face recognition algorithm, which employs discriminatory anthropometric distances between carefully selected local facial features. This algorithm neither uses general-purpose pattern recognition approaches, nor does it directly extend 2D face recognition techniques to the 3D domain. Instead, it is based on an understanding of the structurally diverse characteristics of human faces, which we isolate from the scientific discipline of facial anthropometry. We demonstrate the effectiveness and superior performance of the proposed algorithm relative to existing benchmark 3D face recognition algorithms. A related contribution is the development of highly accurate and reliable 2D+3D algorithms for automatically detecting 10 anthropometric facial fiducial points. While developing these algorithms, we identify unique structural/textural properties associated with the facial fiducial points.
Furthermore, unlike previous algorithms for detecting facial fiducial points, we systematically evaluate our algorithms against manually located facial fiducial points on a large database of images. Our third contribution is the development of an effective algorithm for computing the structural dissimilarity of 3D facial surfaces, which uses a recently developed image similarity index called the complex-wavelet structural similarity index. This algorithm is unique in that, unlike existing approaches, it does not require that the facial surfaces be finely registered before they are compared. Furthermore, it is nearly an order of magnitude more accurate than existing facial surface matching based approaches. Finally, we propose a simple method to combine the two new 3D face recognition algorithms that we developed, resulting in a 3D face recognition algorithm that is competitive with the existing state-of-the-art algorithms.
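The anthropometric-distance idea in the abstract above can be illustrated with a small sketch: once fiducial points are located on a 3D face, inter-point Euclidean distances form a feature vector on which two faces can be compared. This is only an illustration; the dissertation selects specific discriminatory distances and a proper classifier, whereas the all-pairs vector and L2 score below are assumptions made for demonstration.

```python
import numpy as np

def anthropometric_features(fiducials):
    """Pairwise Euclidean distances between 3D fiducial points,
    flattened into a feature vector. Illustrative: the dissertation
    uses selected discriminatory distances, not all pairs."""
    pts = np.asarray(fiducials, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    iu = np.triu_indices(len(pts), k=1)   # upper triangle, no diagonal
    return dists[iu]

def match_score(face_a, face_b):
    """Smaller score = more similar faces (simple L2 comparison,
    an assumption for this sketch)."""
    return float(np.linalg.norm(
        anthropometric_features(face_a) - anthropometric_features(face_b)))
```

Identical point sets score 0, and the score grows as corresponding distances diverge, which is the property a distance-based matcher relies on.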
Automatic face recognition using stereo images
Face recognition is an important pattern recognition problem, in the study of both natural and artificial learning problems. Compared to other biometrics, it is non-intrusive, non-invasive and requires no participation from the subjects. As a result, it has many applications, varying from human-computer interaction to access control and from law enforcement to crowd surveillance. In typical optical image based face recognition systems, the systematic variability arising from representing the three-dimensional (3D) shape of a face by a two-dimensional (2D) illumination intensity matrix is treated as random variability. Multiple examples of the face displaying varying pose and expressions are captured in different imaging conditions. The imaging environment, pose and expressions are strictly controlled and the images undergo rigorous normalisation and pre-processing. This may be implemented in a partially or a fully automated system. Although these systems report high classification accuracies (>90%), they lack versatility and tend to fail when deployed outside laboratory conditions. Recently, more sophisticated 3D face recognition systems harnessing the depth information have emerged. These systems usually employ specialist equipment such as laser scanners and structured light projectors. Although more accurate than 2D optical image based recognition, these systems are equally difficult to implement in a non-co-operative environment. Existing face recognition systems, both 2D and 3D, detract from the main advantages of face recognition and fail to fully exploit its non-intrusive capacity. This is either because they rely too much on subject co-operation, which is not always available, or because they cannot cope with noisy data. The main objective of this work was to investigate the role of depth information in face recognition in a noisy environment.
A stereo-based system, inspired by human binocular vision, was devised using a pair of manually calibrated digital off-the-shelf cameras in a stereo setup to compute depth information. Depth values extracted from 2D intensity images using stereoscopy are extremely noisy, and as a result this approach to face recognition is rare. This was confirmed by the results of our experimental work. Noise in the set of correspondences, camera calibration and triangulation led to inaccurate depth reconstruction, which in turn led to poor classifier accuracy for both 3D surface matching and 2.5D depth maps. Recognition experiments were performed on the Sheffield Dataset, consisting of 692 images of 22 individuals with varying pose, illumination and expressions.
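For a rectified camera pair, the stereoscopic depth computation described above reduces to triangulation from disparity: Z = fB/d, with focal length f (in pixels), baseline B and disparity d. The sketch below assumes that idealized rectified pinhole model; the thesis's manually calibrated rig additionally has to handle lens distortion and the correspondence noise discussed above.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m, eps=1e-6):
    """Depth map (metres) from a disparity map via Z = f*B/d.
    Pixels with (near-)zero disparity are at infinity.
    Idealized rectified-stereo sketch, not the thesis pipeline."""
    d = np.asarray(disparity, dtype=float)
    z = np.full_like(d, np.inf)          # zero disparity -> infinite depth
    valid = d > eps
    z[valid] = focal_px * baseline_m / d[valid]
    return z
```

For example, with a 700 px focal length and a 10 cm baseline, a 35 px disparity corresponds to a depth of 2 m; small disparity errors at far range translate into large depth errors, which is one source of the noise the thesis reports.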
The Study and Literature Review of a Feature Extraction Mechanism in Computer Vision
Detecting features in an image is a challenging task in computer vision and numerous image processing applications. For example, numerous algorithms exist to detect the corners in an image. Corners are formed by the intersection of multiple edges, which sometimes may not define the boundary of an object. This paper mainly concentrates on the study of the Harris corner detection algorithm, which accurately detects the corners that exist in an image. The Harris corner detector is a widely used interest point detector because its features are robust to rotation, scale, illumination variation and image noise. It is based on the local auto-correlation function of a signal, which measures the local changes of the signal when patches are shifted by a small amount in different directions. In our experiments we show results for grey-scale images as well as for colour images, giving results for the individual regions present in the image. This algorithm is more reliable than the conventional methods.
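The local auto-correlation idea behind the Harris detector can be sketched directly: build the structure tensor M from windowed products of image gradients, then score each pixel with R = det(M) − k·tr(M)². R is high at corners (both eigenvalues large), near zero on flat regions, and negative on edges. A minimal NumPy sketch, in which a 3×3 box window stands in for the Gaussian weighting usually used:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is
    the structure tensor of windowed gradient products. Minimal
    sketch: 3x3 box sums replace the usual Gaussian window."""
    img = np.asarray(img, dtype=float)
    Iy, Ix = np.gradient(img)                 # image derivatives
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):                               # 3x3 windowed sum
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

On a synthetic step corner, the response is positive at the corner, zero on flat regions, and negative along the edge, matching the behaviour the abstract describes.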
Automatic Alignment of 3D Multi-Sensor Point Clouds
Automatic 3D point cloud alignment is a major research topic in photogrammetry, computer vision and computer graphics. In this research, two keypoint feature matching approaches have been developed and proposed for the automatic alignment of 3D point clouds, which have been acquired from different sensor platforms and are in different 3D conformal coordinate systems.
The first proposed approach is based on 3D keypoint feature matching. First, surface curvature information is utilized for scale-invariant 3D keypoint extraction. Adaptive non-maxima suppression (ANMS) is then applied to retain the most distinct and well-distributed set of keypoints. Afterwards, every keypoint is characterized by a scale, rotation and translation invariant 3D surface descriptor, called the radial geodesic distance-slope histogram. Similar keypoint descriptors on the source and target datasets are then matched using bipartite graph matching, followed by a modified-RANSAC for outlier removal.
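Of the steps above, adaptive non-maxima suppression is simple to sketch: each keypoint's suppression radius is its distance to the nearest stronger keypoint, and keeping the keypoints with the largest radii yields a set that is both strong and spatially well distributed. A brute-force O(N²) illustration; the published formulation also applies a robustness factor to the score comparison, which is omitted here.

```python
import numpy as np

def anms(points, scores, n_keep):
    """Adaptive non-maxima suppression sketch: suppression radius of
    point i = distance to the nearest point with a strictly higher
    score; keep the n_keep points with the largest radii."""
    pts = np.asarray(points, dtype=float)
    s = np.asarray(scores, dtype=float)
    radii = np.full(len(pts), np.inf)     # global maximum keeps inf
    for i in range(len(pts)):
        stronger = s > s[i]
        if stronger.any():
            radii[i] = np.linalg.norm(pts[stronger] - pts[i], axis=1).min()
    return np.argsort(-radii)[:n_keep]
```

Note how a weak-but-isolated keypoint can survive while a strong keypoint sitting next to an even stronger one is suppressed; that is exactly the well-distributed property the method relies on.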
The second proposed method is based on 2D keypoint matching performed on height map images of the 3D point clouds. Height map images are generated by projecting the 3D point clouds onto a planimetric plane. Afterwards, a multi-scale wavelet 2D keypoint detector with ANMS is proposed to extract keypoints on the height maps. Then, a scale, rotation and translation-invariant 2D descriptor referred to as the Gabor, Log-Polar-Rapid Transform descriptor is computed for all keypoints. Finally, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour matching, together with the modified-RANSAC for outlier removal.
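The planimetric projection step above can be illustrated with simple grid binning: each cell of the height map stores the maximum height of the 3D points falling into it. The cell size, NaN marking of empty cells, and max-Z aggregation below are illustrative choices, not necessarily those of the thesis.

```python
import numpy as np

def height_map(points, cell=1.0):
    """Project a 3D point cloud onto the XY (planimetric) plane:
    each grid cell holds the maximum Z of its points; empty cells
    are NaN. Minimal binning sketch."""
    pts = np.asarray(points, dtype=float)
    xy = np.floor((pts[:, :2] - pts[:, :2].min(0)) / cell).astype(int)
    h, w = xy[:, 1].max() + 1, xy[:, 0].max() + 1
    grid = np.full((h, w), np.nan)
    for (cx, cy), z in zip(xy, pts[:, 2]):
        if np.isnan(grid[cy, cx]) or z > grid[cy, cx]:
            grid[cy, cx] = z
    return grid
```

Because the result is an ordinary 2D image, mature 2D keypoint detectors and descriptors (such as the wavelet detector and Gabor/Log-Polar descriptor described above) become applicable to the point cloud data.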
Each method is assessed on multi-sensor, urban and non-urban 3D point cloud datasets. Results show that unlike the 3D-based method, the height map-based approach is able to align source and target datasets with differences in point density, point distribution and missing point data. Findings also show that the 3D-based method obtained lower transformation errors and a greater number of correspondences when the source and target have similar point characteristics. The 3D-based approach attained absolute mean alignment differences in the range of 0.23m to 2.81m, whereas the height map approach had a range from 0.17m to 1.21m. These differences are within the proximity requirements implied by the data characteristics and are suitable for the further application of fine co-registration approaches.
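Once keypoint correspondences have survived outlier removal, aligning source to target amounts to estimating the transform between the two coordinate systems. Below is a least-squares rotation-plus-translation sketch using the standard Kabsch/Procrustes solution; the 3D conformal transform mentioned in this work additionally includes a scale factor, which is omitted here for brevity.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with R@src_i + t
    ~ dst_i (Kabsch/Procrustes via SVD of the cross-covariance).
    Scale-free sketch of the alignment step."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In a RANSAC loop this estimator is run on minimal samples to hypothesize transforms and on the final inlier set to produce the coarse alignment that fine co-registration then refines.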
Pattern Recognition
Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one, two or three dimensional, the processing is done in real-time or takes hours and days, some systems look for one narrow object class, others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and comprehends several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. Authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.
Curvilinear Structure Enhancement in Biomedical Images
Curvilinear structures can appear in many different areas and at a variety of scales. They can be axons and dendrites in the brain, blood vessels in the fundus, streets, rivers, or fractures in buildings. It is therefore essential, from an image processing perspective, to study curvilinear structures in many fields such as neuroscience, biology, and cartography.
Image processing is an important aid in biomedical imaging, especially in diagnosing disease. Image enhancement is an early step of image analysis.
In this thesis, I focus on the research, development, implementation, and validation of newly developed 2D and 3D curvilinear structure enhancement methods. The proposed methods are based on phase congruency, mathematical morphology, and tensor representation concepts.
First, I have introduced a 3D contrast-independent phase congruency-based enhancement approach. The obtained results demonstrate that the proposed approach is robust to contrast variations in 3D biomedical images.
Second, I have proposed a new mathematical morphology-based approach called the bowler-hat transform. In this approach, I have combined the mathematical morphology with a local tensor representation of curvilinear structures in images.
The bowler-hat transform is shown to give better results than comparison methods on challenging data such as retinal/fundus images. In particular, the proposed method is quite successful at enhancing curvilinear structures at junctions.
Finally, I have extended the bowler-hat approach to 3D to demonstrate its applicability and reliability in three dimensions.
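The morphological intuition behind this kind of enhancement can be sketched at a single scale: a curvilinear pixel survives a grey-scale opening with some oriented line element but not an opening with an isotropic element, so the difference between the two highlights line-like structure while leaving blobs untouched. The sketch below is a simplified illustration in that spirit, not the published bowler-hat transform: length-3 line elements at four orientations, with a 3×3 square standing in for a disc, on a single scale.

```python
import numpy as np

def _shift(a, dy, dx):
    """Shift array by (dy, dx) with edge-replicated padding."""
    p = np.pad(a, 1, mode='edge')
    return p[1 + dy:1 + dy + a.shape[0], 1 + dx:1 + dx + a.shape[1]]

def _erode(img, offs):
    return np.min([_shift(img, dy, dx) for dy, dx in offs], axis=0)

def _dilate(img, offs):
    return np.max([_shift(img, -dy, -dx) for dy, dx in offs], axis=0)

def _open(img, offs):                      # grey-scale opening
    return _dilate(_erode(img, offs), offs)

def line_enhance(img):
    """Difference between the best oriented line opening and an
    isotropic opening: line pixels score high, blobs score ~0.
    Single-scale illustrative sketch, not the bowler-hat transform."""
    lines = [[(0, -1), (0, 0), (0, 1)],      # horizontal
             [(-1, 0), (0, 0), (1, 0)],      # vertical
             [(-1, -1), (0, 0), (1, 1)],     # diagonal
             [(-1, 1), (0, 0), (1, -1)]]     # anti-diagonal
    disc = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    f = np.asarray(img, dtype=float)
    line_open = np.max([_open(f, L) for L in lines], axis=0)
    return line_open - _open(f, disc)
```

A one-pixel-wide line is preserved by the matching oriented opening but erased by the isotropic one, so it scores highly; a compact blob is preserved by both, so the difference cancels.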
Efficient 3D Face Recognition with Gabor Patched Spectral Regression
In this paper, we utilize a novel framework for 3D face recognition, called 3D Gabor Patched Spectral Regression (3D GPSR), which can overcome some of the continuing challenges encountered with 2D or 3D facial images. In this active field, obstacles such as expression variations, pose correction and data noise degrade performance significantly. Our proposed system addresses these problems by first extracting the main facial area to remove irrelevant information corresponding to the shoulders and neck. Pose correction is used to minimize the influence of large pose variations, and then the normalized depth and grey images can be obtained. Owing to their better time-frequency characteristics and a distinctive biological background, Gabor features are extracted on the depth images, known as 3D Gabor faces. Data noise is mainly caused by distorted meshes, varieties of subordinates and misalignment. To solve these problems, we introduce a Patched Spectral Regression strategy, which makes good use of the robustness and efficiency of accurate patched discriminant low-dimensional features and minimizes the effect of the noise term. Computational analysis shows that spectral regression is much faster than traditional approaches. Our experiments are based on the CASIA and FRGC 3D face databases, which contain a large amount of challenging data. Experimental results show that our framework consistently outperforms existing methods, with the distinctive characteristics of efficiency, robustness and generality.
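The Gabor features mentioned above come from convolving the normalized depth image with a bank of oriented Gabor kernels, each a sinusoid at some orientation modulated by a Gaussian envelope. Below is a minimal sketch of one real (cosine) kernel; the size, sigma, wavelength and zero-DC normalization are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real Gabor kernel: cosine carrier at orientation theta under a
    Gaussian envelope. Zero-DC so flat depth regions respond with 0.
    Parameter values are illustrative, not those of the paper."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = (np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
         * np.cos(2 * np.pi * xr / lam))
    return g - g.mean()
```

A bank built by varying `theta` (and the scale parameters) and convolved with the depth image produces the multi-orientation responses that Gabor-face style features are assembled from.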