25 research outputs found

    Performances of proposed normalization algorithm for iris recognition

    Iris recognition achieves very high accuracy in comparison with many other biometric features. The iris pattern differs even between the right and left eyes of the same person; it is unique to each eye. This paper proposes an algorithm to recognize people from iris images. The algorithm consists of three stages. In the first stage, the segmentation process uses circular Hough transforms to find the region of interest (ROI) in a given eye image. The proposed normalization algorithm then generates polar images and enhances them using a modified Daugman's rubber sheet model; the last step divides the enhanced polar image into 16 divisions of the iris region, so that the normalized image consists of 16 small regions of constant dimensions. The Gray-Level Co-occurrence Matrices (GLCM) technique extracts texture features from the normalized image; the features extracted are the contrast, correlation, energy, and homogeneity of the iris. In the last stage, a classification technique, discriminant analysis (DA), is employed to evaluate the proposed normalization algorithm. We compared the proposed normalization algorithm with nine other normalization algorithms. The DA technique produces excellent classification performance with 100% accuracy. We also compare our results with previous results and find that the proposed iris recognition algorithm is an effective system for detecting and recognizing a person digitally, so it can be used for security in buildings, airports, and many other automated applications.
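
    A minimal sketch of the pipeline outlined in this abstract (circular Hough segmentation, polar unwrapping, and GLCM texture measures), assuming OpenCV and scikit-image 0.19 or later. The Hough parameters and the 64x256 polar size are illustrative choices; the 16-way division and the four GLCM properties follow the abstract, while the enhancement step and the discriminant-analysis stage are omitted. This is not the paper's implementation.

    ```python
    # Illustrative approximation of the described stages; not the paper's code.
    import cv2
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

    def iris_glcm_features(eye_gray, n_divisions=16):
        # 1. Segmentation: locate the iris boundary with a circular Hough transform.
        blurred = cv2.medianBlur(eye_gray, 5)
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, 100,
                                   param1=100, param2=30, minRadius=30, maxRadius=120)
        if circles is None:
            raise ValueError("no iris circle found")
        cx, cy, r = np.round(circles[0, 0]).astype(int)

        # 2. Normalization: unwrap the circular ROI into a fixed-size polar strip
        #    (a simplified stand-in for the rubber-sheet mapping).
        polar = cv2.warpPolar(eye_gray, (64, 256), (float(cx), float(cy)), float(r),
                              cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)

        # 3. Split the strip into 16 sub-regions and compute the four GLCM
        #    properties named in the abstract for each one.
        features = []
        for block in np.array_split(polar, n_divisions, axis=0):
            glcm = graycomatrix(block, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            features.extend(graycoprops(glcm, prop)[0, 0]
                            for prop in ("contrast", "correlation", "energy", "homogeneity"))
        return np.array(features)  # 16 regions x 4 properties = 64-dimensional vector
    ```

    A discriminant-analysis classifier (for example, scikit-learn's LinearDiscriminantAnalysis) would then be trained on these 64-dimensional feature vectors.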

    CES-513 Stages for Developing Control Systems using EMG and EEG Signals: A survey

    Bio-signals such as EMG (electromyography), EEG (electroencephalography), EOG (electrooculography), and ECG (electrocardiography) have recently been deployed to develop control systems that improve the quality of life of disabled and elderly people. This technical report reviews the current deployment of these state-of-the-art control systems and explains some challenging issues. In particular, the stages for developing EMG- and EEG-based control systems are categorized, namely data acquisition, data segmentation, feature extraction, classification, and controller. Some related bio-control applications are outlined, and a brief conclusion is given.
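
    As a concrete illustration of the surveyed stages (segmentation, feature extraction, classification, controller), the sketch below assembles a minimal EMG example. The window lengths, the time-domain features, and the LDA classifier are example choices rather than recommendations from the report, and train_signal/train_labels are hypothetical placeholders for recorded, labelled data.

    ```python
    # Minimal sketch of an EMG-based control pipeline:
    # segmentation -> feature extraction -> classification -> controller command.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def segment(emg, fs, win_s=0.25, step_s=0.125):
        """Slice a 1-D EMG signal into overlapping analysis windows."""
        win, step = int(win_s * fs), int(step_s * fs)
        return [emg[i:i + win] for i in range(0, len(emg) - win + 1, step)]

    def time_domain_features(w):
        """Common time-domain features for one window."""
        mav = np.mean(np.abs(w))               # mean absolute value
        rms = np.sqrt(np.mean(w ** 2))         # root mean square
        wl = np.sum(np.abs(np.diff(w)))        # waveform length
        zc = np.sum(np.diff(np.sign(w)) != 0)  # zero crossings
        return np.array([mav, rms, wl, zc])

    # Training (placeholders: train_signal is a recorded EMG trace, train_labels
    # holds one intended command per window):
    # X = np.vstack([time_domain_features(w) for w in segment(train_signal, fs=1000)])
    # clf = LinearDiscriminantAnalysis().fit(X, train_labels)
    # Controller stage: map each predicted label to a device command,
    # e.g. {"rest": stop, "flex": move_forward, "extend": move_back}.
    ```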

    Statistical facial feature extraction and lip segmentation

    Facial features such as lip corners, eye corners, and the nose tip are critical points in a human face. Robust extraction of such facial feature locations is an important problem used in a wide range of applications, including audio-visual speech recognition, human-computer interaction, emotion recognition, fatigue detection, and gesture recognition. In this thesis, we develop a probabilistic method for facial feature extraction. This technique is able to automatically learn location and texture information of facial features from a training set. Facial feature locations are extracted from face regions using joint distributions of locations and textures represented with mixtures of Gaussians. This formulation results in a maximum likelihood (ML) optimization problem which can be solved using either a gradient ascent or a Newton-type algorithm. Extracted lip corner locations are then used to initialize a lip segmentation algorithm to extract the lip contours. We develop a level-set-based method that utilizes adaptive color distributions and shape priors for lip segmentation. More precisely, an implicit curve representation which learns the color information of lip and non-lip points from a training set is employed. The model can adapt itself to the image of interest using a coarse elliptical region. The extracted lip contour provides detailed information about the lip shape. Both methods are tested using different databases for facial feature extraction and lip segmentation. It is shown that the proposed methods achieve better results compared to conventional methods. Our facial feature extraction method outperforms active appearance models in terms of pixel errors, while our lip segmentation method outperforms region-based level-set curve evolutions in terms of precision and recall results.
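
    A minimal sketch of the probabilistic idea behind the feature extractor: joint (location, texture) vectors of a facial feature are modelled with a Gaussian mixture learned from training data, and candidate points in a new face region are ranked by likelihood. It uses scikit-learn's GaussianMixture as a stand-in rather than the thesis's gradient-ascent/Newton ML formulation, and all names and dimensions are illustrative.

    ```python
    # Stand-in for the mixture-of-Gaussians location/texture model; not the
    # thesis's implementation.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_feature_model(locations, textures, n_components=3):
        """locations: (N, 2) normalized (x, y); textures: (N, D) patch descriptors."""
        X = np.hstack([locations, textures])
        return GaussianMixture(n_components=n_components, covariance_type="full").fit(X)

    def best_candidate(model, cand_locations, cand_textures):
        """Index of the candidate point with the highest joint log-likelihood."""
        X = np.hstack([cand_locations, cand_textures])
        return int(np.argmax(model.score_samples(X)))
    ```

    In this sketch, the highest-likelihood lip-corner candidates would then seed the level-set lip segmentation described above.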

    Social cognition and transcranial stimulation of the temporoparietal junction

    This study examined behavioural and electrophysiological effects of transcranial stimulation on social cognitive abilities. Positive effects on aspects of emotion processing were observed, which were also related to neurophysiological markers. Stimulation also interacted with autism-relevant trait scores, underlining the relevance of this research to potential clinical applications.

    3D Pedestrian Tracking and Virtual Reconstruction of Ceramic Vessels Using Geometric and Color Cues

    Object tracking using cameras has many applications, ranging from monitoring children and the elderly to behavior analysis, entertainment, and homeland security. This thesis concentrates on the problem of tracking person(s) of interest in crowded scenes (e.g., airports, train stations, malls), rendering their locations in time and space along with high-quality close-up images of the person for recognition. The tracking is achieved using a combination of overhead cameras for 3D tracking and a network of pan-tilt-zoom (PTZ) cameras to obtain close-up frontal face images. Based on projective geometry, the overhead cameras track people using salient and easily computable feature points such as head points. When the obtained head point is not accurate enough, the color information of the head tops across subsequent frames is integrated to detect and track people. To capture the best frontal face images of a target across time, a PTZ camera scheduling scheme is proposed, where the 'best' PTZ camera is selected based on the capture-quality (as close as possible to a frontal view) and handoff-success (response time needed by the newly selected camera to move from its current to the desired state) probabilities. The experiments show that the 3D tracking errors are very small (less than 5 cm with 14 people crowding an area of around 4 m²) and that the frontal face images are captured effectively, with most of them centered in the frame. Computational archaeology is becoming a success story of applying computational tools to the reconstruction of vessels obtained from digs, freeing the expert from hours of intensive labor in manually stitching shards into meaningful vessels. In this thesis, we concentrate on the use of geometric and color information of the fragments for 3D virtual reconstruction of broken ceramic vessels. Generic models generated by the experts as a rendition of what the original vessel may have looked like are also utilized. The generic models need not be identical to the original vessel, but are within a geometric transformation of it in most of its parts. The markings on the 3D surfaces of fragments and generic models are extracted based on their color cues. Ceramic fragments are then aligned against the corresponding generic models based on the geometric relation between the extracted markings. The alignments yield sub-scanner-resolution fitting errors. Ph.D., Electrical Engineering -- Drexel University, 201
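
    The PTZ scheduling rule summarized above, selecting the camera with the best combined capture-quality and handoff-success probabilities, can be sketched as follows. Combining the two probabilities as a product is an assumption made here for illustration, and the probability values are placeholders; the thesis derives these quantities from viewing geometry and camera response times.

    ```python
    # Sketch of PTZ camera scheduling by capture-quality and handoff-success
    # probabilities; the values and the product rule are illustrative.
    from dataclasses import dataclass

    @dataclass
    class PTZCamera:
        name: str
        p_frontal: float  # probability of a near-frontal capture of the target
        p_handoff: float  # probability of reaching the required pose in time

    def schedule_ptz(cameras):
        """Select the camera maximizing the joint capture/handoff probability."""
        return max(cameras, key=lambda c: c.p_frontal * c.p_handoff)

    # Example: camera B wins despite a slightly less frontal view because it can
    # reach the target pose reliably in time.
    cams = [PTZCamera("A", p_frontal=0.9, p_handoff=0.5),
            PTZCamera("B", p_frontal=0.8, p_handoff=0.9)]
    print(schedule_ptz(cams).name)  # -> B
    ```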

    Learning cognitive maps: Finding useful structure in an uncertain world

    In this chapter we will describe the central mechanisms that influence how people learn about large-scale space. We will focus particularly on how these mechanisms enable people to cope effectively both with the uncertainty inherent in a constantly changing world and with the high information content of natural environments. The major lessons are that humans get by with a 'less is more' approach to building structure, and that they are able to quickly adapt to environmental changes thanks to a range of general-purpose mechanisms. By looking at abstract principles, instead of concrete implementation details, it is shown that the study of human learning can provide valuable lessons for robotics. Finally, these issues are discussed in the context of an implementation on a mobile robot. © 2007 Springer-Verlag Berlin Heidelberg

    Automatic facial recognition based on facial feature analysis
