
    Emotion recognition using facial feature extraction

    Computerized emotion recognition systems can be powerful tools for solving problems in a wide range of fields, including education, healthcare, and marketing. Existing systems use digital images or live video to track facial expressions on a person's face and deduce that person's emotional state. The research presented in this thesis explores combinations of several facial feature extraction techniques with different classifier algorithms. The feature extraction techniques used in this research were the Discrete Cosine/Sine Transforms, the Fast Walsh-Hadamard Transform, Principal Component Analysis, and a novel method called XPoint. Features were extracted from both global (using the entire facial image) and local (using only facial regions such as the mouth or eyes) contexts and classified with Linear Discriminant Analysis and k-Nearest Neighbor algorithms. Some experiments also fused several of these features into one system in an effort to improve accuracy further. The accuracy of each feature extraction method/classifier combination was calculated and discussed. The best-performing combinations produced systems that were between 85% and 90% accurate. The most accurate systems used Discrete Sine Transform features from global and local contexts with a Linear Discriminant Analysis classifier, as well as fusion of all features in a Linear Discriminant Analysis classifier.
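
    A minimal sketch of the kind of pipeline this abstract describes: 2-D DCT coefficients extracted from a face-region crop and fed to a Linear Discriminant Analysis classifier. The coefficient cut-off, array sizes, and toy data below are assumptions for illustration, not the thesis's actual parameters or code.

```python
# Illustrative only: low-frequency 2-D DCT features + LDA classification.
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

def dct_features(region, keep=16):
    """Return the top-left keep x keep block of 2-D DCT coefficients."""
    coeffs = dctn(region.astype(float), norm="ortho")
    return coeffs[:keep, :keep].ravel()

# Toy stand-in data: 200 random 64x64 "mouth region" crops with binary labels.
rng = np.random.default_rng(0)
regions = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

X = np.array([dct_features(r) for r in regions])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```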

    A Robust Face Recognition Algorithm for Real-World Applications

    The proposed face recognition algorithm represents local facial regions with the Discrete Cosine Transform (DCT). The local representation provides robustness against appearance variations in local regions caused by partial face occlusion or facial expression, whereas utilizing the frequency information provides robustness against changes in illumination. The algorithm also bypasses the facial feature localization step and formulates face alignment as an optimization problem in the classification stage.
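
    A minimal sketch of a block-wise local DCT descriptor of the general kind the abstract refers to. The block size, number of coefficients kept, and the decision to drop the DC term are assumptions chosen for illustration, not the paper's exact settings.

```python
# Illustrative only: block-wise DCT descriptor over local facial regions.
import numpy as np
from scipy.fft import dctn

def local_dct_descriptor(face, block=8, keep=10):
    """Split a grayscale face image into non-overlapping blocks and keep the
    first `keep` low-order AC coefficients of each block (row-major order here
    for simplicity; a JPEG-style zig-zag scan is a common alternative)."""
    h, w = face.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = dctn(face[y:y + block, x:x + block].astype(float), norm="ortho")
            flat = c.ravel()
            # Drop the DC term (flat[0]) so slowly varying illumination has
            # less influence on the descriptor.
            feats.append(flat[1:keep + 1])
    return np.concatenate(feats)

face = np.random.default_rng(1).random((64, 64))  # stand-in aligned face crop
print(local_dct_descriptor(face).shape)
```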

    A preliminary experiment definition for video landmark acquisition and tracking

    Six scientific objectives/experiments were derived, covering agriculture/forestry/range resources, land use, geology/mineral resources, water resources, marine resources, and environmental surveys. Computer calculations were then made of the spectral radiance signature of each of 25 candidate targets as seen by a satellite sensor system. An imaging system capable of recognizing, acquiring, and tracking specific generic types of surface features was defined. A preliminary experiment definition and design of a video Landmark Acquisition and Tracking system is given. This device will search a 10-mile swath while orbiting the earth, looking for land/water interfaces such as coastlines and rivers.
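
    A toy sketch of one simple way land/water interfaces could be flagged in a single-band digital scene by thresholding and marking transitions. The threshold value and the synthetic scene are assumptions for illustration only and do not reflect the instrument's actual onboard processing.

```python
# Illustrative only: flag land/water interface pixels in a single-band image.
import numpy as np

rng = np.random.default_rng(2)
scene = rng.random((128, 128))          # stand-in radiance image
water = scene < 0.35                    # assume water appears dark in this band

# A pixel lies on an interface if an adjacent pixel differs in the water mask.
edges = np.zeros_like(water)
edges[:-1, :] |= water[:-1, :] ^ water[1:, :]
edges[:, :-1] |= water[:, :-1] ^ water[:, 1:]
print("interface pixels:", int(edges.sum()))
```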

    Road terrain detection for Advanced Driver Assistance Systems

    Kühnl T. Road terrain detection for Advanced Driver Assistance Systems. Bielefeld: Bielefeld University; 2013.

    Semi-Automatic Registration Utility for MR Brain Imaging of Small Animals

    Advancements in medical technologies have allowed more accurate diagnosis and quantitative assessment. Magnetic Resonance Imaging is one of the most effective and critical technologies in modern diagnosis. However, preprocessing tasks are required before MR images can be used in many research applications, and registration is one of those preprocessing tasks. In this research, a semi-automatic utility was developed for MRI registration of small animals, focusing on 2D rigid body registration. The test results show that the developed utility performs registration well for MRI of small animals in both intra-subject and inter-subject cases.
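
    A minimal sketch of 2D rigid-body registration (rotation plus translation) by minimizing the mean-squared intensity difference, the general problem the utility addresses. The optimizer choice, interpolation settings, and synthetic images are assumptions; the utility's actual implementation is not shown here.

```python
# Illustrative only: 2-D rigid registration by intensity-difference minimisation.
import numpy as np
from scipy import ndimage, optimize

def transform(img, angle_deg, tx, ty):
    """Rotate about the image centre, then translate by (tx, ty)."""
    rotated = ndimage.rotate(img, angle_deg, reshape=False, order=1)
    return ndimage.shift(rotated, (ty, tx), order=1)

def cost(params, fixed, moving):
    angle, tx, ty = params
    return np.mean((fixed - transform(moving, angle, tx, ty)) ** 2)

rng = np.random.default_rng(3)
fixed = ndimage.gaussian_filter(rng.random((64, 64)), 3)   # stand-in MR slice
moving = transform(fixed, -5.0, 2.0, -3.0)                 # known misalignment

result = optimize.minimize(cost, x0=[0.0, 0.0, 0.0], args=(fixed, moving),
                           method="Powell")
print("recovered (angle, tx, ty):", np.round(result.x, 2))
```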

    Symmetry-Adapted Machine Learning for Information Security

    Symmetry-adapted machine learning has shown encouraging ability to mitigate security risks in information and communication technology (ICT) systems. It is a subset of artificial intelligence (AI) that relies on the principle of anticipating future events by learning from past events or historical data. The autonomous nature of symmetry-adapted machine learning supports effective data processing and analysis for security detection in ICT systems without human intervention. Many industries are developing machine-learning-adapted solutions to support security for smart hardware, distributed computing, and the cloud. In our Special Issue book, we focus on the deployment of symmetry-adapted machine learning for information security in various application areas. This security approach can support effective methods for handling the dynamic nature of security attacks by extracting and analyzing data to identify hidden patterns. The main topics of this Issue include malware classification, intrusion detection systems, image watermarking, color image watermarking, a battlefield target aggregation behavior recognition model, IP cameras, Internet of Things (IoT) security, service function chains, indoor positioning systems, and cryptanalysis.

    Computer Vision Based Early Intraocular Pressure Assessment From Frontal Eye Images

    Intraocular Pressure (IOP) refers to the pressure inside the eye. A gradual increase of IOP and high IOP are conditions or symptoms that may lead to diseases such as glaucoma, and therefore must be closely monitored. As the pressure in the eye increases, different parts of the eye may become affected and eventually damaged. Early detection is an effective way to address rising eye pressure. Existing IOP monitoring tools include eye tests at clinical facilities and computer-aided techniques based on fundus and optic nerve images. In this work, a new computer vision-based smart healthcare framework is presented to evaluate the risk of high intraocular pressure early on from frontal eye images. The framework determines the status of IOP by analyzing frontal eye images using image processing and machine learning techniques. A database of images from the Princess Basma Hospital was used in this work. The database contains 400 eye images: 200 images with normal IOP and 200 with high eye pressure. This study proposes novel features for IOP determination from two experiments. The first experiment extracts the sclera using the circular Hough transform, after which four features are extracted from the whole sclera: mean redness level, red area percentage, contour area, and contour height. The pupil/iris diameter ratio feature is also extracted from the frontal eye image after a series of pre-processing steps. The second experiment segments the sclera and iris using a fully convolutional neural network, after which six features are extracted from only part of the segmented sclera and iris: mean redness level, red area percentage, contour area, contour distance, and contour angle, along with the pupil/iris diameter ratio. Once the features are extracted, classification techniques are applied to train and test on the images and features and determine the status of each patient's eye pressure. The first experiment adopted neural network and support vector machine algorithms to detect the status of intraocular pressure; the second adopted support vector machine and decision tree algorithms. For both experiments, the framework detects the status of IOP (normal or high) with high accuracy. This computer vision-based approach provides evidence of the relationship between the extracted frontal eye image features and IOP, which had not previously been investigated through automated image processing and machine learning techniques applied to frontal eye images.
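
    A minimal sketch of the first experiment's general flavor: locate the iris with a circular Hough transform, compute a crude sclera-redness feature and the detected radius, and classify with an SVM. The Hough parameters, redness definition, image paths, and labels below are assumptions for illustration, not the paper's validated pipeline.

```python
# Illustrative only: Hough-based iris localisation + simple redness feature + SVM.
import cv2
import numpy as np
from sklearn.svm import SVC

def eye_features(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                               param1=80, param2=30, minRadius=20, maxRadius=80)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int).tolist()  # strongest circle ~ iris
    mask = np.zeros(gray.shape, np.uint8)
    cv2.circle(mask, (x, y), r, 255, -1)
    sclera = (mask == 0)                                    # everything outside the iris
    b, g, rch = cv2.split(bgr_image.astype(float))
    mean_redness = float(np.mean(rch[sclera] - (g[sclera] + b[sclera]) / 2))
    return [mean_redness, r]

# Toy training loop over hypothetical image/label pairs (paths are placeholders).
feats, labels = [], []
for path, label in [("eye_normal.jpg", 0), ("eye_high_iop.jpg", 1)]:
    img = cv2.imread(path)
    f = eye_features(img) if img is not None else None
    if f is not None:
        feats.append(f)
        labels.append(label)

if len(set(labels)) > 1:
    clf = SVC(kernel="rbf").fit(feats, labels)
```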