
    Evaluation of Features in Detection of Dislike Responses to Audio–Visual Stimuli from EEG Signals

    There is a strong correlation between the like/dislike responses to audio–visual stimuli and the emotional arousal and valence reactions of a person. In the present work, our attention is focused on the automated detection of dislike responses based on EEG activity when music videos are used as audio–visual stimuli. Specifically, we investigate the discriminative capacity of Logarithmic Energy (LogE), Linear Frequency Cepstral Coefficients (LFCC), Power Spectral Density (PSD) and Discrete Wavelet Transform (DWT)-based EEG features, computed with and without segmentation of the EEG signal, on the dislike detection task. We carried out a comparative evaluation of eighteen modifications of the above-mentioned EEG features that cover different frequency bands and use different energy decomposition methods and spectral resolutions. For that purpose, we made use of the Naïve Bayes (NB) classifier, Classification and Regression Trees (CART), the k-Nearest Neighbors (kNN) classifier, and a Support Vector Machine (SVM) classifier with a radial basis function (RBF) kernel trained with the Sequential Minimal Optimization (SMO) method. The experimental evaluation was performed on the well-known and widely used DEAP dataset. A classification accuracy of up to 98.6% was observed for the best performing combination of pre-processing, EEG features and classifier. These results support that the automated detection of like/dislike reactions based on EEG activity is feasible in a personalized setup, which opens opportunities for incorporating such functionality into entertainment, healthcare and security applications.
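
    As a rough illustration of the kind of pipeline the abstract describes, the sketch below computes per-channel Logarithmic Energy and band-wise PSD features and feeds them to an RBF-kernel SVM. The random trial array, labels, band edges and all parameter values are placeholders rather than the authors' exact configuration, and the LFCC and DWT features are omitted.

```python
# Hypothetical sketch: LogE and band-wise PSD features from DEAP-style EEG
# trials, classified with an RBF-kernel SVM. Data loading, channel selection
# and the remaining feature families are assumed to happen elsewhere.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 128  # DEAP EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def trial_features(trial):
    """trial: (n_channels, n_samples) -> concatenated LogE + band-power features."""
    feats = []
    for ch in trial:
        feats.append(np.log(np.sum(ch ** 2) + 1e-12))         # LogE of the channel
        freqs, psd = welch(ch, fs=FS, nperseg=FS * 2)          # spectral estimate
        for lo, hi in BANDS.values():
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(np.log(np.mean(psd[mask]) + 1e-12))   # mean band power
    return np.asarray(feats)

# Placeholder trials and like/dislike labels standing in for per-subject DEAP data.
rng = np.random.default_rng(0)
eeg_trials = rng.standard_normal((40, 32, FS * 60))  # (n_trials, n_channels, n_samples)
labels = rng.integers(0, 2, size=40)                 # 0 = like, 1 = dislike

X = np.stack([trial_features(t) for t in eeg_trials])
clf = SVC(kernel="rbf", C=1.0, gamma="scale")        # libsvm's solver is SMO-based
print(cross_val_score(clf, X, labels, cv=5).mean())
```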

    Improving Facial Emotion Recognition with Image processing and Deep Learning

    Humans often use facial expressions along with words in order to communicate effectively. There has been extensive study of how facial emotion can be classified with computer vision methodologies, with varying levels of success given the challenges and limitations of available databases, such as static data or facial capture in non-real environments. Given this, we believe that new preprocessing techniques are required to improve the accuracy of facial emotion recognition models. In this paper, we propose a new yet simple method for facial expression recognition that enhances accuracy. We conducted our experiments on the FER-2013 dataset, which contains static facial images. We utilized unsharp masking and histogram equalization to emphasize the texture and details of the images. We implemented Convolutional Neural Networks (CNNs) to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. We also employed pre-trained models such as ResNet-50, SENet-50, VGG16, and FaceNet, and applied transfer learning to achieve an accuracy of 76.01% using an ensemble of seven models.
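
    A minimal sketch of the described preprocessing, assuming OpenCV and a grayscale FER-2013 face image on disk: histogram equalization followed by an unsharp mask, producing a normalized array that a CNN could consume. The file path, blur sigma and blending weights are illustrative assumptions, not the paper's exact settings.

```python
# Hypothetical sketch: contrast equalization plus an unsharp mask applied to a
# 48x48 grayscale face before it is fed to a CNN. The CNN itself is omitted.
import cv2
import numpy as np

def preprocess_face(gray: np.ndarray) -> np.ndarray:
    """Equalize contrast, then sharpen texture and edges with an unsharp mask."""
    equalized = cv2.equalizeHist(gray)                    # spread the intensity histogram
    blurred = cv2.GaussianBlur(equalized, (0, 0), 3)      # low-pass copy (sigma = 3)
    # Unsharp mask: boost the original by the detail it loses when blurred.
    return cv2.addWeighted(equalized, 1.5, blurred, -0.5, 0)

img = cv2.imread("fer2013_sample.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
if img is not None:
    x = preprocess_face(img).astype(np.float32) / 255.0       # normalized CNN input
```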

    Speaker Recognition Using Machine Learning Techniques

    Speaker recognition is a technique of identifying the person talking to a machine from voice features and acoustics. It has multiple applications in fields such as Human Computer Interaction (HCI), biometrics, security, and the Internet of Things (IoT). With advancements in technology, hardware is getting more powerful and software is becoming smarter, and the use of devices to interact effectively with humans and perform complex calculations is increasing accordingly. This is where speaker recognition is important, as it facilitates seamless communication between humans and computers. Additionally, the field of security has seen a rise in biometrics. At present, multiple biometric techniques co-exist, for instance iris, fingerprint, voice, and facial recognition. Voice is one metric which, apart from being natural to users, provides comparable and sometimes even higher levels of security than some traditional biometric approaches. Hence, it is a widely accepted form of biometric technique and is constantly being studied for further improvements. This study aims to evaluate different pre-processing, feature extraction, and machine learning techniques on audio recorded in unconstrained and natural environments, to determine which combination works well for speaker recognition and classification. The report presents several methods of audio pre-processing, such as trimming, split and merge, noise reduction, and vocal enhancement, to enhance the audio obtained from real-world situations. Additionally, a text-independent approach is used, which makes the model flexible across languages. Mel Frequency Cepstral Coefficients (MFCC) are extracted for each recording, along with their differentials and accelerations, to evaluate machine learning classification techniques such as kNN, Support Vector Machines, and Random Forest classifiers. Lastly, the approaches are evaluated against existing research to study which techniques perform well on these sets of audio recordings.
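
    The sketch below illustrates the described feature pipeline: MFCCs with their differentials and accelerations, mean-pooled per utterance and passed to the three classifier families named above. The synthetic waveforms, speaker labels and hyperparameters are placeholders, and the earlier pre-processing steps (trimming, noise reduction, vocal enhancement) are assumed to have already been applied.

```python
# Hypothetical sketch: per-utterance MFCC + delta + delta-delta features with
# kNN, SVM and Random Forest classifiers from scikit-learn.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def utterance_features(y: np.ndarray, sr: int = 16000) -> np.ndarray:
    """MFCCs plus their differentials and accelerations, mean-pooled to 39 dims."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    delta = librosa.feature.delta(mfcc)              # differentials
    delta2 = librosa.feature.delta(mfcc, order=2)    # accelerations
    return np.vstack([mfcc, delta, delta2]).mean(axis=1)

# Random waveforms stand in for the trimmed, denoised real recordings;
# in practice each y would come from librosa.load(path, sr=16000).
rng = np.random.default_rng(0)
waveforms = [rng.standard_normal(16000 * 3) for _ in range(6)]
speakers = ["spk1", "spk1", "spk1", "spk2", "spk2", "spk2"]
X = np.stack([utterance_features(y) for y in waveforms])

for clf in (KNeighborsClassifier(n_neighbors=1),
            SVC(kernel="rbf"),
            RandomForestClassifier(n_estimators=100)):
    clf.fit(X, speakers)
    print(type(clf).__name__, clf.score(X, speakers))  # training fit, illustration only
```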

    A Human−Computer Interface Replacing Mouse and Keyboard for Individuals with Limited Upper Limb Mobility

    People with physical disabilities in their upper extremities face serious issues in using classical input devices due to limited movement capability and precision. This article suggests an alternative input concept and presents corresponding input devices. The proposed interface combines an inertial measurement unit and force sensing resistors, which can replace mouse and keyboard. Head motions are mapped to mouse pointer positions, while mouse button actions are triggered by contracting the mastication muscles. The contact pressures of each fingertip are acquired to replace the conventional keyboard. To allow for complex text entry, the sensory concept is complemented by an ambiguous keyboard layout with ten keys, and the related word prediction function provides disambiguation at word level. Haptic feedback is provided to users corresponding to their virtual keystrokes for enhanced closed-loop interaction. This alternative input system enables text input as well as the emulation of a two-button mouse.
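
    A toy sketch of the head-motion-to-pointer mapping described above, assuming incremental yaw/pitch angles from the IMU and a normalized mastication-muscle signal. The gain, threshold, screen size and function names are illustrative; sensor acquisition, calibration and the FSR keyboard are out of scope.

```python
# Hypothetical sketch: map relative head yaw/pitch to pointer coordinates and
# treat a muscle contraction above a threshold as a mouse-button press.
from dataclasses import dataclass

SCREEN_W, SCREEN_H = 1920, 1080
GAIN_DEG_TO_PX = 40.0     # pixels of pointer travel per degree of head motion (assumed)
CLICK_THRESHOLD = 0.6     # normalized contraction level counted as a click (assumed)

@dataclass
class PointerState:
    x: float = SCREEN_W / 2
    y: float = SCREEN_H / 2

def update_pointer(state: PointerState, d_yaw_deg: float, d_pitch_deg: float) -> PointerState:
    """Map incremental head yaw/pitch (degrees) to clamped pointer coordinates."""
    state.x = min(max(state.x + d_yaw_deg * GAIN_DEG_TO_PX, 0), SCREEN_W - 1)
    state.y = min(max(state.y - d_pitch_deg * GAIN_DEG_TO_PX, 0), SCREEN_H - 1)
    return state

def is_click(muscle_level: float) -> bool:
    """Treat a mastication-muscle contraction above the threshold as a button press."""
    return muscle_level >= CLICK_THRESHOLD

state = PointerState()
update_pointer(state, d_yaw_deg=2.0, d_pitch_deg=-1.0)  # small turn right, nod down
print(state, is_click(0.72))
```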

    An intelligent decision support system for acute lymphoblastic leukaemia detection

    The morphological analysis of blood smear slides by haematologists or haematopathologists is one of the diagnostic procedures available to evaluate the presence of acute leukaemia. This operation is a complex and costly process, and often lacks standardized accuracy owing to a variety of factors, including insufficient expertise and operator fatigue. This research proposes an intelligent decision support system for automatic detection of acute lymphoblastic leukaemia (ALL) using microscopic blood smear images to overcome the above barrier. The work has four key stages. (1) Firstly, a modified marker-controlled watershed algorithm integrated with morphological operations is proposed for the segmentation of the membrane of lymphocyte and lymphoblast cell images. The aim of this stage is to isolate a lymphocyte/lymphoblast cell membrane from touching and overlapping red blood cells, platelets and artefacts in the microscopic peripheral blood smear sub-images. (2) Secondly, a novel clustering algorithm with a stimulating discriminant measure (SDM) of both within- and between-cluster scatter variances is proposed to produce robust segmentation of the nucleus and cytoplasm of lymphocytic cell membranes. The SDM measures are used in conjunction with a Genetic Algorithm for the clustering of nucleus, cytoplasm, and background regions. (3) Thirdly, a total of eighty features consisting of shape, texture, and colour information are extracted from the nucleus and cytoplasm of the identified lymphocyte/lymphoblast images. (4) Finally, the proposed feature optimisation algorithm, a variant of Bare-Bones Particle Swarm Optimisation (BBPSO), is presented to identify the most significant discriminative characteristics of the nucleus and cytoplasm segmented by the SDM-based clustering algorithm. The proposed BBPSO variant incorporates Cuckoo Search, the Dragonfly Algorithm, BBPSO, local and global random walk operations with uniform combination, and Lévy flights to diversify the search and mitigate the premature convergence problem of the conventional BBPSO. In addition, it employs subswarm concepts, self-adaptive parameters, and convergence degree monitoring mechanisms to enable fast convergence. The optimal feature subsets identified by the proposed algorithm are subsequently used for ALL detection and classification. The proposed system achieves a highest classification accuracy of 96.04% and significantly outperforms related meta-heuristic search methods and related research on ALL detection.
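
    The sketch below illustrates stage (1) only, under the assumption that a binary foreground mask of the cells is already available: a distance-transform-based, marker-controlled watershed with a morphological opening to separate touching objects. The marker spacing, structuring element and toy mask are assumptions, not the authors' exact algorithm, and stages (2)-(4) are not shown.

```python
# Hypothetical sketch: marker-controlled watershed with a morphological opening
# to split touching/overlapping cells in a binary mask (scikit-image / SciPy).
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import binary_opening, disk
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_cells(cell_mask: np.ndarray) -> np.ndarray:
    """Label touching/overlapping cells in a boolean foreground mask."""
    cleaned = binary_opening(cell_mask, disk(3))            # drop platelet-sized specks
    distance = ndi.distance_transform_edt(cleaned)          # distance to background
    peaks = peak_local_max(distance, min_distance=15,
                           labels=ndi.label(cleaned)[0])    # one peak per cell centre
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)  # seed markers
    return watershed(-distance, markers, mask=cleaned)      # flood from the markers

# Toy mask with two overlapping discs standing in for touching cells.
yy, xx = np.mgrid[:200, :200]
cell_mask = (((yy - 90) ** 2 + (xx - 80) ** 2 < 40 ** 2) |
             ((yy - 110) ** 2 + (xx - 130) ** 2 < 40 ** 2))
labels = segment_cells(cell_mask)
print(labels.max(), "cells found")
```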

    Essays on Technology in Presence of Globalization

    Technology has long been known to enable globalization in ways previously not thought possible, with instantaneous communication allowing members of organizations all across the globe to communicate and share information with little to no delay. However, as the effects of globalization have become more prominent, they have in turn helped to shape the very technologies that enable these processes. These three essays analyze three examples of how these two processes, globalization and technological development, impact one another. The first looks at the national policy level, attempting to understand how increased possibilities for inside leakers can force governments to consider asylum requests. The second analyzes the issue at the level of corporations, attempting to understand how and why business leaders choose to hire individuals from other countries. The third and final essay analyzes the issue at the most micro level, studying a potential application that could help analyze linguistic factors that have taken a more prominent role in a more globalized society.

    Quantifying Quality of Life

    Describes technological methods and tools for objective and quantitative assessment of QoL. Appraises technology-enabled methods for incorporating QoL measurements in medicine. Highlights the success factors for adoption and scaling of technology-enabled methods. This open access book presents the rise of technology-enabled methods and tools for objective, quantitative assessment of Quality of Life (QoL), while following the WHOQOL model. It is an in-depth resource describing and examining state-of-the-art, minimally obtrusive, ubiquitous technologies. Highlighting the required factors for adoption and scaling of technology-enabled methods and tools for QoL assessment, it also describes how these technologies can be leveraged for behavior change, disease prevention, health management and long-term QoL enhancement in populations at large. Quantifying Quality of Life: Incorporating Daily Life into Medicine fills a gap in the field of QoL by providing assessment methods, techniques and tools. These assessments differ from the current methods that are now mostly infrequent, subjective, qualitative, memory-based, context-poor and sparse. Therefore, it is an ideal resource for physicians, physicians in training, software and hardware developers, computer scientists, data scientists, behavioural scientists, entrepreneurs, healthcare leaders and administrators who are seeking an up-to-date resource on this subject.
