
    Application of Computer Vision and Mobile Systems in Education: A Systematic Review

    The computer vision industry has experienced a significant surge in growth, resulting in numerous promising breakthroughs in computer intelligence. This review outlines the advantages and potential future implications of utilizing the technology in education. A total of 84 research publications were thoroughly scrutinized and analyzed. The study revealed that computer vision technology integrated with a mobile application is exceptionally useful for monitoring students' perceptions and mitigating academic dishonesty. It also facilitates the digitization of handwritten scripts for plagiarism detection and automates attendance tracking to free up valuable classroom time. Furthermore, several potential applications of computer vision technology in educational institutions are proposed to enhance students' learning in faculties such as engineering and medical science. The technology can also help create a safer campus environment by automatically detecting abnormal activities such as ragging, bullying, and harassment.
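
    As an illustration of the attendance-automation use case above, the sketch below matches faces in a classroom snapshot against a small roster using the open-source face_recognition library. The file names and the roster contents are hypothetical placeholders; a real deployment would also need enrolment workflows, match thresholds, and privacy safeguards.

    # A minimal sketch of face-recognition-based attendance (hypothetical files).
    import face_recognition

    # Enrolment: one reference photo per student -> one face encoding each.
    roster = {}
    for name, photo in [("alice", "alice.jpg"), ("bob", "bob.jpg")]:
        image = face_recognition.load_image_file(photo)
        roster[name] = face_recognition.face_encodings(image)[0]

    # One classroom snapshot; every detected face is matched against the roster.
    frame = face_recognition.load_image_file("classroom.jpg")
    present = set()
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(list(roster.values()), encoding)
        for name, matched in zip(roster, matches):
            if matched:
                present.add(name)

    print("Present:", sorted(present))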

    Face Emotion Recognition Based on Machine Learning: A Review

    Computers can now detect, understand, and evaluate emotions thanks to recent developments in machine learning and information fusion. Researchers across various sectors are increasingly interested in emotion identification, using facial expressions, words, body language, and posture to discern an individual's emotions. Nevertheless, the effectiveness of the first three cues may be limited, as individuals can consciously or unconsciously suppress their true feelings. This article explores various feature extraction techniques together with machine learning classifiers such as k-nearest neighbour, naive Bayes, support vector machine, and random forest, in line with established practice in emotion recognition. The paper has three primary objectives: first, to offer a comprehensive overview of affective computing by outlining essential theoretical concepts; second, to describe the current state of the art in emotion recognition; and third, to highlight important findings and conclusions from the literature, with an emphasis on key obstacles and possible future directions, especially in the development of machine learning algorithms for emotion identification.
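
    To make the classifier stage concrete, the sketch below trains the four reviewed classifiers on stand-in facial feature vectors with scikit-learn. The random features and the seven-class label set are illustrative assumptions; in practice X would come from a separate feature-extraction step such as facial landmarks.

    # A minimal sketch of the reviewed classifiers on synthetic feature data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 68))       # stand-in for facial feature vectors
    y = rng.integers(0, 7, size=300)     # seven basic emotion classes (assumed)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for clf in (KNeighborsClassifier(), GaussianNB(), SVC(), RandomForestClassifier()):
        clf.fit(X_train, y_train)
        print(type(clf).__name__, clf.score(X_test, y_test))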

    Sound Event Detection by Exploring Audio Sequence Modelling

    Everyday sounds in real-world environments are a powerful source of information through which humans interact with their surroundings. Humans can infer what is happening around them by listening to everyday sounds. At the same time, it is a challenging task for a computer algorithm in a smart device to automatically recognise, understand, and interpret everyday sounds. Sound event detection (SED) is the process of transcribing an audio recording into sound event tags with onset and offset time values. This involves classification and segmentation of sound events in the given audio recording. SED has numerous applications in everyday life, including security and surveillance, automation, healthcare monitoring, multimedia information retrieval, and assisted living technologies. SED is to everyday sounds what automatic speech recognition (ASR) is to speech and automatic music transcription (AMT) is to music. The fundamental questions in designing a sound recognition system are which portion of a sound event the system should analyse, and what proportion of a sound event the system should process, in order to claim a confident detection of that particular sound event. While the classification of sound events has improved considerably in recent years, the temporal segmentation of sound events has not improved to the same extent. The aim of this thesis is to propose and develop methods to improve the segmentation and classification of everyday sound events in SED models. In particular, this thesis explores the segmentation of sound events by investigating audio sequence encoding-based and audio sequence modelling-based methods, in an effort to improve overall sound event detection performance. In the first phase of this thesis, efforts are directed towards improving sound event detection by explicitly conditioning the audio sequence representations of an SED model using sound activity detection (SAD) and onset detection. To achieve this, we propose multi-task learning-based SED models in which SAD and onset detection serve as auxiliary tasks for the SED task. The next part of this thesis explores self-attention-based audio sequence modelling, which aggregates audio representations based on temporal relations within and between sound events, scored on the basis of the similarity of sound event portions in audio event sequences. We propose SED models that include memory-controlled, adaptive, dynamic, and source separation-induced self-attention variants, with the aim of improving overall sound recognition.
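
    A minimal PyTorch sketch of the multi-task idea described above: a shared recurrent encoder over log-mel frames, with frame-wise SED as the main task and SAD as an auxiliary task sharing the encoder. The GRU encoder, layer sizes, and the 0.5 auxiliary loss weight are illustrative assumptions, not the thesis's actual configuration.

    # A minimal multi-task SED/SAD sketch (illustrative sizes and loss weight).
    import torch
    import torch.nn as nn

    class MultiTaskSED(nn.Module):
        def __init__(self, n_mels=64, n_events=10, hidden=128):
            super().__init__()
            self.encoder = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
            self.sed_head = nn.Linear(2 * hidden, n_events)  # per-frame event scores
            self.sad_head = nn.Linear(2 * hidden, 1)         # per-frame activity score

        def forward(self, x):            # x: (batch, frames, n_mels)
            h, _ = self.encoder(x)
            return self.sed_head(h), self.sad_head(h)

    model = MultiTaskSED()
    x = torch.randn(8, 500, 64)          # a batch of log-mel spectrogram sequences
    sed_logits, sad_logits = model(x)

    sed_targets = torch.randint(0, 2, sed_logits.shape).float()
    sad_targets = sed_targets.max(dim=-1, keepdim=True).values  # active if any event is
    bce = nn.BCEWithLogitsLoss()
    loss = bce(sed_logits, sed_targets) + 0.5 * bce(sad_logits, sad_targets)
    loss.backward()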

    Integrating audio and visual modalities for multimodal personality trait recognition via hybrid deep learning

    Recently, personality trait recognition, which aims to identify people's psychological characteristics from first-impression behavioral data, has become an interesting and active topic in psychology, affective neuroscience, and artificial intelligence. To effectively exploit spatio-temporal cues in audio-visual modalities, this paper proposes a new method for multimodal personality trait recognition that integrates audio and visual modalities in a hybrid deep learning framework comprising convolutional neural networks (CNN), a bi-directional long short-term memory network (Bi-LSTM), and a Transformer network. In particular, a pre-trained deep audio CNN model is used to learn high-level segment-level audio features. A pre-trained deep face CNN model is leveraged to separately learn high-level frame-level global scene features and local face features from each frame of the dynamic video sequences. These extracted deep audio-visual features are then fed into a Bi-LSTM and a Transformer network, each capturing long-term temporal dependencies, thereby producing the final global audio and visual features for downstream tasks. Finally, a linear regression method is employed to conduct the single audio-based and visual-based personality trait recognition tasks, followed by a decision-level fusion strategy to produce the final Big-Five personality scores and interview scores. Experimental results on the public ChaLearn First Impression-V2 personality dataset show the effectiveness of our method, which outperforms the other methods evaluated.
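
    The decision-level fusion step can be sketched as follows: one linear regressor per modality predicts the Big-Five scores, and the per-modality predictions are combined with a fusion weight. The feature dimensions, the random stand-in data, and the 0.5 weight are illustrative assumptions, not the paper's reported setup.

    # A minimal decision-level fusion sketch on synthetic stand-in features.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 200
    audio_feats = rng.normal(size=(n, 128))    # stand-in for global audio features
    visual_feats = rng.normal(size=(n, 256))   # stand-in for scene + face visual features
    big_five = rng.uniform(0, 1, size=(n, 5))  # five trait scores in [0, 1]

    audio_model = LinearRegression().fit(audio_feats, big_five)
    visual_model = LinearRegression().fit(visual_feats, big_five)

    w = 0.5  # fusion weight; in practice tuned on validation data
    fused = w * audio_model.predict(audio_feats) + (1 - w) * visual_model.predict(visual_feats)
    print(fused.shape)                         # (200, 5) fused Big-Five predictions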

    Human Activity Recognition and Fall Detection Using Unobtrusive Technologies

    As the population ages, health issues such as injurious falls demand more attention. Wearable devices can be used to detect falls; however, despite their commercial success, most wearable devices are obtrusive, and patients generally dislike or may forget to wear them. In this thesis, a monitoring system consisting of two 24×32 thermal array sensors and a millimetre-wave (mmWave) radar sensor was developed to unobtrusively detect locations and recognise human activities such as sitting, standing, walking, lying, and falling. Data were collected by observing healthy young volunteers simulating ten different scenarios. The optimal installation position of the sensors was initially unknown, so the sensors were mounted on a side wall, in a corner, and on the ceiling of the experimental room to allow performance comparison between these placements. Every thermal frame was converted into an image, and features were either extracted manually or learned automatically by convolutional neural networks (CNNs). Applying a CNN model to the infrared stereo dataset to recognise five activities (falling plus lying on the floor, lying in bed, sitting on a chair, sitting in bed, standing plus walking), the overall average accuracy and F1-score were 97.6% and 0.935, respectively. The scores for distinguishing falling plus lying on the floor from the remaining activities were 97.9% and 0.945, respectively. For the radar data, the generated point clouds were converted into an occupancy grid, and either a CNN model was used to extract features automatically or a set of features was extracted manually. Applying several classifiers to the manually extracted features to distinguish falling plus lying on the floor from the remaining activities, the Random Forest (RF) classifier achieved the best results in the overhead position (an accuracy of 92.2%, a recall of 0.881, a precision of 0.805, and an F1-score of 0.841). The CNN model also achieved its best results in the overhead position (an accuracy of 92.3%, a recall of 0.891, a precision of 0.801, and an F1-score of 0.844), slightly outperforming the RF method. Data fusion combining the infrared and radar modalities was performed at the feature level; however, the benefit was not significant. The proposed system is efficient in cost, processing time, and space, and with further development could be used as a real-time fall detection system in aged care facilities or in the homes of older people.
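
    The radar branch described above can be sketched as follows: each mmWave point-cloud frame is rasterised into an occupancy grid, flattened, and classified with a Random Forest. The grid resolution, the room extent, and the synthetic frames are illustrative assumptions, not the thesis's actual parameters.

    # A minimal point-cloud-to-occupancy-grid sketch with an RF classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def to_occupancy_grid(points, bins=16, extent=4.0):
        """Count mmWave points (x, y, z in metres) into a bins^3 voxel grid."""
        grid, _ = np.histogramdd(points, bins=bins,
                                 range=[(-extent, extent)] * 3)
        return (grid > 0).astype(np.float32).ravel()  # flatten for the classifier

    rng = np.random.default_rng(0)
    frames = [rng.uniform(-4, 4, size=(rng.integers(20, 80), 3)) for _ in range(100)]
    X = np.stack([to_occupancy_grid(p) for p in frames])
    y = rng.integers(0, 2, size=100)                  # 1 = fall/lying on floor, 0 = other

    clf = RandomForestClassifier(random_state=0).fit(X[:80], y[:80])
    print("held-out accuracy:", clf.score(X[80:], y[80:]))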

    A Survey on Deep Multi-modal Learning for Body Language Recognition and Generation

    Body language (BL) refers to non-verbal communication expressed through physical movements, gestures, facial expressions, and postures. It is a form of communication that conveys information, emotions, attitudes, and intentions without the use of spoken or written words. It plays a crucial role in interpersonal interactions and can complement or even override verbal communication. Deep multi-modal learning techniques have shown promise in understanding and analyzing these diverse aspects of BL. This survey emphasizes their applications to BL generation and recognition. Several common BLs are considered, namely Sign Language (SL), Cued Speech (CS), Co-speech (CoS), and Talking Head (TH), and we conduct an analysis and establish the connections among these four BLs for the first time. Their generation and recognition often involve multi-modal approaches. Benchmark datasets for BL research are collected and organized, along with evaluations of state-of-the-art (SOTA) methods on these datasets. The survey highlights challenges such as limited labeled data, multi-modal learning, and the need for domain adaptation to generalize models to unseen speakers or languages. Future research directions are presented, including exploring self-supervised learning techniques, integrating contextual information from other modalities, and exploiting large-scale pre-trained multi-modal models. In summary, this survey provides, for the first time, a comprehensive overview of deep multi-modal learning for BL generation and recognition. By analyzing advancements, challenges, and future directions, it serves as a valuable resource for researchers and practitioners in advancing this field. In addition, we maintain a continuously updated paper list for deep multi-modal learning for BL recognition and generation: https://github.com/wentaoL86/awesome-body-language

    Modern automatic recognition technologies for visual communication tools

    Communication comprises a wide range of actions involving the reception and transmission of information. The communication process consists of verbal, paraverbal, and non-verbal components, which carry the informational content of the transmitted message and its emotional colouring, respectively. A comprehensive analysis of all components of communication makes it possible to assess not only the content but also the situational context of what is said, and to identify additional factors relating to the speaker's mental and somatic state. There are several ways of conveying a verbal message, including spoken and sign language. Speech and speech-related components of communication can be carried in different data channels, such as audio or video. This review focuses on systems for analysing video data, since the audio channel cannot convey a number of speech-related components of communication that add information to the transmitted message. The review analyses existing databases of static and dynamic images, systems developed to recognise the verbal component of spoken and sign language, and systems that assess the paraverbal and non-verbal components of communication. The difficulties faced by developers of such databases and systems are outlined. Promising directions of development are also formulated, including the comprehensive analysis of all components of communication for the fullest possible assessment of the transmitted message. This work was supported by State Programme 47 GP "Scientific and Technological Development of the Russian Federation" (2019-2030), topic 0134-2019-0006.

    Improved Human Face Recognition by Introducing a New CNN Arrangement and Hierarchical Method

    Human face recognition has become one of the most attractive topics in the field of biometrics due to its wide applications. The face is the part of the body that carries the most identifying information in human interactions. Features such as the composition of facial components, skin tone, the face's central axis, the distance between the eyes, and many more are used unconsciously by the brain, alongside other biometrics, to distinguish a person. Indeed, analyzing facial features may be the first method humans use to identify a person. As one of the main biometric measures, face recognition has been utilized in various commercial applications over the past two decades, from banking to smart advertisement and from border security to mobile applications. These examples show how far the methods have come: face recognition techniques have reached an acceptable level of accuracy for some real-life applications, while other applications could still benefit from improvement. The increasing demand for the topic, together with the fact that almost all the necessary infrastructure is now available, makes face recognition an appealing research area. When evaluating the quality of a face recognition method, the main benchmarks to consider are accuracy, speed, and complexity. Other aspects of an algorithm, such as size, precision, and cost, can also be measured, but each of them ultimately contributes to one or more of these three concepts. Although existing algorithms achieve a significant level of accuracy, there is still much room for improvement in speed and complexity. In addition, the accuracy of these methods depends heavily on the properties of the face images: uncontrolled conditions and variables such as head pose, occlusion, lighting, and image noise can affect the results dramatically. Face recognition systems are used for either identification or verification; in verification, the system's main goal is to check whether an input matches a pre-determined tag or a person's ID. Almost every face recognition system consists of four major steps: pre-processing, face detection, feature extraction, and classification. Improvement in each of these steps leads to overall enhancement of the system. In this work, the main objective is to propose new, improved, and enhanced methods for each of these steps, to evaluate the results by comparing them with other existing techniques, and to investigate the outcome of the proposed system.
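
    The four-step pipeline named in the abstract (pre-processing, face detection, feature extraction, classification) can be sketched with OpenCV's bundled Haar cascade and a nearest-neighbour match on raw pixel features. The gallery and probe image files are hypothetical, and the sketch assumes at least one face is detected per image; a real system would use learned embeddings rather than raw pixels.

    # A minimal four-step face recognition sketch (hypothetical image files).
    import cv2
    import numpy as np

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def extract(path):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)  # pre-processing
        gray = cv2.equalizeHist(gray)
        x, y, w, h = detector.detectMultiScale(gray, 1.1, 5)[0]    # face detection
        face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        return face.astype(np.float32).ravel() / 255.0             # crude feature vector

    gallery = {name: extract(f"{name}.jpg") for name in ("alice", "bob")}
    probe = extract("unknown.jpg")
    # classification: nearest neighbour in feature space
    best = min(gallery, key=lambda n: np.linalg.norm(gallery[n] - probe))
    print("best match:", best)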