829 research outputs found

    Methods and techniques for analyzing human factors facets on drivers

    With millions of cars moving daily, driving is one of the most frequently performed activities worldwide. Unfortunately, according to the World Health Organization (WHO), around 1.35 million people worldwide die each year in road traffic accidents and a further 20 to 50 million are injured, making road traffic accidents the second leading cause of death among people between the ages of 5 and 29. According to the WHO, human errors, such as speeding, driving under the influence of drugs, fatigue, or distractions at the wheel, are the underlying cause of most road accidents. Global reports on road safety, such as "Road safety in the European Union. Trends, statistics, and main challenges" prepared by the European Commission in 2018, present statistical analyses relating road accident mortality rates to periods segmented by hours and days of the week. That report revealed that mortality regularly peaks in the afternoons of working days, coinciding with the period when traffic volume increases and when any human error is much more likely to cause an accident. Accordingly, mitigating human errors in driving is a challenge, and there is a growing trend toward technological solutions that integrate driver information into advanced driving systems to improve driver performance and ergonomics. The study of human factors in driving is a multidisciplinary field in which several areas of knowledge converge, notably psychology, physiology, instrumentation, signal processing, machine learning, the integration of information and communication technologies (ICTs), and the design of human-machine communication interfaces. The main objective of this thesis is to exploit knowledge related to the different facets of human factors in the field of driving.
Specific objectives include identifying driving-related tasks, detecting unfavorable cognitive states in the driver, such as stress, and, transversally, proposing an architecture for the integration and coordination of driver monitoring systems with other active safety systems. Each specific objective addresses the critical aspects of the issue at hand. Identifying driving-related tasks is one of the primary aspects of the conceptual framework of driver modeling. Identifying the maneuvers a driver performs requires first training a model with examples of each maneuver to be identified. To this end, a methodology was established to build a data set relating the handling of the driving controls (steering wheel, pedals, gear lever, and turn indicators) to a series of adequately identified maneuvers. The methodology consisted of designing scenarios in a realistic driving simulator for each type of maneuver, including stops, overtaking, turns, and specific maneuvers such as the U-turn and the three-point turn. From the perspective of detecting unfavorable cognitive states, stress can impair cognitive faculties, causing failures in the decision-making process. Physiological signals, such as measurements derived from heart rhythm or changes in the electrical properties of the skin, are reliable indicators for assessing whether a person is going through an episode of acute stress. However, the detection of stress patterns is still an open problem. Despite advances in sensor design for the non-invasive collection of physiological signals, certain factors prevent reaching models capable of detecting stress patterns in any subject.
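The maneuver-identification step described above (mapping control-signal patterns to labeled maneuvers) can be sketched with a minimal nearest-centroid classifier. All feature names and values below are illustrative assumptions, not data from the thesis:

```python
import math

# Toy feature vectors summarising windows of control signals:
# (mean steering-wheel angle [rad], brake-pedal activation ratio, turn-indicator on ratio).
# All values are made up for illustration.
TRAINING = {
    "overtaking": [(0.15, 0.05, 0.9), (0.12, 0.02, 0.8)],
    "stop":       [(0.01, 0.70, 0.1), (0.02, 0.85, 0.0)],
    "u_turn":     [(0.95, 0.30, 0.7), (1.05, 0.25, 0.9)],
}

def centroid(vectors):
    """Component-wise mean of a list of equally sized vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

CENTROIDS = {label: centroid(vs) for label, vs in TRAINING.items()}

def classify(features):
    """Assign the maneuver whose class centroid is closest in Euclidean distance."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))
```

A window with heavy braking and little steering, e.g. `classify((0.0, 0.8, 0.05))`, falls to the "stop" centroid; a real system would of course learn from many simulator-labeled examples rather than two per class.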
This thesis addresses two aspects of stress detection: the collection of physiological measurements during stress elicitation, both through laboratory techniques such as the Stroop test and through driving tests; and the detection of stress itself, by designing a process flow based on unsupervised learning techniques that delves into the intra- and inter-individual variability of physiological measures, which prevents the achievement of generalist models. Finally, beyond developing models that address the different aspects of monitoring, the orchestration of monitoring systems and active safety systems is a transversal and essential aspect of improving safety, ergonomics, and the driving experience. Both for integration into test platforms and for integration into final systems, the problem with deploying multiple active safety systems lies in the adoption of monolithic models, where the system-specific functionality runs in isolation without considering aspects such as cooperation and interoperability with other safety systems. This thesis addresses the development of more complex systems in which monitoring systems condition the operability of multiple active safety systems. To this end, a mediation architecture is proposed to coordinate the reception and delivery of the data flows generated by the various systems involved, including external sensors (lasers, external cameras), cabin sensors (cameras, smartwatches), detection models, deliberative models, delivery systems, and human-machine communication interfaces.
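One standard way to cope with the inter-individual variability mentioned above is per-subject normalization before any clustering or detection step. A minimal sketch with hypothetical heart-rate values (not the thesis's data or pipeline):

```python
from statistics import mean, stdev

def zscore(series):
    """Normalise one subject's measurements to zero mean, unit variance."""
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

# Two hypothetical subjects with very different resting heart rates:
# the raw values are not comparable across subjects, but their z-scores are.
subject_a = [62, 61, 63, 62, 95]   # spike at the end = stress episode
subject_b = [78, 77, 79, 78, 110]

za, zb = zscore(subject_a), zscore(subject_b)
# In both cases the final sample stands out with a similar z-magnitude,
# which is what an intra-individual (per-subject) pipeline relies on.
```

After this step, an unsupervised method (e.g. clustering of z-scored windows) can flag outlying episodes per subject without needing a single model that fits everyone.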
Ontology-based data modeling plays a crucial role in structuring all this information and consolidating the semantic representation of the driving scene, thus allowing the development of models based on data fusion.

I would like to thank the Ministry of Economy and Competitiveness for granting me the predoctoral fellowship BES-2016-078143, corresponding to project TRA2015-63708-R, which gave me the opportunity to conduct all my Ph.D. activities, including completing an international internship.

Programa de Doctorado en Ciencia y Tecnología Informática por la Universidad Carlos III de Madrid. Committee: President: José María Armingol Moreno; Secretary: Felipe Jiménez Alonso; Member: Luis Mart
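The idea of a shared semantic scene representation that the mediation architecture coordinates can be sketched as a tiny subject-predicate-object triple store. Everything here is illustrative (identifiers, predicates, values); a real system would use an RDF/OWL ontology and a proper triple store:

```python
# Heterogeneous sources (external sensors, cabin sensors, detection models)
# publish facts into one shared driving-scene representation.
triples = set()

def publish(subject, predicate, obj):
    """Add one fact to the shared scene representation."""
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the non-None fields (a tiny pattern match)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Different systems contribute facts about the same scene:
publish("vehicle:ahead", "detectedBy", "sensor:front_laser")
publish("vehicle:ahead", "distance_m", 12.5)
publish("driver:1", "state", "stressed")           # from the monitoring model
publish("driver:1", "monitoredBy", "sensor:cabin_camera")

# An active safety system can then fuse driver and environment facts:
driver_facts = query(subject="driver:1")
```

The point of the mediation layer is exactly this decoupling: producers publish facts, and safety systems query the fused scene instead of talking to each sensor directly.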

    Face Emotion Recognition Based on Machine Learning: A Review

    Computers can now detect, understand, and evaluate emotions thanks to recent developments in machine learning and information fusion. Researchers across various sectors are increasingly intrigued by emotion identification, utilizing facial expressions, words, body language, and posture as means of discerning an individual's emotions. Nevertheless, the effectiveness of the first three methods may be limited, as individuals can consciously or unconsciously suppress their true feelings. This article explores various feature extraction techniques, encompassing the development of machine learning classifiers such as k-nearest neighbour, naive Bayes, support vector machine, and random forest, in accordance with established practice in emotion recognition. The paper has three primary objectives: firstly, to offer a comprehensive overview of affective computing by outlining essential theoretical concepts; secondly, to describe the current state of the art in emotion recognition in detail; and thirdly, to highlight important findings and conclusions from the literature, with an emphasis on key obstacles and possible future paths, especially in the creation of state-of-the-art machine learning algorithms for emotion identification.
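Of the classifiers the review covers, k-nearest neighbour is the simplest to sketch. The two facial features and all values below are invented for illustration only:

```python
import math
from collections import Counter

# Toy (mouth_curvature, eyebrow_raise) features per labelled face image.
# Features and values are illustrative, not from any real dataset.
DATA = [
    ((0.9, 0.2), "happy"), ((0.8, 0.1), "happy"),
    ((-0.7, 0.1), "sad"),  ((-0.8, 0.0), "sad"),
    ((0.0, 0.9), "surprised"), ((0.1, 1.0), "surprised"),
]

def knn_predict(x, k=3):
    """Majority vote among the k training samples nearest to x."""
    nearest = sorted(DATA, key=lambda d: math.dist(x, d[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

An upturned mouth, `knn_predict((0.85, 0.15))`, lands among the "happy" neighbours; the other classifiers the paper surveys (naive Bayes, SVM, random forest) replace this distance vote with probabilistic or margin-based decisions over the same kind of extracted features.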

    On the Recognition of Emotion from Physiological Data

    This work encompasses several objectives, but is primarily concerned with an experiment in which 33 participants were shown 32 slides in order to create 'weakly induced emotions'. Recordings of the participants' physiological state were taken, as well as a self-report of their emotional state. We then used an assortment of classifiers to predict emotional state from the recorded physiological signals, a process known as Physiological Pattern Recognition (PPR). We investigated techniques for recording, processing, and extracting features from six different physiological signals: electrocardiogram (ECG), blood volume pulse (BVP), galvanic skin response (GSR), electromyography (EMG) of the corrugator muscle, skin temperature of the finger, and respiratory rate. The state of PPR emotion detection was improved by allowing nine different weakly induced emotional states to be detected at nearly 65% accuracy, an improvement in the number of states readily detectable. The work presents many investigations into numerical feature extraction from physiological signals and dedicates a chapter to collating and trialling facial electromyography techniques. We also created a hardware device to collect participants' self-reported emotional states, which yielded several improvements to the experimental procedure.
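Numerical feature extraction of the kind investigated here usually starts from simple time-domain statistics per signal window. A minimal sketch over a hypothetical mean-subtracted GSR window (the feature set is a generic assumption, not the thesis's exact list):

```python
import math
from statistics import mean, stdev

def window_features(signal):
    """A few generic time-domain features often used in physiological
    pattern recognition; real pipelines add frequency-domain measures."""
    zero_crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if (a < 0) != (b < 0)
    )
    rms = math.sqrt(mean(x * x for x in signal))
    return {
        "mean": mean(signal),
        "std": stdev(signal),
        "rms": rms,
        "zero_crossings": zero_crossings,
    }

# One hypothetical mean-subtracted GSR window:
feats = window_features([-0.2, -0.1, 0.1, 0.4, 0.3, -0.1, -0.4])
```

Each window yields one such feature vector; the classifiers then operate on these vectors rather than on the raw samples.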

    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications

It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with an outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    Cooperate or not? Exploring drivers’ interactions and response times to a lane-changing request in a connected environment

    Lane-changing is a complex driving task that depends on the number of vehicles, objectives, and lanes. A driver often needs to respond to the lane-changing request of a lane-changer, a response that is a function of their personality traits and the current driving conditions. A connected environment is expected to assist the lane-changing decision-making process by increasing situational awareness of surrounding traffic through vehicle-to-vehicle and vehicle-to-infrastructure communication. Although the lane-changing decision-making process in a traditional environment (an environment without driving aids) has been frequently investigated, our understanding of drivers' interactions during the lane-changing decision-making process in a connected environment remains elusive due to the novelty of the connected environment and the scarcity of relevant data. As such, this study examines drivers' responses to lane-changing requests in a connected environment using the CARRS-Q Advanced Driving Simulator. Seventy-eight participants responded to the lane-changing request of a lane-changer under two randomised driving conditions: baseline (traditional environment without driving aids) and connected environment (with driving aids). A segmentation-based approach is employed to extract drivers' responses to the lane-changing request and subsequently estimate their response times from trajectory data. Additionally, drivers' response times are modelled using a random parameter accelerated failure time (AFT) hazard-based duration model. Results reveal that drivers tend to be more cooperative in response to a lane-changing request in the connected environment than in the baseline condition, in which they tend to accelerate to avoid the lane-changing request.
The AFT model suggests that, on average, drivers' response times are shorter in the connected environment, implying that drivers respond to the lane-changing request faster in the presence of driving aids. However, at the individual level, the connected environment's impact on response times is mixed, as individual response times may increase or decrease relative to the baseline condition; for instance, we find that female drivers have lower response times in the connected environment than male drivers do. Overall, this study finds that drivers in a connected environment, on average, take less time to respond and appear to be more cooperative, and are thus less likely to be engaged in safety-critical events.
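The mechanics of an AFT duration model can be illustrated with its log-linear form, log(T) = b0 + b1·x + σ·ε, in which a covariate rescales the whole time distribution. The coefficients below are invented for illustration and are not the paper's estimates:

```python
import math

# Hypothetical log-normal AFT: for a symmetric error term, the median
# survival (response) time is exp of the linear predictor.
b0 = 1.2    # baseline log response time (log seconds), assumed value
b1 = -0.25  # connected-environment effect: negative = shorter times, assumed

def median_response_time(connected: bool) -> float:
    """Median response time under the assumed log-normal AFT."""
    x = 1.0 if connected else 0.0
    return math.exp(b0 + b1 * x)

# The acceleration factor exp(b1) compresses all response times
# multiplicatively in the connected condition:
factor = median_response_time(True) / median_response_time(False)
```

This multiplicative "time acceleration" is what distinguishes AFT models from proportional-hazards models, which instead rescale the hazard rate; random-parameter AFT variants additionally let b1 vary across drivers, capturing the mixed individual-level effects reported above.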

    Visual complexity in human-machine interaction = Visuelle Komplexität in der Mensch-Maschine Interaktion

    Visual complexity is often defined as the degree of detail or intricacy in an image (Snodgrass & Vanderwart, 1980). It influences many areas of human life, including those involving interaction with technology. Effects of visual complexity have been demonstrated, for example, in road traffic (Edquist et al., 2012; Mace & Pollack, 1983) and in interaction with software (Alemerien & Magel, 2014) or websites (Deng & Poole, 2010; Tuch et al., 2011). Although research on visual complexity dates back to the Gestalt psychologists, who anchored the importance of simplicity and complexity in the perceptual process with, for example, the Gestalt principle of Prägnanz (Koffka, 1935; Wertheimer, 1923), neither the factors influencing visual complexity nor its relationships with eye movements and mental workload have been conclusively investigated. This thesis addresses these points with four empirical studies. Study 1 examines the relevance of the construct to human-machine interaction by investigating the complexity of videos in control rooms and its effects on subjective, physiological, and performance measures of mental workload. Study 2 takes a closer look at the dimensional structure of visual complexity and the relevance of its various influencing factors, using varied stimulus material. Study 3 uses an experimental approach to examine the effects of factors influencing visual complexity on subjective ratings and a selection of ocular parameters, with simple black-and-white shape patterns serving as stimuli; in addition, various computational and ocular parameters are used to predict complexity ratings. Study 4 transfers this approach to screenshots of websites in order to examine its validity in an applied domain.
Beyond previous research, the observed relationships with mental workload in particular suggest that visual complexity is a relevant construct in human-machine interaction. Quantitative and structural aspects in particular, but potentially others as well, influence ratings of visual complexity and viewers' gaze behavior. The results also allow conclusions about the relationships with computational measures, which, combined with ocular parameters, are well suited to predicting complexity ratings. The findings from these studies are discussed in the context of previous research, and an integrative research model of visual complexity in human-machine interaction is derived.
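One family of computational complexity measures of the kind used to predict complexity ratings is compression-based: the harder an image is to compress, the more visually complex it tends to be rated. A minimal stdlib sketch on synthetic byte buffers (this is a generic proxy measure, not necessarily the one used in these studies):

```python
import zlib

def compression_complexity(data: bytes) -> float:
    """Ratio of compressed to raw size: a crude, common computational
    proxy for visual complexity (more structure -> more compressible)."""
    return len(zlib.compress(data, level=9)) / len(data)

# Two synthetic 64x64 grayscale "images" as raw byte buffers:
uniform_img = bytes(64 * 64)                                      # perfectly regular
checker_img = bytes((x ^ y) & 0xFF for y in range(64) for x in range(64))  # patterned

# The patterned image compresses worse than the uniform one,
# i.e. it scores as more complex.
```

On real stimuli such measures are typically computed from image file sizes (e.g., JPEG or PNG) and then combined with ocular parameters in a regression predicting subjective complexity ratings.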

    Driver lane change intention inference using machine learning methods.

    Lane-changing manoeuvres on highways are highly interactive tasks for human drivers. Intelligent vehicles and advanced driver assistance systems (ADAS) need proper awareness of the traffic context as well as of the driver. ADAS also need to correctly understand the driver's potential intent, since they share control authority with the human driver. This study presents research on driver intention inference, with a particular focus on the lane-change manoeuvre on highways. The report is organised on a paper basis, where each chapter corresponds to a publication that has been submitted or is to be submitted. Part I introduces the motivation and the general methodological framework of this thesis. Part II contains the literature survey and the state of the art of driver intention inference. Part III covers techniques for traffic-context perception, focusing on lane detection: a literature review of lane detection techniques and their integration with the parallel driving framework is presented, followed by the design of a novel integrated lane detection system. Part IV comprises two parts providing a driver behaviour monitoring system for normal driving and for secondary-task detection; the first is based on conventional feature selection methods, while the second introduces an end-to-end deep learning framework. The design and analysis of the driver lane change intention inference system for the lane change manoeuvre is presented in Part V. Finally, discussions and conclusions are given in Part VI. A major contribution of this project is to propose novel algorithms that accurately model the driver intention inference process. Lane change intention is recognised using machine learning (ML) methods owing to their good reasoning and generalisation characteristics. Sensors in the vehicle capture traffic-context information, vehicle dynamics, and driver behaviour.
Machine learning and image processing are the techniques used to recognise human driver behaviour.

PhD in Transpor
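Before any learned model, a rule-based baseline over vehicle dynamics helps make the intention-inference task concrete. The signal name, window size, and threshold below are illustrative assumptions; the thesis itself uses learned ML models, not fixed rules:

```python
# Flag a lane-change intention when lateral drift grows monotonically
# over a sliding window. Thresholds and signals are illustrative only.

def intends_lane_change(lateral_offsets, window=4, min_drift=0.3):
    """lateral_offsets: metres from lane centre, one sample per time step.
    Returns True if any window shows sustained drift beyond min_drift."""
    for i in range(len(lateral_offsets) - window + 1):
        w = lateral_offsets[i:i + window]
        monotone = all(b > a for a, b in zip(w, w[1:]))
        if monotone and (w[-1] - w[0]) >= min_drift:
            return True
    return False

# Drifting steadily towards the lane boundary:
drifting = [0.02, 0.05, 0.18, 0.35, 0.52]
# Oscillating around the lane centre (normal lane keeping):
keeping = [0.02, -0.03, 0.04, -0.02, 0.03]
```

An ML-based inference system replaces such hand-set thresholds with decision boundaries learned from labelled manoeuvres, and can additionally fuse driver-behaviour cues (gaze, head pose) with the vehicle dynamics.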

    Detection of Driver Drowsiness and Distraction Using Computer Vision and Machine Learning Approaches

    Drowsiness and distracted driving are leading factors in most car crashes and near-crashes. This research study explores the application of both conventional computer vision and deep learning approaches to the detection of drowsiness and distraction in drivers. In the first part of this MPhil research study, conventional computer vision approaches were used to develop a robust drowsiness and distraction detection system based on yawning detection, head pose detection, and eye blink detection. These algorithms were implemented using existing hand-crafted features. Experiments on detection and classification were performed with small image datasets to evaluate and measure the performance of the system. It was observed that the use of hand-crafted features together with a robust classifier such as an SVM gives better performance than previous approaches. Although the results were satisfactory, there are many drawbacks and challenges associated with conventional computer vision approaches, such as the definition and extraction of hand-crafted features, which make these conventional algorithms subjective in nature and less adaptive in practice. In contrast, deep learning approaches automate the feature selection process and can be trained to learn the most discriminative features without any human input. In the second half of this research study, the use of deep learning approaches for the detection of distracted driving was investigated. One advantage of the applied methodology is the contribution of the CNN to better pattern recognition accuracy and its ability to learn features from various regions of a human body simultaneously.
The performance of four convolutional deep net architectures (AlexNet, ResNet, MobileNet, and NASNet) was compared, triplet training was investigated, and the impact of combining a support vector classifier (SVC) with a trained deep net was explored. The images used in our experiments with the deep nets come from the State Farm Distracted Driver Detection dataset hosted on Kaggle, each of which captures the entire body of a driver. The best results were obtained with the NASNet trained using triplet loss and combined with an SVC. One advantage of deep learning approaches is their ability to learn discriminative features from various regions of a human body simultaneously, which has enabled them to reach human-level accuracy.
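The eye blink detection mentioned in the first part is commonly implemented via the eye aspect ratio (EAR) over six eye-contour landmarks (Soukupová & Čech, 2016); the source does not confirm this exact method, so take it as a representative sketch with hypothetical landmark positions:

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|), with p1..p6 the six
    eye-contour landmarks (p1/p4 the horizontal corners). The ratio
    collapses towards zero when the eye closes."""
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

# Hypothetical landmark positions for an open and a nearly closed eye:
open_eye   = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]

ear_open = eye_aspect_ratio(*open_eye)
ear_closed = eye_aspect_ratio(*closed_eye)
```

A blink is then detected when the EAR stays below a calibrated threshold for several consecutive frames; prolonged low-EAR intervals are a drowsiness cue.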

    The landscape context of planning for recreation: "the psycho-physiological approach to the design of open spaces"

    The demands of modern living, with their social and economic tensions, have created a society characterised by change. This constant change in most aspects of daily life has encouraged an increasing number of people to turn to recreational interests as a means of gaining inner satisfaction, self-expression, and personal fulfilment. Recreation has become an important component of urban planning and design for societies. This research is devoted to the study of recreation as a vital component of life. Its importance stems from its role in maintaining the physical and mental health of both society and individuals; a lack of satisfactory planning and design for recreation in a society would accordingly lead to its slow disintegration.