15 research outputs found

    Forming an Image of the Childcare Worker through "Learning" and "Awareness" in Childcare Practicum: Childcare Students' Origins of Learning and Awareness, Their Future Image, and the Childcare Worker They Aim to Become

    The purpose of this research was to clarify students' perceptions and knowledge after their child care practicums at a nursery school and a kindergarten, and to examine how their ideal images of a child care worker were formed. The survey was conducted on 182 students at K Junior College. According to the results, the students learned the importance of looking after children individually, of bringing out their basic talents, and of communicating effectively with them. Their ideal image, then, was a person who is considerate, cheerful, and active in dealing with children. This paper concludes that the students' sense of what it means to be a child care worker emerged through practical care experience.

    Data fusion for driver behaviour analysis

    A driver behaviour analysis tool is presented. The proposal offers a novel contribution based on low-cost hardware and advanced data-fusion software. The device takes advantage of the information provided by the in-vehicle sensors through the Controller Area Network bus (CAN-BUS), an Inertial Measurement Unit (IMU), and a GPS. By fusing this information, the system can infer the behaviour of the driver and detect aggressive driving. By means of accurate GPS-based localization, the system is able to add context information such as digital map data, speed limits, etc. Several parameters and signals are taken into account, in both the temporal and frequency domains, to provide real-time behaviour detection. The system was tested in urban, interurban, and highway scenarios. This work was supported by the Spanish Government through the CICYT project (TRA2013-48314-C3-1-R) and the DGT project (SPID2015-01802), and by the company SERCORE Tech. S.L. through the project “Proyecto de Viabilidad de la Comunicación entre el BUS CAN de un Vehículo Específico con un Dispositivo de Adquisición de Datos Móviles”. SERCORE provided invaluable support in the development of the CAN-BUS communication technologies presented in this paper.
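
    The paper itself provides no code, but the core idea of fusing CAN-BUS, IMU, and map-matched GPS information into a behaviour label can be illustrated with a minimal rule-based sketch. The field names, thresholds, and simple threshold logic below are illustrative assumptions, not the authors' actual algorithm, which also uses frequency-domain features.

```python
from dataclasses import dataclass

@dataclass
class SensorSample:
    """One time-aligned sample fused from CAN-BUS, IMU and GPS (hypothetical fields)."""
    speed_kmh: float        # vehicle speed read from the CAN-BUS
    long_accel: float       # longitudinal acceleration from the IMU (m/s^2)
    lat_accel: float        # lateral acceleration from the IMU (m/s^2)
    speed_limit_kmh: float  # from GPS position matched against a digital map

def classify_sample(s: SensorSample,
                    accel_thresh: float = 3.0,
                    lat_thresh: float = 4.0,
                    overspeed_ratio: float = 1.2) -> str:
    """Label a fused sample as 'aggressive' or 'normal' using simple thresholds."""
    harsh_accel_or_brake = abs(s.long_accel) > accel_thresh
    harsh_cornering = abs(s.lat_accel) > lat_thresh
    speeding = s.speed_kmh > overspeed_ratio * s.speed_limit_kmh
    return "aggressive" if (harsh_accel_or_brake or harsh_cornering or speeding) else "normal"

if __name__ == "__main__":
    sample = SensorSample(speed_kmh=78.0, long_accel=-4.2, lat_accel=1.1, speed_limit_kmh=50.0)
    print(classify_sample(sample))  # -> "aggressive" (harsh braking and speeding)
```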

    Estimation of Driver's Gaze Region from Head Position and Orientation using Probabilistic Confidence Regions

    A smart vehicle should be able to understand human behavior and predict drivers' actions to avoid hazardous situations. Specific traits in human behavior can be automatically predicted, which can help the vehicle make decisions, increasing safety. One of the most important aspects of the driving task is the driver's visual attention. Predicting the driver's visual attention can help a vehicle understand the driver's awareness state, providing important contextual information. While estimating the exact gaze direction is difficult in the car environment, a coarse estimate of visual attention can be obtained by tracking the position and orientation of the head. Since the relation between head pose and gaze direction is not one-to-one, this paper proposes a formulation based on probabilistic models to create salient regions describing the visual attention of the driver. The area of the predicted region is small when the model has high confidence in the prediction, which is learned directly from the data. We use Gaussian process regression (GPR) to implement the framework, comparing its performance with different regression formulations such as linear regression and neural network-based methods. We evaluate these frameworks by studying the tradeoff between spatial resolution and accuracy of the probability map using naturalistic recordings collected with the UTDrive platform. We observe that the GPR method produces the best results, creating accurate predictions with localized salient regions. For example, the 95% confidence region is defined by an area that covers 3.77% of a sphere surrounding the driver. (Comment: 13 pages, 12 figures, 2 tables)
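
    As a rough illustration of the regression step, the sketch below fits scikit-learn's GaussianProcessRegressor to map head pose to a single gaze angle on synthetic data and turns the predictive standard deviation into a 95% confidence interval. It is a minimal stand-in for the paper's approach, which builds probabilistic salient regions on a sphere around the driver from naturalistic data; the data, kernel, and interval construction here are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in data: head pose (yaw, pitch) -> gaze yaw, all in degrees.
rng = np.random.default_rng(0)
head_pose = rng.uniform(-60, 60, size=(200, 2))
gaze_yaw = 1.3 * head_pose[:, 0] + 0.2 * head_pose[:, 1] + rng.normal(0, 3, 200)

kernel = RBF(length_scale=20.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(head_pose, gaze_yaw)

# Predictive mean and standard deviation for a new head pose.
query = np.array([[25.0, -5.0]])
mean, std = gpr.predict(query, return_std=True)

# A 95% confidence interval on gaze yaw; the paper's salient regions follow the
# same idea, but on the sphere around the driver rather than a single angle.
lo, hi = mean[0] - 1.96 * std[0], mean[0] + 1.96 * std[0]
print(f"predicted gaze yaw: {mean[0]:.1f} deg, 95% interval: [{lo:.1f}, {hi:.1f}] deg")
```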

    Decoding Neural Correlates of Cognitive States to Enhance Driving Experience

    Modern cars can support their drivers by assessing and autonomously performing different driving maneuvers based on information gathered by in-car sensors. We propose that brain–machine interfaces (BMIs) can provide complementary information that can ease the interaction with intelligent cars in order to enhance the driving experience. In our approach, the human remains in control, while a BMI is used to monitor the driver's cognitive state and that information is used to modulate the assistance provided by the intelligent car. In this paper, we gather our proof-of-concept studies demonstrating the feasibility of decoding electroencephalography correlates of upcoming actions and those reflecting whether the decisions of driving assistance systems are in line with the driver's intentions. Experimental results while driving both simulated and real cars consistently showed neural signatures of anticipation, movement preparation, and error processing. Remarkably, despite the increased noise inherent to real scenarios, these signals can be decoded on a single-trial basis, reflecting some of the cognitive processes that take place while driving. However, the moderate decoding performance compared to controlled experimental BMI paradigms indicates that there is room for improvement in the machine learning methods typically used in state-of-the-art BMIs. We foresee that fusing neural correlates with information extracted from other physiological measures, e.g., eye movements or electromyography, as well as contextual information gathered by in-car sensors, will allow intelligent cars to provide timely and tailored assistance only when it is required, thus keeping users in the loop and allowing them to fully enjoy the driving experience.
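
    The abstract does not specify the decoding pipeline, but single-trial decoding of EEG correlates is often done with a band-pass filter, simple temporal features, and a linear classifier. The sketch below shows such a generic pipeline on synthetic data; the sampling rate, band limits, feature choice, and labels are assumptions and not the authors' actual methods.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 256  # sampling rate in Hz (assumption)
rng = np.random.default_rng(1)

# Synthetic stand-in for epoched EEG: 200 trials x 8 channels x 1 s windows,
# labelled 1 when the assistant's action mismatched the driver's intention.
X_raw = rng.normal(size=(200, 8, fs))
y = rng.integers(0, 2, size=200)

def bandpass(epochs, low=1.0, high=10.0, fs=256):
    """Zero-phase band-pass filter, keeping the slow error-related components."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

# Downsample each filtered channel to a handful of temporal features per trial.
X_filt = bandpass(X_raw)
X_feat = X_filt[:, :, ::32].reshape(len(X_raw), -1)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X_feat, y, cv=5)
print(f"single-trial decoding accuracy: {scores.mean():.2f} (chance ~0.50 on random data)")
```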

    Human-Centric Detection and Mitigation Approach for Various Levels of Cell Phone-Based Driver Distractions

    Driving a vehicle is a complex task that typically requires several physical interactions and mental tasks. Inattentive driving takes a driver's attention away from the primary task of driving, which can endanger the safety of the driver, passengers, and pedestrians. According to several traffic safety administration organizations, distracted and inattentive driving are the primary causes of vehicle crashes or near crashes. In this research, a novel approach to detect and mitigate various levels of driving distraction is proposed. This approach consists of two main phases: (i) a system to detect various levels of driver distraction (low, medium, and high) using machine learning techniques, and (ii) mitigation of the effects of driver distraction through the integration of the distraction detection algorithm with existing vehicle safety systems. In phase 1, vehicle data were collected from an advanced driving simulator and a vision-based sensor (webcam) for face monitoring. The data were processed using a machine learning algorithm and a head pose analysis package in MATLAB, and the model was trained and validated to detect different operator distraction levels. In phase 2, the detected level of distraction, time to collision (TTC), lane position (LP), and steering entropy (SE) were fed into a vehicle safety controller that provides an appropriate action to maintain and/or mitigate the vehicle's safety status. The integrated detection algorithm and vehicle safety controller were then prototyped in MATLAB/Simulink for validation. A complete vehicle powertrain model including the driver's interaction was replicated, and the output of the detection algorithm was fed into the vehicle safety controller. The results show that the vehicle safety controller reacted and mitigated the vehicle's safety status in a closed-loop, real-time fashion. The simulation results show that the proposed approach is efficient, accurate, and adaptable to dynamic changes arising from the driver as well as the vehicle system. This approach was applied to mitigate the impact of visual and cognitive distractions on driver performance. (Doctoral Dissertation, Applied Psychology)
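
    A minimal sketch of the phase-2 idea, fusing the detected distraction level with TTC, lane position, and steering entropy into a mitigation action, is shown below. The thresholds and the three-step escalation are illustrative assumptions; the dissertation implements this logic as a vehicle safety controller in MATLAB/Simulink rather than in Python.

```python
from enum import Enum

class Distraction(Enum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2

def safety_action(distraction: Distraction,
                  ttc_s: float,
                  lane_offset_m: float,
                  steering_entropy: float) -> str:
    """Return a mitigation action from the fused inputs described in the abstract.
    Thresholds and escalation steps are illustrative assumptions."""
    imminent = ttc_s < 2.0                 # forward-collision risk
    drifting = abs(lane_offset_m) > 0.5    # lane-keeping degradation
    erratic = steering_entropy > 0.6       # noisy steering behaviour

    if distraction is Distraction.HIGH and (imminent or drifting):
        return "autonomous braking + lane keeping assist"
    if distraction is Distraction.MEDIUM and (imminent or drifting or erratic):
        return "haptic + audible warning"
    if distraction is not Distraction.LOW:
        return "visual alert on dashboard"
    return "no intervention"

print(safety_action(Distraction.HIGH, ttc_s=1.5, lane_offset_m=0.7, steering_entropy=0.4))
```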

    Predicting Driver Distraction: An Analysis of Machine Learning Algorithms and Input Measures

    The research area on the detection and classification of distracted driving is growing in importance as in-vehicle information systems such as navigation and entertainment displays, which introduce sources of distraction for drivers, become more common in vehicles. To mitigate the potential consequences of distracted driving, it is necessary for such systems to provide a means of detecting driver distraction and then responding appropriately. This study uses a machine-learning approach to develop classification models that detect and differentiate both cognitive and sensorimotor distraction among drivers, which were induced via secondary tasks in a simulator study. The inputs to these models are combinations of driving performance measures (e.g. brake force, lane offset, speed, and steering angle) and driver physiological measures (e.g. breathing rate, heart rate, and perinasal electrodermal activity), and the outputs are predictions of driver distraction (e.g. cognitive distraction, sensorimotor distraction, or normal driving). Various combinations of driving performance and driver physiological measures, multiple types of machine-learning algorithms, and a systematic feature extraction and reduction method called TSFRESH were used to develop the classification models. Results showed that the physiological measures did not provide significant information for detecting and classifying driver distraction. Furthermore, no significant differences were found between the different machine-learning algorithms. Analyses of feature importance also revealed that driving performance measures including steering angle, lane offset, and speed were the most important indicators of distracted driving, and that features characterizing the extreme values, the variance and fluctuation, and the nonlinearity and complexity of time series input were more informative for classifying driver distraction than other features. Conclusions suggest that distraction detection models gain more information from driving performance measures than physiological measures and that using features that characterize specific aspects of time series input is useful for classifying driver distraction.
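
    The abstract names TSFRESH as the feature extraction and reduction method. The sketch below shows a minimal tsfresh pipeline on synthetic driving performance signals only (steering angle, lane offset, speed) with a random forest classifier; the signals, labels, feature settings, and classifier are stand-ins rather than the study's actual setup.

```python
import numpy as np
import pandas as pd
from tsfresh import extract_features
from tsfresh.feature_extraction import MinimalFCParameters
from tsfresh.utilities.dataframe_functions import impute
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic stand-in for per-trial driving performance signals (no physiology here):
# 60 trials, 100 samples each; distracted trials simply get noisier signals.
frames, labels = [], []
for trial in range(60):
    distracted = trial % 2
    noise = 1.0 + 2.0 * distracted
    frames.append(pd.DataFrame({
        "id": trial,
        "time": np.arange(100),
        "steering_angle": rng.normal(0, noise, 100),
        "lane_offset": rng.normal(0, 0.2 * noise, 100),
        "speed": 100 + rng.normal(0, noise, 100),
    }))
    labels.append(distracted)

long_df = pd.concat(frames, ignore_index=True)

# tsfresh turns each trial's time series into a fixed-length feature vector.
features = extract_features(long_df, column_id="id", column_sort="time",
                            default_fc_parameters=MinimalFCParameters())
impute(features)  # replace NaN/inf produced by degenerate features

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, features, np.array(labels), cv=5).mean())
```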

    Integration of body sensor networks and vehicular ad-hoc networks for traffic safety

    The emergence of Body Sensor Networks (BSNs) constitutes a new and fast-growing trend for the development of daily routine applications. However, integrating heterogeneous BSNs with Vehicular Ad hoc Networks (VANETs) still poses a large number of difficulties that must be solved, especially regarding the detection of human state factors that impair the driving of motor vehicles. The main contributions of this investigation are three: (1) an exhaustive review of the current mechanisms to detect four basic physiological behavior states that may cause traffic accidents (drowsy, drunk, emotionally disturbed, and distracted driving) is presented; (2) a middleware architecture is proposed that can communicate with the car dashboard, emergency services, vehicles belonging to the VANET, and road or street facilities; this architecture seeks, on the one hand, to improve the driver's car driving experience and, on the other hand, to extend security mechanisms to the surrounding individuals; and (3) as a proof of concept, an Android application for real-time detection of low attention levels, which runs on a next-generation smartphone, is developed. The application features mechanisms that measure the driver's degree of attention on the basis of his/her EEG signals, establish wireless communication links via various standard wireless means (GPRS, Bluetooth, and WiFi), and issue alarms when the driver's attention reaches critically low levels.
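
    The attention-detection part can be illustrated with a small sketch that computes a spectral engagement index from an EEG window and raises an alarm below a threshold. The beta/(alpha+theta) ratio, sampling rate, and threshold are common choices used here as assumptions; the paper's Android application may rely on a different metric supplied by its EEG headset.

```python
import numpy as np
from scipy.signal import welch

fs = 128  # EEG sampling rate in Hz (assumption)

def band_power(freqs, psd, low, high):
    """Sum the power spectral density bins inside a frequency band."""
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].sum()

def attention_index(eeg_window: np.ndarray) -> float:
    """Beta / (alpha + theta) ratio, a common engagement proxy (not the paper's exact metric)."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs * 2)
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta = band_power(freqs, psd, 13, 30)
    return beta / (alpha + theta + 1e-12)

def check_driver(eeg_window: np.ndarray, threshold: float = 0.4) -> str:
    """Issue an alarm when the attention index drops below a (hypothetical) threshold."""
    idx = attention_index(eeg_window)
    return "ALARM: low attention" if idx < threshold else "attention OK"

# Ten seconds of synthetic single-channel EEG as a stand-in for the headset stream.
window = np.random.default_rng(3).normal(size=fs * 10)
print(check_driver(window))
```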

    Deep Learning-based Driver Behavior Modeling and Analysis

    Driving safety continues receiving widespread attention from car designers, safety regulators, and the automotive research community, as driving accidents due to driver distraction or fatigue have increased drastically over the years. In the past decades, there has been a remarkable push towards designing and developing new driver assistance systems with much better recognition and prediction capabilities. Equipped with various sensory systems, these Advanced Driver Assistance Systems (ADAS) are able to accurately perceive information on road conditions, predict traffic situations, estimate driving risks, and provide drivers with imminent warnings and visual assistance. In this thesis, we focus on two main aspects of driver behavior modeling in the design of a new generation of ADAS. We first aim at improving the generalization ability of driver distraction recognition systems to diverse driving scenarios using the latest tools of machine learning and connectionist modeling, namely deep learning. To this end, we collect a large dataset of images of drivers in various driving situations from the Internet. We then introduce Generative Adversarial Networks (GANs) as a data augmentation tool to enhance detection accuracy. A novel driver monitoring system is also introduced; it combines multiple information resources, including a driver distraction recognition system, to assess the danger level of driving situations. Moreover, this thesis proposes a multi-modal system for distraction recognition under various lighting conditions and presents a new Convolutional Neural Network (CNN) architecture that can operate in real time on a resource-limited computational platform. The new CNN is built upon a novel network bottleneck of depthwise separable convolution layers. The second part of this thesis focuses on driver maneuver prediction, which infers the direction a driver will turn before a green traffic light comes on and accurately predicts whether or not he/she will change the current driving lane. Here, a new method to label driving maneuver records is proposed, by which the driving feature sequences used to train prediction systems are more closely related to their labels. To this end, a new prediction system based on Quasi-Recurrent Neural Networks is introduced. In addition, as an application of maneuver prediction, a novel driving proficiency assessment method is proposed. This method exploits the generalization abilities of different maneuver prediction systems to estimate drivers' driving abilities, and it demonstrates several advantages over existing assessment methods. In conjunction with the theoretical contributions, a series of comprehensive experiments are conducted, and the proposed methods are assessed against state-of-the-art works. The analysis of the experimental results shows improvements over existing techniques.
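
    The thesis describes a lightweight CNN built on a bottleneck of depthwise separable convolution layers. The sketch below shows the standard depthwise separable block in PyTorch and stacks a few of them as a toy backbone; the layer sizes, class count, and overall layout are assumptions and do not reproduce the thesis's architecture.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise conv.
    This is the generic building block the thesis refers to; the exact bottleneck
    layout of its CNN is not reproduced here."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A tiny stack as a stand-in for a lightweight distraction-recognition backbone.
backbone = nn.Sequential(
    DepthwiseSeparableConv(3, 32, stride=2),
    DepthwiseSeparableConv(32, 64, stride=2),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),  # e.g. 10 distraction classes (assumption)
)
print(backbone(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 10])
```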