11 research outputs found

    A Feature Selection Method for Driver Stress Detection Using Heart Rate Variability and Breathing Rate

    Driver stress is a major cause of car accidents and death worldwide. Furthermore, persistent stress is a health problem, contributing to hypertension and other diseases of the cardiovascular system. Stress has a measurable impact on heart and breathing rates, and stress levels can be inferred from such measurements. Galvanic skin response is a common test that measures the perspiration caused by both physiological and psychological stress, as well as extreme emotions. In this paper, galvanic skin response is used to estimate ground-truth stress levels. A feature selection technique based on the minimal redundancy-maximal relevance (mRMR) method is then applied to multiple heart rate variability and breathing rate metrics to identify a novel and optimal combination for use in detecting stress. The support vector machine algorithm with a radial basis function kernel is used with these features to reliably predict stress. The proposed method achieved a high level of accuracy on the target dataset.
    Comment: In Proceedings of the 15th International Conference on Machine Vision (ICMV), Rome, Italy, 18-20 November 2022. arXiv admin note: text overlap with arXiv:2206.0322
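
    To make the pipeline concrete, the following is a minimal sketch of an mRMR-style feature selection step followed by an SVM with an RBF kernel, in the spirit of the method described above. The greedy relevance/redundancy criterion is a simplified stand-in for the paper's exact mRMR formulation, and the feature matrix and labels are placeholder data.

```python
# Minimal sketch of mRMR-style feature selection + an SVM-RBF stress classifier.
# The greedy relevance/redundancy criterion is a simplified stand-in for the paper's
# exact mRMR formulation; X (HRV/breathing-rate metrics) and the binary stress labels
# y are placeholder data standing in for features derived from real recordings.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score


def mrmr_select(X, y, k):
    """Greedy minimal-redundancy-maximal-relevance selection of k feature indices."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)      # I(feature; label)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_score, best_idx = -np.inf, None
        for j in range(n_features):
            if j in selected:
                continue
            # Redundancy: mean mutual information with already-selected features.
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
                for s in selected
            ])
            score = relevance[j] - redundancy
            if score > best_score:
                best_score, best_idx = score, j
        selected.append(best_idx)
    return selected


# X: (n_samples, n_features) HRV + breathing-rate metrics; y: stress labels from GSR.
X, y = np.random.rand(200, 20), np.random.randint(0, 2, 200)   # placeholder data
idx = mrmr_select(X, y, k=8)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X[:, idx], y, cv=5).mean())
```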

    E-Scooter Rider Detection and Classification in Dense Urban Environments

    Accurate detection and classification of vulnerable road users is a safety-critical requirement for the deployment of autonomous vehicles in heterogeneous traffic. Although similar in physical appearance to pedestrians, e-scooter riders exhibit distinctly different movement characteristics and can reach speeds of up to 45 km/h. The challenge of detecting e-scooter riders is exacerbated in urban environments, where the frequency of partial occlusion increases as riders navigate between vehicles, traffic infrastructure and other road users. This can lead to the non-detection or misclassification of e-scooter riders as pedestrians, providing inaccurate information for accident mitigation and path planning in autonomous vehicle applications. This research introduces a novel benchmark for partially occluded e-scooter rider detection to facilitate the objective characterization of detection models. A novel, occlusion-aware method of e-scooter rider detection is presented that achieves a 15.93% improvement in detection performance over the current state of the art.
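
    For illustration, the sketch below shows the greedy IoU-based matching step that typically underlies detection benchmarks of this kind: each ground-truth rider box is matched to at most one prediction, and unmatched ground truths count as missed detections. The box format and the 0.5 IoU threshold are assumptions, not the benchmark's published protocol.

```python
# Greedy IoU matching sketch for a detection benchmark. Boxes use [x1, y1, x2, y2];
# the 0.5 IoU threshold is an illustrative assumption.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def match_detections(gt_boxes, pred_boxes, iou_thr=0.5):
    """Return (true_positives, false_negatives) for one image."""
    unmatched = list(range(len(pred_boxes)))
    tp = 0
    for g in gt_boxes:
        best_iou, best_j = 0.0, None
        for j in unmatched:
            score = iou(g, pred_boxes[j])
            if score > best_iou:
                best_iou, best_j = score, j
        if best_j is not None and best_iou >= iou_thr:
            tp += 1
            unmatched.remove(best_j)
    return tp, len(gt_boxes) - tp


# Example: one ground-truth rider, one overlapping prediction and one spurious box.
print(match_detections([[10, 10, 60, 120]], [[12, 15, 58, 118], [200, 200, 240, 300]]))
```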

    The Impact of Partial Occlusion on Pedestrian Detectability

    Robust detection of vulnerable road users is a safety-critical requirement for the deployment of autonomous vehicles in heterogeneous traffic. One of the most complex outstanding challenges is that of partial occlusion, where a target object is only partially available to the sensor due to obstruction by another foreground object. A number of leading pedestrian detection benchmarks provide annotation for partial occlusion; however, each benchmark varies greatly in its definition of the occurrence and severity of occlusion. Recent research demonstrates that a high degree of subjectivity is used to classify occlusion level in these cases, and that occlusion is typically categorized into two or three broad categories such as partially and heavily occluded. This can lead to inaccurate or inconsistent reporting of pedestrian detection model performance depending on which benchmark is used. This research introduces a novel, objective benchmark for partially occluded pedestrian detection to facilitate the objective characterization of pedestrian detection models. Characterization is carried out on seven popular pedestrian detection models for a range of occlusion levels from 0% to 99%, in order to demonstrate the efficacy and increased analysis capabilities of the proposed characterization method. Results demonstrate that pedestrian detection performance degrades, and the number of false negative detections increases, as pedestrian occlusion level increases. Of the seven popular pedestrian detection routines characterized, CenterNet has the greatest overall performance, followed by SSDlite. RetinaNet has the lowest overall detection performance across the range of occlusion levels.
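
    The sketch below illustrates how such a per-occlusion-level characterization might be organized: annotated pedestrians are binned by occlusion percentage and the miss rate is reported per bin. The bin edges, record format, and detection flags are illustrative assumptions, not the benchmark's exact protocol.

```python
# Sketch of characterizing a detector across occlusion levels: annotated pedestrians
# are binned by occlusion percentage and the miss rate is reported per bin. The
# `detected` flag would come from IoU matching of detections to ground truth, as in
# the previous sketch; the records here are placeholder values.
from collections import defaultdict

# Each record: (occlusion_percent, detected) for one annotated pedestrian instance.
records = [(5, True), (12, True), (35, True), (48, False), (61, False),
           (72, False), (88, False), (91, False), (15, True), (55, True)]

bins = [(0, 20), (20, 40), (40, 60), (60, 80), (80, 99)]
stats = defaultdict(lambda: [0, 0])                     # bin -> [missed, total]

for occlusion, detected in records:
    for lo, hi in bins:
        if lo <= occlusion <= hi:
            stats[(lo, hi)][1] += 1
            if not detected:
                stats[(lo, hi)][0] += 1
            break

for (lo, hi) in bins:
    missed, total = stats[(lo, hi)]
    rate = missed / total if total else float("nan")
    print(f"occlusion {lo:2d}-{hi:2d}%: miss rate {rate:.2f} ({missed}/{total})")
```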

    Non-Contact NIR PPG Sensing through Large Sequence Signal Regression

    Non-contact sensing is an emerging technology with applications across many industries, from driver monitoring in vehicles to patient monitoring in healthcare. Current state-of-the-art implementations focus on RGB video, but this struggles in varying or noisy light conditions and is almost completely unfeasible in the dark. Near-infrared (NIR) video, however, does not suffer from these constraints. This paper aims to demonstrate the effectiveness of an alternative Convolution Attention Network (CAN) architecture to regress a photoplethysmography (PPG) signal from a sequence of NIR frames. A combination of two publicly available datasets, split into train and test sets, is used for training the CAN. This combined dataset is augmented to reduce overfitting to the 'normal' 60-80 bpm heart rate range by providing the full range of heart rates, along with corresponding videos, for each subject. This CAN, when applied to video cropped to the subject's head, achieved a mean absolute error (MAE) of just 0.99 bpm, demonstrating its effectiveness on NIR video and the architecture's ability to regress an accurate signal output.
    Comment: 4 pages, 3 figures, 3 tables, Irish Machine Vision and Image Processing Conference 202
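
    As a rough illustration of the frame-sequence-to-signal regression task, the following is a much-simplified PyTorch stand-in for a convolution-attention regressor; the layer sizes, frame shape, and attention formulation are assumptions and do not reproduce the paper's CAN architecture.

```python
# Much-simplified stand-in for a convolution-attention network that regresses a PPG
# value per NIR frame in a sequence. Layer sizes, the single-channel 36x36 frame shape,
# and the attention formulation are illustrative assumptions, not the paper's CAN.
import torch
import torch.nn as nn


class TinyConvAttentionRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # (B*T, 32, 1, 1)
        )
        self.attention = nn.Linear(32, 1)               # soft weighting over frames
        self.head = nn.Linear(32, 1)                    # per-frame PPG estimate

    def forward(self, frames):                          # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        x = self.features(frames.flatten(0, 1)).flatten(1)     # (B*T, 32)
        x = x.view(b, t, -1)                                    # (B, T, 32)
        weights = torch.softmax(self.attention(x), dim=1)       # (B, T, 1)
        x = x * (1.0 + weights)                                 # re-weight frame features
        return self.head(x).squeeze(-1)                         # (B, T) PPG estimate


model = TinyConvAttentionRegressor()
dummy = torch.randn(2, 8, 1, 36, 36)                    # batch of 2 clips, 8 frames each
print(model(dummy).shape)                               # torch.Size([2, 8])
```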

    Non-Contact Breathing Rate Detection Using Optical Flow

    Breathing rate is a vital health metric and an invaluable indicator of a person's overall health. In recent years, the non-contact measurement of health signals such as breathing rate has been a major area of development, with a wide range of applications from telemedicine to driver monitoring systems. This paper presents an investigation into a method of non-contact breathing rate detection using a motion estimation algorithm, optical flow. Optical flow is used to successfully measure breathing rate by tracking the motion of specific points on the body. In this study, the success of optical flow when using different sets of points is evaluated. Testing shows that both chest and facial movement can be used to determine breathing rate, but to different degrees of success. The chest generates very accurate signals, with an RMSE of 0.63 on the tested videos. Facial points can also generate reliable signals when there is minimal head movement, but are much more vulnerable to noise caused by head and body movements. These findings highlight the potential of optical flow as a non-invasive method for breathing rate detection and emphasize the importance of selecting appropriate points to optimize accuracy.
    Comment: In Proceedings of the Irish Machine Vision and Image Processing Conference 2023 (IMVIP2023), Galway, Ireland, August 2023
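
    The sketch below illustrates the general approach: chest points are tracked with OpenCV's Lucas-Kanade optical flow and the breathing rate is taken from the dominant frequency of their mean vertical displacement. The video path, point locations, frame rate fallback, and frequency band are illustrative assumptions, not the paper's configuration.

```python
# Sketch: track chest points with Lucas-Kanade optical flow, then estimate breathing
# rate from the dominant frequency of the mean vertical displacement. The video path
# and manually chosen chest points are hypothetical; in practice the points could come
# from pose or face detection.
import cv2
import numpy as np

cap = cv2.VideoCapture("subject.mp4")                   # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
points = np.array([[[300, 400]], [[320, 410]], [[340, 400]]], dtype=np.float32)

trace = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    trace.append(new_points[:, 0, 1].mean())            # mean vertical position of tracked points
    prev_gray, points = gray, new_points

# Dominant frequency of the detrended vertical motion gives breaths per minute.
signal = np.array(trace) - np.mean(trace)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
spectrum = np.abs(np.fft.rfft(signal))
band = (freqs >= 0.1) & (freqs <= 0.7)                  # roughly 6-42 breaths per minute
print("Estimated breathing rate:", freqs[band][np.argmax(spectrum[band])] * 60, "bpm")
```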

    Improved Cardiac Arrhythmia Prediction Based on Heart Rate Variability Analysis

    Many types of ventricular and atrial cardiac arrhythmias have been discovered in clinical practice over the past 100 years, and these arrhythmias are a major contributor to sudden cardiac death. Ventricular tachycardia, ventricular fibrillation, and paroxysmal atrial fibrillation are the most commonly occurring and dangerous arrhythmias; therefore, early detection is crucial to prevent further complications and reduce fatalities. Implantable devices such as pacemakers are commonly used in patients at high risk of sudden cardiac death. While great advances have been made in medical technology, significant challenges remain in the effective management of common arrhythmias. This thesis proposes novel arrhythmia detection and prediction methods to differentiate cardiac arrhythmias from non-life-threatening cardiac events, to increase the likelihood of detecting events that may lead to mortality, and to reduce the incidence of unnecessary therapeutic intervention. The methods are based on detailed analysis of Heart Rate Variability (HRV) information. The results of the work show good performance of the proposed methods and support the potential for their deployment in resource-constrained devices for ventricular and atrial arrhythmia prediction, such as implantable pacemakers and defibrillators.
    Comment: PhD thesis
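
    For context, the sketch below computes the standard time-domain HRV features that this kind of analysis typically starts from; the example RR intervals and the feature choices are illustrative and are not the thesis's specific prediction method.

```python
# Standard time-domain HRV features computed over a window of RR intervals (in
# milliseconds). The example RR values are placeholders; the thesis's actual
# prediction methods are not reproduced here.
import numpy as np


def hrv_time_domain_features(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                         # mean RR interval (ms)
        "sdnn": rr.std(ddof=1),                       # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),         # short-term variability
        "pnn50": np.mean(np.abs(diff) > 50) * 100.0,  # % of successive diffs > 50 ms
    }


# Example window of RR intervals preceding a (hypothetical) event of interest.
window = [812, 798, 805, 640, 910, 802, 795, 660, 925, 808]
print(hrv_time_domain_features(window))
```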

    Exploring the viability of bypassing the image signal processor for CNN-based object detection in autonomous vehicles

    In the field of autonomous driving, cameras are crucial sensors for providing information about a vehicle's environment. Image quality refers to a camera system's ability to capture, process, and display signals to form an image. Historically, "good quality" in this context refers to images that have been processed by an Image Signal Processor (ISP) designed with the goal of providing the optimal experience for human consumption. However, image quality as perceived by humans may not always correspond to optimal conditions for computer vision. In the context of human consumption, image quality is well documented and understood; image quality for computer vision applications, such as those in the autonomous vehicle industry, requires more research. Fully autonomous vehicles inevitably encounter constraints concerning data storage, transmission speed, and energy consumption, as a result of the enormous amounts of data generated by the vehicle's suite of multiple different sensors. We propose a potential optimization along the computer vision pipeline: completely bypassing the ISP block for a class of applications. We demonstrate that doing so has a negligible impact on the performance of Convolutional Neural Network (CNN) object detectors. The results also highlight the benefits of using raw pre-ISP data in terms of the computation and energy savings achieved by removing the ISP.
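
    The sketch below illustrates one way raw pre-ISP data can be fed to a CNN: the Bayer mosaic is packed into a four-channel, half-resolution tensor, skipping demosaicing and the rest of the ISP. The RGGB layout, bit depth, and simple normalization are assumptions, not the paper's pipeline.

```python
# Pack a raw RGGB Bayer mosaic into a 4-channel half-resolution tensor (no demosaicing,
# no ISP) suitable for a CNN whose input stem accepts 4 channels. Black/white levels
# and the placeholder 10-bit frame are illustrative assumptions.
import numpy as np
import torch


def pack_bayer_rggb(raw, black_level=64, white_level=1023):
    """raw: (H, W) RGGB mosaic -> (4, H/2, W/2) float tensor in [0, 1]."""
    raw = (raw.astype(np.float32) - black_level) / (white_level - black_level)
    raw = np.clip(raw, 0.0, 1.0)
    planes = np.stack([
        raw[0::2, 0::2],    # R
        raw[0::2, 1::2],    # G (red rows)
        raw[1::2, 0::2],    # G (blue rows)
        raw[1::2, 1::2],    # B
    ])
    return torch.from_numpy(planes)


raw_frame = np.random.randint(0, 1024, size=(1080, 1920), dtype=np.uint16)  # placeholder raw frame
x = pack_bayer_rggb(raw_frame).unsqueeze(0)             # (1, 4, 540, 960)
print(x.shape)
# A detector trained on this representation would use a 4-channel input stem, e.g.:
# first_conv = torch.nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3)
```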

    A review of the impact of rain on camera-based perception in automated driving systems

    Automated vehicles rely heavily on image data from visible spectrum cameras to perform a wide range of tasks, from object detection, classification, and avoidance to path planning. The availability and reliability of these sensors in adverse weather are therefore of critical importance to the safe and continuous operation of an automated vehicle. This review paper presents a data communication-inspired Image Formation Framework that characterizes the data flow from object through channel to sensor, and the subsequent processing of the data. This framework is used to explore the degree to which adverse weather conditions affect the cameras used in automated vehicles for sensing and perception. The effects of rain on each element of the model are reviewed. Furthermore, the prevalence of these rain-induced changes in publicly available open-source datasets is reviewed. The degree to which synthetic rain generation techniques can accurately capture these changes is also examined. Finally, this paper offers some suggestions on how future adverse weather automotive datasets should be collected.
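
    As a toy illustration of the simplest kind of synthetic rain generation discussed in the review, the sketch below composites motion-blurred streaks onto a clear image; the streak count, length, and blending weights are arbitrary assumptions, and the sketch ignores the droplet, lens, and scene-level effects that the Image Formation Framework covers.

```python
# Toy synthetic rain overlay: draw random streaks, motion-blur them, and blend them
# additively onto a clear image. All parameters here are arbitrary illustrative choices.
import cv2
import numpy as np

rng = np.random.default_rng(0)
image = np.full((480, 640, 3), 120, dtype=np.uint8)     # placeholder "clear" image

streaks = np.zeros(image.shape[:2], dtype=np.uint8)
for _ in range(400):
    x, y = int(rng.integers(0, 640)), int(rng.integers(0, 480))
    cv2.line(streaks, (x, y), (x + 4, y + 18), color=255, thickness=1)

# Motion blur the streak layer, then blend it with the clear image.
kernel = np.zeros((9, 9), dtype=np.float32)
kernel[:, 4] = 1.0 / 9.0
streaks = cv2.filter2D(streaks, -1, kernel)
rainy = cv2.addWeighted(image, 1.0, cv2.cvtColor(streaks, cv2.COLOR_GRAY2BGR), 0.6, 0)
cv2.imwrite("rainy_example.png", rainy)
```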

    Pedestrian crossing intention forecasting at unsignalized intersections using naturalistic trajectories

    Interacting with other road users is a challenge for an autonomous vehicle, particularly in urban areas. Existing vehicle systems behave in a reactive manner, warning the driver or applying the brakes when the pedestrian is already in front of the vehicle. The ability to anticipate a pedestrian's crossing intention ahead of time will result in safer roads and smoother vehicle maneuvers. The problem of crossing intent forecasting at intersections is formulated in this paper as a classification task. A model that predicts pedestrian crossing behaviour at different locations around an urban intersection is proposed. The model provides not only a classification label (e.g., crossing, not crossing) but also a quantitative confidence level (i.e., a probability). Training and evaluation are carried out using naturalistic trajectories provided by a publicly available dataset recorded from a drone. Results show that the model is able to predict crossing intention within a 3-second time window.
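
    The sketch below illustrates the classification formulation: hand-crafted features from an observed trajectory feed a probabilistic classifier that outputs both a label and a crossing probability. The feature choices, synthetic training data, and logistic-regression model are illustrative assumptions, not the paper's model or the dataset's exact fields.

```python
# Crossing-intention forecasting framed as binary classification with a probability
# output. Trajectory features and the synthetic training tracks are placeholders for
# features extracted from real naturalistic trajectories.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def trajectory_features(track):
    """track: (T, 2) array of pedestrian positions (hypothetically curb-relative, metres)."""
    track = np.asarray(track, dtype=float)
    velocity = np.diff(track, axis=0)
    return np.array([
        np.linalg.norm(velocity, axis=1).mean(),     # mean speed
        np.abs(velocity[:, 0]).mean(),               # mean motion along the crossing axis
        track[-1, 0],                                # final position along the crossing axis
        track[-1, 0] - track[0, 0],                  # net approach over the window
    ])


# Placeholder training data: synthetic tracks labelled crossing (1) or not crossing (0).
tracks = [np.cumsum(np.random.randn(20, 2) * 0.1 + [0.05 * c, 0], axis=0)
          for c in (0, 1) for _ in range(50)]
labels = np.array([0] * 50 + [1] * 50)

X = np.stack([trajectory_features(t) for t in tracks])
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, labels)
print("P(crossing) for first track:", model.predict_proba(X[:1])[0, 1])
```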