29 research outputs found

    Ellipse Feature Detection in Rope Skipping for Gross Motor Skill Development

    To support novice learners in acquiring a new skill, it is important to understand the underlying principles of that skill: the support system must analyze the skill and develop methods to offer credible support for efficient learning. This work uses data from 8 skilled rope-skipping subjects to analyze the rope-skipping task and find features that can be used for support. The process involves finding the best ellipse fits to the rope and hand trajectories, which enables better detection of speed variation and timing that can be used to support new learners. The rope skill attempted in this work is the “double under” jump, defined as completing two rope rotations per jump. Experimental results show that this is an effective method for accurate feature detection and tracking.
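The abstract does not give the fitting procedure in detail; a minimal sketch of trajectory fitting, assuming an axis-aligned ellipse model A·x² + B·y² + C·x + D·y = 1 solved by linear least squares (a general rotated-conic fit would need an eigenproblem on top of this):

```python
import math

def fit_axis_aligned_ellipse(points):
    """Least-squares fit of A*x^2 + B*y^2 + C*x + D*y = 1 to (x, y) points.
    Returns (cx, cy, a, b): center and semi-axes of the fitted ellipse."""
    # Build the normal equations M p = v for the 4 conic parameters.
    rows = [[x * x, y * y, x, y] for x, y in points]
    n = 4
    M = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    v = [sum(r[i] for r in rows) for i in range(n)]
    # Solve the 4x4 system by Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    p = [0.0] * n
    for r in range(n - 1, -1, -1):
        p[r] = (v[r] - sum(M[r][c] * p[c] for c in range(r + 1, n))) / M[r][r]
    A, B, C, D = p
    # Complete the squares: A(x-cx)^2 + B(y-cy)^2 = k.
    cx, cy = -C / (2 * A), -D / (2 * B)
    k = 1 + C * C / (4 * A) + D * D / (4 * B)
    return cx, cy, math.sqrt(k / A), math.sqrt(k / B)
```

Recovering the center and semi-axes from sampled rope or hand positions is the kind of geometry from which speed variation and rotation timing could then be derived.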

    Vehicle Detection and Type Classification Based on CNN-SVM

    In this paper, we propose vehicle detection and type classification in a real road environment using a modified and improved AlexNet. Among the various challenges faced, the poor robustness of extracting vehicle candidate regions from a single feature is addressed with a YOLO-series deep learning algorithm, which proposes candidate regions and further improves detection speed. The lightweight network YOLOv2-tiny is chosen as the localization network, and during training, anchor boxes are clustered on the ground truth of the training set, which improves performance on the specific dataset. The low classification accuracy after template-based feature extraction is addressed by the optimal feature descriptions learned by a convolutional neural network. Moreover, by adjusting the parameters of AlexNet, we propose an improved model that is smaller and classifies faster than the original. Spatial Pyramid Pooling (SPP) is added to the vehicle classification network, which solves the low accuracy caused by image distortion during resizing. By combining the CNN with an SVM and normalizing the features fed to the SVM, the generalization ability of the model is improved. Experiments show that our method performs better in vehicle detection and type classification.
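The anchor box clustering step can be sketched as YOLOv2-style k-means over ground-truth box sizes with 1 − IoU as the distance (the data and k below are illustrative, not from the paper):

```python
import random

def iou_wh(wh1, wh2):
    """IoU of two boxes aligned at a common corner (width/height only)."""
    inter = min(wh1[0], wh2[0]) * min(wh1[1], wh2[1])
    union = wh1[0] * wh1[1] + wh2[0] * wh2[1] - inter
    return inter / union

def anchor_kmeans(boxes, k, iters=50, seed=0):
    """k-means over (w, h) pairs using 1 - IoU as distance, as in YOLOv2."""
    rng = random.Random(seed)
    anchors = rng.sample(boxes, k)
    for _ in range(iters):
        # Assign each ground-truth box to the anchor it overlaps best.
        clusters = [[] for _ in range(k)]
        for b in boxes:
            i = max(range(k), key=lambda j: iou_wh(b, anchors[j]))
            clusters[i].append(b)
        # Update each anchor to its cluster's mean size (keep it if empty).
        anchors = [
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c)) if c else a
            for c, a in zip(clusters, anchors)
        ]
    return sorted(anchors)
```

The IoU distance makes clustering scale-aware, so the learned anchors match the dataset's typical vehicle shapes rather than generic defaults.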

    Towards Autonomous Driving: Road Surface Signs Recognition using Neural Networks

    In recent years, the number of traffic accidents has increased rapidly for many reasons, but the most common cause is driver carelessness and inattentiveness to road signs. The aim of this paper is therefore to automatically recognize road surface markings and, by making this information available to the driver, help reduce road accidents. In the proposed method, the captured image is transformed and edge information is used to extract the target road area. Road marking candidates are then extracted and recognized using a neural network.
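The edge-extraction step can be illustrated with a plain Sobel gradient magnitude, a stand-in for whatever edge operator the paper actually uses:

```python
def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image (list of row lists)."""
    h, w = len(img), len(img[0])
    kx = [(-1, 0, 1), (-2, 0, 2), (-1, 0, 1)]   # horizontal-gradient kernel
    ky = [(-1, -2, -1), (0, 0, 0), (1, 2, 1)]   # vertical-gradient kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Thresholding the resulting magnitude map would give the edge pixels from which the road area and marking candidates are extracted.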

    Human Detection and Tracking Using Hog Feature and Particle Filter

    Video surveillance systems have recently attracted much attention in various fields for monitoring and ensuring security. One promising application is crowd control to maintain general security in public places. However, a problem with video surveillance systems is that they require continuous manual monitoring, especially for crime deterrence. To assist security officers monitoring live surveillance feeds, intelligent target detection and tracking techniques can send them a warning signal automatically. Towards this end, in this paper we propose a method to detect and track a target person in a crowded area using the individual’s features. To realize automatic detection and tracking, we combine Histogram of Oriented Gradients (HOG) feature detection with a particle filter: the HOG feature describes the person’s contour, while the particle filter tracks the target using skin- and clothing-color features. We have developed an evaluation system implementing the proposed method. In our experiments, we achieved a high detection rate and tracked the specific target precisely.
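The tracking half can be sketched as a bootstrap particle filter; here a 1-D toy version in which the Gaussian observation likelihood stands in for the skin/clothing-color score (all parameters are illustrative):

```python
import math
import random

def particle_filter_track(observations, n_particles=500,
                          motion_std=1.0, obs_std=2.0, seed=1):
    """Minimal bootstrap particle filter for a 1-D position from noisy scores."""
    rng = random.Random(seed)
    particles = [rng.gauss(observations[0], obs_std) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: diffuse particles with a random-walk motion model.
        particles = [p + rng.gauss(0, motion_std) for p in particles]
        # Weight: likelihood of the observation (stand-in for a colour score).
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Estimate by the weighted mean, then resample by weight.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

In the actual system the 1-D position becomes an image-plane state and the likelihood comes from comparing the particle's patch against the target's skin and clothing colors.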

    DETECTION OF DRIVER’S VISUAL DISTRACTION USING DUAL CAMERAS

    Most serious accidents are caused by the driver’s visual distraction, so its early detection is very important. The most widely used detection setup is the dashboard camera, because it is cheap and convenient; other studies rely on additional equipment such as vehicle-mounted devices, wearable devices, and special-purpose cameras, but these approaches are expensive. The main goal of our research is therefore a low-cost, non-intrusive, and lightweight driver’s visual distraction detection (DVDD) system using only a simple dual dashboard camera. Most current research focuses only on tracking and estimating the driver’s gaze; in our study, we additionally monitor the road environment and evaluate the driver’s visual distraction from the two pieces of information together. The proposed system has two main modules: 1) gaze mapping and 2) moving object detection. The gaze mapping module receives video captured by a camera placed in front of the driver and predicts the driver’s gaze direction as one of 16 predefined gaze regions. Concurrently, the moving object detection module identifies moving objects in the front view and determines in which of the 16 gaze regions they appear. By combining and evaluating the two modules, the driver’s state of distraction can be estimated: if the two modules output different, non-neighboring gaze regions, the system considers the driver visually distracted and issues a warning. We conducted experiments on our self-built real-driving DriverGazeMapping dataset. In the gaze mapping module, we compared two methods, MobileNet and OpenFace with an SVM classifier; both outperformed the baseline gaze mapping module.
Moreover, for the OpenFace-with-SVM method, we investigated which features extracted by OpenFace affect the performance of the gaze mapping module. The most effective was the combination of the gaze angle and head position_R features, with which the OpenFace-with-SVM method achieved 6.25% higher accuracy than MobileNet. In addition, in our experiments the moving object detection module using the dense Lucas-Kanade method was faster and more reliable than that of the previous study.
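The warning decision described above, flagging distraction when the gaze region and the object region are neither equal nor adjacent, can be sketched on a 4×4 region grid (the 8-connected neighborhood is an assumption; the abstract does not spell out its region adjacency):

```python
def is_distracted(gaze_region, object_region, grid_w=4):
    """True when the gaze region and the moving object's region are neither
    equal nor 8-connected neighbours in a grid of 16 regions (indices 0-15)."""
    gy, gx = divmod(gaze_region, grid_w)
    oy, ox = divmod(object_region, grid_w)
    # Chebyshev distance <= 1 means same cell or a touching neighbour.
    return max(abs(gy - oy), abs(gx - ox)) > 1
```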

    Advanced Safety Vehicle (ASV) Technology Driver Support System Monitor using Three Onboard Cameras

    This paper presents the development of a safe-driving support system using three onboard cameras. One camera monitors the driver to determine the driver’s current gaze; a front camera detects pedestrians, the running lane, and vehicles ahead; and a rear camera detects pedestrians and approaching vehicles. Pedestrians and vehicles are detected using a specially trained HOG/AdaBoost system, while lane detection uses edge detection and RANSAC. Information from the three cameras is then used to determine whether a situation is dangerous enough to warrant warning the driver. We have conducted experiments, and the results confirm that this system has the potential to support automatic driving.
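The lane-fitting step can be illustrated with a minimal RANSAC line fit over edge points (the y = m·x + c model, tolerance, and iteration count are illustrative):

```python
import random

def ransac_line(points, iters=200, tol=1.0, seed=0):
    """RANSAC fit of a line y = m*x + c, robust to outliers such as edge
    pixels that do not belong to the lane marking."""
    rng = random.Random(seed)
    best_m, best_c, best_inliers = 0.0, 0.0, -1
    for _ in range(iters):
        # Hypothesize a line from a random pair of points.
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample; skip under this parameterization
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # Score by how many points fall within the tolerance band.
        inliers = sum(1 for x, y in points if abs(y - (m * x + c)) <= tol)
        if inliers > best_inliers:
            best_m, best_c, best_inliers = m, c, inliers
    return best_m, best_c
```

Because the score counts inliers rather than summing residuals, a few stray edge points cannot drag the fitted lane line off course.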

    Two-Stage Robust Optimization for the Orienteering Problem with Stochastic Weights

    In this paper, the two-stage orienteering problem with stochastic weights is studied, where the first-stage problem is to plan a path under the uncertain environment and the second-stage problem is a recourse action that ensures the length constraint is satisfied after the uncertainty is realized. First, we explain the recourse model proposed by Evers et al. (2014) and point out that this model is very complex. Then, we introduce a new recourse model which is much simpler, with fewer variables and fewer constraints. Based on these two recourse models, we introduce two different two-stage robust models for the orienteering problem with stochastic weights. We theoretically prove that the two-stage robust models are equivalent to their corresponding static robust models under the box uncertainty set, which indicates that the two-stage robust models can be solved by common mathematical programming solvers (e.g., the IBM CPLEX optimizer). Furthermore, we prove that the two two-stage robust models are equivalent to each other even though they are based on different recourse models, which indicates that the much simpler model can replace the complex one in practice. A case study compares the two-stage robust models with a one-stage robust model for the orienteering problem with stochastic weights. The numerical results of the comparative studies show the effectiveness and superiority of the proposed two-stage robust models for dealing with the two-stage orienteering problem with stochastic weights.
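The box-uncertainty equivalence can be sketched in assumed notation (nominal weights $\bar t_{ij}$, deviations $\hat t_{ij}$, arc variables $x_{ij}$, budget $T_{\max}$; the paper's own symbols may differ): over a box, each weight attains its worst case independently, so the robust length constraint collapses to a static one.

```latex
\mathcal{U} = \bigl\{ t : t_{ij} = \bar t_{ij} + \hat t_{ij}\,\xi_{ij},\;
\xi_{ij} \in [0,1] \bigr\},
\qquad
\max_{t \in \mathcal{U}} \sum_{(i,j)} t_{ij} x_{ij}
= \sum_{(i,j)} \bigl(\bar t_{ij} + \hat t_{ij}\bigr) x_{ij} \le T_{\max}.
```

This is why the two-stage models become solvable by an off-the-shelf solver: the inner maximization disappears and only a deterministic constraint remains.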

    Domain adaptation for driver's gaze mapping for different drivers and new environments

    Distracted driving is a leading cause of traffic accidents, and often arises from a lack of visual attention on the road. To enhance road safety, monitoring a driver's visual attention is crucial. Appearance-based gaze estimation using deep learning and Convolutional Neural Networks (CNN) has shown promising results, but it faces challenges when applied to different drivers and environments. In this paper, we propose a domain adaptation-based solution for gaze mapping, which aims to accurately estimate a driver's gaze across diverse drivers and new environments. Our method consists of three steps: pre-processing, facial feature extraction, and gaze region classification. We explore two strategies for input feature extraction, one utilizing the full appearance of the driver and environment and the other focusing on the driver's face. Through unsupervised domain adaptation, we align the feature distributions of the source and target domains using a conditional Generative Adversarial Network (GAN). We conduct experiments on the Driver Gaze Mapping (DGM) dataset and the Columbia Cave-DB dataset to evaluate the performance of our method. The results demonstrate that our proposed method reduces the gaze mapping error, achieves better performance on different drivers and camera positions, and outperforms existing methods. We achieved an average Strictly Correct Estimation Rate (SCER) accuracy of 81.38% and 93.53% and Loosely Correct Estimation Rate (LCER) accuracy of 96.69% and 98.9% for the two strategies, respectively, indicating the effectiveness of our approach in adapting to different domains and camera positions. Our study contributes to the advancement of gaze mapping techniques and provides insights for improving driver safety in various driving scenarios.
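The two reported metrics can be made concrete; a sketch assuming the 16 gaze regions form a 4×4 grid and "loosely correct" means landing in the true region or an 8-connected neighbor (the paper's exact neighborhood definition may differ):

```python
def scer_lcer(preds, labels, grid_w=4):
    """Strictly/Loosely Correct Estimation Rates over gaze regions 0-15.
    SCER requires an exact region match; LCER also accepts an 8-connected
    neighbour of the true region in the grid."""
    strict = loose = 0
    for p, t in zip(preds, labels):
        py, px = divmod(p, grid_w)
        ty, tx = divmod(t, grid_w)
        if p == t:
            strict += 1
        if max(abs(py - ty), abs(px - tx)) <= 1:
            loose += 1
    n = len(labels)
    return strict / n, loose / n
```

LCER is always at least SCER, which matches the reported pattern (96.69% / 98.9% loose versus 81.38% / 93.53% strict).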

    Auto-Differentiated Fixed Point Notation on Low-Powered Hardware Acceleration

    Reducing electric power consumption and speeding up processing are catching the interest of deep learning researchers. Quantization offers distillation mechanisms that substitute integers for floating-point numbers, but little has been suggested about the floating-point numbers themselves. The use of Q-format notation reduces computational overhead, freeing resources for the introduction of more operations. Our experiments, conditioned on varying regimes, introduce automatic differentiation on algorithms like the fast Fourier transform and Winograd minimal filtering to reduce computational complexity (expressed in total number of MACs), and suggest a path towards the assistive intelligence concept. Empirical results show that, under specific heuristics, Q-format number notation can overcome the shortfalls of floating-point numbers, especially on embedded systems. Further benchmarks, like the FPBench standard, give more detail by comparing our proposals with common deep learning operations.
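A minimal sketch of the Qm.n fixed-point notation the paper builds on: m integer bits, n fractional bits, and one sign bit, stored as a plain integer (the saturating behavior here is an assumption; implementations may wrap instead):

```python
def to_q(x, m, n):
    """Quantize a real number to Qm.n fixed point, saturating at the
    representable range [-2^m, 2^m - 2^-n]."""
    scaled = round(x * (1 << n))
    lo, hi = -(1 << (m + n)), (1 << (m + n)) - 1
    return max(lo, min(hi, scaled))

def from_q(q, n):
    """Recover the real value represented by a Qm.n integer."""
    return q / (1 << n)

def q_mul(a, b, n):
    """Fixed-point multiply: the raw product of two Qm.n numbers carries
    2n fractional bits, so shift right by n to return to Qm.n."""
    return (a * b) >> n
```

Because every operation is integer arithmetic plus shifts, this is the kind of representation that frees multiply-accumulate resources on low-powered hardware.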