
    Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier

    This paper aims to enhance the ability to predict nighttime driving behavior by identifying the taillights of both human-driven and autonomous vehicles. The proposed model incorporates a customized detector designed to accurately detect front-vehicle taillights on the road. At the beginning of the detector, a learnable pre-processing block is implemented, which extracts deep features from input images and calculates the data rarity for each feature. In the next step, drawing inspiration from soft attention, a weighted binary mask is designed that guides the model to focus more on predetermined regions. This research utilizes Convolutional Neural Networks (CNNs) to extract distinguishing characteristics from these areas, and then reduces their dimensionality using Principal Component Analysis (PCA). Finally, a Support Vector Machine (SVM) is used to predict the behavior of the vehicles. To train and evaluate the model, a large-scale dataset is collected from two types of dash-cams and Insta360 cameras capturing the rear view of Ford Motor Company vehicles. This dataset includes over 12k frames captured during both daytime and nighttime hours. To address the limited nighttime data, a unique pixel-wise image processing technique is implemented to convert daytime images into realistic night images. The findings from the experiments demonstrate that the proposed methodology can accurately categorize vehicle behavior with 92.14% accuracy, 97.38% specificity, 92.09% sensitivity, 92.10% F1-measure, and 0.895 Cohen's Kappa Statistic. Further details are available at https://github.com/DeepCar/Taillight_Recognition. (12 pages, 10 figures)
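
    As a rough sketch of the classification stage this abstract describes (CNN features, PCA dimensionality reduction, then an SVM), the following Python example wires the three steps together with scikit-learn. The feature dimensionality, class count, and hyperparameters are illustrative assumptions, not the paper's actual values.

        # Minimal CNN-features -> PCA -> SVM pipeline (placeholder data).
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Stand-ins for pooled CNN activations of detected taillight regions
        # and their behavior labels (e.g. brake / turn-left / turn-right / none).
        cnn_features = rng.normal(size=(1000, 512))
        labels = rng.integers(0, 4, size=1000)

        clf = make_pipeline(
            StandardScaler(),
            PCA(n_components=64),      # reduce feature dimensionality
            SVC(kernel="rbf", C=1.0),  # final behavior classifier
        )
        clf.fit(cnn_features[:800], labels[:800])
        print("held-out accuracy:", clf.score(cnn_features[800:], labels[800:]))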

    Computer vision for advanced driver assistance systems


    Global Shipping Container Monitoring Using Machine Learning with Multi-Sensor Hubs and Catadioptric Imaging

    We describe a framework for global shipping container monitoring using machine learning with multi-sensor hubs and infrared catadioptric imaging. A wireless mesh radio satellite tag architecture provides connectivity anywhere in the world, a significant improvement over legacy methods. We discuss the design and testing of a low-cost long-wave infrared catadioptric imaging device and multi-sensor hub combination as an intelligent edge computing system that, when equipped with physics-based machine learning algorithms, can interpret the scene inside a shipping container to make efficient use of expensive communications bandwidth. The histogram of oriented gradients plus T-channel (HOG+) feature, introduced here for human detection on low-resolution infrared catadioptric images, is shown to be effective for various mirror shapes designed to give wide volume coverage with controlled distortion. Initial results for through-metal communication with ultrasonic guided waves show promise, using the Dynamic Wavelet Fingerprint Technique (DWFT) to identify Lamb waves in a complicated ultrasonic signal.
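
    On the imaging side, the sketch below computes a standard histogram-of-oriented-gradients descriptor on a low-resolution infrared frame with scikit-image; the T-channel extension that makes up the full HOG+ feature is specific to this work and is not reproduced here, and the frame and parameters are placeholders.

        # Plain HOG descriptor on a placeholder low-resolution LWIR frame.
        import numpy as np
        from skimage.feature import hog

        ir_frame = np.random.rand(64, 128).astype(np.float32)  # fake 64x128 image

        descriptor = hog(
            ir_frame,
            orientations=9,
            pixels_per_cell=(8, 8),
            cells_per_block=(2, 2),
            block_norm="L2-Hys",
        )
        print("HOG descriptor length:", descriptor.shape[0])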

    Vision Sensors and Edge Detection

    The Vision Sensors and Edge Detection book reflects a selection of recent developments in the area of vision sensors and edge detection. The book has two sections: the first presents vision sensors with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques such as image measurements, image transformations, filtering, and parallel computing.
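
    As a small illustration of the edge detection techniques the second section surveys, the following sketch computes a Sobel gradient-magnitude edge map; the input image is a placeholder.

        # Sobel edge detection on a placeholder grayscale image.
        import numpy as np
        from scipy import ndimage

        image = np.random.rand(128, 128)
        gx = ndimage.sobel(image, axis=1)  # horizontal gradient
        gy = ndimage.sobel(image, axis=0)  # vertical gradient
        edges = np.hypot(gx, gy)           # gradient magnitude
        edges /= edges.max()               # normalize to [0, 1]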

    A comparison of near-infrared and visible imaging for surveillance applications

    A computer vision approach with low computational complexity is investigated, comparing near-infrared and visible imaging systems. The target application is a surveillance system for pedestrian and vehicular traffic. Near-infrared light has potential benefits, including non-visible illumination requirements. Image-processing and intelligent classification algorithms for monitoring pedestrians are implemented in outdoor and indoor environments with frequent traffic. The image set collected consists of persons walking in the presence of foreground as well as background objects at different times during the day. Image sets with non-person objects, e.g. bicycles and vehicles, are also considered. The complex, cluttered environments are highly variable, e.g. shadows and moving foliage. The system performance for near-infrared images is compared to that of traditional visible images. The approach consists of thresholding an image and creating a silhouette of new objects in the scene. Filtering is used to eliminate noise. Twenty-four features are calculated by MATLAB® code for each identified object. These features are analyzed for usefulness in object discrimination. Minimal combinations of features are proposed and explored for effective automated discrimination. Features were used to train and test a variety of classification architectures. The results show that the algorithm can effectively manipulate near-infrared images and that effective object classification is possible even in the presence of system noise and environmental clutter. The potential for automated surveillance based on near-infrared imaging and automated feature processing is discussed.
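
    The thresholding, silhouette, and feature-extraction pipeline described above (implemented in MATLAB in the original work) might look roughly like the following Python sketch; the background model, threshold, and chosen region properties are illustrative assumptions.

        # Threshold against a background, filter noise, extract shape features.
        import numpy as np
        from scipy import ndimage
        from skimage import measure

        frame = np.random.rand(240, 320)       # placeholder current frame
        background = np.random.rand(240, 320)  # placeholder background model

        silhouette = np.abs(frame - background) > 0.3    # new objects in scene
        silhouette = ndimage.binary_opening(silhouette)  # eliminate noise
        labeled, num_objects = ndimage.label(silhouette)

        for region in measure.regionprops(labeled):
            # A few shape features of the kind a classifier could be trained on.
            features = (region.area, region.eccentricity, region.extent)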

    Marshall Space Flight Center Research and Technology Report 2019

    Today, our calling to explore is greater than ever before, and here at Marshall Space Flight Center we make human deep space exploration possible. A key goal for Artemis is demonstrating and perfecting capabilities on the Moon for technologies needed for humans to get to Mars. This year's report features 10 of the Agency's 16 Technology Areas, and I am proud of Marshall's role in creating solutions for so many of these daunting technical challenges. Many of these projects will lead to sustainable in-space architecture for human space exploration that will allow us to travel to the Moon, on to Mars, and beyond. Others are developing new scientific instruments capable of providing an unprecedented glimpse into our universe. NASA has led the charge in space exploration for more than six decades, and through the Artemis program we will help build on our work in low Earth orbit and pave the way to the Moon and Mars. At Marshall, we leverage the skills and interest of the international community to conduct scientific research, develop and demonstrate technology, and train international crews to operate farther from Earth for longer periods of time than ever before: first at the lunar surface, then on to our next giant leap, human exploration of Mars. While each project in this report seeks to advance new technology and challenge conventions, it is important to recognize the diversity of activities and people supporting our mission. This report not only showcases the Center's capabilities and our partnerships, it also highlights the progress our people have achieved in the past year. These scientists, researchers, and innovators are why Marshall and NASA will continue to be a leader in innovation, exploration, and discovery for years to come.

    Face pose estimation with automatic 3D model creation for a driver inattention monitoring application

    Text in English; abstract in English and Spanish. Recent studies have identified inattention (including distraction and drowsiness) as the main cause of accidents, being responsible for at least 25% of them. Driving distraction has been less studied, since it is more diverse and exhibits a higher risk factor than fatigue. In addition, it is present in over half of the crashes involving inattention. The increased presence of In-Vehicle Information Systems (IVIS) adds to the potential distraction risk and modifies driving behaviour, and thus research on this issue is of vital importance. Many researchers have been working on different approaches to deal with distraction during driving. Among them, Computer Vision is one of the most common, because it allows for cost-effective and non-invasive driver monitoring and sensing. Using Computer Vision techniques it is possible to evaluate some facial movements that characterise the state of attention of a driver. This thesis presents methods to estimate the face pose and gaze direction of a person in real time, using a stereo camera, as a basis for assessing driver distraction. The methods are completely automatic and user-independent. A set of features in the face are identified at initialisation and used to create a sparse 3D model of the face. These features are tracked from frame to frame, and the model is augmented to cover parts of the face that may have been occluded before. The algorithm is designed to work in a naturalistic driving simulator, which presents challenging low-light conditions. We evaluate several techniques to detect features on the face that can be matched between cameras and tracked with success. Well-known methods such as SURF do not return good results, due to the lack of salient points in the face as well as the low illumination of the images. We introduce a novel multisize technique, based on the Harris corner detector and patch correlation. This technique benefits from the better performance of small patches under rotations and illumination changes, and the more robust correlation of the bigger patches under motion blur. The head rotates in a range of ±90° in the yaw angle, and the appearance of the features changes noticeably. To deal with these changes, we implement a new re-registering technique that captures new textures of the features as the face rotates. These new textures are incorporated into the model, which mixes the views of both cameras. The captures are taken at regular angle intervals for rotations in yaw, so that each texture is only used in a range of ±7.5° around the capture angle. Rotations in pitch and roll are handled using affine patch warping. The 3D model created at initialisation can only take features in the frontal part of the face, and some of these may be occluded during rotations. The accuracy and robustness of the face tracking depend on the number of visible points, so new points are added to the 3D model when new parts of the face are visible from both cameras. Bundle adjustment is used to reduce the accumulated drift of the 3D reconstruction. We estimate the pose from the position of the features in the images and the 3D model using POSIT or Levenberg-Marquardt. A RANSAC process detects incorrectly tracked points, which are not considered for pose estimation. POSIT is faster, while LM obtains more accurate results. Using the model extension and the re-registering technique, we can accurately estimate the pose in the full head rotation range, with error levels that improve on the state of the art.
A coarse eye direction is combined with the face pose estimation to obtain the gaze and the driver's fixation area, a parameter which gives much information about the distraction pattern of the driver. The resulting gaze estimation algorithm proposed in this thesis has been tested on a set of driving experiments directed by a team of psychologists in a naturalistic driving simulator. This simulator mimics conditions present in real driving, including weather changes, manoeuvring, and distractions due to IVIS. Professional drivers participated in the tests. The drivers' fixation statistics obtained with the proposed system show how the utilisation of IVIS influences the distraction pattern of the drivers, increasing reaction times and affecting the fixation of attention on the road and the surroundings.
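
    As a sketch of the pose estimation step described in this abstract, OpenCV's solvePnPRansac can play the role of the Levenberg-Marquardt plus RANSAC combination: given tracked 2D features and their 3D model points, it recovers the head rotation and translation while rejecting incorrectly tracked points. All inputs here are placeholders, not the thesis's data.

        # RANSAC-guarded pose estimation from 2D-3D correspondences.
        import cv2
        import numpy as np

        model_points = np.random.rand(30, 3).astype(np.float32)  # sparse 3D face model
        image_points = np.random.rand(30, 2).astype(np.float32)  # tracked 2D features
        camera_matrix = np.array([[800, 0, 320],
                                  [0, 800, 240],
                                  [0,   0,   1]], dtype=np.float32)

        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            model_points, image_points, camera_matrix, None
        )
        if ok:
            rotation, _ = cv2.Rodrigues(rvec)  # 3x3 head rotation matrix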