Towards a Practical Pedestrian Distraction Detection Framework using Wearables
Pedestrian safety continues to be a significant concern in urban communities
and pedestrian distraction is emerging as one of the main causes of grave and
fatal accidents involving pedestrians. The advent of sophisticated mobile and
wearable devices, equipped with high-precision on-board sensors capable of
measuring fine-grained user movements and context, provides a tremendous
opportunity for designing effective pedestrian safety systems and applications.
Accurate and efficient recognition of pedestrian distractions in real-time
given the memory, computation and communication limitations of these devices,
however, remains the key technical challenge in the design of such systems.
Earlier research efforts in pedestrian distraction detection using data
available from mobile and wearable devices have focused primarily on
achieving high detection accuracy, resulting in designs that are either
resource intensive and unsuitable for implementation on mainstream mobile
devices, or computationally slow and not useful for real-time pedestrian safety
applications, or require specialized hardware and are less likely to be adopted by
most users. In the quest for a pedestrian safety system that achieves a
favorable balance between computational efficiency, detection accuracy, and
energy consumption, this paper makes the following main contributions: (i)
design of a novel complex activity recognition framework which employs motion
data available from users' mobile and wearable devices and a lightweight
frequency matching approach to accurately and efficiently recognize complex
distraction related activities, and (ii) a comprehensive comparative evaluation
of the proposed framework with well-known complex activity recognition
techniques in the literature with the help of data collected from human subject
pedestrians and prototype implementations on commercially-available mobile and
wearable devices.
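The lightweight frequency-matching idea described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction only: the function names, the activity templates, and the sampling rate are assumptions, not values or code from the paper. The sketch finds the dominant frequency of a window of accelerometer magnitudes with a naive DFT and matches it against per-activity template frequencies.

```python
import math

def dominant_frequency(samples, fs):
    """Return the dominant (non-DC) frequency of a signal via a naive DFT.

    samples: list of accelerometer magnitudes; fs: sampling rate in Hz.
    """
    n = len(samples)
    mean = sum(samples) / n
    centred = [s - mean for s in samples]  # remove the DC component
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):
        re = sum(c * math.cos(2 * math.pi * k * i / n) for i, c in enumerate(centred))
        im = sum(c * math.sin(2 * math.pi * k * i / n) for i, c in enumerate(centred))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fs / n  # convert bin index to Hz

def classify_activity(samples, fs, templates):
    """Match the window's dominant frequency to the nearest activity template."""
    f = dominant_frequency(samples, fs)
    return min(templates, key=lambda a: abs(templates[a] - f))

# Illustrative templates (Hz): plausible step rates, not values from the paper.
templates = {"walking": 1.8, "running": 2.8, "standing": 0.2}
fs = 50  # assumed sampling rate in Hz
walk = [math.sin(2 * math.pi * 2.0 * t / fs) for t in range(200)]  # ~2 Hz signal
print(classify_activity(walk, fs, templates))  # -> walking
```

A real deployment would replace the naive DFT with an FFT and match a richer spectral signature rather than a single peak, but the sketch shows why this style of matching is cheap enough for on-device, real-time use.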
SonoNet: Real-Time Detection and Localisation of Fetal Standard Scan Planes in Freehand Ultrasound
Identifying and interpreting fetal standard scan planes during 2D ultrasound
mid-pregnancy examinations are highly complex tasks which require years of
training. Apart from guiding the probe to the correct location, it can be
equally difficult for a non-expert to identify relevant structures within the
image. Automatic image processing can provide tools to help experienced as well
as inexperienced operators with these tasks. In this paper, we propose a novel
method based on convolutional neural networks which can automatically detect 13
fetal standard views in freehand 2D ultrasound data as well as provide a
localisation of the fetal structures via a bounding box. An important
contribution is that the network learns to localise the target anatomy using
weak supervision based on image-level labels only. The network architecture is
designed to operate in real-time while providing optimal output for the
localisation task. We present results for real-time annotation, retrospective
frame retrieval from saved videos, and localisation on a very large and
challenging dataset consisting of images and video recordings of full clinical
anomaly screenings. We found that the proposed method achieved an average
F1-score of 0.798 in a realistic classification experiment modelling real-time
detection, and obtained a 90.09% accuracy for retrospective frame retrieval.
Moreover, an accuracy of 77.8% was achieved on the localisation task.
Comment: 12 pages, 8 figures, published in IEEE Transactions on Medical Imaging.
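The weakly supervised localisation described above, where a network trained only on image-level labels yields a bounding box, can be illustrated by its final post-processing step: thresholding a per-class spatial confidence map and taking the bounding box of the activated region. The function name, the threshold fraction, and the toy map below are assumptions for illustration, not the paper's actual implementation.

```python
def bounding_box_from_map(conf_map, frac=0.5):
    """Extract a bounding box from a class-confidence map by thresholding.

    conf_map: 2D list of per-cell confidences for the detected class
    (e.g. the spatial feature map before global pooling).
    Returns (row_min, col_min, row_max, col_max), or None if nothing activates.
    """
    peak = max(max(row) for row in conf_map)
    thresh = frac * peak  # keep cells within a fraction of the peak response
    cells = [(r, c) for r, row in enumerate(conf_map)
             for c, v in enumerate(row) if v >= thresh]
    if not cells:
        return None
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return (min(rows), min(cols), max(rows), max(cols))

# Toy 4x5 confidence map with a bright blob around rows 1-2, cols 2-3.
m = [[0.0, 0.1, 0.1, 0.0, 0.0],
     [0.0, 0.2, 0.9, 0.8, 0.0],
     [0.1, 0.1, 0.7, 1.0, 0.1],
     [0.0, 0.0, 0.1, 0.1, 0.0]]
print(bounding_box_from_map(m))  # -> (1, 2, 2, 3)
```

The key point the abstract makes is that no box annotations are needed at training time: the spatial map is a by-product of a classifier trained on image-level labels, and a step like the one above turns it into a localisation.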
Machine Learning Models for Network Intrusion Detection and Authentication of Smart Phone Users
A thesis presented to the faculty of the Elmer R. Smith College of Business and Technology at Morehead State University in partial fulfillment of the requirements for the Degree of Master of Science by S. Sareh Ahmadi on November 18, 2019
Overview of the Applied Aerodynamics Division
A major reorganization of the Aeronautics Directorate of the Langley Research Center occurred in early 1989. As a result of this reorganization, the scope of research in the Applied Aeronautics Division is now quite different from that of the past. An overview of the current organization, mission, and facilities of this division is presented. A summary of current research programs and sample highlights of recent research are also presented. This is intended to provide a general view of the scope and capabilities of the division.
Adaptive video segmentation
The efficiency of a video indexing technique depends on the efficiency of the video segmentation algorithm, which is a fundamental step in video indexing. Video segmentation is the process of splitting a video sequence into its constituent scenes. This work focuses on the problem of video segmentation. A content-based approach has been used which segments a video based on information extracted from the video itself. The main emphasis is on using structural information in the video, such as edges, as they are largely invariant to illumination and motion changes. The edge-based features have been used in conjunction with the intensity-based features in a multi-resolution framework to improve the performance of the segmentation algorithm. To further improve the performance and to reduce the problem of automated choice of parameters, we introduce adaptation in the video segmentation process. (Abstract shortened by UMI.)
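A common edge-based cue for the scene-boundary detection this abstract describes is the edge change ratio between consecutive frames. The sketch below is a simplified illustration, not the thesis's algorithm: the classical measure dilates each edge map before comparison, which is omitted here for brevity.

```python
def edge_change_ratio(edges_a, edges_b):
    """Simplified edge change ratio between two binary edge maps.

    edges_a, edges_b: 2D lists of 0/1 edge pixels from consecutive frames.
    A ratio near 1.0 suggests a scene cut; near 0.0 suggests continuity.
    """
    count_a = sum(sum(row) for row in edges_a)
    count_b = sum(sum(row) for row in edges_b)
    if count_a == 0 or count_b == 0:
        return 0.0
    # Edges present in A but absent in B (exiting), and vice versa (entering).
    exiting = sum(1 for r in range(len(edges_a)) for c in range(len(edges_a[0]))
                  if edges_a[r][c] and not edges_b[r][c])
    entering = sum(1 for r in range(len(edges_b)) for c in range(len(edges_b[0]))
                   if edges_b[r][c] and not edges_a[r][c])
    return max(exiting / count_a, entering / count_b)

same = [[0, 1], [1, 0]]
diff = [[1, 0], [0, 1]]
print(edge_change_ratio(same, same))  # -> 0.0 (no cut)
print(edge_change_ratio(same, diff))  # -> 1.0 (likely cut)
```

Because edge maps are largely invariant to illumination and motion changes, a measure like this is more robust to flashes and gradual lighting shifts than raw intensity differencing, which is the motivation the abstract gives for combining edge-based and intensity-based features.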
An Intelligent Safety System for Human-Centered Semi-Autonomous Vehicles
Nowadays, automobile manufacturers are striving to make cars fully safe.
Monitoring the driver's actions with computer vision techniques to detect
driving mistakes in real-time, and then planning autonomous driving manoeuvres
to avoid vehicle collisions, is one of the most important problems
investigated in machine vision and Intelligent Transportation Systems (ITS)
research. The main goal of this study is to prevent accidents caused by fatigue,
drowsiness, and driver distraction. To avoid these incidents, this paper
proposes an integrated safety system that continuously monitors the driver's
attention and vehicle surroundings, and finally decides whether the actual
steering control status is safe or not. For this purpose, we equipped an
ordinary car called FARAZ with a vision system consisting of four mounted
cameras along with a universal car tool for communicating with surrounding
factory-installed sensors and other car systems, and sending commands to
actuators. The proposed system leverages a scene understanding pipeline using
deep convolutional encoder-decoder networks and a driver state detection
pipeline. We have also been identifying and assessing domestic capabilities
for developing these technologies for ordinary vehicles, in order to
manufacture smart cars and to provide an intelligent system that increases
safety and assists the driver in various conditions and situations.
Comment: 15 pages and 5 figures, submitted to the international conference on
Contemporary Issues in Data Science (CiDaS 2019). Learn more about this
project at https://iasbs.ac.ir/~ansari/fara
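The final decision the abstract describes, judging whether the actual steering control status is safe given the driver's attention and the vehicle's surroundings, amounts to fusing the two pipelines' outputs. The rule below is a hypothetical sketch: the function name, inputs, and thresholds are illustrative assumptions, not the paper's decision logic.

```python
def steering_is_safe(driver_attentive, drowsiness_score, obstacle_distance_m,
                     drowsiness_limit=0.6, min_clearance_m=5.0):
    """Hypothetical fusion rule for a driver-monitoring safety system.

    Steering control is judged safe only when the driver-state pipeline
    reports an attentive, non-drowsy driver AND the scene-understanding
    pipeline reports sufficient clearance to the nearest obstacle.
    All thresholds are illustrative, not values from the paper.
    """
    return (driver_attentive
            and drowsiness_score < drowsiness_limit
            and obstacle_distance_m > min_clearance_m)

print(steering_is_safe(True, 0.2, 12.0))   # -> True  (attentive, clear road)
print(steering_is_safe(True, 0.8, 12.0))   # -> False (too drowsy)
print(steering_is_safe(False, 0.1, 20.0))  # -> False (not attentive)
```

In the system the abstract describes, each input would itself be the output of a deep model (driver-state detection, encoder-decoder scene segmentation); the sketch only shows how such signals might be combined into a single safe/unsafe decision.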