Analysis and development of a novel algorithm for the in-vehicle hand-usage of a smartphone
Smartphone usage while driving is unanimously considered a dangerous habit due to its strong correlation with road accidents. In this paper, the problem of detecting whether the driver is using the phone during a trip is addressed. To do this, high-frequency data from the triaxial inertial measurement unit (IMU) integrated in almost all modern phones are processed without relying on external inputs, so as to provide a self-contained approach. By resorting to a frequency-domain analysis, it is possible to extract from the raw signals the information needed to detect when the driver is using the phone, without being affected by the effects that vehicle motion has on the same signals. The selected features are used to train a Support Vector Machine (SVM) classifier. The performance of the proposed approach is analyzed and tested on experimental data collected in mixed naturalistic driving scenarios, proving its effectiveness.
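The pipeline this abstract describes, band energies from a frequency-domain analysis of triaxial IMU windows feeding an SVM, can be sketched as follows. The sampling rate, frequency bands, window length, and synthetic data below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def band_energies(window, fs=100.0, bands=((0.5, 3.0), (3.0, 10.0), (10.0, 30.0))):
    """Per-axis spectral energy of one triaxial IMU window in a few bands (illustrative bands)."""
    spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2      # power per frequency bin, per axis
    freqs = np.fft.rfftfreq(window.shape[0], d=1.0 / fs)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.extend(spectrum[mask].sum(axis=0))             # one energy value per axis per band
    return np.asarray(feats)

# Synthetic stand-in data: 40 windows of shape (samples, 3 axes), binary labels.
rng = np.random.default_rng(0)
windows = rng.normal(size=(40, 256, 3))
labels = rng.integers(0, 2, size=40)

X = np.stack([band_energies(w) for w in windows])            # (40, 9): 3 bands x 3 axes
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]).shape)  # (5,)
```

Computing features per band rather than feeding raw spectra keeps the feature vector small and, as the abstract argues, separates hand-motion frequencies from vehicle-motion frequencies.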
MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications
Mobile smartphones along with embedded sensors have become an efficient
enabler for various mobile applications including opportunistic sensing. The
hi-tech advances in smartphones are opening up a world of possibilities. This
paper proposes a mobile collaborative platform called MOSDEN that enables and
supports opportunistic sensing at run time. MOSDEN captures and shares sensor
data across multiple apps, smartphones and users. MOSDEN supports the emerging
trend of separating sensors from application-specific processing, storing and
sharing. MOSDEN promotes reuse and re-purposing of sensor data hence reducing
the efforts in developing novel opportunistic sensing applications. MOSDEN has
been implemented on Android-based smartphones and tablets. Experimental
evaluations validate the scalability and energy efficiency of MOSDEN and its
suitability towards real world applications. The results of evaluation and
lessons learned are presented and discussed in this paper.
Comment: Accepted to be published in Transactions on Collaborative Computing, 2014. arXiv admin note: substantial text overlap with arXiv:1310.405
SaferCross: Enhancing Pedestrian Safety Using Embedded Sensors of Smartphone
The number of pedestrian accidents continues to climb. Smartphone distraction is one of the leading causes of pedestrian fatalities. In this paper, we develop SaferCross, a mobile system based on the embedded sensors of a smartphone that improves pedestrian safety by preventing smartphone distraction. SaferCross adopts a holistic approach by identifying and developing essential system components that are missing in existing systems and integrating them into a fully functioning mobile system for pedestrian safety. Specifically, we create algorithms for improving the accuracy and energy efficiency of pedestrian positioning, the effectiveness of phone-activity detection, and real-time risk assessment. We demonstrate that SaferCross, through systematic integration of the developed algorithms, performs situation awareness effectively and provides timely warnings to the pedestrian based on information obtained from smartphone sensors and Wi-Fi Direct peer-to-peer communication with approaching cars. Extensive experiments were conducted in a department parking lot for both component-level and integrated testing. The results demonstrate that the energy efficiency and positioning accuracy of SaferCross improve by 52% and 72% on average over existing solutions that lack support for positioning accuracy and energy efficiency, and that phone-viewing event detection accuracy is over 90%. The integrated test results show that SaferCross alerts the pedestrian in a timely manner, with an average error of 1.6 s relative to the ground-truth data, which can easily be compensated for by configuring the system to fire the alert a couple of seconds earlier.
Comment: Published in IEEE Access, 202
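The real-time risk assessment and early-alert idea described above can be illustrated with a minimal time-to-collision check. The straight-line-travel assumption, thresholds, and function names here are mine, not components of SaferCross.

```python
import math

def time_to_collision(ped_pos, car_pos, car_speed_mps):
    """Seconds until a car reaches the pedestrian's position, assuming straight-line travel."""
    distance = math.dist(ped_pos, car_pos)
    if car_speed_mps <= 0:
        return math.inf
    return distance / car_speed_mps

def should_alert(ttc_sec, reaction_margin_sec=2.0, alert_threshold_sec=5.0):
    """Fire the alert a couple of seconds early to absorb timing error (illustrative margins)."""
    return ttc_sec <= alert_threshold_sec + reaction_margin_sec

ttc = time_to_collision((0.0, 0.0), (60.0, 0.0), 10.0)  # car 60 m away closing at 10 m/s
print(ttc, should_alert(ttc))  # 6.0 True
```

Padding the alert threshold with a fixed margin is one simple way to absorb the ~1.6 s average timing error the abstract reports.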
Seamless Interactions Between Humans and Mobility Systems
As mobility systems, including vehicles and roadside infrastructure, enter a period of rapid and profound change, it is important to enhance interactions between people and mobility systems. Seamless human-mobility system interactions can promote widespread deployment of engaging applications, which are crucial for driving safety and efficiency.
The ever-increasing penetration rate of ubiquitous computing devices, such as smartphones and wearable devices, can facilitate realization of this goal. Although researchers and developers have attempted to adapt ubiquitous sensors for mobility applications (e.g., navigation apps), these solutions often suffer from limited usability and can be risk-prone. The root causes of these limitations include the low sensing modality and limited computational power available in ubiquitous computing devices.
We address these challenges by developing and demonstrating that novel sensing techniques and machine learning can be applied to extract essential, safety-critical information from drivers' natural driving behavior, even actions as subtle as steering maneuvers (e.g., left-/right-hand turns and lane changes). We first show how ubiquitous sensors can be used to detect steering maneuvers regardless of disturbances to sensing devices. Next, by focusing on turning maneuvers, we characterize drivers' driving patterns using a quantifiable metric. Then, we demonstrate how microscopic analyses of crowdsourced ubiquitous sensory data can be used to infer critical macroscopic contextual information, such as risks present at road intersections. Finally, we use ubiquitous sensors to profile a driver's behavioral patterns on a large scale; such sensors are found to be essential to the analysis and improvement of drivers' driving behavior.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163127/1/chendy_1.pd
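Detecting a steering maneuver from ubiquitous sensors, as described above, is commonly done by integrating the gyroscope yaw rate over a window; a minimal sketch follows. The 60-degree threshold, sign convention (positive = left), and synthetic signal are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

def detect_turn(yaw_rate, fs=50.0, angle_thresh_deg=60.0):
    """Classify a window of yaw-rate samples (deg/s) as a left turn, right turn, or straight.

    Accumulated heading change beyond the threshold indicates a turn.
    """
    heading_change = np.sum(yaw_rate) / fs   # Riemann-sum integration, in degrees
    if heading_change > angle_thresh_deg:
        return "left_turn"
    if heading_change < -angle_thresh_deg:
        return "right_turn"
    return "straight"

t = np.linspace(0, 4, 200)                   # 4 s window at 50 Hz
left = 45.0 * np.sin(np.pi * t / 4) ** 2     # smooth ~90-degree left turn
print(detect_turn(left))
```

Integrating the rate rather than thresholding instantaneous samples makes the decision robust to short disturbances of the sensing device, the kind of robustness the abstract emphasizes.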
A human computer interactions framework for biometric user identification
Computer-assisted functionalities and services have saturated our world, becoming such an integral part of our daily activities that we hardly notice them. In this study we focus on enhancements in Human-Computer Interaction (HCI) that can be achieved by natural user recognition embedded in the employed interaction models. Natural identification among humans is mostly based on biometric characteristics representing what we are (face, body outlook, voice, etc.) and how we behave (gait, gestures, posture, etc.). Following this observation, we investigate different approaches and methods for adapting existing biometric identification methods and technologies to the needs of evolving natural human-computer interfaces.
Driver Distraction Identification with an Ensemble of Convolutional Neural Networks
The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been continuously increasing over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted-driver detection is concerned with a small set of distractions (mostly cell-phone usage), and unreliable ad hoc methods are often used. In this paper, we present the first publicly available dataset for driver-distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep-learning-based solution that achieves 90% accuracy. The system consists of a genetically weighted ensemble of convolutional neural networks; we show that a weighted ensemble of classifiers using a genetic algorithm yields better classification confidence. We also study the effect of different visual elements on distraction detection by means of face and hand localization and skin segmentation. Finally, we present a thinned version of our ensemble that achieves 84.64% classification accuracy and can operate in a real-time environment.
Comment: arXiv admin note: substantial text overlap with arXiv:1706.0949
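The idea of genetically weighting an ensemble, searching for per-model weights that maximize accuracy of the combined class probabilities, can be sketched with a tiny genetic algorithm. The population size, mutation scheme, and toy model outputs below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for per-model class-probability outputs: 3 models, 30 samples, 4 classes.
probs = rng.dirichlet(np.ones(4), size=(3, 30))
labels = rng.integers(0, 4, size=30)

def accuracy(weights):
    """Accuracy of the ensemble that averages model probabilities with the given weights."""
    w = np.asarray(weights) / np.sum(weights)
    combined = np.tensordot(w, probs, axes=1)               # (30, 4) weighted probabilities
    return np.mean(combined.argmax(axis=1) == labels)

def evolve_weights(generations=40, pop_size=20, sigma=0.1):
    """Tiny genetic algorithm: keep the fittest half, refill with mutated copies."""
    pop = rng.random((pop_size, probs.shape[0])) + 1e-6     # positive initial weight vectors
    for _ in range(generations):
        fitness = np.array([accuracy(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[-pop_size // 2:]]   # fittest half survives
        children = np.abs(elite + rng.normal(0, sigma, elite.shape))
        pop = np.vstack([elite, children])
    return max(pop, key=accuracy)

best = evolve_weights()
print(round(accuracy(best), 2))
```

In the paper's setting the probabilities would come from the trained CNNs on a validation set; weighting models by fitness lets stronger networks dominate the vote without retraining them.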
Efficient Opportunistic Sensing using Mobile Collaborative Platform MOSDEN
Mobile devices are rapidly becoming the primary computing device in people's
lives. Application delivery platforms like Google Play and the Apple App Store have transformed mobile phones into intelligent computing devices by means of applications that can be downloaded and installed instantly. Many of these
applications take advantage of the plethora of sensors installed on the mobile
device to deliver enhanced user experience. The sensors on the smartphone
provide the opportunity to develop innovative mobile opportunistic sensing
applications in many sectors including healthcare, environmental monitoring and
transportation. In this paper, we present a collaborative mobile sensing framework, the Mobile Sensor Data EngiNe (MOSDEN), that can operate on smartphones, capturing and sharing sensed data between multiple distributed applications and users. MOSDEN follows a component-based design philosophy, promoting reuse for easy and quick opportunistic sensing application deployments. MOSDEN separates application-specific processing from sensing, storing and sharing. MOSDEN is scalable and requires minimal development effort from the application developer. We have implemented our framework on Android-based mobile platforms and evaluated its performance to validate the feasibility and efficiency of MOSDEN operating collaboratively in mobile opportunistic sensing applications. Experimental outcomes and lessons learnt conclude the paper.