Context-Aware Android Applications through Transportation Mode Detection Techniques
In this paper, we study the problem of detecting the current transportation mode of the user from smartphone sensor data, an issue that is crucial for the deployment of a multitude of mobility-aware systems, ranging from trace collectors to health monitoring and urban sensing systems. Although some feasibility studies have been performed in the literature, most of the proposed systems rely on the use of GPS and on computationally expensive algorithms that do not take into account the limited resources of mobile phones. In contrast, this paper focuses on the design and implementation of a feasible and efficient detection system that addresses both classification accuracy and energy consumption. To this purpose, we propose the use of embedded sensor data (accelerometer/gyroscope) with a novel meta-classifier based on a cascading technique, and we show that our combined approach can provide performance similar to that of a GPS-based classifier, while also making it possible to control the computational load based on the requested confidence. We describe the implementation of the proposed system as an Android framework that can be leveraged by third-party mobile applications to access context-aware information in a transparent way.
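The cascading idea described above can be sketched as follows. This is a minimal illustration, assuming a cheap and a costly model that each return a (label, confidence) pair; the models, threshold, and single-feature inputs are hypothetical stand-ins, not the paper's actual classifiers:

```python
# Sketch of a cascading meta-classifier: a cheap model runs first, and the
# costly model is invoked only when the cheap model's confidence falls
# below the requested threshold, trading accuracy against energy.

def cascade_predict(sample, cheap_model, costly_model, confidence=0.8):
    """Return (label, used_costly) for one sensor sample."""
    label, prob = cheap_model(sample)
    if prob >= confidence:
        return label, False           # accept the cheap answer, save energy
    return costly_model(sample)[0], True  # escalate to the costly model

# Toy stand-in models: classify "walk" vs "vehicle" from a single
# acceleration-variance feature (a hypothetical simplification).
def cheap(sample):
    return ("walk", 0.9) if sample > 2.0 else ("vehicle", 0.6)

def costly(sample):
    return ("vehicle", 0.95) if sample < 2.0 else ("walk", 0.95)
```

Raising the confidence threshold forces more escalations to the costly model, which is how a requested-confidence knob maps onto computational load.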
Custom Dual Transportation Mode Detection by Smartphone Devices Exploiting Sensor Diversity
Making applications aware of the mobility experienced by the user can open
the door to a wide range of novel services in different use-cases, from smart
parking to vehicular traffic monitoring. In the literature, there are many
different studies demonstrating the theoretical possibility of performing
Transportation Mode Detection (TMD) by mining smartphone embedded sensor
data. However, very few of them provide details on the benchmarking process and
on how to implement the detection process in practice. In this study, we
provide guidelines and fundamental results that can be useful for both
researchers and practitioners aiming to implement a working TMD system. These
guidelines consist of three main contributions. First, we detail the
construction of a training dataset, gathered by heterogeneous users and
including five different transportation modes; the dataset is made available to
the research community as a reference benchmark. Second, we provide an in-depth
analysis of sensor relevance for the case of Dual TMD, which is required by
most mobility-aware applications. Third, we investigate the possibility of
performing TMD for unknown users/instances not present in the training set, and we
compare with state-of-the-art Android APIs for activity recognition.

Comment: Pre-print of the accepted version for the 14th Workshop on Context
and Activity Modeling and Recognition (IEEE COMOREA 2018), Athens, Greece,
March 19-23, 2018.
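As an illustration of the kind of features a TMD pipeline mines from embedded sensors, the sketch below computes orientation-independent magnitude statistics over an accelerometer window. The feature set (magnitude mean and standard deviation) is a common convention in the TMD literature, not the exact one benchmarked in the paper:

```python
import math

# Hand-crafted features over a window of 3-axis accelerometer samples,
# a typical input to a transportation-mode classifier.

def accel_features(window):
    """window: list of (x, y, z) accelerometer readings in m/s^2.
    Returns (mean, std) of the acceleration magnitude, which is
    orientation-independent and thus robust to phone placement."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return mean, math.sqrt(var)
```

A stationary phone yields a magnitude near gravity with near-zero variance, while walking or riding a vehicle produces characteristically different variance, which is what makes such features discriminative.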
MobilitApp: Analysing mobility data of citizens in the metropolitan area of Barcelona
MobilitApp is a platform designed to provide smart mobility services in urban
areas. It is designed to help citizens and transport authorities alike.
Citizens will be able to access the MobilitApp mobile application and decide
their optimal transportation strategy by visualising their usual routes, their
carbon footprint, receiving tips, analytics and general mobility information,
such as traffic and incident alerts. Transport authorities and service
providers will be able to access information about the mobility patterns of
citizens to offer better services and improve costs and planning. The
MobilitApp client runs on Android devices and records periodic location
updates from its users while running in the background. The information
obtained is processed and analysed to understand the mobility patterns of our
users in the city of Barcelona, Spain.
The University of Sussex-Huawei locomotion and transportation dataset for multimodal analytics with mobile devices
Scientific advances build on reproducible research, which needs publicly available benchmark datasets. The computer vision and speech recognition communities have led the way in establishing benchmark datasets, but far fewer datasets are available in mobile computing, especially for rich locomotion and transportation analytics.
This paper presents a highly versatile and precisely annotated large-scale dataset of smartphone sensor data for multimodal locomotion and transportation analytics of mobile users. The dataset comprises 7 months of measurements, collected from all sensors of 4 smartphones carried at typical body locations, including the images of a body-worn camera, while 3 participants used 8 different modes of transportation in the southeast of the United Kingdom, including in London. In total, 28 context labels were annotated, including transportation mode, participant's posture, inside/outside location, road conditions, traffic conditions, presence in tunnels, social interactions, and having meals. The total amount of collected data exceeds 950 GB of sensor data, which corresponds to 2812 hours of labelled data and 17562 km of traveled distance. We present how we set up the data collection, including the equipment used and the experimental protocol.
We discuss the dataset, including the data curation process and the analysis of the annotations and of the sensor data. We discuss the challenges encountered and present the lessons learned and some of the best practices we developed to ensure high-quality data collection and annotation. We discuss the potential applications which can be developed using this large-scale dataset. In particular, we present how a machine-learning system can use this dataset to automatically recognize modes of transportation. Many other research questions related to transportation analytics, activity recognition, radio signal propagation and mobility modelling can be addressed through this dataset. The full dataset is being made available to the community, and a thorough preview is already published.
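The usual first step when training a recogniser on such a labelled sensor stream can be sketched as follows: segmenting the stream into fixed-length windows, each tagged with its majority transportation-mode label. The window/step sizes and the majority rule are common conventions, not the paper's recording protocol:

```python
from collections import Counter

# Segment a labelled sample stream into (window, label) training pairs.
# Overlap is controlled by step < size; here windows may also be disjoint.

def windows(samples, labels, size, step):
    """Yield (window, majority_label) over a labelled sample stream."""
    for start in range(0, len(samples) - size + 1, step):
        seg = samples[start:start + size]
        lab = Counter(labels[start:start + size]).most_common(1)[0][0]
        yield seg, lab
```

Each yielded window can then be turned into a feature vector and fed to a classifier, with the majority label serving as ground truth.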
PinMe: Tracking a Smartphone User around the World
With the pervasive use of smartphones that sense, collect, and process
valuable information about the environment, ensuring location privacy has
become one of the most important concerns in the modern age. A few recent
research studies discuss the feasibility of processing data gathered by a
smartphone to locate the phone's owner, even when the user does not intend to
share their location information, e.g., when the Global Positioning System (GPS)
is off. Previous research efforts rely on at least one of the two following
fundamental requirements, which significantly limit the ability of the
adversary: (i) the attacker must accurately know either the user's initial
location or the set of routes through which the user travels and/or (ii) the
attacker must measure a set of features, e.g., the device's acceleration, for
potential routes in advance and construct a training dataset. In this paper, we
demonstrate that neither of the above-mentioned requirements is essential for
compromising the user's location privacy. We describe PinMe, a novel
user-location mechanism that exploits non-sensory/sensory data stored on the
smartphone, e.g., the environment's air pressure, along with publicly-available
auxiliary information, e.g., elevation maps, to estimate the user's location
when all location services, e.g., GPS, are turned off.

Comment: This is the preprint version; the paper has been published in IEEE
Trans. Multi-Scale Computing Systems, DOI: 10.1109/TMSCS.2017.275146
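To illustrate how air pressure becomes a location signal of the kind PinMe combines with public elevation maps, the sketch below maps barometric pressure to elevation using the standard-atmosphere hypsometric approximation. The constants are the usual sea-level reference values, not parameters taken from the paper:

```python
# Standard-atmosphere approximation: elevation from barometric pressure.
# An attacker can match the resulting elevation profile of a trip against
# publicly available elevation maps to narrow down the route.

def pressure_to_elevation(p_hpa, p0_hpa=1013.25):
    """Approximate elevation in metres from air pressure in hPa,
    assuming sea-level reference pressure p0_hpa."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```

The absolute value is weather-dependent, but relative changes along a trip are stable enough to correlate with terrain, which is what makes the pressure sensor privacy-sensitive even with GPS off.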
Towards a Practical Pedestrian Distraction Detection Framework using Wearables
Pedestrian safety continues to be a significant concern in urban communities
and pedestrian distraction is emerging as one of the main causes of grave and
fatal accidents involving pedestrians. The advent of sophisticated mobile and
wearable devices, equipped with high-precision on-board sensors capable of
measuring fine-grained user movements and context, provides a tremendous
opportunity for designing effective pedestrian safety systems and applications.
Accurate and efficient recognition of pedestrian distractions in real-time
given the memory, computation and communication limitations of these devices,
however, remains the key technical challenge in the design of such systems.
Earlier research efforts in pedestrian distraction detection using data
available from mobile and wearable devices have primarily focused only on
achieving high detection accuracy, resulting in designs that are either
resource-intensive and unsuitable for implementation on mainstream mobile
devices, computationally slow and not useful for real-time pedestrian safety
applications, or reliant on specialized hardware and thus less likely to be
adopted by most users. In the quest for a pedestrian safety system that achieves a
favorable balance between computational efficiency, detection accuracy, and
energy consumption, this paper makes the following main contributions: (i)
design of a novel complex activity recognition framework which employs motion
data available from users' mobile and wearable devices and a lightweight
frequency-matching approach to accurately and efficiently recognize complex
distraction-related activities, and (ii) a comprehensive comparative evaluation
of the proposed framework with well-known complex activity recognition
techniques in the literature with the help of data collected from human subject
pedestrians and prototype implementations on commercially available mobile and
wearable devices.
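A lightweight frequency-matching scheme of the flavour described above can be sketched as follows: each motion window is summarised by its dominant frequency and matched to per-activity reference frequencies. This is a simplification of the paper's approach, with hypothetical reference values, and the naive DFT is used only to keep the sketch self-contained:

```python
import math

# Summarise a motion window by its dominant frequency and match it to the
# nearest per-activity reference frequency -- far cheaper than training
# and running a heavyweight complex-activity classifier on-device.

def dominant_freq(samples, rate_hz):
    """Dominant non-DC frequency (Hz) of a real signal via a naive DFT."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * rate_hz / n

def match_activity(samples, rate_hz, refs):
    """refs: {activity: typical_freq_hz}; return the nearest activity."""
    f = dominant_freq(samples, rate_hz)
    return min(refs, key=lambda a: abs(refs[a] - f))
```

On a real device the DFT would be replaced by an FFT or a fixed filter bank, but the matching step itself stays a constant-time nearest-reference lookup, which is what keeps the approach viable under tight memory and computation budgets.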