
    Transportation mode recognition fusing wearable motion, sound and vision sensors

    We present the first work that investigates the potential of improving transportation mode recognition by fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, one for each sensor type. We then propose two schemes that fuse the classification results from the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules, including Sum, Product, Majority Voting and Borda Count. The second scheme is an adaptive fuser, built as another classifier (Naive Bayes, Decision Tree, Random Forest or Neural Network), that learns enhanced predictions by combining the outputs of the three mono-modal classifiers. We verify the advantage of the proposed method on the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. Recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing generalization to unseen data, we show that while performance is reduced, as expected, for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the performance increase, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
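    As an illustration of the fixed-rule fusion this abstract describes, below is a minimal Python/NumPy sketch of the Sum, Product, Majority Voting and Borda Count rules applied to hypothetical softmax outputs from the three mono-modal classifiers. The class list follows the SHL labels; the probability vectors and function name are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Eight SHL classes; the probability vectors below are made-up examples.
CLASSES = ["Still", "Walk", "Run", "Bike", "Bus", "Car", "Train", "Subway"]

def fuse_fixed_rules(probs, rule="sum"):
    """Fuse per-classifier probability vectors with a fixed rule."""
    probs = np.asarray(probs)            # shape: (n_classifiers, n_classes)
    if rule == "sum":
        scores = probs.sum(axis=0)       # Sum rule
    elif rule == "product":
        scores = probs.prod(axis=0)      # Product rule
    elif rule == "vote":
        # Majority Voting: each classifier casts one vote for its argmax
        scores = np.bincount(probs.argmax(axis=1), minlength=probs.shape[1])
    elif rule == "borda":
        # Borda Count: rank classes per classifier (0 = worst), sum the ranks
        scores = probs.argsort(axis=1).argsort(axis=1).sum(axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return CLASSES[int(scores.argmax())]

motion = [0.10, 0.05, 0.05, 0.10, 0.40, 0.20, 0.05, 0.05]
sound  = [0.05, 0.05, 0.05, 0.05, 0.50, 0.20, 0.05, 0.05]
vision = [0.10, 0.10, 0.05, 0.05, 0.30, 0.30, 0.05, 0.05]
print(fuse_fixed_rules([motion, sound, vision], rule="sum"))  # -> "Bus"
```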

    Inferring transportation modes from GPS trajectories using a convolutional neural network

    Identifying the distribution of users' transportation modes is an essential part of travel demand analysis and transportation planning. With the advent of ubiquitous GPS-enabled devices (e.g., smartphones), a cost-effective approach to inferring commuters' mobility modes is to leverage their GPS trajectories. Most studies have proposed mode-inference models based on hand-crafted features and traditional machine learning algorithms. However, manual features carry major drawbacks, including vulnerability to traffic and environmental conditions as well as human bias in designing efficient features. One way to overcome these issues is to use Convolutional Neural Network (CNN) schemes that automatically derive high-level features from the raw input. Accordingly, in this paper we take advantage of CNN architectures to predict travel modes from raw GPS trajectories alone, where the modes are labeled as walk, bike, bus, driving, and train. Our key contribution is designing the layout of the CNN's input layer so that it is not only compatible with CNN schemes but also represents the fundamental motion characteristics of a moving object, including speed, acceleration, jerk, and bearing rate. Furthermore, we improve the quality of GPS logs through several data preprocessing steps. Using the cleaned input layer, a variety of CNN configurations are evaluated to find the best architecture. The highest accuracy, 84.8%, is achieved with an ensemble of the best CNN configuration. We contrast our methodology with traditional machine learning algorithms as well as the seminal and most related studies to demonstrate the superiority of our framework.
    Comment: 12 pages, 3 figures, 7 tables, Transportation Research Part C: Emerging Technologies
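    The core preprocessing step the abstract names, turning a raw GPS trajectory into channels of speed, acceleration, jerk and bearing rate for the CNN input layer, can be sketched as follows under standard kinematic definitions (haversine distance, finite differences). The function name and the trimming/stacking choices are assumptions, not the authors' implementation.

```python
import numpy as np

def gps_to_channels(lat, lon, t):
    """Convert a raw GPS trajectory into four kinematic channels
    (speed, acceleration, jerk, bearing rate) for a CNN input layer.
    lat/lon in degrees, t in seconds; equal-length 1-D arrays."""
    lat, lon = np.radians(lat), np.radians(lon)
    dt = np.diff(t)
    # Haversine distance between consecutive fixes (metres)
    dlat, dlon = np.diff(lat), np.diff(lon)
    a = np.sin(dlat / 2) ** 2 + np.cos(lat[:-1]) * np.cos(lat[1:]) * np.sin(dlon / 2) ** 2
    dist = 2 * 6371000 * np.arcsin(np.sqrt(a))
    speed = dist / dt
    accel = np.diff(speed) / dt[1:]
    jerk = np.diff(accel) / dt[2:]
    # Bearing of each segment, then its rate of change
    # (wrap-around at +/-180 degrees ignored for brevity)
    y = np.sin(dlon) * np.cos(lat[1:])
    x = np.cos(lat[:-1]) * np.sin(lat[1:]) - np.sin(lat[:-1]) * np.cos(lat[1:]) * np.cos(dlon)
    bearing = np.degrees(np.arctan2(y, x))
    bearing_rate = np.abs(np.diff(bearing)) / dt[1:]
    # Trim to a common length and stack as a (4, N) image-like input
    n = len(jerk)
    return np.stack([speed[:n], accel[:n], jerk[:n], bearing_rate[:n]])
```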

    Custom Dual Transportation Mode Detection by Smartphone Devices Exploiting Sensor Diversity

    Making applications aware of the mobility experienced by the user can open the door to a wide range of novel services in different use cases, from smart parking to vehicular traffic monitoring. The literature contains many studies demonstrating the theoretical possibility of performing Transportation Mode Detection (TMD) by mining data from smartphones' embedded sensors. However, very few of them provide details on the benchmarking process or on how to implement the detection process in practice. In this study, we provide guidelines and fundamental results that can be useful for both researchers and practitioners aiming to implement a working TMD system. These guidelines consist of three main contributions. First, we detail the construction of a training dataset, gathered from heterogeneous users and covering five different transportation modes; the dataset is made available to the research community as a reference benchmark. Second, we provide an in-depth analysis of sensor relevance for the case of Dual TMD, which is required by most mobility-aware applications. Third, we investigate the possibility of performing TMD for unknown users/instances not present in the training set, and we compare with state-of-the-art Android APIs for activity recognition.
    Comment: Pre-print of the accepted version for the 14th Workshop on Context and Activity Modeling and Recognition (IEEE COMOREA 2018), Athens, Greece, March 19-23, 2018
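    The abstract does not say how sensor relevance was quantified; one common way to approximate such an analysis is to aggregate the feature importances of a tree ensemble by the sensor that produced each feature. The sketch below uses a hypothetical window-level feature matrix and sensor grouping; the data, grouping and names are illustrative only, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical window-level features grouped by originating sensor.
# X: (n_windows, n_features), y: transportation-mode label per window.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 9)), rng.integers(0, 5, size=1000)
sensor_of_feature = ["accelerometer"] * 3 + ["gyroscope"] * 3 + ["magnetometer"] * 3

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Aggregate per-feature importances by sensor to rank sensor relevance
relevance = {}
for sensor, imp in zip(sensor_of_feature, clf.feature_importances_):
    relevance[sensor] = relevance.get(sensor, 0.0) + imp
for sensor, score in sorted(relevance.items(), key=lambda kv: -kv[1]):
    print(f"{sensor}: {score:.3f}")
```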

    Map++: A Crowd-sensing System for Automatic Map Semantics Identification

    Digital maps have become part of our daily life through a number of commercial and free map services. These services still have huge potential for enhancement with rich semantic information to support a large class of mapping applications. In this paper, we present Map++, a system that leverages standard cell-phone sensors in a crowd-sensing approach to automatically enrich digital maps with road semantics such as tunnels, bumps, bridges, footbridges, crosswalks, and road capacity, among others. Our analysis shows that the cell-phone sensors of users in vehicles or walking are affected by the different road features, and these effects can be mined to extend the features of both free and commercial mapping services. We present the design and implementation of Map++ and evaluate it in a large city. Our evaluation shows that we can detect the different semantics accurately, with at most a 3% false positive rate and a 6% false negative rate for both vehicle- and pedestrian-based features. Moreover, we show that Map++ has a small energy footprint on cell-phones, highlighting its promise as a ubiquitous digital-map enriching service.
    Comment: Published in the Eleventh Annual IEEE International Conference on Sensing, Communication, and Networking (IEEE SECON 2014)
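    As a hedged illustration of how one road semantic, a bump, could be mined from cell-phone sensors, the sketch below flags accelerometer windows whose vertical-axis roughness is a robust outlier relative to the rest of the trace. The thresholds, sampling rate and function name are assumptions; Map++'s actual detectors are not reproduced here.

```python
import numpy as np

def detect_bumps(accel_z, fs=50, win_s=1.0, z_thresh=3.0):
    """Flag candidate road bumps from the vertical accelerometer axis.
    A window whose standard deviation is an outlier relative to the
    whole trace is marked as a bump. accel_z in m/s^2, fs in Hz."""
    win = int(fs * win_s)
    n = len(accel_z) // win
    windows = np.asarray(accel_z[: n * win]).reshape(n, win)
    stds = windows.std(axis=1)
    # Robust z-score of each window's std against the median roughness
    med = np.median(stds)
    mad = np.median(np.abs(stds - med)) + 1e-9
    flags = (stds - med) / (1.4826 * mad) > z_thresh
    return np.flatnonzero(flags) * win_s   # bump window start times (s)
```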

    DeepWalking: Enabling Smartphone-based Walking Speed Estimation Using Deep Learning

    Walking speed estimation is an essential component of mobile apps in various fields such as fitness, transportation, navigation, and health care. Most existing solutions focus on specialized medical applications that utilize body-worn motion sensors. These approaches do not effectively serve the general use case of the many apps in which a user holding a smartphone wants to find his or her walking speed solely from the smartphone's sensors. However, existing smartphone-based approaches fail to provide acceptable precision for walking speed estimation. This leads to a question: is it possible to achieve, with a smartphone, speed estimation accuracy comparable to obtrusive wearable-sensor-based solutions? We find the answer in advanced neural networks. In this paper, we present DeepWalking, the first deep learning-based walking speed estimation scheme for smartphones. A deep convolutional neural network (DCNN) is applied to automatically identify and extract the most effective features from the smartphone's accelerometer and gyroscope data and to train the network model for accurate speed estimation. Experiments are performed with 10 participants on a treadmill. The average root-mean-squared error (RMSE) of the estimated walking speed is 0.16 m/s, which is comparable to the results obtained by state-of-the-art approaches based on a number of body-worn sensors (i.e., an RMSE of 0.11 m/s). The results indicate that a smartphone can be a strong tool for walking speed estimation if the sensor data are effectively calibrated and supported by advanced deep learning techniques.
    Comment: 6 pages, 9 figures, published in the IEEE Global Communications Conference (GLOBECOM)
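    A minimal PyTorch sketch of the kind of DCNN regressor the abstract describes is shown below: a small 1-D convolutional network mapping windows of 6-channel accelerometer/gyroscope data to a scalar speed, together with the RMSE metric the paper reports. The architecture, window length and channel count are illustrative assumptions, not DeepWalking's actual configuration.

```python
import torch
import torch.nn as nn

class SpeedCNN(nn.Module):
    """Minimal 1-D CNN regressor: windows of accelerometer + gyroscope
    samples (6 channels) in, scalar walking speed (m/s) out."""
    def __init__(self, channels=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # global pooling over time
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):                  # x: (batch, 6, window)
        return self.head(self.features(x).squeeze(-1)).squeeze(-1)

def rmse(pred, target):
    """Root-mean-squared error, the metric reported in the abstract."""
    return torch.sqrt(torch.mean((pred - target) ** 2))

model = SpeedCNN()
x = torch.randn(8, 6, 128)                 # 8 dummy sensor windows
print(model(x).shape, rmse(model(x), torch.rand(8)))
```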

    Detecting changes of transportation-mode by using classification data
