Inferring transportation modes from GPS trajectories using a convolutional neural network
Identifying the distribution of users' transportation modes is an essential
part of travel demand analysis and transportation planning. With the advent of
ubiquitous GPS-enabled devices (e.g., smartphones), a cost-effective approach
for inferring commuters' mobility modes is to leverage their GPS trajectories.
Most studies have proposed mode inference models based on hand-crafted features
and traditional machine learning algorithms. However, manual features have
major drawbacks, including vulnerability to traffic and environmental
conditions as well as human bias in crafting effective features. One way to
overcome these issues is to use Convolutional Neural Network (CNN) schemes,
which can automatically derive high-level features from the raw input.
Accordingly, in this paper we take advantage of CNN architectures to predict
travel modes from raw GPS trajectories alone, where the modes are labeled as
walk, bike, bus, driving, and train. Our key contribution is designing the
layout of the CNN's input layer so that it is not only compatible with CNN
schemes but also represents the fundamental motion characteristics of a moving
object, including speed, acceleration, jerk, and bearing rate. Furthermore, we
improve the quality of the GPS logs through several data preprocessing steps.
Using the clean input layer, a variety of CNN configurations are evaluated to
find the best CNN architecture. The highest accuracy of 84.8% is achieved by
an ensemble of the best CNN configuration. We contrast our methodology with
traditional machine learning algorithms as well as the seminal and most
closely related studies to demonstrate the superiority of our framework.
Comment: 12 pages, 3 figures, 7 tables, Transportation Research Part C:
Emerging Technologies
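The four motion channels named in the abstract can be derived from raw GPS fixes roughly as follows. This is a minimal sketch, not the authors' code: the function names, the haversine distance, and the channel-truncation choice are illustrative assumptions.

```python
import numpy as np

def haversine(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between consecutive GPS fixes.
    r = 6371000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

def motion_channels(lat, lon, t):
    """Stack speed, acceleration, jerk and bearing rate into a
    4-channel array usable as a 1-D CNN input layer."""
    lat, lon, t = (np.asarray(x, dtype=float) for x in (lat, lon, t))
    dt = np.diff(t)
    speed = haversine(lat[:-1], lon[:-1], lat[1:], lon[1:]) / dt
    accel = np.diff(speed) / dt[1:]
    jerk = np.diff(accel) / dt[2:]
    # Bearing between consecutive fixes, then its rate of change.
    p1, p2 = np.radians(lat[:-1]), np.radians(lat[1:])
    dl = np.radians(lon[1:] - lon[:-1])
    y = np.sin(dl) * np.cos(p2)
    x = np.cos(p1) * np.sin(p2) - np.sin(p1) * np.cos(p2) * np.cos(dl)
    bearing = np.degrees(np.arctan2(y, x))
    bearing_rate = np.abs(np.diff(bearing)) / dt[1:]
    # Truncate all channels to the shortest (jerk) length so they align.
    n = len(jerk)
    return np.stack([speed[:n], accel[:n], jerk[:n], bearing_rate[:n]])
```

The successive differencing means each channel is shorter than the last, so all four are truncated to the jerk length before stacking; the paper instead fixes a common segment length during preprocessing.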
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential for supporting a broad
range of complex, compelling applications in both military and civilian
fields, where users can enjoy high-rate, low-latency, low-cost and reliable
information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of
the complex heterogeneous nature of the network structures and wireless
services. Machine learning (ML) algorithms have achieved great success in
supporting big data analytics, efficient parameter estimation and interactive
decision making. Hence, in this article, we review the thirty-year history of
ML by elaborating on supervised learning, unsupervised learning, reinforcement
learning and deep learning. Furthermore, we investigate their employment in
compelling applications of wireless networks, including heterogeneous networks
(HetNets), cognitive radios (CR), the Internet of Things (IoT),
machine-to-machine (M2M) networks, and so on. This article aims to help
readers understand the motivation and methodology of the various ML
algorithms, so that they can be invoked for hitherto unexplored services and
scenarios of future wireless networks.
Comment: 46 pages, 22 figures
Transportation mode recognition fusing wearable motion, sound and vision sensors
We present the first work that investigates the potential of improving the performance of transportation mode recognition through fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, which work with the three types of sensors, respectively. We then propose two schemes that fuse the classification results from the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules including Sum, Product, Majority Voting, and Borda Count. The second scheme is an adaptive fuser built as another classifier (including Naive Bayes, Decision Tree, Random Forest and Neural Network) that learns enhanced predictions by combining the outputs from the three mono-modal classifiers. We verify the advantage of the proposed method with the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing generalization of the model to unseen data, we show that while performance is, as expected, reduced for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the performance increase itself, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
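The fixed-rule fusion scheme can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name and the example probabilities are made up, and each rule is applied to the per-class probability vectors emitted by the three mono-modal classifiers.

```python
import numpy as np

def fuse_fixed_rules(probs, rule="sum"):
    """Fuse per-class probability vectors from several mono-modal
    classifiers with a fixed rule (Sum, Product, Majority Voting,
    or Borda Count) and return the winning class index."""
    p = np.asarray(probs)  # shape: (n_classifiers, n_classes)
    if rule == "sum":
        scores = p.sum(axis=0)
    elif rule == "product":
        scores = p.prod(axis=0)
    elif rule == "vote":
        # Each classifier votes for its top class.
        votes = p.argmax(axis=1)
        scores = np.bincount(votes, minlength=p.shape[1])
    elif rule == "borda":
        # Each class earns points equal to how many classes each
        # classifier ranks below it (double argsort yields ranks).
        scores = p.argsort(axis=1).argsort(axis=1).sum(axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return int(np.argmax(scores))
```

Note the characteristic behaviour of the Product rule: a single classifier assigning a class near-zero probability effectively vetoes it, which is why Sum and Product can disagree on the same inputs.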