5,174 research outputs found

    Custom Dual Transportation Mode Detection by Smartphone Devices Exploiting Sensor Diversity

    Making applications aware of the mobility experienced by the user can open the door to a wide range of novel services in different use cases, from smart parking to vehicular traffic monitoring. In the literature, many studies demonstrate the theoretical possibility of performing Transportation Mode Detection (TMD) by mining data from smartphones' embedded sensors. However, very few of them provide details on the benchmarking process or on how to implement the detection process in practice. In this study, we provide guidelines and fundamental results that can be useful for both researchers and practitioners aiming to implement a working TMD system. These guidelines consist of three main contributions. First, we detail the construction of a training dataset, gathered by heterogeneous users and including five different transportation modes; the dataset is made available to the research community as a reference benchmark. Second, we provide an in-depth analysis of sensor relevance for the case of Dual TMD, which is required by most mobility-aware applications. Third, we investigate the possibility of performing TMD for unknown users/instances not present in the training set, and we compare with state-of-the-art Android APIs for activity recognition. Comment: Pre-print of the accepted version for the 14th Workshop on Context and Activity Modeling and Recognition (IEEE COMOREA 2018), Athens, Greece, March 19-23, 201
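    A TMD pipeline of the kind the abstract describes can be sketched as follows: window raw accelerometer samples, extract simple statistical features per window, and train a classifier. This is a minimal illustration on synthetic data, not the paper's actual dataset or feature set; the feature choices and class labels here are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def features(window):
        # Simple per-window statistics of the accelerometer magnitude signal
        return [window.mean(), window.std(), window.min(), window.max()]

    # Synthetic training data: "Still" windows are low-variance around gravity
    # (~9.8 m/s^2), "Walk" windows are high-variance (assumed for illustration)
    still = [features(rng.normal(9.8, 0.05, 128)) for _ in range(50)]
    walk = [features(rng.normal(9.8, 2.0, 128)) for _ in range(50)]
    X = np.array(still + walk)
    y = np.array([0] * 50 + [1] * 50)  # 0 = Still, 1 = Walk

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict([features(rng.normal(9.8, 0.05, 128))])[0])  # likely 0 (Still)
    ```

    A real system would add more modes, richer time- and frequency-domain features, and the benchmarking protocol the paper details.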

    Transportation mode recognition fusing wearable motion, sound and vision sensors

    We present the first work that investigates the potential of improving the performance of transportation mode recognition through fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, which work with the three types of sensors, respectively. We then propose two schemes that fuse the classification results from the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules, including Sum, Product, Majority Voting and Borda Count. The second scheme is an adaptive fuser built as another classifier (including Naive Bayes, Decision Tree, Random Forest and Neural Network) that learns enhanced predictions by combining the outputs from the three mono-modal classifiers. We verify the advantage of the proposed method with the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing the eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing generalization of the model to unseen data, we show that while performance is reduced, as expected, for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Besides the actual performance increase, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
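    The four fixed fusion rules named above (Sum, Product, Majority Voting, Borda Count) can be sketched on per-class probability vectors from the three mono-modal classifiers. The probability values below are illustrative only, not outputs from the paper's models.

    ```python
    import numpy as np

    classes = ["Still", "Walk", "Run", "Bike", "Bus", "Car", "Train", "Subway"]
    # Rows: illustrative outputs of the motion, sound and vision classifiers
    # (each row is a probability distribution over the eight classes)
    P = np.array([
        [0.05, 0.10, 0.05, 0.05, 0.40, 0.25, 0.05, 0.05],
        [0.05, 0.05, 0.05, 0.05, 0.30, 0.35, 0.10, 0.05],
        [0.10, 0.05, 0.05, 0.05, 0.35, 0.20, 0.10, 0.10],
    ])

    sum_rule = classes[P.sum(axis=0).argmax()]        # add probabilities
    product_rule = classes[P.prod(axis=0).argmax()]   # multiply probabilities
    votes = P.argmax(axis=1)                          # each classifier's top class
    majority = classes[np.bincount(votes, minlength=len(classes)).argmax()]
    # Borda Count: rank classes per classifier (0 = worst, 7 = best),
    # then pick the class with the highest total rank
    borda = classes[P.argsort(axis=1).argsort(axis=1).sum(axis=0).argmax()]

    print(sum_rule, product_rule, majority, borda)
    ```

    With these inputs all four rules agree on "Bus" even though the sound classifier alone would have picked "Car", which is the point of ensemble fusion.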

    Detecting changes of transportation-mode by using classification data


    MobilitApp: Analysing mobility data of citizens in the metropolitan area of Barcelona

    MobilitApp is a platform designed to provide smart mobility services in urban areas. It is designed to help citizens and transport authorities alike. Citizens will be able to access the MobilitApp mobile application and decide their optimal transportation strategy by visualising their usual routes and their carbon footprint, and by receiving tips, analytics and general mobility information, such as traffic and incident alerts. Transport authorities and service providers will be able to access information about the mobility patterns of citizens to offer better services, reduce costs and improve planning. The MobilitApp client runs on Android devices and, while running in the background, periodically records location updates from its users. The information obtained is processed and analysed to understand the mobility patterns of our users in the city of Barcelona, Spain.

    Forecasting transport mode use with support vector machines based approach

    The paper explores the potential to forecast which transport mode a person will use for his or her next trip. The support vector machine based approach learns from an individual's behaviour (validated GPS tracks) to support smart-city transport planning services. The overall success rate in forecasting the transport mode is 82%, with lower confusion for private car, bike and walking.
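    A minimal sketch of such a forecaster with a support vector machine, assuming (hypothetically; the paper does not specify its features here) that each past trip is described by hour of day, weekday and trip distance in km:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Toy trip history: [hour, weekday, distance_km] -> mode
    # (0 = walk, 1 = bike, 2 = private car); values are illustrative
    X = np.array([
        [8, 1, 0.5], [18, 1, 0.6], [8, 2, 0.4],    # short trips -> walk
        [9, 3, 3.0], [17, 3, 3.2], [9, 4, 2.8],    # medium trips -> bike
        [8, 5, 15.0], [19, 5, 14.0], [8, 6, 16.0], # long trips -> car
    ])
    y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    print(clf.predict([[8, 2, 15.5]])[0])  # a long trip, classified as car (2)
    ```

    A production version would standardize the features and validate on held-out trips to estimate a success rate comparable to the 82% reported.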