
    Inferring transportation modes from GPS trajectories using a convolutional neural network

    Identifying the distribution of users' transportation modes is an essential part of travel demand analysis and transportation planning. With the advent of ubiquitous GPS-enabled devices (e.g., smartphones), a cost-effective approach for inferring commuters' mobility modes is to leverage their GPS trajectories. Most studies have proposed mode inference models based on hand-crafted features and traditional machine learning algorithms. However, manual features have major drawbacks, including vulnerability to traffic and environmental conditions and the human bias involved in designing effective features. One way to overcome these issues is to use Convolutional Neural Network (CNN) schemes, which can automatically derive high-level features from the raw input. Accordingly, in this paper we take advantage of CNN architectures to predict travel modes from raw GPS trajectories alone, where the modes are labeled as walk, bike, bus, driving, and train. Our key contribution is the design of the CNN's input layer so that it is not only compatible with CNN schemes but also represents fundamental motion characteristics of a moving object, including speed, acceleration, jerk, and bearing rate. Furthermore, we improve the quality of the GPS logs through several data preprocessing steps. Using the cleaned input, a variety of CNN configurations are evaluated to find the best architecture. The highest accuracy, 84.8%, is achieved by an ensemble of the best CNN configuration. We also contrast our methodology with traditional machine learning algorithms as well as the seminal and most closely related studies to demonstrate the superiority of our framework. Comment: 12 pages, 3 figures, 7 tables, Transportation Research Part C: Emerging Technologies
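    As an illustration of the input-layer idea above, the following minimal sketch derives the four motion channels named in the abstract (speed, acceleration, jerk, and bearing rate) from a raw sequence of GPS fixes. It is a sketch under stated assumptions, not the authors' code: the function names, the (lat, lon, timestamp) field layout, and the way channels are trimmed to a common length are hypothetical.

    # Minimal sketch (not the paper's implementation): compute per-step
    # speed, acceleration, jerk and bearing rate from raw GPS fixes so they
    # can be stacked as channels of a CNN-style input.
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two GPS fixes."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial bearing (0-360 degrees) from the first fix to the second."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dl = math.radians(lon2 - lon1)
        y = math.sin(dl) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
        return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

    def motion_channels(points):
        """points: list of (lat, lon, timestamp_s) fixes.
        Returns one [speed, acceleration, jerk, bearing_rate] row per usable step."""
        speeds, bearings, times = [], [], []
        for (la1, lo1, t1), (la2, lo2, t2) in zip(points, points[1:]):
            dt = max(t2 - t1, 1e-6)                       # guard against zero gaps
            speeds.append(haversine_m(la1, lo1, la2, lo2) / dt)
            bearings.append(bearing_deg(la1, lo1, la2, lo2))
            times.append(dt)
        accels = [(s2 - s1) / dt for s1, s2, dt in zip(speeds, speeds[1:], times[1:])]
        jerks = [(a2 - a1) / dt for a1, a2, dt in zip(accels, accels[1:], times[2:])]
        brates = [abs(b2 - b1) / dt for b1, b2, dt in zip(bearings, bearings[1:], times[1:])]
        n = len(jerks)                                     # trim all channels to a common length
        return [[speeds[i], accels[i], jerks[i], brates[i]] for i in range(n)]

    # Toy usage: five fixes ten seconds apart yield two feature rows.
    fixes = [(39.9000, 116.4000, 0.0), (39.9003, 116.4002, 10.0),
             (39.9007, 116.4005, 20.0), (39.9012, 116.4007, 30.0),
             (39.9018, 116.4010, 40.0)]
    print(motion_channels(fixes))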

    Sensing motion using spectral and spatial analysis of WLAN RSSI

    In this paper we show how motion sensing can be achieved simply by observing the WLAN radio signal strength and its fluctuations. The temporal, spectral, and spatial characteristics of the WLAN signal are analyzed. Our analysis confirms our claim that "signal strength from access points appears to jump around more vigorously when the device is moving than when it is still, and the number of detectable access points varies considerably while the user is on the move". Using this observation, we present a novel motion detection algorithm, Spectrally Spread Motion Detection (SpecSMD), based on spectral analysis of the WLAN signal's RSSI. To benchmark the proposed algorithm, we used Spatially Spread Motion Detection (SpatSMD), which is inspired by the recent work of Sohn et al. Both algorithms were evaluated through extensive measurements in a diverse set of conditions (indoors in different buildings and outdoors: city center, parking lot, university campus, etc.) and tested against the same data sets. The proposed SpecSMD achieves an average classification accuracy of 94%, outperforming SpatSMD (87%). The motion detection algorithms presented in this paper provide ubiquitous methods for deriving the state of the user, and they can be implemented and run on a commodity device with WLAN capability without the need for any additional hardware.
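    To make the spectral idea concrete, the sketch below classifies a window of RSSI samples from one access point as moving or still based on the fluctuation energy above a low-frequency cut-off. It is a minimal illustration, not the SpecSMD algorithm itself; the sampling rate, cut-off frequency, and decision threshold are assumptions for the example.

    # Illustrative sketch only: threshold the RSSI fluctuation energy above
    # a low-frequency cut-off to decide "moving" vs. "still".
    import numpy as np

    def fluctuation_power(rssi_window, fs=2.0, f_cut=0.2):
        """Approximate RSSI fluctuation power (dB^2) contributed by
        frequency components at or above f_cut Hz."""
        x = np.asarray(rssi_window, dtype=float)
        x = x - x.mean()                                  # remove the static signal level
        spectrum = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return float(spectrum[freqs >= f_cut].sum() / len(x))

    def is_moving(rssi_window, fs=2.0, f_cut=0.2, threshold=1.0):
        """Threshold decision; 1.0 dB^2 is a placeholder that would be
        tuned on labelled still/moving traces."""
        return fluctuation_power(rssi_window, fs, f_cut) > threshold

    # Toy check: a nearly flat trace (device still) vs. a strongly fluctuating one.
    still  = [-60, -60, -61, -60, -60, -61, -60, -60] * 4
    moving = [-60, -55, -63, -58, -66, -52, -61, -57] * 4
    print(is_moving(still), is_moving(moving))            # expected: False True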

    Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts

    The use of context in mobile devices is receiving increasing attention in mobile and ubiquitous computing research. In this article we consider how to augment mobile devices with awareness of their environment and situation as context. Most work to date has been based on the integration of generic context sensors, in particular for location and visual context. We propose a different approach based on the integration of multiple diverse sensors for awareness of situational context that cannot be inferred from location, targeted at mobile device platforms that typically do not permit processing of visual context. We have investigated multi-sensor context-awareness in a series of projects and report experience from the development of a number of device prototypes. These include an awareness module for augmenting a mobile phone, the Mediacup as an example of context-enabled everyday artefacts, and the Smart-Its platform for aware mobile devices. The prototypes have been explored in various applications to validate the multi-sensor approach to awareness and to develop new perspectives on how embedded context-awareness can be applied in mobile and ubiquitous computing.
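    As a purely illustrative sketch of the multi-sensor idea (not code from the prototypes described), the snippet below fuses a few simple sensor readings into a coarse situational context that location alone could not provide. The sensor set, thresholds, and context labels are assumptions for the example; a real system would learn or calibrate them.

    # Illustrative rule-based fusion of diverse, low-cost sensors into a
    # coarse situational context label.
    from dataclasses import dataclass

    @dataclass
    class SensorSample:
        accel_std_g: float    # std. dev. of acceleration magnitude over a window
        light_lux: float      # ambient light level
        touch: bool           # capacitive/touch contact with the user

    def infer_situation(s: SensorSample) -> str:
        """Simple hand-written rules; thresholds are placeholders."""
        if s.accel_std_g < 0.02 and not s.touch:
            return "stationary, put aside" if s.light_lux > 5 else "stationary, in bag or pocket"
        if s.touch and s.accel_std_g >= 0.02:
            return "in hand, user moving"
        return "carried, user moving"

    print(infer_situation(SensorSample(accel_std_g=0.005, light_lux=0.5, touch=False)))
    # -> "stationary, in bag or pocket"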