984 research outputs found

    Custom Dual Transportation Mode Detection by Smartphone Devices Exploiting Sensor Diversity

    Making applications aware of the mobility experienced by the user can open the door to a wide range of novel services in different use-cases, from smart parking to vehicular traffic monitoring. In the literature, many studies demonstrate the theoretical possibility of performing Transportation Mode Detection (TMD) by mining smartphone-embedded sensor data. However, very few of them provide details on the benchmarking process or on how to implement the detection process in practice. In this study, we provide guidelines and fundamental results that can be useful for both researchers and practitioners aiming at implementing a working TMD system. These guidelines consist of three main contributions. First, we detail the construction of a training dataset, gathered by heterogeneous users and including five different transportation modes; the dataset is made available to the research community as a reference benchmark. Second, we provide an in-depth analysis of sensor relevance for the case of Dual TMD, which is required by most mobility-aware applications. Third, we investigate the possibility of performing TMD for unknown users/instances not present in the training set, and we compare with state-of-the-art Android APIs for activity recognition. Comment: Pre-print of the accepted version for the 14th Workshop on Context and Activity Modeling and Recognition (IEEE COMOREA 2018), Athens, Greece, March 19-23, 201
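As an illustration of the kind of pipeline an accelerometer-based TMD system builds on (a minimal sketch, not the authors' actual implementation or feature set), the snippet below extracts simple magnitude features from windows of triaxial accelerometer samples; in a real system, such per-window features would feed a trained classifier:

```python
import math
import statistics

def window_features(samples):
    """Compute simple per-window features from triaxial accelerometer
    samples, given as (x, y, z) tuples in m/s^2. The magnitude is
    orientation-independent, which matters for phones carried freely."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return {
        "mag_mean": statistics.mean(magnitudes),
        "mag_std": statistics.pstdev(magnitudes),
    }

# Toy illustration: a stationary phone reads roughly gravity (~9.8 m/s^2)
# with near-zero variance, while walking adds large magnitude variance.
still = [(0.1, 0.0, 9.8)] * 50
walking = [(0.1, 0.0, 9.8 + (2.0 if i % 2 else -2.0)) for i in range(50)]

f_still = window_features(still)
f_walk = window_features(walking)
```

The variance of the acceleration magnitude alone already separates "still" from "walking"; discriminating motorized modes (car, bus, train) typically requires additional sensors and features, which is exactly the sensor-relevance question the paper studies.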

    Texting and Driving Recognition leveraging the Front Camera of Smartphones

    The recognition of texting while driving is an open problem in the literature, and it is crucial for safety in the automotive domain. Solving it can enable new insurance policies and increase overall road safety. Many works in the literature leverage smartphone sensors for this purpose; however, these methods have been shown to take a considerable amount of time to perform a recognition with sufficient confidence. In this paper we propose to leverage the smartphone front camera to perform image classification and recognize whether the subject is seated in the driver position or in the passenger position. We first applied standalone Convolutional Neural Networks with poor results, then focused on object detection-based algorithms to detect the presence and position of discriminant objects (i.e. the seat belts and the car window). We then applied the model over short videos by classifying frame by frame until reaching a satisfactory confidence. Results show that we are able to reach around 90% accuracy within only a few seconds of video, demonstrating the applicability of our method in the real world.
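The frame-by-frame strategy can be sketched as an early-stopping loop over per-frame classifier scores. This is a hypothetical decision rule assuming each frame yields a driver-position probability in [0, 1]; the threshold, minimum frame count, and stopping criterion are illustrative assumptions, not the paper's exact procedure:

```python
def classify_video(frame_scores, threshold=0.9, min_frames=5):
    """Accumulate per-frame driver-position scores and stop as soon as
    the running average is confidently above (driver) or below
    (passenger) the decision threshold.

    Returns ('driver' | 'passenger' | 'undecided', frames_used)."""
    total = 0.0
    for n, score in enumerate(frame_scores, start=1):
        total += score
        avg = total / n
        if n >= min_frames:
            if avg >= threshold:
                return "driver", n
            if avg <= 1.0 - threshold:
                return "passenger", n
    return "undecided", len(frame_scores)

# Six consecutive frames strongly suggesting the driver position.
label, used = classify_video([0.95, 0.92, 0.97, 0.94, 0.96, 0.93])
```

Because the loop returns as soon as the evidence is strong enough, most videos are decided within the first few frames, which matches the paper's point that only a few seconds of video are needed.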

    Distributed and adaptive location identification system for mobile devices

    Indoor location identification and navigation need to be as simple, seamless, and ubiquitous as their outdoor GPS-based counterparts. It would be of great convenience for mobile users to be able to continue navigating seamlessly as they move from a GPS-clear outdoor environment into an indoor environment or a GPS-obstructed outdoor environment such as a tunnel or forest. Existing infrastructure-based indoor localization systems lack such capability, on top of potentially facing several critical technical challenges: increased installation cost, centralization, lack of reliability, poor localization accuracy, poor adaptation to the dynamics of the surrounding environment, latency, system-level and computational complexity, repetitive labor-intensive parameter tuning, and user privacy. To this end, this paper presents a novel mechanism with the potential to overcome most (if not all) of the abovementioned challenges. The proposed mechanism is simple, distributed, adaptive, collaborative, and cost-effective. Based on the proposed algorithm, a blind mobile device can utilize, as GPS-like reference nodes, either in-range location-aware compatible mobile devices or preinstalled low-cost infrastructure-less location-aware beacon nodes. The proposed approach is model-based and calibration-free; it uses the received signal strength to periodically and collaboratively measure and update the radio-frequency characteristics of the operating environment and to estimate the distances to the reference nodes. Trilateration is then used by the blind device to identify its own location, similar to the GPS-based system. Simulation and empirical testing ascertained that the proposed approach can potentially be the core of future localization systems for indoor and GPS-obstructed environments.
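The two building blocks mentioned in the abstract (RSS-based ranging and trilateration) can be sketched as follows. This is a textbook formulation under an idealized log-distance path-loss model with exact distances; the reference RSSI at 1 m and the path-loss exponent are illustrative constants, whereas the paper's mechanism measures and updates such parameters collaboratively at runtime:

```python
import math

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d), solved for d (meters)."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """2D trilateration from three (x, y) reference nodes: subtracting
    the first circle equation from the other two linearizes the system
    into two equations in (x, y), solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A blind device at (3, 4) ranging against three reference nodes.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist((3, 4), a) for a in anchors]
pos = trilaterate(anchors, dists)
```

With noisy RSSI-derived distances, a least-squares fit over more than three reference nodes would replace the exact solution above; the geometry is otherwise the same as in GPS positioning.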

    Inferring transportation mode from smartphone sensors: Evaluating the potential of Wi-Fi and Bluetooth

    Understanding which transportation modes people use is critical for smart cities and planners to better serve their citizens. We show that using information from pervasive Wi-Fi access points and Bluetooth devices can enhance GPS and geographic information to improve transportation detection on smartphones. Wi-Fi information also improves the identification of transportation mode and helps conserve battery, since it is already collected by most mobile phones. Our approach applies machine learning to determine the mode from pre-processed data. It yields an overall accuracy of 89% and an average F1 score of 83% for inferring the three grouped modes of self-powered, car-based, and public transportation. When broken out by individual modes, Wi-Fi features improve detection accuracy of bus trips, train travel, and driving compared to GPS features alone, and can substitute for GIS features without decreasing performance. Our results suggest that Wi-Fi and Bluetooth can be useful in urban transportation research, for example by improving mobile travel surveys and urban sensing applications.
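To make concrete why Wi-Fi scans carry mode information, the sketch below derives hypothetical features from two consecutive scans; the feature names and the churn heuristic are illustrative assumptions, not the paper's published feature set:

```python
def wifi_scan_features(prev_scan, curr_scan):
    """Illustrative features from two consecutive Wi-Fi scans, each a
    dict mapping BSSID -> RSSI (dBm). AP churn (Jaccard distance between
    the visible AP sets) tends to be high in a moving vehicle and low
    when the user is stationary or walking indoors."""
    seen_both = set(prev_scan) & set(curr_scan)
    union = set(prev_scan) | set(curr_scan)
    churn = 1.0 - len(seen_both) / len(union) if union else 0.0
    mean_rssi = sum(curr_scan.values()) / len(curr_scan) if curr_scan else 0.0
    return {"ap_count": len(curr_scan), "ap_churn": churn, "mean_rssi": mean_rssi}

# Stationary user: the same APs persist across scans.
stationary = wifi_scan_features({"a": -50, "b": -60}, {"a": -52, "b": -61})
# Driving: the visible AP set turns over completely between scans.
driving = wifi_scan_features({"a": -50, "b": -60}, {"c": -70, "d": -80})
```

Features like these are cheap because the phone already performs the scans, which is the battery argument the abstract makes; they would then be fed into the mode classifier alongside GPS-derived features.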

    Smartphone-Based Recognition of Access Trip Phase to Public Transport Stops Via Machine Learning Models

    The usage of mobile phones is nowadays reaching full penetration in most countries. Smartphones are a valuable source for urban planners to understand and investigate passengers' behavior and to recognize travel patterns more precisely. Several investigations have tried to automatically extract the transit mode from sensors embedded in phones, such as GPS, accelerometer, and gyroscope. This reduces the resources spent on travel diary surveys, which are time-consuming and costly. However, figuring out which mode of transportation individuals use is still challenging. The main limitations include errors in GPS and mobile sensor data collection and in data labeling. This paper first solves a transport mode classification problem covering five classes (still, walking, car, bus, and metro) and then, as a first investigation, presents a new algorithm to compute waiting time and access time to public transport stops based on a random forest model.
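Once a per-instant mode label is available from the classifier, access and waiting times fall out of the label sequence. The sketch below is a hypothetical post-processing step, assuming one label per second and a clean walking-then-still-then-board pattern; it is not the paper's algorithm, which must additionally cope with labeling errors:

```python
def access_and_wait(labels, boarding_mode="bus"):
    """Given one mode label per second (e.g. from a random-forest
    classifier), compute access time (seconds spent walking) and
    waiting time (seconds spent still) in the window immediately
    preceding the first boarding instant."""
    try:
        board = labels.index(boarding_mode)
    except ValueError:
        return 0, 0
    wait = 0
    i = board - 1
    while i >= 0 and labels[i] == "still":
        wait += 1
        i -= 1
    access = 0
    while i >= 0 and labels[i] == "walking":
        access += 1
        i -= 1
    return access, wait

# Two minutes of walking to the stop, 90 s of waiting, then a bus ride.
labels = ["walking"] * 120 + ["still"] * 90 + ["bus"] * 300
access, wait = access_and_wait(labels)
```

In practice, the label stream would first be smoothed (e.g. majority voting over a sliding window) so that isolated misclassifications do not split the still/walking segments.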

    Analysis and comparison of publicly available databases for urban mobility applications

    The challenges of multimodal applications can be addressed with machine learning or artificial intelligence methods, for which a database with large amounts of good-quality data and ground truth is crucial. Since generating and publishing such a database is a challenging endeavour, only a handful of them are available for the community to use. In this article, we analyze and compare three of these databases. We assess them regarding the ground truth that they provide, e.g. labels of the means of transport, and regarding how much unlabelled data they publish. We compare them regarding the number of hours of data and how these hours are distributed among different means of transport and activities. Finally, we assess the data in each public database regarding crucial aspects such as the stability of the sampling frequency, the minimum sampling frequency required to observe certain means of transport or activities, and how much lost data these databases have. One of our main conclusions is that accurately labelling data and ensuring a stable sampling frequency are two of the biggest challenges to be addressed when generating a public database for urban mobility.
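A sampling-stability check of the kind described can be sketched from timestamps alone. The nominal rate and the gap threshold below are illustrative assumptions, not values taken from the article:

```python
def sampling_report(timestamps, nominal_hz=100.0, gap_factor=2.0):
    """Assess sampling stability from a sorted list of timestamps in
    seconds: the effective rate over the recording span, and the number
    of gaps longer than gap_factor times the nominal sampling period
    (i.e. lost data)."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    span = timestamps[-1] - timestamps[0]
    effective_hz = len(deltas) / span if span > 0 else 0.0
    gaps = sum(1 for d in deltas if d > gap_factor / nominal_hz)
    return {"effective_hz": effective_hz, "gaps": gaps}

# About 1 s of nominal 100 Hz data with one 50 ms dropout in the middle.
ts = [i / 100.0 for i in range(50)] + [0.55 + i / 100.0 for i in range(50)]
report = sampling_report(ts)
```

Running such a report per sensor and per recording makes the article's two headline issues measurable: an effective rate well below nominal signals unstable sampling, and a nonzero gap count quantifies lost data.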

    Widening Access to Applied Machine Learning with TinyML

    Broadening access to both computational and educational resources is critical to diffusing machine-learning (ML) innovation. However, today, most ML resources and experts are siloed in a few countries and organizations. In this paper, we describe our pedagogical approach to increasing access to applied ML through a massive open online course (MOOC) on Tiny Machine Learning (TinyML). We suggest that TinyML, i.e. ML on resource-constrained embedded devices, is an attractive means to widen access because TinyML both leverages low-cost, globally accessible hardware and encourages the development of complete, self-contained applications, from data collection to deployment. To this end, a collaboration between academia (Harvard University) and industry (Google) produced a four-part MOOC that provides application-oriented instruction on how to develop solutions using TinyML. The series is openly available on the edX MOOC platform, has no prerequisites beyond basic programming, and is designed for learners from a global variety of backgrounds. It introduces learners to real-world applications, ML algorithms, dataset engineering, and the ethical considerations of these technologies via hands-on programming and deployment of TinyML applications both in the cloud and on their own microcontrollers. To facilitate continued learning, community building, and collaboration beyond the courses, we launched a standalone website, a forum, a chat, and an optional course-project competition. We also released the course materials publicly, hoping they will inspire the next generation of ML practitioners and educators and further broaden access to cutting-edge ML technologies. Comment: Understanding the underpinnings of the TinyML edX course series: https://www.edx.org/professional-certificate/harvardx-tiny-machine-learnin

    Delivering IoT Services in Smart Cities and Environmental Monitoring through Collective Awareness, Mobile Crowdsensing and Open Data

    The Internet of Things (IoT) is the paradigm that allows us to interact with the real world by means of networking-enabled devices and to convert physical phenomena into valuable digital knowledge. Such a rapidly evolving field has spurred an explosion of technologies, standards, and platforms. Consequently, different IoT ecosystems behave as closed islands that do not interoperate with each other, so the potential of the number of connected objects in the world is far from being fully unleashed. Typically, research efforts tackling this challenge tend to propose a new IoT platform or standard; however, such solutions struggle to keep up with the pace at which the field is evolving. Our work is different, in that it originates from the following observation: in use cases that depend on common phenomena, such as Smart Cities or environmental monitoring, a lot of useful data for applications is already in place somewhere, or devices capable of collecting such data are already deployed. For such scenarios, we propose and study the use of Collective Awareness Paradigms (CAP), which offload data collection to a crowd of participants. We bring three main contributions: (1) we study the feasibility of using Open Data coming from heterogeneous sources, focusing particularly on crowdsourced and user-contributed data that has the drawback of being incomplete, and we propose a state-of-the-art algorithm that automatically classifies raw crowdsourced sensor data; (2) we design a data collection framework that uses Mobile Crowdsensing (MCS) and puts participants and stakeholders in a coordinated interaction, together with a distributed data collection algorithm that prevents users from collecting too much or too little data; (3) we design a Service Oriented Architecture that constitutes a unique interface to the raw data collected through CAPs by aggregating it into ad-hoc services; moreover, we provide a prototype implementation.
