
    Custom Dual Transportation Mode Detection by Smartphone Devices Exploiting Sensor Diversity

    Making applications aware of the mobility experienced by the user can open the door to a wide range of novel services in different use-cases, from smart parking to vehicular traffic monitoring. In the literature, many studies demonstrate the theoretical possibility of performing Transportation Mode Detection (TMD) by mining the data of smartphone-embedded sensors. However, very few of them provide details on the benchmarking process or on how to implement the detection process in practice. In this study, we provide guidelines and fundamental results that can be useful for both researchers and practitioners aiming to implement a working TMD system. These guidelines consist of three main contributions. First, we detail the construction of a training dataset, gathered by heterogeneous users and including five different transportation modes; the dataset is made available to the research community as a reference benchmark. Second, we provide an in-depth analysis of sensor relevance for the case of Dual TMD, which is required by most mobility-aware applications. Third, we investigate the possibility of performing TMD for unknown users/instances not present in the training set, and we compare against state-of-the-art Android APIs for activity recognition.
    Comment: Pre-print of the accepted version for the 14th Workshop on Context and Activity Modeling and Recognition (IEEE COMOREA 2018), Athens, Greece, March 19-23, 2018
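    The abstract describes a window-and-classify pipeline over smartphone sensor data. As a minimal sketch of that idea (not the paper's exact pipeline), the snippet below extracts simple magnitude statistics per accelerometer window and trains an off-the-shelf classifier for a dual (two-mode) decision; the feature set, window length, synthetic data, and RandomForest choice are all assumptions for illustration.

        # Hypothetical Dual TMD sketch: classify fixed-length windows of a
        # smartphone accelerometer trace into two modes (e.g. still vs. vehicle).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def window_features(acc_xyz, fs=50, win_s=2.0):
            """Per-window magnitude statistics from a (N, 3) accelerometer trace."""
            mag = np.linalg.norm(acc_xyz, axis=1)      # orientation-invariant magnitude
            win = int(fs * win_s)
            n = len(mag) // win
            w = mag[: n * win].reshape(n, win)
            return np.column_stack([w.mean(1), w.std(1), w.min(1), w.max(1)])

        rng = np.random.default_rng(0)                 # synthetic stand-in traces
        still = rng.normal(9.8, 0.1, size=(5000, 3))   # low-variance signal
        vehicle = rng.normal(9.8, 1.5, size=(5000, 3)) # high-variance signal
        X = np.vstack([window_features(still), window_features(vehicle)])
        y = np.array(["still"] * 50 + ["vehicle"] * 50)

        clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])
        print(clf.score(X[1::2], y[1::2]))             # held-out accuracy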

    Hardware for recognition of human activities: a review of smart home and AAL related technologies

    Activity recognition (AR) from the applied perspective of ambient assisted living (AAL) and smart homes (SH) has become a subject of great interest. Promising a better quality of life, AR applied in contexts such as health, security, and energy consumption can lead to solutions capable of reaching even the people most in need. This study was motivated by the observation that the development, deployment, and transfer of AR solutions to society and industry rest not only on software development but also on the hardware devices used. The current paper identifies contributions on hardware used for activity recognition through a scientific literature review in the Web of Science (WoS) database. This work found four dominant groups of technologies used for AR in SH and AAL (smartphones, wearables, video, and electronic components) and two emerging technologies (Wi-Fi and assistive robots). Many of these technologies overlap across research works. Through bibliometric network analysis, the present review identified gaps and new potential combinations of technologies for advances in this emerging worldwide field. The review also relates the use of these six technologies to health conditions, health care, emotion recognition, occupancy, mobility, posture recognition, localization, fall detection, and generic activity recognition applications. The above can serve as a road map that allows readers to execute approachable projects, deploy applications in different socioeconomic contexts, and establish networks with the community involved in this topic. This analysis shows that the activity recognition research field accepts that specific goals cannot be achieved with a single hardware technology, but can be achieved with joint solutions; this paper shows how such technologies work together in this regard.
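    The bibliometric network analysis the review mentions boils down to counting how often hardware technologies co-occur in the same paper. Below is a minimal sketch of that counting step, using made-up technology tags rather than the review's actual WoS records:

        # Count pairwise co-occurrences of technology tags across papers.
        from itertools import combinations
        from collections import Counter

        papers = [                      # hypothetical tags per reviewed paper
            {"smartphone", "wearable"},
            {"video", "Wi-Fi"},
            {"smartphone", "Wi-Fi", "wearable"},
        ]

        edges = Counter()
        for tags in papers:
            for a, b in combinations(sorted(tags), 2):
                edges[(a, b)] += 1      # edge weight = co-occurrence count

        for (a, b), weight in edges.most_common():
            print(f"{a} -- {b}: {weight}")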

    On the use of on-cow accelerometers for the classification of behaviours in dairy barns

    Analysing behaviours can provide insight into the health and overall well-being of dairy cows. Automatic monitoring systems using, e.g., accelerometers are becoming increasingly important to accurately quantify cows' behaviours as herd sizes increase. The aim of this study is to automatically classify cows' behaviours by comparing leg- and neck-mounted accelerometers, and to study the effect of the sampling rate and the number of logged accelerometer axes on classification performance. Lying, standing, and feeding behaviours of 16 different lactating dairy cows were logged for 6 h with 3D accelerometers. The behaviours were simultaneously recorded using visual observation and video recordings as a reference. Different features were extracted from the raw data and machine learning algorithms were used for the classification. The classification models using combined data from the neck- and leg-mounted accelerometers classified the three behaviours with high precision (80-99%) and sensitivity (87-99%). For the leg-mounted accelerometer, lying behaviour was classified with high precision (99%) and sensitivity (98%). Feeding was classified more accurately by the neck-mounted than by the leg-mounted accelerometer (precision 92% versus 80%; sensitivity 97% versus 88%). Standing was the most difficult behaviour to classify when only one accelerometer was used. In addition, classification performance was not strongly affected when only the X, the X and Z, or the Z and Y axes were used instead of all three, especially for the neck-mounted accelerometer. Moreover, the accuracy of the models decreased by about 20% when the sampling rate was decreased from 1 Hz to 0.05 Hz.
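    As a hedged sketch of the study's combined-sensor approach (neck and leg features feeding one classifier, evaluated per behaviour), the snippet below extracts per-axis window statistics from two traces, concatenates them, and reports per-class precision and recall. The synthetic data, 60-sample windows, and RandomForest classifier are assumptions, not the paper's actual method.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report

        def features(acc_xyz, win=60):
            """Mean/std per axis over non-overlapping windows of a (N, 3) trace."""
            n = len(acc_xyz) // win
            w = acc_xyz[: n * win].reshape(n, win, 3)
            return np.hstack([w.mean(axis=1), w.std(axis=1)])

        rng = np.random.default_rng(0)                  # synthetic stand-in traces
        neck = rng.normal(size=(7200, 3))               # 1 Hz neck accelerometer
        leg = rng.normal(size=(7200, 3))                # 1 Hz leg accelerometer
        X = np.hstack([features(neck), features(leg)])  # combined-sensor model
        y = rng.choice(["lying", "standing", "feeding"], size=len(X))

        clf = RandomForestClassifier(random_state=0).fit(X[:90], y[:90])
        print(classification_report(y[90:], clf.predict(X[90:]), zero_division=0))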