10 research outputs found
High reliability Android application for multidevice multimodal mobile data acquisition and annotation
We have completed the collection of one of the richest accurately annotated mobile datasets of modes of transportation and locomotion. To do this, we developed a highly reliable Android application called DataLogger, capable of recording multisensor data from multiple synchronized smartphones simultaneously. The application allows real-time data annotation. We explain how we designed the app to achieve high reliability and ease of use. We also present an evaluation of the application in a big-data collection (750 hours, 950 GB of data, 17 different sensor modalities), analysing the data loss (less than 0.4‰) and battery consumption (≈6% on average per hour). The application is available as open source.
A versatile annotated dataset for multimodal locomotion analytics with mobile devices
We explain how to obtain a highly versatile and precisely annotated dataset for the multimodal locomotion of mobile users. After presenting the experimental setup, data management challenges and potential applications, we conclude with best practices for assuring data quality and reducing loss. The dataset currently comprises 7 months of measurements, collected by smartphone sensors and a body-worn camera, while the 3 participants used 8 different modes of transportation. It comprises 950 GB of sensor data, which corresponds to 750 hours of labelled data. The obtained data will be useful for a wide range of research questions related to activity recognition, and will be made available to the community.
Using mobile devices as scientific measurement instruments: Reliable android task scheduling
In various usage scenarios, smartphones are used as measuring instruments to systematically and unobtrusively collect data measurements (e.g., sensor data, user activity, phone usage data). Unfortunately, in the race towards extending battery life and improving privacy, mobile phone manufacturers are gradually restricting developers from (frequently) scheduling background (sensing) tasks and are impeding the exact scheduling of their execution time (i.e., Android’s “best effort” approach). This evolution hampers the successful deployment of smartphones in sensing applications in scientific contexts, with unreliable and incomplete sampling rates frequently reported in the literature. In this article, we discuss the ins and outs of Android’s background task scheduling mechanism and formulate guidelines for developers to successfully implement reliable task scheduling. Implementing these guidelines, we present a software library, agnostic of the underlying Android scheduling mechanisms and restrictions, that allows Android developers to reliably schedule tasks at a maximum sampling rate of once per minute. Our evaluation demonstrates the use and versatility of our task scheduler, and experimentally confirms its reliability and acceptable energy usage. Funding for open access charge: CRUE-Universitat Jaume
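Reliable periodic sampling of the kind described above hinges on anchoring every wakeup to an absolute timeline rather than chaining relative delays, so per-task jitter cannot accumulate into long-term drift. The library's actual API is not given in the abstract; the following Python sketch (the function name `next_fire_time` is hypothetical) illustrates only the drift-free scheduling calculation, under the assumption that wakeups are derived from a fixed epoch:

```python
import math

def next_fire_time(epoch: float, period: float, now: float) -> float:
    """Next scheduled wakeup at or after `now`, anchored to an absolute epoch.

    Computing each wakeup from the epoch (instead of `now + period`)
    prevents the jitter of one delayed execution from shifting all
    subsequent executions.
    """
    if now <= epoch:
        return epoch
    elapsed = now - epoch
    # Round up to the next multiple of the period on the absolute timeline.
    return epoch + math.ceil(elapsed / period) * period
```

Even if a task fires 5 seconds late, the next wakeup still lands on the original 60-second grid rather than drifting by 5 seconds per cycle.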
Electronic Imaging & the Visual Arts. EVA 2013 Florence
Important information technology topics are presented: multimedia systems, databases, data protection, and access to content. Particular attention is given to digital images (2D, 3D) in cultural institutions (museums, libraries, palaces and monuments, archaeological sites). The main parts of the conference proceedings cover: strategic issues; EC projects and related networks and initiatives; the International Forum on “Culture & Technology”; 2D and 3D technologies and applications; virtual galleries, museums and related initiatives; and access to cultural information. Three workshops address international cooperation, innovation and enterprise, and creative industries and cultural tourism.
e-mission: an open source, extensible platform for human mobility systems
Transportation is the single largest source of carbon emissions in the US. Decarbonizing it is challenging because it depends on individual behaviors, which in turn depend on local land use planning. The interdisciplinary field of Computational Mobility, focusing on collecting, analysing and influencing human travel behavior, can frame solutions to this challenge.
Innovation flows in interdisciplinary fields are bi-directional. The flow to the domain is focused on building a strong foundation for methodological improvements. As the improvements are deployed, they result in use-inspired computational research. This temporal dependency results in our initial focus on the modularity, accuracy and reproducibility of e-mission, an extensible platform for instrumenting human mobility. This open source platform has a modular architecture that supports power-efficient duty cycling using virtual sensors, a read-only data model, and a pipeline with novel algorithm adaptations for smartphone sensing.
We also perform the first empirical evaluations of smartphone-based platforms in this domain. The architectural evaluation is based on three real-world deployments: a classic travel diary, a crowdsourcing initiative, and a behavioral study. The accuracy evaluation is based on a novel procedure that uses artificial trips and multiple parallel phones to mitigate concerns over privacy, context-sensitive power consumption and inherent sensing error. Data collected from three artificial timelines was used to evaluate the trajectory, segmentation and classification accuracies vs. power for various configurations.
On the computational side, challenges derived from the deployments can contribute to ongoing CS research in privacy, trustworthiness, incentivization and decision making. On the mobility side, this enables methodological innovations such as Agile Urban Planning for prototyping infrastructure changes.
Movement recognition from wearable sensors data: power-aware evolutionary training for template matching and data annotation recovery methods
Human activity recognition finds numerous applications, for example in sports training, patient rehabilitation, gait analysis and surgical skill evaluation. Wearable sensing and Template Matching Methods (TMMs) offer significant advantages compared to manual assessment methods as well as to more cumbersome camera-based setups and other machine learning (ML) algorithms.
TMMs require less data for training than other ML methods and are low-power, and are therefore suitable for integration on wearable sensors. They compute a sample-by-sample distance between two time series. When applied to gesture sensor data, this enables an even richer and more movement-specific assessment and feedback. However, TMMs lack a standard training procedure.
In this thesis, we introduce an innovative evolutionary training algorithm for TMMs that not only maximizes recognition performance, but can also favour power minimisation by reducing the TMM’s computational cost with a configurable trade-off. We show that such a reduction is possible without sacrificing recognition performance by exploiting the long-established concept of “time warping”. We demonstrate that our method is suitable for a wide variety of raw data as well as processed, fused and encoded sensor data.
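The “time warping” invoked above refers to the classic dynamic time warping (DTW) family of sample-by-sample distances on which TMMs are built. The thesis's exact variant and its evolutionary training are not reproduced here; this is a minimal textbook DTW sketch in Python (the name `dtw_distance` is illustrative), showing why warping makes the distance robust to tempo differences between two executions of the same gesture:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    cost[i][j] holds the best alignment cost of a[:i] against b[:j];
    each cell may extend a match, stretch `a`, or stretch `b`.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local sample distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch b
                                 cost[i][j - 1],      # stretch a
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

A slowed-down copy of a template (e.g. a repeated sample) still yields zero distance, which a rigid sample-by-sample comparison would not.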
We present a new, original multi-modal, multi-user dataset of beach volleyball movements that allowed us to evaluate our training methods on a real case of sport training actions. Moreover, the collection of this dataset helped generate a set of guidelines for the collection of movement data in the wild using wearable sensors.
We introduce a 3D human model that can be animated with inertial wearable sensor data for troubleshooting, movement analysis and privacy-safe annotation of human activities. Finally, through a case study on a dataset of drinking actions, we demonstrate how TMMs can improve the quality of a badly annotated but highly valuable dataset.
Multimodal Content Delivery for Geo-services
This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To effectively deliver these services, research focused on innovative solutions to real-world problems in a number of disciplines, including geo-location, mobile spatial interaction, location-based services, rich media interfaces and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction, underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility that filter data based on field-of-view demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery or output modality employed (auditory, visual or tactile). Further contributions in the area of multimodal content delivery are made, where multiple modalities are used to deliver information using graphical user interfaces, tactile interfaces and, more notably, auditory user interfaces. It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users - in a responsive way - based on usage scenarios that consider the affordance of the device, the geographical position and bearing of the device, and also the location of the device.
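The hybrid positioning contribution above trilaterates a device's position from range measurements to terrestrial beacons. As a hedged illustration of the geometry only (not the thesis's implementation; the function name `trilaterate` is illustrative), subtracting the circle equations pairwise turns 2-D trilateration into a small linear system:

```python
def trilaterate(b1, b2, b3, d1, d2, d3):
    """Estimate a 2-D position from three beacon coordinates and ranges.

    Each beacon defines a circle (x - xi)^2 + (y - yi)^2 = di^2;
    subtracting the equations pairwise cancels the quadratic terms,
    leaving a 2x2 linear system solved by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = b1, b2, b3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("beacons are collinear; position is ambiguous")
    x = (c1 * a22 - c2 * a12) / det
    y = (a11 * c2 - a21 * c1) / det
    return x, y
```

Real deployments use noisy ranges and more than three beacons, which calls for a least-squares fit; the closed-form sketch shows only the underlying principle.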
Sampling Strategies for Tackling Imbalanced Data in Human Activity Recognition
Human activity recognition (HAR) using wearable sensors is a topic that is being actively researched in machine learning. Smart, sensor-embedded devices, such as smartphones, fitness trackers, or smart watches that collect detailed data on movement, are now widely available. HAR may be applied in areas such as healthcare, physiotherapy, and fitness to assist users of these smart devices in their daily lives. However, one of the main challenges facing HAR, particularly when it is used in supervised learning, is how balanced data may be obtained for algorithm optimisation and testing. Because users engage in some activities more than others, e.g. walking more than running, HAR datasets are typically imbalanced. The lack of dataset representation from minority classes therefore hinders the ability of HAR classifiers to sufficiently capture new instances of those activities. Inspired by the concept of data fusion, this thesis introduces three new hybrid sampling methods. The diversity of the synthesised samples is enhanced by combining the output from separate sampling methods into three hybrid approaches. The advantage of the hybrid method is that it provides diverse synthetic data, drawn from different sampling approaches, that can increase the size of the training data. This leads to improvements in the generalisation of an activity recognition model. The first strategy (DBM) combines the synthetic minority oversampling technique (SMOTE) with Random_SMOTE, both of which are built around the k-nearest neighbours algorithm. The second technique, called the noise detection-based method (NDBM), combines Tomek links (SMOTE_Tomeklinks) and the modified synthetic minority oversampling technique (MSMOTE). The third approach, titled the cluster-based method (CBM), combines cluster-based synthetic oversampling (CBSO) and the proximity weighted synthetic oversampling technique (ProWSyn).
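All three hybrid strategies above build on SMOTE-style interpolation between minority samples and their k nearest minority neighbours. As a minimal sketch of that shared idea (not the thesis's hybrid implementations; the helper name `smote` is illustrative), synthetic samples can be generated as follows:

```python
import random

def smote(minority, n_new, k=5, rng=None):
    """Generate synthetic minority samples, SMOTE-style.

    Each synthetic point lies on the segment between a random minority
    sample and one of its k nearest minority neighbours, so new points
    stay inside the region the minority class already occupies.
    """
    rng = rng or random.Random(0)

    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((s for s in minority if s is not base),
                            key=lambda s: dist2(base, s))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic
```

The hybrids in the thesis then diversify this output by pooling samples from several such generators before training.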
The performance of the proposed hybrid methods is compared with existing methods using accelerometer data from three commonly used benchmark datasets. The results show that the DBM, NDBM and CBM can significantly reduce the impact of class imbalance and enhance the F1 scores of the multilayer perceptron (MLP) by as much as 9% to 20% compared with their constituent sampling methods. Also, the Friedman statistical significance test was conducted to compare the effect of the different sampling methods. The test results confirm that the CBM is more effective than the other sampling approaches. This thesis also introduces a method based on the Wasserstein generative adversarial network (WGAN) for generating different types of human activity data. The WGAN is more stable to train than a generative adversarial network (GAN) due to its use of a stable metric, namely the Wasserstein distance, to compare the similarity between the real data distribution and the generated data distribution. WGAN is a deep learning approach and, in contrast to the six existing sampling methods referred to previously, it can operate on raw sensor data, as convolutional and recurrent layers can act as feature extractors. WGAN is used to generate raw sensor data to overcome the limitation of the traditional machine learning-based sampling methods, which can only operate on extracted features. The synthetic data produced by the WGAN is then used to oversample the imbalanced training data. This thesis demonstrates that this approach significantly enhances the learning ability of a convolutional neural network (CNN) by as much as 5% to 6% on imbalanced human activity datasets. This thesis concludes that the proposed sampling methods based on traditional machine learning are efficient when human activity training data is imbalanced and small.
These methods are less complex to implement, and require less human activity training data to produce synthetic data and fewer computational resources, than the WGAN approach. The proposed WGAN method is effective at producing raw sensor data when a large quantity of human activity training data is available. However, it is time-consuming to optimise the hyperparameters of the WGAN architecture, which significantly impacts the performance of the method.
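The training stability credited to the WGAN above comes from comparing the real and generated distributions with the Wasserstein distance, which the WGAN's critic network only approximates in high dimensions. In one dimension with equal sample sizes the metric has a closed form, sketched here for intuition (the function name `wasserstein_1d` is illustrative, and this is not part of any WGAN training loop):

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-D Wasserstein-1 distance between equal-size samples.

    In 1-D the optimal transport plan pairs order statistics, so the
    distance is the mean absolute difference of the sorted values.
    """
    if len(xs) != len(ys):
        raise ValueError("this sketch assumes equal sample sizes")
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)
```

Unlike divergences that saturate when supports barely overlap, this distance shrinks smoothly as the generated sample slides towards the real one, which is what makes the WGAN objective a usable training signal.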
MediaSync: Handbook on Multimedia Synchronization
This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, a reference book on mediasync has become necessary. This book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges of ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute these experiences.