114 research outputs found

    LiveLabs: Building in-situ mobile sensing and behavioural experimentation testbeds

    © 2016 ACM. In this paper, we present LiveLabs, a first-of-its-kind testbed that is deployed across a university campus, a convention centre, and a resort island and collects real-time attributes, such as location and group context, from hundreds of opt-in participants. These venues, data, and participants are then made available for running rich human-centric behavioural experiments that can test new mobile sensing infrastructure, applications, and analytics, or more social-science-type hypotheses that influence and then observe actual user behaviour. We share case studies of how researchers from around the world have used and are using LiveLabs, along with our experiences and lessons learned from building, maintaining, and expanding LiveLabs over the last three years.

    Empath-D: VR-based empathetic app design for accessibility

    Singapore National Research Foundation under IDM Futures Funding Initiative; Ministry of Education, Singapore under its Academic Research Funding Tier

    DeepMon: Mobile GPU-based deep learning framework for continuous vision applications

    © 2017 ACM. The rapid emergence of head-mounted devices such as the Microsoft HoloLens enables a wide variety of continuous vision applications. Such applications often adopt deep-learning algorithms such as CNNs and RNNs to extract rich contextual information from first-person-view video streams. Despite their high accuracy, the use of deep learning algorithms on mobile devices raises critical challenges, namely high processing latency and power consumption. In this paper, we propose DeepMon, a mobile deep learning inference system that runs a variety of deep learning inferences purely on a mobile device in a fast and energy-efficient manner. To this end, we designed a suite of optimization techniques to efficiently offload convolutional layers to mobile GPUs and accelerate processing; note that convolutional layers are the common performance bottleneck of many deep learning models. Our experimental results show that DeepMon can classify an image with the VGG-VeryDeep-16 deep learning model in 644 ms on a Samsung Galaxy S7, taking an important step towards continuous vision without imposing privacy concerns or networking costs.
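    The 644 ms figure quoted above is a plain end-to-end latency for one image. As an illustrative aside only (not DeepMon's mobile-GPU implementation, which relies on custom convolution offloading), the sketch below shows how such a per-image latency can be measured with PyTorch; the model variant, input size, and device selection are standard torchvision/ImageNet conventions assumed here for illustration.

        # Illustrative latency measurement only; DeepMon itself runs custom
        # convolution kernels on the phone GPU, which is not reproduced here.
        import time
        import torch
        from torchvision.models import vgg16

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model = vgg16(weights=None).eval().to(device)        # random weights suffice for timing
        image = torch.randn(1, 3, 224, 224, device=device)   # one 224x224 RGB input

        with torch.no_grad():
            model(image)                                     # warm-up pass
            start = time.perf_counter()
            model(image)
            latency_ms = (time.perf_counter() - start) * 1000

        print(f"VGG-16 single-image inference: {latency_ms:.1f} ms on {device}")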

    Demo: Multi-device gestural interfaces


    A Meta-Review of Indoor Positioning Systems

    An accurate and reliable Indoor Positioning System (IPS) applicable to most indoor scenarios has been sought for many years. The number of technologies, techniques, and approaches used in IPS proposals is remarkable. Such diversity, coupled with the lack of strict and verifiable evaluations, makes it difficult to appreciate the true value of most proposals. This paper provides a meta-review based on a comprehensive compilation of 62 survey papers in the area of indoor positioning. The paper gives the reader an introduction to IPS and to the technologies, techniques, and methods commonly employed, supported by and referenced to the consensus found in the selected surveys. The meta-review thus allows the reader to inspect the current state of IPS at a glance and serves as a guide for easily finding further details on each technology used in IPS. The analyses in the meta-review contribute insights on the abundance and academic significance of published IPS proposals, using the number of citations as the criterion. Moreover, 75 works are identified as relevant to the research topic from a selection of about 4,000 works cited in the analyzed surveys.
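    The relevance selection described above (75 works singled out from roughly 4,000 cited works by citation count) amounts to a simple threshold filter. The sketch below illustrates that criterion with an invented list of (title, citations) records and an arbitrary cut-off; neither the records nor the threshold come from the meta-review.

        # Hypothetical citation-count filter; the data and threshold are invented
        # for illustration and are not the meta-review's actual selection.
        from typing import List, Tuple

        def select_relevant(works: List[Tuple[str, int]], min_citations: int) -> List[str]:
            """Return titles meeting the citation threshold, most-cited first."""
            kept = [w for w in works if w[1] >= min_citations]
            return [title for title, _ in sorted(kept, key=lambda w: w[1], reverse=True)]

        cited_works = [("Survey A", 1250), ("System B", 85), ("Method C", 430)]
        print(select_relevant(cited_works, min_citations=100))   # ['Survey A', 'Method C']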