
    From head to toe: body movement for human-computer interaction

    Our bodies are the medium through which we experience the world around us, so human-computer interaction can benefit greatly from the richness of body movements and postures as an input modality. In recent years, the widespread availability of inertial measurement units and depth sensors has led to the development of a plethora of applications for the body in human-computer interaction. However, the main focus of these works has been on using the upper body for explicit input. This thesis investigates the research space of full-body human-computer interaction through three propositions. The first proposition is that there is more to be inferred from users' natural movements and postures, such as the quality of activities and psychological states. We develop this proposition in two domains. First, we explore how to support users in performing weight-lifting activities. We propose a system that classifies different ways of performing the same activity; an object-oriented model-based framework for formally specifying activities; and a system that automatically extracts an activity model by demonstration. Second, we explore how to automatically capture nonverbal cues for affective computing. We develop a system that annotates motion and gaze data according to the Body Action and Posture coding system. We show that quality analysis can add another layer of information to activity recognition, and that systems that support the communication of quality information should strive to support how we implicitly communicate movement through nonverbal communication. Further, we argue that, by working at a higher level of abstraction, affect recognition systems can not only translate findings from other areas into their algorithms more directly, but also contribute new knowledge back to those fields.
The second proposition is that the lower limbs can provide an effective means of interacting with computers beyond assistive technology. To address the problem of the dispersed literature on the topic, we conducted a comprehensive survey of the lower body in HCI through the lenses of users, systems, and interactions. To address the lack of a fundamental understanding of foot-based interactions, we conducted a series of studies that quantitatively characterises several aspects of foot-based interaction, including Fitts's Law performance models, the effects of movement direction, foot dominance, and visual feedback, and the overhead incurred by using the feet together with the hands. To enable all these studies, we developed a foot tracker based on a Kinect mounted under the desk. We show that the lower body can be used as a valuable complementary modality for computer input. Our third proposition is that by treating body movements as multiple modalities, rather than a single one, we can enable novel user experiences. We develop this proposition in the domain of 3D user interfaces, as it requires input with multiple degrees of freedom and offers a rich set of complex tasks. We propose an approach for tracking the whole body up close by splitting the sensing of different body parts across multiple sensors. Our setup allows tracking gaze, head movements, mid-air gestures, multi-touch gestures, and foot movements. We investigate specific applications of multimodal combinations in the domain of 3DUI: how gaze and mid-air gestures can be combined to improve selection and manipulation tasks; how the feet can support the canonical 3DUI tasks; and how a multimodal sensing platform can inspire new 3D game mechanics.
We show that the combination of multiple modalities can lead to enhanced task performance; that offloading certain tasks to alternative modalities not only frees the hands but also allows simultaneous control of multiple degrees of freedom; and that by sensing different modalities separately, we achieve more detailed and precise full-body tracking.
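The abstract above mentions Fitts's Law performance models for foot-based interaction but does not give their form. A minimal sketch of how such a model is typically fitted, assuming the Shannon formulation of the index of difficulty and a synthetic set of (distance, width, movement-time) trials (all names and values here are hypothetical, not taken from the thesis):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def fit_fitts_model(trials):
    """Least-squares fit of MT = a + b * ID over (distance, width, time) trials.

    Returns (a, b): intercept and slope of the movement-time model.
    """
    ids = [index_of_difficulty(d, w) for d, w, _ in trials]
    times = [t for _, _, t in trials]
    n = len(trials)
    mean_id = sum(ids) / n
    mean_t = sum(times) / n
    # Ordinary least squares for a single predictor.
    b = sum((i - mean_id) * (t - mean_t) for i, t in zip(ids, times)) / \
        sum((i - mean_id) ** 2 for i in ids)
    a = mean_t - b * mean_id
    return a, b
```

A study like the one described would collect many such trials per condition (e.g. per foot, per movement direction) and compare the fitted slopes, since 1/b is the throughput of the modality.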

    Collaborative Techniques for Indoor Positioning Systems

    The demand for Indoor Positioning Systems (IPSs) developed specifically for mobile and wearable devices is continuously growing as a consequence of the expansion of the global market of Location-Based Services (LBS), the increasing adoption of mobile LBS applications, and the ubiquity of mobile/wearable devices in our daily life. Nevertheless, the design of IPSs based on mobile/wearable devices must fulfill additional requirements, namely low power consumption, reuse of the devices' built-in technologies, and inexpensive and straightforward implementation. Among the indoor positioning technologies embedded in mobile/wearable devices, IEEE 802.11 Wireless LAN (Wi-Fi) and Bluetooth Low Energy (BLE), in combination with lateration and fingerprinting, have received extensive attention from research communities as means to meet these requirements. Although these technologies are straightforward to implement in positioning approaches based on the Received Signal Strength Indicator (RSSI), positioning accuracy decreases mainly due to signal propagation fluctuations in Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) conditions and the heterogeneity of the devices' hardware. Therefore, providing a solution that achieves the target accuracy within the given constraints remains an open issue. The motivation behind this doctoral thesis is to address the limitations of traditional RSSI-based IPSs for human positioning, which suffer from low accuracy due to signal fluctuations and hardware heterogeneity, as well as deployment cost constraints, by leveraging the advantages provided by the ubiquity of mobile devices and by collaborative and machine-learning-based techniques.
Therefore, the research undertaken in this doctoral thesis focuses on developing and evaluating mobile device-based collaborative indoor positioning techniques, using Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), for human positioning, in order to enhance the positioning accuracy of traditional RSSI-based indoor positioning systems (i.e., lateration and fingerprinting) in real-world conditions. The methodology followed during the research consists of four phases. In the first phase, a comprehensive systematic review of Collaborative Indoor Positioning Systems (CIPSs) was conducted to identify the key design aspects and evaluations used in/for CIPSs, as well as the main concerns, limitations, and gaps reported in the literature. In the second phase, extensive experimental data collections using mobile devices and considering collaborative scenarios were performed. The collected data were used to create a mobile device-based BLE database for testing ranging-based collaborative indoor positioning approaches, as well as BLE and Wi-Fi radio maps to estimate devices' positions in the non-collaborative phase. Moreover, a detailed description of the methodology used for collecting and processing the data and creating the database, as well as of its structure, was provided to guarantee the reproducibility, use, and expansion of the database. In the third phase, the traditional methods to estimate distance (i.e., based on Logarithmic Distance Path Loss (LDPL) and fuzzy logic) and position (i.e., RSSI lateration and fingerprinting with 9-Nearest Neighbors (9-NN)) were described and evaluated in order to present their limitations and challenges. In addition, two novel approaches to improve distance and positioning accuracy were proposed.
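The LDPL ranging and RSSI-lateration baselines described above follow standard formulations; a minimal sketch of both, assuming the common log-distance model RSSI(d) = RSSI(d0) − 10·n·log10(d/d0) and a linearized least-squares solver for three anchors (the reference power, path-loss exponent, and anchor coordinates below are illustrative, not the thesis's calibrated values):

```python
import math

def ldpl_distance(rssi, rssi_d0=-60.0, d0=1.0, n=2.0):
    """Invert the LDPL model to turn an RSSI reading (dBm) into a range (m).

    rssi_d0: RSSI at reference distance d0; n: path-loss exponent.
    """
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

def laterate(anchors, distances):
    """2D lateration from three anchors via the standard linearization.

    Subtracting the first range equation from the others yields a 2x2
    linear system in (x, y), solved here by Cramer's rule.
    """
    (x0, y0), d0 = anchors[0], distances[0]
    a11 = 2 * (anchors[1][0] - x0); a12 = 2 * (anchors[1][1] - y0)
    a21 = 2 * (anchors[2][0] - x0); a22 = 2 * (anchors[2][1] - y0)
    k0 = x0 ** 2 + y0 ** 2 - d0 ** 2
    b1 = (anchors[1][0] ** 2 + anchors[1][1] ** 2 - distances[1] ** 2) - k0
    b2 = (anchors[2][0] ** 2 + anchors[2][1] ** 2 - distances[2] ** 2) - k0
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det
```

With noise-free ranges this recovers the position exactly; the accuracy losses the abstract describes arise when LOS/NLOS fluctuations corrupt the RSSI readings fed into `ldpl_distance`, which is what the collaborative corrections aim to mitigate.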
In the last phase, we developed our two proposed variants of a collaborative indoor positioning system using MLP ANNs to enhance the accuracy of the traditional indoor positioning approaches (BLE RSSI-lateration and fingerprinting) and evaluated them under real-world conditions to demonstrate their feasibility and benefits, as well as to present their limitations and future research avenues. The findings obtained in each of the aforementioned research phases correspond to the main contributions of this doctoral thesis. Specifically, the results of evaluating our CIPSs demonstrated that the first proposed variant of a mobile device-based CIPS outperforms the positioning accuracy of traditional lateration-based IPSs. Considering the distances among collaborating devices, our CIPS significantly outperforms the lateration baseline at short distances (≤ 4 m), medium distances (> 4 m and ≤ 8 m), and large distances (> 8 m), with a maximum error reduction of 49.15%, 19.24%, and 21.48% for the “median” metric, respectively. Regarding the second variant, the results demonstrated that for short distances between collaborating devices, our collaborative approach outperforms the traditional IPSs based on BLE fingerprinting and Wi-Fi fingerprinting with a maximum error reduction of 23.41% and 19.49% for the “75th percentile” and “90th percentile” metrics, respectively. For medium distances, our proposed approach outperforms the traditional IPS based on BLE fingerprinting in the first 60% and beyond the 90% of cases of the Empirical Cumulative Distribution Function (ECDF), and only partially (20% of cases in the ECDF) the traditional IPS based on Wi-Fi fingerprinting. For larger distances, the performance of our proposed approach is worse than that of the traditional fingerprinting-based IPSs.
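The fingerprinting baseline referred to above (9-NN over BLE/Wi-Fi radio maps) can be sketched as follows; this is a generic k-nearest-neighbours estimator over RSSI fingerprints, assuming an unweighted average of the k closest reference locations (the radio-map layout and k=9 default mirror the abstract, but the data structure is an assumption):

```python
import math

def knn_position(radio_map, observed, k=9):
    """Fingerprinting position estimate.

    radio_map: list of ((x, y), rssi_vector) reference fingerprints.
    observed:  RSSI vector measured by the device, same AP/beacon order.
    Returns the centroid of the k reference locations whose stored RSSI
    vectors are closest (Euclidean distance) to the observed vector.
    """
    nearest = sorted(
        ((math.dist(rssi, observed), loc) for loc, rssi in radio_map),
        key=lambda pair: pair[0],
    )[:k]
    x = sum(loc[0] for _, loc in nearest) / len(nearest)
    y = sum(loc[1] for _, loc in nearest) / len(nearest)
    return x, y
```

A collaborative variant such as the thesis proposes would then refine this estimate using inter-device ranges from nearby collaborating devices, e.g. by feeding both the fingerprinting output and the collaborative ranges into an MLP.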
Overall, the results demonstrate the usefulness and usability of our CIPSs for improving the positioning accuracy of traditional IPSs, namely those based on BLE lateration, BLE fingerprinting, and Wi-Fi fingerprinting, under specific conditions, mainly those where the collaborating devices are at short or medium distances from each other. Moreover, the integration of an MLP ANN model in our CIPSs allows the approach to be used across different scenarios and technologies, showing its generalizability, usefulness, and feasibility. (Cotutelle joint-supervision doctoral thesis.)