    Multi-modal on-body sensing of human activities

    The increasing use and integration of state-of-the-art information technology in our everyday work life aims at raising working efficiency. Due to unwieldy human-computer interaction methods, this progress does not always result in increased efficiency, for mobile workers in particular. Activity-recognition-based contextual computing attempts to compensate for this interaction deficiency. This work investigates wearable, on-body sensing techniques and their applicability to human activity recognition. More precisely, we are interested in the spotting and recognition of so-called manipulative hand gestures. In particular, the thesis focuses on whether the widely used motion-sensing-based approach can be enhanced through additional information sources. The set of gestures a person usually performs at a specific place is limited -- in the contemplated production and maintenance scenarios in particular. As a consequence, this thesis investigates whether knowledge about the user's hand location provides essential hints for the activity recognition process. In addition, manipulative hand gestures -- due to their object-manipulating character -- typically start the moment the user's hand reaches a specific place, e.g. a specific part of a machine, and most likely stop the moment the hand leaves that position again. Hence this thesis investigates whether hand location can help solve the spotting problem. Moreover, as user independence is still a major challenge in activity recognition, this thesis investigates location context as a possible key component of a user-independent recognition system. We test a Kalman-filter-based method to blend absolute position readings with orientation readings based on inertial measurements. A filter structure is suggested that allows up-sampling of slow absolute position readings, and thus introduces higher dynamics into the position estimates. In this way the position measurement series becomes aware of wrist motions in addition to the wrist position. We suggest location-based gesture spotting and recognition approaches. Various methods to model the location classes used in the spotting and recognition stages, as well as different location distance measures, are suggested and evaluated. In addition, a rather novel sensing approach in the field of human activity recognition is studied, which aims at compensating for drawbacks of the purely motion-sensing-based approach. To this end we develop a wearable hardware architecture for lower-arm muscular activity measurements. The sensing hardware, based on force-sensing resistors, is designed to have a high dynamic range. In contrast to preliminary attempts, the proposed new design makes hardware calibration unnecessary. Finally, we suggest a modular and multi-modal recognition system; modular with respect to sensors, algorithms, and gesture classes. This means that adding or removing a sensor modality or an additional algorithm has little impact on the rest of the recognition system. Sensors and algorithms used for spotting and recognition can be selected and fine-tuned separately for each single activity. New activities can be added without impact on the recognition rates of the other activities.
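The thesis's exact filter design is not reproduced here, but the general idea of up-sampling slow absolute position fixes with fast inertial prediction steps can be sketched as a minimal 1-D constant-velocity Kalman filter (all rates and noise parameters below are illustrative assumptions, not values from the work):

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter: fast inertial prediction
# steps up-sample slow absolute position fixes. State x = [position,
# velocity]; all rates and noise values are illustrative assumptions.

def predict(x, P, accel, dt, q=0.5):
    """Propagate the state with an inertial acceleration reading."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt ** 2, dt])
    x = F @ x + B * accel
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])
    return x, F @ P @ F.T + Q

def update(x, P, z, r=0.04):
    """Correct the state with a (slow) absolute position fix z."""
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r                 # innovation covariance
    K = (P @ H.T) / S                   # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    return x, (np.eye(2) - K @ H) @ P

# Ten fast inertial steps (100 Hz) for every slow position fix (10 Hz).
x, P = np.zeros(2), np.eye(2)
for fix in [0.1, 0.2, 0.3]:             # slow absolute positions (m)
    for _ in range(10):
        x, P = predict(x, P, accel=0.0, dt=0.01)
    x, P = update(x, P, z=fix)
```

Between fixes, the filter keeps producing position estimates at the inertial rate, which is what makes the estimate responsive to wrist motion.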

    Low Energy Physical Activity Recognition System on Smartphones

    An innovative approach to physical activity recognition based on discrete variables obtained from accelerometer sensors is presented. The system first performs a discretization process for each variable, which allows efficient recognition of the activities performed by users using as little energy as possible. To this end, an innovative discretization and classification technique is presented based on the χ2 distribution. Furthermore, the entire recognition process is executed on the smartphone, which determines not only the activity performed, but also the frequency at which it is carried out. These techniques and the new classification system reduce the energy consumed by the activity monitoring system. The energy saved increases smartphone usage time to more than 27 h without recharging while maintaining accuracy. (Ministerio de Economía y Competitividad TIN2013-46801-C4-1-r; Junta de Andalucía TIC-805)
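The abstract does not detail the paper's discretization and classifier; one plausible reading of a χ2-based scheme -- discretize each accelerometer window into a few bins, then pick the activity whose reference histogram minimizes the chi-squared distance -- can be sketched as follows (bin edges, activities and data are made up for illustration):

```python
import numpy as np

# Hedged sketch (not the paper's exact method): discretize an
# accelerometer-magnitude window into a few bins and classify it by
# the minimum chi-squared distance to per-activity reference
# histograms. Bin edges and activity names are illustrative.

EDGES = np.array([0.5, 1.1, 1.8, 3.0])          # g-units, hypothetical

def histogram(window):
    """Discretize samples into bins; return a normalized histogram."""
    counts = np.histogram(window, bins=np.r_[-np.inf, EDGES, np.inf])[0]
    return counts / counts.sum()

def chi2_distance(h, ref, eps=1e-9):
    return np.sum((h - ref) ** 2 / (h + ref + eps))

def classify(window, references):
    h = histogram(window)
    return min(references, key=lambda a: chi2_distance(h, references[a]))

np.random.seed(0)                                # reproducible demo
# Reference histograms learned offline (here: synthetic examples).
refs = {
    "still":   histogram(np.random.normal(1.0, 0.02, 500)),
    "walking": histogram(np.random.normal(1.0, 0.40, 500)),
}
print(classify(np.random.normal(1.0, 0.02, 200), refs))  # "still"
```

Comparing small integer histograms instead of running a heavier classifier on raw samples is one way such a design can keep per-window energy cost low.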

    Body-Borne Computers as Extensions of Self

    The opportunities for wearable technologies go well beyond always-available information displays or health-sensing devices. The concept of the cyborg introduced by Clynes and Kline, along with works in various fields of research and the arts, offers a vision of what technology integrated with the body can offer. This paper identifies different categories of research aimed at augmenting humans. It focuses specifically on three areas of augmentation of the human body and its sensorimotor capabilities: physical morphology, skin display, and somatosensory extension. We discuss how such digital extensions relate to the malleable nature of our self-image. We argue that body-borne devices are no longer simply functional apparatus, but offer a direct interplay with the mind. Finally, we showcase some of our own projects in this area and shed light on future challenges.

    Move, hold and touch: A framework for Tangible gesture interactive systems

    Technology is spreading into our everyday world, and digital interaction beyond the screen, with real objects, allows us to take advantage of our natural manipulative and communicative skills. Tangible gesture interaction exploits these skills by bridging two popular domains in Human-Computer Interaction: tangible interaction and gestural interaction. In this paper, we present the Tangible Gesture Interaction Framework (TGIF) for classifying and guiding work in this field. We propose a classification of gestures according to three relationships with objects: move, hold and touch. Following this classification, we analyzed previous work in the literature to derive guidelines and common practices for designing and building new tangible gesture interactive systems. We describe four interactive systems as application examples of the TGIF guidelines and discuss the descriptive, evaluative and generative power of TGIF.

    A Body-and-Mind-Centric Approach to Wearable Personal Assistants

    New Trends in Using Augmented Reality Apps for Smart City Contexts

    The idea of virtuality is not new: research on visualization and simulation dates back to the early use of ink-and-paper sketches for comparing alternative designs. As technology has advanced, so has the way of visualizing simulations, but progress is slow due to the difficulty of creating workable simulation models and effectively delivering them to users. Augmented Reality and Virtual Reality, evolving technologies that have captivated the tech industry, received extensive media attention, and grown enormously, are redefining the way we interact, communicate and work together. From consumer applications to manufacturing, these technologies are used in different sectors, providing substantial benefits through several applications. In this work, we demonstrate the potential of Augmented Reality techniques in a Smart City (Smart Campus) context. A multiplatform mobile app featuring Augmented Reality capabilities connected to GIS services is developed to evaluate features such as performance, usability, effectiveness and satisfaction of the Augmented Reality technology in the context of a Smart Campus.

    Digitizing the chemical senses: possibilities & pitfalls

    Many people are understandably excited by the suggestion that the chemical senses can be digitized, be it to deliver ambient fragrances (e.g., in virtual reality or health-related applications) or to transmit flavour experiences via the internet. However, to date, progress in this area has been surprisingly slow. Furthermore, the majority of attempts at commercialization have failed, often in the face of consumer ambivalence over the perceived benefits/utility. In this review, with the focus squarely on the domain of Human-Computer Interaction (HCI), we summarize the state of the art in the area. We highlight the key possibilities and pitfalls as far as stimulating the so-called ‘lower’ senses of taste, smell, and the trigeminal system are concerned. Ultimately, we suggest that mixed-reality solutions are currently the most plausible as far as delivering (or rather modulating) flavour experiences digitally is concerned. The key problems with digital fragrance delivery are related to attention and attribution. People often fail to detect fragrances when they are concentrating on something else; and even when they detect that their chemical senses have been stimulated, there is always a danger that they attribute their experience (e.g., pleasure) to one of the other senses – this is what we call ‘the fundamental attribution error’. We conclude with an outlook on digitizing the chemical senses and summarize a set of open questions that the HCI community has to address in future explorations of smell and taste as interaction modalities.

    Amorphous Placement and Retrieval of Sensory Data in Sparse Mobile Ad-Hoc Networks

    Personal communication devices are increasingly being equipped with sensors that are able to passively collect information from their surroundings – information that could be stored in fairly small local caches. We envision a system in which users of such devices use their collective sensing, storage, and communication resources to query the state of (possibly remote) neighborhoods. The goal of such a system is to achieve the highest query success ratio using the least communication overhead (power). We show that the use of Data Centric Storage (DCS), or directed placement, is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose amorphous placement, in which sensory samples are cached locally and informed exchanges of cached samples are used to diffuse the sensory data throughout the whole network. In handling queries, the local cache is searched first for potential answers. If unsuccessful, the query is forwarded to one or more direct neighbors. This technique leverages node mobility and caching capabilities to avoid the multi-hop communication overhead of directed placement. Using a simplified mobility model, we provide analytical lower and upper bounds on the ability of amorphous placement to achieve uniform field coverage in one and two dimensions. We show that combining informed shuffling of cached samples upon an encounter between two nodes with the querying of direct neighbors can lead to significant performance improvements. For instance, under realistic mobility models, our simulation experiments show that amorphous placement achieves a 10% to 40% better query-answering ratio at a 25% to 35% savings in consumed power compared with directed placement. (National Science Foundation CNS Cybertrust 0524477, CNS NeTS 0520166, CNS ITR 0205294, EIA RI 0202067)
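The query-handling rule described above (local cache first, then direct neighbours, with an informed shuffle on encounter) can be sketched minimally as follows; the Node structure and data are illustrative, not the authors' implementation:

```python
# Hedged sketch of amorphous placement's query handling: answer from
# the local cache if possible, otherwise ask direct (one-hop)
# neighbours. Region names and sample values are made up.

class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}            # region -> cached sensor sample
        self.neighbors = []

    def sense(self, region, sample):
        self.cache[region] = sample

    def shuffle(self, other):
        """Exchange cached samples on encounter to diffuse data."""
        merged = {**other.cache, **self.cache}
        self.cache.update(merged)
        other.cache.update(merged)

    def query(self, region):
        if region in self.cache:           # local cache first
            return self.cache[region]
        for n in self.neighbors:           # then one-hop neighbours
            if region in n.cache:
                return n.cache[region]
        return None                        # query fails

a, b = Node("a"), Node("b")
b.sense("park", 21.5)
a.neighbors.append(b)
print(a.query("park"))                     # answered by neighbour b
```

Because lookups never go beyond one hop, the cost of a query stays constant; coverage instead comes from mobility and the shuffle step spreading samples through the network.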

    A multi-channel opto-electronic sensor to accurately monitor heart rate against motion artefact during exercise

    This study presents the use of a multi-channel opto-electronic sensor (OEPS) to effectively monitor critical physiological parameters while suppressing motion artefact, as increasingly demanded by personal healthcare. The aim of this work was to study how to capture heart rate (HR) efficiently through a well-constructed OEPS and a 3-axis accelerometer with wireless communication. A protocol was designed to incorporate sitting, standing, walking, running and cycling. The datasets collected from these activities were processed to examine the physiological effects of exercise. A t-test, Bland-Altman agreement (BAA) analysis, and correlation were used to evaluate the performance of the OEPS against Polar and Mio-Alpha HR monitors. No differences in HR were found between the OEPS and either the Polar or the Mio-Alpha (both p > 0.05). A strong correlation was found between the Polar and the OEPS (r: 0.96, p < 0.001), with a BAA bias of 0.85 bpm, a standard deviation (SD) of 9.20 bpm, and limits of agreement (LOA) from −17.18 bpm to +18.88 bpm. For the Mio-Alpha and the OEPS, a strong correlation was also found (r: 0.96, p < 0.001), with a BAA bias of 1.63 bpm, an SD of 8.62 bpm, and LOA from −15.27 bpm to +18.58 bpm. These results demonstrate that the OEPS is capable of real-time, remote monitoring of heart rate.
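For reference, the Bland-Altman quantities reported above (bias as the mean device-minus-reference difference, the SD of the differences, and limits of agreement at bias ± 1.96 SD) can be computed in a few lines; the HR values below are made-up examples, not the study's data:

```python
import numpy as np

# Bland-Altman agreement statistics: bias, SD of the paired
# differences, and 95% limits of agreement (bias ± 1.96 * SD).

def bland_altman(ref, test):
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    diff = test - ref
    bias = diff.mean()
    sd = diff.std(ddof=1)                    # sample SD
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

polar = [72, 95, 110, 131, 150, 166]         # reference HR (bpm), made up
oeps  = [74, 93, 112, 128, 153, 165]         # sensor HR (bpm), made up
bias, sd, loa = bland_altman(polar, oeps)
```

A small bias with narrow limits of agreement, as reported in the study, indicates that the two monitors can be used interchangeably for most practical purposes.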

    A review of the role of sensors in mobile context-aware recommendation systems

    Recommendation systems specialize in offering suggestions about specific items of different types (e.g., books, movies, restaurants, and hotels) that could be interesting for the user. They have attracted considerable research attention due to their benefits as well as their commercial interest. In recent years in particular, the concept of the context-aware recommendation system has emerged to emphasize the importance of considering the context of the situations in which the user is involved in order to provide more accurate recommendations. Detecting the context requires sensors of different types, which measure different context variables. Despite the relevant role played by sensors in the development of context-aware recommendation systems, sensors and recommendation approaches are two fields usually studied independently. In this paper, we provide a survey on the use of sensors for recommendation systems. Our contribution can be seen from a double perspective. On the one hand, we review existing techniques used to detect context factors that could be relevant for recommendation. On the other hand, we illustrate the interest of sensors by considering different recommendation use cases and scenarios.