
    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have begun to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interaction. Many issues remain open in this emerging domain; in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments and proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity and the number of gestures in taxonomies, and to improve usability. To validate this framework, a proof of concept has been developed: a prototype implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, while the usability tests yielded high scores. Further investigation of the context information tackled the problem of user status, intended here as human activity, for which a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique achieves good activity recognition accuracy. The context is also treated as system status: in ubiquitous computing, the system can adopt different paradigms, namely wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
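    The core idea of functional gestures, i.e. a small set of gestures whose concrete effect is resolved by the surrounding context, can be illustrated with a minimal sketch. The code below is purely illustrative and is not the thesis's implementation; the gesture names, context fields and bindings are hypothetical.

```python
# Illustrative only: a hypothetical lookup showing how one "functional"
# gesture (e.g. "increase") can resolve to different device actions
# depending on context, reducing the number of gestures users must learn.
CONTEXT_BINDINGS = {
    # (functional gesture, focused device) -> concrete action
    ("increase", "lamp"): "raise_brightness",
    ("increase", "speaker"): "raise_volume",
    ("select", "tv"): "confirm_channel",
}

def resolve(gesture: str, context: dict) -> str | None:
    """Map a functional gesture to a concrete action using the current context."""
    return CONTEXT_BINDINGS.get((gesture, context.get("focused_device")))

if __name__ == "__main__":
    print(resolve("increase", {"focused_device": "speaker"}))  # raise_volume
```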

    Emerging trends in upper-limb embedded devices: A qualitative research study

    Framework: This paper explores how a qualitative systematic literature review (SLR) can contribute to our understanding of trends in upper-limb wearable devices. These devices are pieces of electronic equipment that can be worn as accessories, such as watches, or embedded in clothing, including gloves and sleeves, and could play an essential role in subjects' quality of life after any event that affects their ability to perform basic activities autonomously. Moreover, these devices can be used to improve performance in manual tasks such as surgical or precision tasks, and even more so when these are performed under extreme ambient temperature conditions. Goals and Methods: An SLR on upper-limb embedded devices was conducted based on scientific documents retrieved from the Scopus database. Two research questions were outlined: "How has this technology been evolving?" and "What is the trend according to the fields of application?". The combination of keywords (upper-limb* AND wearable* AND device*) was used in the title, abstract, and keywords fields. Results: A total of 555 documents were obtained. Descriptive statistical and bibliometric analyses were conducted, identifying trends, knowledge gaps, and future directions of research. The free software VOSviewer was used to construct bibliometric data visualization maps of the co-authorship and co-citation networks. A subset of 26 documents was considered for the critical qualitative synthesis. This step facilitated the visualization and exploration of the interconnectedness among authors and the citation patterns within the literature. Combining the information gathered makes it possible to address the extent of, and emerging trends in, the development of upper-limb embedded devices according to the fields in which they are applied. Final considerations: With this research, a starting point is established for developing a proof of concept of a novel device aimed at improving dexterity in challenging environments.
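    As a companion to the bibliometric step described above, the following sketch shows one way keyword co-occurrence statistics could be computed from a Scopus CSV export before building VOSviewer maps. It is an assumption-laden illustration: the "Author Keywords" column name, the semicolon separator and the file name reflect typical Scopus exports, not details taken from the paper.

```python
# Count author-keyword co-occurrences from a Scopus CSV export so they can
# be inspected before constructing bibliometric maps in VOSviewer.
import csv
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(path: str) -> Counter:
    """Return a Counter of (keyword_a, keyword_b) pairs appearing together."""
    pairs = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Scopus exports typically list author keywords separated by ";".
            kws = {k.strip().lower()
                   for k in row.get("Author Keywords", "").split(";") if k.strip()}
            pairs.update(combinations(sorted(kws), 2))
    return pairs

if __name__ == "__main__":
    for pair, n in keyword_cooccurrence("scopus_export.csv").most_common(10):
        print(n, pair)
```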

    Fused Spectatorship: Designing Bodily Experiences Where Spectators Become Players

    Spectating digital games can be exciting. However, due to its vicarious nature, spectators often wish to engage in the gameplay beyond just watching and cheering. To blur the boundaries between spectators and players, we propose a novel approach called "Fused Spectatorship", where spectators watch their hands play games by loaning bodily control to a computational Electrical Muscle Stimulation (EMS) system. To showcase this concept, we designed three games where spectators loan control over both their hands to the EMS system and watch them play these competitive and collaborative games. A study with 12 participants suggested that participants could not distinguish whether they were watching their hands play or playing the games themselves. We used our results to articulate four spectator experience themes and four fused spectator types and the behaviours they elicited, and we offer one design consideration to support each of these behaviours. We also discuss the ethical design considerations of our approach to help game designers create future fused spectatorship experiences. Comment: This paper is going to be published at the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY) 202

    Move, hold and touch: A framework for tangible gesture interactive systems

    © 2015 by the authors. Technology is spreading through our everyday world, and digital interaction beyond the screen, with real objects, allows us to take advantage of our natural manipulative and communicative skills. Tangible gesture interaction exploits these skills by bridging two popular domains in Human-Computer Interaction: tangible interaction and gestural interaction. In this paper, we present the Tangible Gesture Interaction Framework (TGIF) for classifying and guiding work in this field. We propose a classification of gestures according to three relationships with objects: move, hold and touch. Following this classification, we analyzed previous work in the literature to obtain guidelines and common practices for designing and building new tangible gesture interactive systems. We describe four interactive systems as application examples of the TGIF guidelines and discuss the descriptive, evaluative and generative power of TGIF.

    Design and control of soft rehabilitation robots actuated by pneumatic muscles: State of the art

    Robot-assisted rehabilitation has become a mainstream trend in the treatment of stroke patients with movement disability. The pneumatic muscle (PM) is one of the most promising actuators for rehabilitation robots, owing to its inherent compliance and safety features. In this paper, we conduct a systematic review of soft rehabilitation robots driven by pneumatic muscles. The review discusses up-to-date mechanical structures and control strategies for PM-actuated rehabilitation robots. A variety of state-of-the-art soft rehabilitation robots are classified and reviewed according to their actuation configurations. Special attention is paid to control strategies under different mechanical designs, with advanced control approaches to overcome the PM's highly nonlinear and time-varying behaviors and to enhance adaptability to different patients. Finally, we analyze and highlight the current research gaps and future directions in this field, which can provide reliable guidance for the development of advanced soft rehabilitation robots.
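    For orientation, the sketch below implements the widely cited Chou-Hannaford static model of a McKibben-type pneumatic muscle, a common starting point before the nonlinear, time-varying effects mentioned above are handled by more advanced controllers. It is a generic textbook model rather than code from any of the reviewed systems, and the geometry parameters are hypothetical.

```python
# Static force of an ideal McKibben pneumatic muscle (Chou-Hannaford model):
# F = (pi * D0^2 * P / 4) * [3 (1 - eps)^2 / tan^2(theta0) - 1 / sin^2(theta0)]
import math

def pm_force(pressure_pa: float, contraction: float,
             d0: float = 0.01, theta0_deg: float = 23.0) -> float:
    """Force (N) at a given gauge pressure (Pa) and contraction ratio
    (0 = rest length), for initial diameter d0 (m) and braid angle theta0."""
    theta0 = math.radians(theta0_deg)
    a = 3.0 / math.tan(theta0) ** 2
    b = 1.0 / math.sin(theta0) ** 2
    return (math.pi * d0 ** 2 / 4.0) * pressure_pa * (a * (1.0 - contraction) ** 2 - b)

if __name__ == "__main__":
    # Force at 300 kPa and 10% contraction for the assumed geometry.
    print(round(pm_force(300e3, 0.10), 1), "N")
```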

    Real-time muscle-in-the-loop optimization for physical rehabilitation with an active exoskeleton: a paradigm shift

    Assisting human locomotion with a wearable robotic orthosis is still quite challenging, largely due to the complexity of the neuromusculoskeletal system, the time-varying dynamics that accompany motor adaptation, and the uniqueness of every individual's response to the assistance given by the robot. To this day, these devices have not yet met their well-known promise, mostly because they are not perfectly suited to the rehabilitation of neuropathological patients. One of the main challenges hampering this goal lies in the interface and co-dependency between the human and the machine. Nowadays, most commercial exoskeletons replay pre-defined gait patterns, whereas research exoskeletons are switching to controllers based on optimized torque profiles. In most cases, the dynamics of the human musculoskeletal system are still ignored, and the optimal conditions for inducing a positive modulation of neuromuscular activity are not taken into account. This is because both rehabilitation strategies still act at the macro level of the whole joint instead of focusing on the dynamics and activity of the muscles, which are the actual anatomical elements that may need to be rehabilitated. Strategies that keep the human in the loop of the exoskeleton's control laws in real time may help to overcome these challenges. The main purpose of the present dissertation is to make a paradigm shift in how the assistance given to a subject by an exoskeleton is modelled and controlled during physical rehabilitation. Therefore, within the scope of the present work, the aim was to design, conceive, implement, and validate a real-time muscle-in-the-loop optimization model that finds the best assistive support ratio, inducing optimal rehabilitation conditions for a specific group of impaired muscles while having minimal impact on the other, healthy muscles. The developed optimization model was implemented as a plugin and integrated into a neuromechanical model-based interface for driving a bilateral ankle exoskeleton. Experimental pilot tests evaluated the feasibility and effectiveness of the model. The most significant pilots achieved EMG reductions of up to 61 ± 3% in the soleus and 41 ± 10% in the gastrocnemius lateralis. Moreover, the results demonstrated the rehabilitation efficiency of the optimization-specific EMG reduction by examining muscular fatigue after each experiment. Finally, two parallel preliminary studies emerged from the pilots: one examined muscle adaptation over time after a new assistive condition had been applied, and the other examined the effect of the lateral positioning of the exoskeleton's actuators on the leg muscles. Master's dissertation in Biomedical Engineering.
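    To make the optimization idea concrete, the toy sketch below searches for an assistive support ratio that lowers the EMG of targeted muscles while penalising increased effort elsewhere. It is a deliberately simplified stand-in: the grid search, cost weights and the fake EMG response function are hypothetical and do not reproduce the dissertation's neuromechanical model or plugin.

```python
# Toy muscle-in-the-loop search: pick the support ratio minimising target-muscle
# EMG plus a penalty on any EMG increase in the remaining (healthy) muscles.
from typing import Callable, Dict

def best_support_ratio(emg_response: Callable[[float], Dict[str, float]],
                       targets: set,
                       penalty: float = 2.0,
                       steps: int = 21) -> float:
    baseline = emg_response(0.0)          # EMG with no assistance
    best, best_cost = 0.0, float("inf")
    for i in range(steps):
        r = i / (steps - 1)               # support ratio in [0, 1]
        emg = emg_response(r)
        cost = sum(emg[m] for m in targets)
        cost += penalty * sum(max(0.0, emg[m] - baseline[m])
                              for m in emg if m not in targets)
        if cost < best_cost:
            best, best_cost = r, cost
    return best

if __name__ == "__main__":
    # Hypothetical monotone response: more assistance lowers soleus/gastrocnemius
    # EMG but slightly raises tibialis anterior activity.
    fake = lambda r: {"soleus": 1 - 0.6 * r, "gastrocnemius": 1 - 0.4 * r,
                      "tibialis_anterior": 1 + 0.3 * r}
    print(best_support_ratio(fake, {"soleus", "gastrocnemius"}))
```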

    (re)new configurations: Beyond the HCI/Art Challenge: Curating re-new 2011


    Re-new - IMAC 2011 Proceedings


    Portfolio of Electroacoustic Compositions with Commentaries

    This portfolio consists of electroacoustic compositions that were primarily realised through the use of corporeally informed compositional practices. The manner in which a composer interacts with the compositional tools and musical materials at their disposal is a defining factor in the creation of musical works. Although the use of computers in electroacoustic composition has extended the range of sonic possibilities afforded to composers, it has also had a negative impact on the level of physical interaction that composers have with these musical materials. This thesis is an investigation into the use of mediation technologies with the aim of circumventing issues relating to the physical performance of electroacoustic music. This line of inquiry has led me to experiment with embedded computers, wearable technologies, and a range of sensors. The specific tools that were used in the creation of the pieces within this portfolio are examined in detail. I also provide commentaries on and analyses of the eleven electroacoustic works that comprise the portfolio, describing the thought processes that led to their inception, the materials used in their creation, and the tools and techniques that I employed throughout the compositional process.

    A Framework For Abstracting, Designing And Building Tangible Gesture Interactive Systems

    This thesis discusses tangible gesture interaction, a novel paradigm for interacting with computers that blends concepts from the more popular fields of tangible interaction and gesture interaction. Taking advantage of humans' innate abilities to manipulate physical objects and to communicate through gestures, tangible gesture interaction is particularly interesting for interacting in smart environments, bringing the interaction with computers beyond the screen and back to the real world. Since tangible gesture interaction is a relatively new field of research, this thesis presents a conceptual framework that aims at supporting future work in the field. The Tangible Gesture Interaction Framework provides support on three levels. First, it helps in reflecting, from a theoretical point of view, on the different types of tangible gestures that can be designed: physically, through a taxonomy based on three components (move, hold and touch) and additional attributes, and semantically, through a taxonomy of the semantic constructs that can be used to associate meaning with tangible gestures. Second, it helps in conceiving new tangible gesture interactive systems and in designing new interactions based on gestures with objects, through dedicated guidelines for tangible gesture definition and common practices for different application domains. Third, it helps in building new tangible gesture interactive systems, supporting the choice among four different technological approaches (embedded and embodied, wearable, environmental or hybrid) and providing general guidance for each approach. As an application of this framework, the thesis also presents seven tangible gesture interactive systems for three application domains: interaction with the In-Vehicle Infotainment System (IVIS) of a car, emotional and interpersonal communication, and interaction in a smart home. For the first application domain, four systems that use gestures on the steering wheel as a means of interacting with the IVIS have been designed, developed and evaluated. For the second application domain, an anthropomorphic lamp able to recognize the gestures that humans typically perform for interpersonal communication has been conceived and developed; a second system, based on smart t-shirts, recognizes when two people hug and rewards the gesture with an exchange of digital information. Finally, a smart watch for recognizing gestures performed with objects held in the hand, in the context of the smart home, has been investigated. The analysis of existing systems found in the literature and of the systems developed during this thesis shows that the framework has good descriptive and evaluative power. The applications developed during this thesis show that the proposed framework also has good generative power.
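    As a small illustration of how the move/hold/touch taxonomy could be made operational in software, the sketch below encodes a tangible gesture as a record of its three physical components plus an associated meaning. It is a minimal, hypothetical encoding inspired by the framework's terminology, not code from the thesis.

```python
# Hypothetical encoding of a tangible gesture using the framework's three
# physical components (move, hold, touch) plus an associated meaning.
from dataclasses import dataclass

@dataclass(frozen=True)
class TangibleGesture:
    object_name: str
    move: bool    # the object is moved
    hold: bool    # the object is held
    touch: bool   # the object (or a part of it) is touched
    meaning: str = ""  # semantic construct associated with the gesture

    def components(self) -> str:
        return "+".join(name for name, active in
                        (("move", self.move), ("hold", self.hold),
                         ("touch", self.touch)) if active)

if __name__ == "__main__":
    tap_wheel = TangibleGesture("steering wheel", move=False, hold=True,
                                touch=True, meaning="next track")
    print(tap_wheel.components())  # hold+touch
```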