193 research outputs found

    Study and development of sensorimotor interfaces for robotic human augmentation

    This thesis presents my research contribution to robotics and haptics in the context of human augmentation. In particular, this document is concerned with bodily or sensorimotor augmentation, that is, the augmentation of humans by supernumerary robotic limbs (SRL). Sensorimotor augmentation is a young field in robotics, and in combination with neuroscience it has already made great strides over the past ten years. All of the research work I produced during my Ph.D. focused on the development and study of a fundamental technology for human augmentation by robotics: the sensorimotor interface. This new concept denotes a wearable device with two main purposes: the first is to extract the input generated by the movement of the user's body, and the second is to provide the user's somatosensory system with haptic feedback. The thesis starts with an exploratory study of integration between robotic and haptic devices, intending to combine state-of-the-art devices. This study made clear that we still need to understand how to improve the interface so that users feel a sense of agency when using an augmentative robot. At this point, the thesis forks into two alternative paths toward improving the interaction between the human and the robot. The first path tackles two aspects concerning the haptic feedback of sensorimotor interfaces: the choice of its positioning and the effectiveness of discrete haptic feedback. On the second path, we attempted to lighten a supernumerary finger, focusing on agility of use and lightness of the device. One of the main findings of this thesis is that stroke patients consider haptic feedback helpful, but the cumbersomeness of the devices remains a deterrent to their use. The preliminary results presented here show that both paths worked: the presence of haptic feedback improves the performance of sensorimotor interfaces; co-locating the haptic feedback with the point where input is taken from the human body can improve the effectiveness of these interfaces; and a lightweight SRL is a viable solution for recovering the grasping function.
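    As a rough illustration of the two roles the abstract assigns to a sensorimotor interface (extracting motion input from the body and returning haptic feedback), the following Python sketch shows a minimal control loop. All class names, rates, and the force mapping are hypothetical, not taken from the thesis.

```python
import time

class SensorimotorInterface:
    """Hypothetical wearable: one input channel, one haptic output channel."""

    def read_body_input(self):
        # Stub: would return e.g. a wrist flexion angle from a wearable IMU.
        return 0.0

    def render_haptic(self, intensity):
        # Stub: would drive a vibrotactile actuator with a value in [0, 1].
        pass

class SupernumeraryFinger:
    """Hypothetical SRL: commanded by the body input, reports grasp force."""

    def actuate(self, angle):
        # Stub: would command the finger and return the sensed grasp force.
        return 0.5

def control_loop(interface, finger, hz=100, steps=200):
    period = 1.0 / hz
    for _ in range(steps):
        angle = interface.read_body_input()        # role 1: motion input
        force = finger.actuate(angle)              # map input onto the SRL
        interface.render_haptic(min(force, 1.0))   # role 2: haptic feedback
        time.sleep(period)

control_loop(SensorimotorInterface(), SupernumeraryFinger())
```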

    Ubiquitous haptic feedback in human-computer interaction through electrical muscle stimulation

    [no abstract]

    Artificial Intelligence and Ambient Intelligence

    This book includes a series of scientific papers published in the Special Issue on Artificial Intelligence and Ambient Intelligence of the MDPI journal Electronics. The book starts with an opinion paper on “Relations between Electronics, Artificial Intelligence and Information Society through Information Society Rules”, presenting the relations between the information society, electronics, and artificial intelligence, mainly through twenty-four IS laws. The book then continues with a series of technical papers that present applications of Artificial Intelligence and Ambient Intelligence in a variety of fields, including affective computing, privacy and security in smart environments, and robotics. More specifically, the first part presents the use of Artificial Intelligence (AI) methods in combination with wearable devices (e.g., smartphones and wristbands) for recognizing human psychological states (e.g., emotions and cognitive load). The second part presents the use of AI methods in combination with laser sensors or Wi-Fi signals for improving security in smart buildings by identifying and counting visitors. The last part presents the use of AI methods in robotics for improving robots' object-gripping, manipulation, and perception abilities. The language of the book is rather technical; the intended audience is therefore scientists and researchers with at least basic knowledge of computer science.

    User-Defined Gestures with Physical Props in Virtual Reality

    When building virtual reality (VR) environments, designers use physical props to improve immersion and realism. However, people may want to perform actions that are not supported by physical objects, for example, duplicating an object in a Computer-Aided Design (CAD) program or darkening the sky in an open-world game. In this thesis, I present an elicitation study in which I asked 21 participants to choose from 95 props to perform manipulative gestures for 20 referents (actions) typically found in CAD software or open-world games. I describe the resulting gestures as context-free grammars, capturing the actions taken by our participants, their prop choices, and how the props were used in each gesture. I present agreement scores for both gesture choices and prop choices; to accomplish the latter, I developed a generalized agreement score that compares sets of selections rather than a single selection, enabling new types of elicitation studies. I found that props were selected according to their resemblance to virtual objects and the actions they afforded; that gesture and prop agreement depended on the referent, with some referents leading to similar gesture choices while others led to similar prop choices; and that a small set of carefully chosen props can support a wide variety of gestures.
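    One plausible way to compute an agreement score over sets of selections, sketched below in Python, is to replace the pairwise equality test of classic agreement rates with the Jaccard similarity of the sets each pair of participants chose. This is an illustrative assumption; the thesis defines its own generalized score, which may differ.

```python
from itertools import combinations

def jaccard(a, b):
    """Similarity of two selection sets: |A & B| / |A | B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def set_agreement(selections):
    """Mean pairwise Jaccard similarity across participants' selected sets."""
    pairs = list(combinations(selections, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs) if pairs else 1.0

# Example: three participants' prop choices for one referent.
print(set_agreement([{"cube", "wand"}, {"cube"}, {"cube", "wand"}]))  # ~0.667
```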

    On the critical role of the sensorimotor loop on the design of interaction techniques and interactive devices

    People interact with their environment through their perceptual and motor skills: this is how they both use the objects around them and perceive the world. Interactive systems are examples of such objects. Therefore, to design such objects, we must understand how people perceive and manipulate them. For example, haptics relates both to the human sense of touch and to what I call the motor ability. I address a number of research questions related to the design and implementation of haptic, gestural, and touch interfaces, and present examples of contributions on these topics. More interestingly, perception, cognition, and action are not separate processes but an integrated combination of them called the sensorimotor loop. Interactive systems follow the same overall scheme, with differences that create the complementarity of humans and machines. The interaction phenomenon is a set of connections between human sensorimotor loops and the execution loops of interactive systems. It connects inputs with outputs, users with systems, and the physical world with cognition and computing, in what I call the Human-System loop. This model provides a complete overview of the interaction phenomenon. It helps to identify the limiting factors of interaction that we can address to improve the design of interaction techniques and interactive devices.
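    Read operationally, the Human-System loop couples a human sensorimotor loop (perceive, decide, act) with a system execution loop (read input, compute, display). The toy Python sketch below makes that coupling concrete; the policy, gain, and state model are invented for illustration and do not come from the manuscript.

```python
def human_step(percept, goal):
    # Perception and cognition produce a motor action (toy policy:
    # move a fraction of the remaining distance toward the goal).
    return 0.5 * (goal - percept)

def system_step(action, state):
    # The system's execution loop: map input to new state, then display it.
    state += action
    return state, state  # (new internal state, what the user perceives)

state, percept, goal = 0.0, 0.0, 10.0
for step in range(6):
    action = human_step(percept, goal)           # human sensorimotor loop
    state, percept = system_step(action, state)  # system execution loop
    print(f"step {step}: state = {state:.2f}")   # converges toward the goal
```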

    Modeling of piezoresistive sensors and use of a data-glove-based interface for impedance control of robotic manipulators

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Ciencias Físicas, Departamento de Arquitectura de Computadores y Automática; defended on 21-02-2014.

    Tabletop tangible maps and diagrams for visually impaired users

    Despite their omnipresence and essential role in our everyday lives, online and printed graphical representations are inaccessible to visually impaired people because they cannot be explored using the sense of touch. The gap between sighted and visually impaired people's access to graphical representations is constantly growing due to the increasing development and availability of online and dynamic representations that not only give sighted people the opportunity to access large amounts of data, but also to interact with them using advanced functionalities such as panning, zooming and filtering. In contrast, the techniques currently used to make maps and diagrams accessible to visually impaired people require the intervention of tactile graphics specialists and result in non-interactive tactile representations. However, based on recent advances in the automatic production of content, we can expect in the coming years a growth in the availability of adapted content, which must go hand-in-hand with the development of affordable and usable devices. In particular, these devices should make full use of visually impaired users' perceptual capacities and support the display of interactive and updatable representations. A number of research prototypes have already been developed. Some rely on a digital representation only, and although they have the great advantage of being instantly updatable, they provide very limited tactile feedback, which makes their exploration cognitively demanding and imposes heavy restrictions on content. On the other hand, most prototypes that rely on digital and physical representations allow for a two-handed exploration that is both natural and efficient at retrieving and encoding spatial information, but they are physically limited by the use of a tactile overlay, making them impossible to update. Other alternatives are either extremely expensive (e.g. braille tablets) or offer a slow and limited way to update the representation (e.g. maps that are 3D-printed based on users' inputs). In this thesis, we propose to bridge the gap between these two approaches by investigating how to develop physical interactive maps and diagrams that support two-handed exploration, while at the same time being updatable and affordable. To do so, we build on previous research on Tangible User Interfaces (TUI), and particularly on (actuated) tabletop TUIs, two fields of research that have surprisingly received very little interest concerning visually impaired users. Based on the design, implementation and evaluation of three tabletop TUIs (the Tangible Reels, the Tangible Box and BotMap), we propose innovative non-visual interaction techniques and technical solutions that will hopefully serve as a basis for the design of future TUIs for visually impaired users, and encourage their development and use. We investigate how tangible maps and diagrams can support various tasks, ranging from the (re)construction of diagrams to the exploration of maps by panning and zooming. From a theoretical perspective, we contribute to the research on accessible graphical representations by highlighting how research on maps can feed research on diagrams and vice versa. We also propose a classification and comparison of existing prototypes to deliver a structured overview of current research.
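    As a hedged sketch of the bookkeeping a pan-and-zoom tangible map must do, the following Python function maps a landmark's world coordinates to table coordinates for a given pan offset and zoom factor, and reports whether the landmark currently falls on the table. The names, units, and the linear viewport model are assumptions for illustration, not BotMap's actual implementation.

```python
def world_to_table(wx, wy, pan_x, pan_y, zoom, table_w=1.0, table_h=1.0):
    """Map world coordinates to table coordinates; None if off the table."""
    tx = (wx - pan_x) * zoom
    ty = (wy - pan_y) * zoom
    if 0.0 <= tx <= table_w and 0.0 <= ty <= table_h:
        return tx, ty
    return None  # landmark lies outside the current viewport

# Zooming in doubles the scale around the panned origin:
print(world_to_table(3.0, 4.0, pan_x=2.5, pan_y=3.5, zoom=2.0))  # (1.0, 1.0)
```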

    Multimodal, Embodied and Location-Aware Interaction

    This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide a continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric, body-based case and a large-scale, exocentric, 'world-based' case. BodySpace is a gesture-based application which utilises multiple sensors and pattern recognition to enable the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction. GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying the uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling, and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites of interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user for varying trajectory width and context. We demonstrate the possibility of creating a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
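    The Monte Carlo uncertainty propagation described for GpsTunes can be sketched in a few lines of Python: sample many possible future positions from noisy heading and speed estimates, then report the fraction of samples that reach a site of interest. The noise magnitudes, straight-line motion model, and function names are assumptions for illustration, not the system's actual implementation.

```python
import math, random

def propagate(x, y, heading, speed, horizon_s=10.0, n_samples=1000,
              heading_sd=0.2, speed_sd=0.3):
    """Sample possible future positions under heading/speed uncertainty."""
    samples = []
    for _ in range(n_samples):
        h = random.gauss(heading, heading_sd)        # noisy heading (rad)
        v = max(0.0, random.gauss(speed, speed_sd))  # noisy speed (m/s)
        samples.append((x + v * horizon_s * math.cos(h),
                        y + v * horizon_s * math.sin(h)))
    return samples

def p_reach(samples, site_x, site_y, radius=5.0):
    """Fraction of sampled futures that land within radius of the site."""
    hits = sum(1 for sx, sy in samples
               if math.hypot(sx - site_x, sy - site_y) <= radius)
    return hits / len(samples)

samples = propagate(0.0, 0.0, heading=0.0, speed=1.4)  # walking pace
print(p_reach(samples, site_x=14.0, site_y=0.0))       # e.g. ~0.5-0.7
```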
