115 research outputs found

    Shopping Using Gesture-Driven Interaction


    Tactile and Touchless Sensors Printed on Flexible Textile Substrates for Gesture Recognition

    The main objective of this thesis is the development of new sensors and actuators using Printed Electronics technology. Conductive, semiconducting and dielectric polymeric materials are deposited on flexible and/or elastic substrates; with suitable designs and application processes, it is possible to manufacture sensors capable of interacting with the environment. In this way, specific sensing functionalities can be incorporated into substrates such as textile fabrics. Additionally, it is necessary to include electronic systems capable of processing and logging the acquired data. In the development of these sensors and actuators, the physical properties of the different materials are combined precisely: multilayer structures are designed in which the properties of some materials interact with those of others. The result is a sensor capable of capturing physical variations in the environment and converting them into signals that can be processed and finally transformed into data. First, a tactile sensor printed on a textile substrate for 2D gesture recognition was developed. This sensor consists of a matrix of small capacitive sensors based on a capacitor-type structure, designed so that if a finger or another object with capacitive properties gets close enough, their behaviour varies measurably. The small sensors are arranged in the matrix as in a grid, each at a position determined by a row and a column. The capacitance of each small sensor is measured periodically to assess whether significant variations have occurred; to do so, the sensor capacitance is converted into a value that is subsequently processed digitally. Second, to improve the effectiveness of the developed 2D touch sensors in use, a way of incorporating an actuator system was studied, so that the user receives feedback that the command or action was recognized. To achieve this, the capacitive sensor grid was complemented with an electroluminescent display, also printed. The final prototype offers a solution that combines a 2D tactile sensor with an electroluminescent actuator on a printed textile substrate. Next, a 3D gesture sensor was developed using a combination of sensors, also printed on a textile substrate. In this type of 3D sensor, a signal is transmitted that generates an electric field over the sensors, using a transmission electrode located very close to them; the generated field is picked up by the receiving sensors, which are based on electrodes acting as receivers, and converted into electrical signals. If a person places a hand within the emission area, the electric field lines are disturbed, because the intrinsic conductivity of the human body deflects the field lines to ground. This disturbance affects the signals received by the electrodes. The variations captured by all electrodes are processed together to determine the position and movement of the hand over the sensor surface. Finally, an improved 3D gesture sensor was developed. Like the previous one, it allows contactless gesture detection, but with an increased detection range. In addition to printed electronics technology, two other textile manufacturing technologies were evaluated.
Ferri Pascual, J. (2020). Tactile and Touchless Sensors Printed on Flexible Textile Substrates for Gesture Recognition [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/153075
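
As a rough illustration of the two sensing schemes the abstract describes, the sketch below shows (a) a periodic scan of a row/column capacitive matrix that flags the cell whose capacitance deviates most from a stored baseline, and (b) a weighted-centroid estimate of hand position from the signal drop at a set of receiver electrodes. All names (the readout function, thresholds, electrode positions) are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative sketch only: the acquisition function and thresholds are
# assumptions, not the implementation described in the thesis.
from typing import Callable, List, Optional, Tuple

def scan_touch_matrix(
    read_capacitance: Callable[[int, int], float],  # hypothetical per-(row, col) readout
    baseline: List[List[float]],                    # per-cell capacitance with no touch
    threshold: float = 0.5,                         # change (e.g. pF) that counts as a touch
) -> Optional[Tuple[int, int]]:
    """Called periodically: return the (row, col) with the largest
    above-threshold capacitance change, i.e. the touched cell."""
    best, best_delta = None, threshold
    for r, row in enumerate(baseline):
        for c, base in enumerate(row):
            delta = abs(read_capacitance(r, c) - base)
            if delta > best_delta:
                best, best_delta = (r, c), delta
    return best

def estimate_hand_xy(
    amplitudes: List[float],               # received signal per electrode
    positions: List[Tuple[float, float]],  # (x, y) of each receiver electrode
    rest: List[float],                     # amplitudes with no hand present
) -> Optional[Tuple[float, float]]:
    """Weighted centroid of the per-electrode disturbance: a hand near an
    electrode shunts field lines to ground and lowers its received signal,
    so each electrode is weighted by its signal drop."""
    drops = [max(r - a, 0.0) for a, r in zip(amplitudes, rest)]
    total = sum(drops)
    if total == 0.0:
        return None  # no disturbance -> no hand detected
    x = sum(w * p[0] for w, p in zip(drops, positions)) / total
    y = sum(w * p[1] for w, p in zip(drops, positions)) / total
    return (x, y)
```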

    NPS in the News Weekly Media Report - Apr. 12-18, 2022


    Gesture Based Control of Semi-Autonomous Vehicles

    The objective of this investigation is to explore the use of hand gestures to control semi-autonomous vehicles, such as quadcopters, using realistic, physics-based simulations. This involves identifying natural gestures to control basic functions of a vehicle, such as maneuvering and onboard equipment operation, and building simulations using the Unity game engine to investigate the preferred use of those gestures. In addition to creating a realistic operating experience, human factors associated with limitations on physical hand motion and with information management are also considered in the simulation development process. Testing with external participants using a recreational quadcopter simulation built in Unity was conducted to assess the suitability of the simulation and preferences between a joystick approach and the gesture-based approach. Initial feedback indicated that the simulation represented the actual vehicle performance well and that the joystick was preferred over the gesture-based approach. Improvements to the gesture-based control are documented as additional features are added to the simulation, such as basic maneuver training and additional vehicle positioning information to help the user learn the gesture-based interface, along with the implementation of active control concepts to interpret and apply vehicle forces and torques. Tests were also conducted with an actual ground vehicle to investigate whether knowledge and skill from the simulated environment transfer to a real-life scenario. To assess this, an immersive virtual reality (VR) simulation was built in Unity as a training environment for learning how to control a remote-control car using gestures; this was then followed by control of the actual ground vehicle. Observations and participant feedback indicated that the range of hand movement and the hand positions transferred well to the actual demonstration. This showed that the VR simulation environment provides a suitable learning experience and an environment from which to assess human performance, thus also validating the observations from earlier tests. Overall, results indicate that the gesture-based approach holds promise given the emergence of new technology, but additional work needs to be pursued, including algorithms that process gesture data into more stable and precise vehicle commands and training environments that familiarize users with this new interface concept.
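
As a hedged sketch of the kind of mapping such a system needs (not the thesis's actual algorithm), the snippet below converts a tracked hand pose into quadcopter attitude commands, with a dead zone and clamping to damp the hand jitter that makes raw gesture input less stable than a joystick. The pose fields and gains are invented for illustration.

```python
# Illustrative sketch: the hand-pose fields, gains and limits are
# assumptions, not the control mapping used in the study.
from dataclasses import dataclass

@dataclass
class HandPose:
    roll_deg: float    # hand tilt left/right
    pitch_deg: float   # hand tilt forward/back
    height_m: float    # hand height above the sensor

def to_command(pose: HandPose,
               dead_zone_deg: float = 5.0,
               max_cmd_deg: float = 20.0,
               hover_height_m: float = 0.15) -> dict:
    """Map a hand pose to (roll, pitch, climb-rate) commands."""
    def shape(angle: float) -> float:
        # Ignore small tremors, then clamp to the allowed command range.
        if abs(angle) < dead_zone_deg:
            return 0.0
        return max(-max_cmd_deg, min(max_cmd_deg, angle))

    return {
        "roll_cmd_deg": shape(pose.roll_deg),
        "pitch_cmd_deg": shape(pose.pitch_deg),
        # Raising/lowering the hand relative to a hover height adjusts climb.
        "climb_rate_mps": 2.0 * (pose.height_m - hover_height_m),
    }

print(to_command(HandPose(roll_deg=12.0, pitch_deg=-3.0, height_m=0.25)))
```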

    The Perception/Action loop: A Study on the Bandwidth of Human Perception and on Natural Human Computer Interaction for Immersive Virtual Reality Applications

    Get PDF
    Virtual Reality (VR) is an innovative technology which, in the last decade, has had widespread success, mainly thanks to the release of low-cost devices that have diversified its domains of application. The current work focuses on the general mechanisms underlying the perception/action loop in VR, in order to improve the design and implementation of applications for training and simulation in immersive VR, especially in the context of Industry 4.0 and the medical field. On the one hand, we want to understand how humans gather and process all the information presented in a virtual environment, through the evaluation of the visual system bandwidth. On the other hand, since the interface has to be a transparent layer allowing trainees to accomplish a task without directing any cognitive effort at the interaction itself, we compare two state-of-the-art solutions for selection and manipulation tasks: a touch-based one, the HTC Vive controllers, and a touchless vision-based one, the Leap Motion. To this aim we have developed ad hoc frameworks and methodologies. The software frameworks consist of VR scenarios in which the experimenter can choose the interaction modality and the headset to be used and can set experimental parameters, guaranteeing repeatable experiments and controlled conditions. The methodology includes the evaluation of performance, user experience and preferences, considering both quantitative and qualitative metrics derived from the collection and analysis of heterogeneous data, such as physiological and inertial sensor measurements, timings and self-assessment questionnaires. In general, VR has been found to be a powerful tool able to simulate specific situations in a realistic and involving way, eliciting the user's sense of presence without causing severe cybersickness, at least when interaction is limited to the peripersonal and near-action space. Moreover, when designing a VR application, it is possible to manipulate its features in order to trigger, or avoid triggering, specific emotions and to voluntarily create potentially stressful or relaxing situations. Considering the ability of trainees to perceive and process information presented in an immersive virtual environment, results show that, when people are given enough time to build a gist of the scene, they are able to recognize a change with 0.75 accuracy when up to 8 elements are in the scene. For interaction, when selection and manipulation tasks do not require fine movements, the controllers and the Leap Motion ensure comparable performance; when tasks are complex, the former turns out to be more stable and efficient, partly because the visual and audio feedback provided as a substitute for haptic feedback does not substantially improve performance in the touchless case.

    A comprehensive framework for the rapid prototyping of ubiquitous interaction

    In the interaction between humans and computational systems, many advances have been made in terms of hardware (e.g., smart devices with embedded sensors and multi-touch surfaces) and software (e.g., algorithms for the detection and tracking of touches, gestures and full-body movements). Now that we have the computational power and the devices to manage interactions between the physical and the digital world, the question is—what should we do? For the Human-Computer Interaction research community, answering this question means materializing Mark Weiser's vision of Ubiquitous Computing. In the desktop computing paradigm, the desktop metaphor is implemented by a graphical user interface operated via mouse and keyboard. Users are accustomed to employing artificial control devices whose operation has to be learned, and they interact in an environment that inhibits their faculties. For example, the mouse is a device that allows movement in a two-dimensional space, thus limiting the twenty-three degrees of freedom of the human hand. Ubiquitous Computing is an evolution in the history of computation: it aims at making the interface disappear and at integrating information processing into everyday objects with computational capabilities. In this way humans would no longer be forced to adapt to machines; instead, the technology would harmonize with the surrounding environment. Unlike the desktop case, ubiquitous systems make use of heterogeneous Input/Output devices (e.g., motion sensors, cameras and touch surfaces, among others) and interaction techniques such as touchless, multi-touch and tangible interaction. By reducing the physical constraints on interaction, ubiquitous technologies can enable interfaces with more expressive power (e.g., free-hand gestures) and are therefore expected to provide users with better tools to think, create and communicate. It appears clear that approaches based on classical user interfaces from the desktop computing world do not fit ubiquitous needs, for they were conceived for a single user interacting with a single computing system, seated at a workstation and looking at a vertical screen. To overcome the inadequacy of the existing paradigm, new models started to be developed that enable users to employ their skills effortlessly and lower the cognitive burden of interacting with computational machines. Ubiquitous interfaces are pervasive and thus invisible to their users, or they become invisible through successive interactions in which the users feel they are instantly and continuously successful. All the benefits advocated by ubiquitous interaction, like the invisible interface and a more natural interaction, come at a price: the design and development of such interactive systems raise new conceptual and practical challenges. Ubiquitous systems communicate with the real world by means of sensors, emitters and actuators. Sensors convert real-world inputs into digital data, while emitters and actuators are mostly used to provide digital or physical feedback (e.g., a speaker emitting sounds). Employing such a variety of hardware devices in a real application can be difficult, because their use requires knowledge of the underlying physics and many hours of programming work. Furthermore, data integration can be cumbersome, for every device vendor uses different programming interfaces and communication protocols. All these factors make the rapid prototyping of ubiquitous systems a challenging task.
Prototyping is a pivotal activity to foster innovation and creativity through the exploration of a design space. Nevertheless, while there are many prototyping tools and guidelines for traditional user interfaces, very few solutions have been developed for the holistic prototyping of ubiquitous systems. The tremendous number of different input devices, interaction techniques and physical environments envisioned by researchers poses a severe challenge for general and comprehensive development tools. All of this makes it difficult to work in a design and development space where practitioners need to be familiar with different related subjects involving software and hardware. Moreover, the technological context is further complicated by the fact that many ubiquitous technologies have only recently grown out of an embryonic stage and are still maturing; thus they lack stability, reliability and homogeneity. For these reasons, it is compelling to develop tool support for the programming of ubiquitous interaction. This thesis addresses this particular topic. The goal is to develop a general conceptual and software framework that uses hardware abstraction to lighten the prototyping process in the design of ubiquitous systems. The thesis is that, by abstracting from low-level details, it is possible to provide unified, coherent and consistent access to interaction devices independently of their implementation or communication protocols. In this dissertation the existing literature is reviewed, and it is pointed out that the field lacks frameworks providing such comprehensive and integrated support. Moreover, the objectives, the methodology to fulfill them and the major contributions of this work are described. Finally, the design of the proposed framework, its development in the form of a set of software libraries, its evaluation with real users and a use case are presented. Through the evaluation and the use case it is demonstrated that, by encompassing heterogeneous devices in a single design, it is possible to reduce the effort users need to develop interaction in ubiquitous environments.
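
The abstraction this dissertation argues for can be pictured with a small sketch like the one below: a uniform event interface behind which each concrete device driver hides its own vendor protocol. All names here (InputDevice, InputEvent, FakeTouchSurface) are invented for illustration and are not the framework's actual API.

```python
# Minimal sketch of hardware abstraction for prototyping, under assumed
# names; not the API of the framework described in the dissertation.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class InputEvent:
    device: str                 # which device produced the event
    kind: str                   # e.g. "touch", "gesture", "motion"
    data: Dict[str, float] = field(default_factory=dict)

class InputDevice(ABC):
    """Uniform interface: vendor protocols stay inside each subclass."""
    def __init__(self) -> None:
        self._listeners: List[Callable[[InputEvent], None]] = []

    def subscribe(self, listener: Callable[[InputEvent], None]) -> None:
        self._listeners.append(listener)

    def _emit(self, event: InputEvent) -> None:
        for listener in self._listeners:
            listener(event)

    @abstractmethod
    def poll(self) -> None:
        """Read the hardware and emit normalized events."""

class FakeTouchSurface(InputDevice):
    """Stand-in driver: a real one would translate the vendor's protocol."""
    def poll(self) -> None:
        self._emit(InputEvent("touch-surface", "touch", {"x": 0.4, "y": 0.7}))

# A prototype subscribes once and never sees device-specific details.
surface = FakeTouchSurface()
surface.subscribe(lambda e: print(e.kind, e.data))
surface.poll()
```

The design choice this illustrates is the one the thesis defends: swapping a touch surface for, say, a motion sensor changes only the driver subclass, not the prototype code that consumes events.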

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 406)

    This bibliography lists 346 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during Oct. 1995. Subject coverage includes: aerospace medicine and physiology, life support systems and man/system technology, protective clothing, exobiology and extraterrestrial life, planetary biology, and flight crew behavior and performance.

    Immersive Telerobotic Modular Framework using stereoscopic HMD's

    Telepresence is the term used to describe the set of technologies that enable people to feel or appear as if they were present in a location in which they are not physically present. Immersive telepresence is the next step: the objective is to make the operator feel immersed in a remote location, engaging as many senses as possible and using new technologies such as stereoscopic vision, panoramic vision, 3D audio and Head Mounted Displays (HMDs). Telerobotics is a subfield of telepresence that merges it with robotics, providing the operator with the ability to control a robot remotely. There is a gap in the current state-of-the-art solutions, since telerobotics has not, in general, benefited from the recent developments in control and human-computer interface technology. Besides the lack of studies investing in immersive solutions such as stereoscopic vision, immersive telerobotics can also include more intuitive control capabilities, such as haptic controls or movement- and gesture-based controls, which feel more natural and translate more naturally into the system. In this work we propose an alternative approach to common teleoperation methods, such as those found, for instance, in search and rescue (SAR) robots. Our main focus is to test the impact that immersive characteristics like stereoscopic vision and HMDs can bring to telepresence robots and telerobotics systems. Since this is a new and growing field, we also aim at a modular framework capable of being extended with different robots, in order to test different case studies and to provide researchers with an extensible platform. We claim that with immersive solutions the operator of a telerobotics system gains a more intuitive perception of the remote environment and is less prone to errors induced by a wrong perception of, and interaction with, the teleoperated robot. We believe that the operator's depth perception and situational awareness are significantly improved when using immersive solutions, and that performance, both in terms of operation time on a task and of successful identification of objects of interest, is also enhanced. We have developed a low-cost immersive telerobotic modular platform that can be extended with hardware-based Android applications on the slave side (robot side).
This solution makes it possible to use the same platform in any type of case study by simply extending it with different robots. In addition to the modular and extensible framework, the project features three main interaction modules:
* a module that supports a head mounted display and head tracking in the operator environment
* streaming of stereoscopic vision through Android with software synchronization
* a module that enables the operator to control the robot with positional tracking
On the hardware side, not only has the mobile area (e.g. smartphones, tablets, Arduino) expanded greatly in recent years, but we have also seen the rise of low-cost immersive technologies like the Oculus Rift DK2, Google Cardboard or Leap Motion. These cost-effective hardware solutions, together with the advances in video and audio streaming provided by WebRTC technologies, driven mostly by Google, make the development of a real-time software solution possible. There is currently a lack of real-time software methods for stereoscopy, but the arrival of WebRTC technologies can be a game changer. We take advantage of this recent evolution in hardware and software to keep the platform economical and low cost while at the same time raising the bar in terms of the performance and technical specifications of this kind of platform.
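
To make the head-tracking module concrete, here is a hedged sketch (with invented names, not the framework's code) of the basic mapping such a module performs: HMD yaw/pitch is clamped to a camera gimbal's mechanical range and converted linearly to RC-servo pulse widths on the robot side.

```python
# Illustrative only: servo ranges and angle limits are assumptions,
# not part of the framework described above.
def angle_to_servo(angle_deg: float, limit_deg: float,
                   servo_min: int = 1000, servo_max: int = 2000) -> int:
    """Clamp an HMD angle to the gimbal's mechanical range and map it
    linearly onto a standard RC-servo pulse width in microseconds."""
    a = max(-limit_deg, min(limit_deg, angle_deg))
    t = (a + limit_deg) / (2.0 * limit_deg)          # 0.0 .. 1.0
    return int(servo_min + t * (servo_max - servo_min))

def head_pose_to_commands(yaw_deg: float, pitch_deg: float) -> dict:
    # Pan follows yaw within +/-90 deg; tilt follows pitch within +/-45 deg.
    return {
        "pan_us": angle_to_servo(yaw_deg, 90.0),
        "tilt_us": angle_to_servo(pitch_deg, 45.0),
    }

print(head_pose_to_commands(yaw_deg=30.0, pitch_deg=-10.0))
```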

    Toward multimodality: gesture and vibrotactile feedback in natural human computer interaction

    In the present work, users' interaction with advanced systems has been investigated in different application domains and with respect to different interfaces. The methods employed were carefully devised to respond to the peculiarities of the interfaces under examination, and from the results we could extract a set of recommendations for developers. The first application domain examined is the home. In particular, we addressed the design of a gestural interface for controlling a lighting system embedded into a piece of furniture in the kitchen. A sample of end users was observed while interacting with a virtual simulation of the interface, and from the video analysis of users' spontaneous behaviors we could derive a set of significant interaction trends. The second application domain involved the exploration of an urban environment while on the move. In a comparative study, a haptic-audio interface and an audio-visual interface were employed for guiding users toward landmarks and for providing them with information. We showed that the two systems were equally efficient in supporting the users and were both well received. In a navigational task we compared two tactile displays, each embedded in a different wearable device, i.e., a glove and a vest. Despite the differences in shape and size, both systems successfully directed users to the target; their strengths and flaws were pointed out and commented on by users. In a similar context, two devices supporting Augmented Reality technology, i.e., a pair of smartglasses and a smartphone, were compared. The experiment allowed us to identify the circumstances favoring the use of the smartglasses or the smartphone. Considered altogether, our findings suggest a set of recommendations for developers of advanced systems. First, we outline the importance of properly involving end users in order to unveil intuitive interaction modalities with gestural interfaces. We also highlight the importance of giving the user the chance to choose the interaction mode that best fits the contextual characteristics and to adjust the features of every interaction mode. Finally, we outline the potential of wearable devices to support interaction on the move and the importance of finding a proper balance between the amount of information conveyed to the user and the size of the device.
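
As an illustration of the kind of logic behind such vibrotactile guidance (a hedged sketch, not the glove or vest hardware evaluated here), a wearable display with a ring of evenly spaced motors can steer a walker by pulsing the motor closest to the bearing of the target, with intensity growing as the target gets nearer. The motor layout and pulse interface are assumptions.

```python
# Hedged sketch: motor layout and the intensity model are assumptions,
# not the devices compared in the study.
def select_motor(user_heading_deg: float, target_bearing_deg: float,
                 n_motors: int = 8) -> int:
    """Pick which of n_motors (evenly spaced around the torso, motor 0 at
    the front) to pulse so the vibration points toward the target."""
    relative = (target_bearing_deg - user_heading_deg) % 360.0
    return round(relative / (360.0 / n_motors)) % n_motors

def pulse_strength(distance_m: float, max_range_m: float = 50.0) -> float:
    """Vibrate harder as the user gets closer to the target."""
    closeness = 1.0 - min(distance_m, max_range_m) / max_range_m
    return 0.2 + 0.8 * closeness   # keep a perceptible floor of 0.2

# Target 45 degrees to the user's right, 10 m away:
print(select_motor(user_heading_deg=0.0, target_bearing_deg=45.0),
      round(pulse_strength(10.0), 2))
```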