88 research outputs found

    Haptics Rendering and Applications

    Get PDF
    There has been significant progress in haptic technologies, but the incorporation of haptics into virtual environments is still in its infancy. A wide range of human activities, including communication, education, art, entertainment, commerce and science, would change forever if we learned how to capture, manipulate and reproduce haptic sensory stimuli that are nearly indistinguishable from reality. For the field to move forward, many commercial and technological barriers need to be overcome. By rendering how objects feel through haptic technology, we communicate information that might reflect a desire to speak a physically based language that has never been explored before. Due to constant improvement in haptic technology and increasing levels of research into and development of haptics-related algorithms, protocols and devices, haptics technology is believed to have a promising future.

    Haptics: Science, Technology, Applications

    Get PDF
    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Haptic Guidance for Extended Range Telepresence

    Get PDF
    A novel navigation assistance system for extended range telepresence is presented. The haptic information from the target environment is augmented with guidance commands that assist the user in reaching desired goals in the arbitrarily large target environment from the spatially restricted user environment. Furthermore, a semi-mobile haptic interface was developed whose lightweight design and setup configuration above the user provide safe operation and high force display quality.
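
    The abstract does not state how the guidance commands are combined with the force feedback rendered from the target environment. A minimal Python sketch of one plausible scheme follows: a saturated virtual spring that pulls the haptic device towards the current goal is added to the rendered environment force. The function name, stiffness and force cap are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def guided_force(env_force, tool_pos, goal_pos, k_guide=50.0, max_guide=3.0):
            """Augment the rendered environment force with a guidance force.

            env_force : (3,) force rendered from the target environment [N]
            tool_pos  : (3,) current position of the haptic tool [m]
            goal_pos  : (3,) desired goal position in the target environment [m]
            """
            guide = k_guide * (np.asarray(goal_pos, float) - np.asarray(tool_pos, float))
            norm = np.linalg.norm(guide)
            if norm > max_guide:
                guide *= max_guide / norm             # cap the guidance component
            return np.asarray(env_force, float) + guide   # force sent to the device

    Capping the guidance term keeps the assistance from masking the contact forces of the remote environment.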

    Development of training tools for haptic teleoperation of a humanoid robot

    Get PDF
    Master's thesis in Mechanical Engineering. In robotics, the teleoperation of biped humanoids is one of the most exciting topics. It offers the possibility of bypassing complex dynamic models through learning-by-demonstration algorithms driven by human interaction. To this end, the Humanoid Project at the University of Aveiro (PHUA) is engaged in the development of a 27 degree-of-freedom full-body humanoid platform teleoperated by means of haptic devices. The current project also comprises a robot model that has been imported into the Virtual Robot Experimentation Platform (V-REP). Using the simulator allows multiple exercises with greater speed and shorter setup times than teleoperating the real robot, besides providing more safety for the platform and the operator during tests. With the simulator, the user can perform tests and progress towards the reproduction of human movement through the interaction of two haptic devices that provide force feedback to the operator. The performed maneuvers have their kinematic and dynamic data stored for later application in learning-by-demonstration algorithms. However, the production of more complex and detailed movements requires considerable motor skill from the operator. Due to the continuous turnover of users in the PHUA, newly arrived operators need an adaptation period to develop an affinity with the complex control system. This work focuses on developing methodologies to shorten the training process. Thanks to the versatility of customization provided by V-REP, it was possible to implement interfaces that use visual and haptic guidance to enhance the learning capabilities of the operator. A dedicated workstation, new formulations and support tools that control the simulation were developed to create more intuitive control over the humanoid platform. Operators were instructed to reproduce complex 3D movements under several training conditions (with visual and haptic feedback, only haptic feedback, only visual feedback, with guidance tools and without guidance). Performance was measured in terms of speed, drift from the intended trajectory, response to the drift and amplitude of the movement (one possible drift metric is sketched after this abstract). The findings of this study indicate that, with the newly implemented mechanisms, operators are able to gain control over the humanoid platform within a relatively short period of training. Operators who used the guidance programs needed an even shorter training period while exhibiting high performance across the overall system. These findings support the role of haptic guidance in acquiring kinesthetic memory in high-DOF systems.
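
    The exact drift metric used in the evaluation is not given in the abstract. The following minimal Python sketch shows one plausible measure, the mean distance from each sample of the executed motion to the nearest point of the intended trajectory; the function and its definition are assumptions for illustration, not the author's formulation.

        import numpy as np

        def drift_from_trajectory(executed, intended):
            """Mean nearest-point distance between an executed path (N, 3)
            and the intended reference trajectory (M, 3)."""
            executed = np.asarray(executed, float)
            intended = np.asarray(intended, float)
            # pairwise distances between executed and intended samples
            d = np.linalg.norm(executed[:, None, :] - intended[None, :, :], axis=-1)
            return d.min(axis=1).mean()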

    ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    Get PDF
    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback, including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.

    Haptic feedback for touch devices using Leap Motion technology

    Get PDF
    This project was dedicated to creating an integration between the Leap Motion controller and the haptic device built from two Novint Falcon units by the MINT research group, where Leap Motion technology makes it possible to add new functionality and to improve the general characteristics and behavior of this prototype. More precisely, Leap Motion acts as the control platform of the haptic interaction device so that its end effector, a tactile plate developed in the same laboratory, can adapt continuously to the user's interaction with a previously defined virtual surface, providing the user with a tactile sensation that is as realistic as possible in terms of both shape and texture. Accordingly, an implementation was successfully developed using the Leap Motion and Novint Falcon SDKs, resulting in a real-time application that satisfies the main objective of the project. A general input format was also defined for the virtual surfaces to be simulated, and to this end the Leap Motion interaction area was programmed to emulate the virtual surface. On top of this, a platform was developed to predict at every instant the potential contact point between the user's finger and the defined surface, and all of this information is processed so that the tactile plate, which provides the texture simulation, adapts in real time to the user's interaction, giving the user a realistic sense of touch when interacting with the device. During the system test phase, the developed source code produced real and reliable results, and the behavior of the hardware platform correctly fulfilled the main objective of the project; however, some unexpected events were observed in the final behavior of the developed system. Accordingly, a short list of feasible solutions is provided (Section 4.5) which, given a reasonable amount of time and adequate funding, are entirely achievable. Several suggestions for possible improvements and lines of future work on the developed prototype are presented in the following section. Escuela Técnica Superior de Ingeniería de Telecomunicación, Universidad Politécnica de Cartagena.
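
    The abstract does not describe how the contact point is predicted. The sketch below illustrates one simple possibility, assuming the virtual surface is supplied as a height field z = surface_height(x, y) and the tracked fingertip is extrapolated along its current velocity until it meets the surface; all names and parameters are hypothetical and not taken from the project.

        import numpy as np

        def predict_contact(finger_pos, finger_vel, surface_height, dt=0.005, horizon=0.5):
            """Extrapolate the fingertip along its velocity and return the first
            point where it would meet the surface, or None within the horizon [s]."""
            p = np.asarray(finger_pos, float)
            v = np.asarray(finger_vel, float)
            for _ in range(int(horizon / dt)):
                p = p + v * dt
                z_surf = surface_height(p[0], p[1])
                if p[2] <= z_surf:               # fingertip has reached the surface
                    return np.array([p[0], p[1], z_surf])
            return None

        # Example: a fingertip 5 cm above a flat surface, moving straight down
        print(predict_contact([0.0, 0.0, 0.05], [0.0, 0.0, -0.2], lambda x, y: 0.0))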

    Investigating Embodied Interaction in Near-Field Perception-Action Re-Calibration on Performance in Immersive Virtual Environments

    Get PDF
    Immersive Virtual Environments (IVEs) are becoming more accessible and more widely utilized for training. Previous research has shown that the matching of visual and proprioceptive information is important for calibration. Many state-of-the-art Virtual Reality (VR) systems, commonly known as Immersive Virtual Environments (IVEs), are created for training users in tasks that require accurate manual dexterity. Unfortunately, these systems can suffer from technical limitations that may force de-coupling of visual and proprioceptive information due to interference, latency, and tracking error. It has also been suggested that closed-loop feedback of travel and locomotion in an IVE can overcome compression of visually perceived depth at medium-field distances in the virtual world [33, 47]. Very few experiments have examined the carryover effects of multi-sensory feedback in IVEs during manual dexterous 3D user interaction in overcoming distortions in near-field or interaction-space depth perception, and the relative importance of visual and proprioceptive information in calibrating users' distance judgments. In the first part of this work, we examined the recalibration of movements when the visually reached distance is scaled differently than the physically reached distance. We present an empirical evaluation of how visually distorted movements affect users' reach to near-field targets in an IVE. In a between-subjects design, participants provided manual reaching distance estimates during three sessions: a baseline measure without feedback (open-loop distance estimation), a calibration session with visual and proprioceptive feedback (closed-loop distance estimation), and a post-interaction session without feedback (open-loop distance estimation). Subjects were randomly assigned to one of three visual feedback conditions in the closed-loop session, during which they reached to the target while holding a tracked stylus: i) Minus condition (-20% gain) in which the visual stylus appeared at 80% of the distance of the physical stylus, ii) Neutral condition (0% or no gain) in which the visual stylus was co-located with the physical stylus, and iii) Plus condition (+20% gain) in which the visual stylus appeared at 120% of the distance of the physical stylus (this gain mapping is sketched after the abstract). In all conditions there is evidence of visuo-motor calibration, in that users' accuracy in physically reaching to the target locations improved over trials. Scaled visual feedback was shown to calibrate distance judgments within an IVE, with estimates being farthest in the post-interaction session after calibrating to visual information appearing nearer (Minus condition), and nearest after calibrating to visual information appearing farther (Plus condition). The same pattern was observed during closed-loop physical reach responses: participants generally tended to physically reach farther in the Minus condition and closer in the Plus condition to the perceived location of the targets, as compared to the Neutral condition, in which participants' physical reach was more accurate to the perceived location of the target. We then characterized the properties of human reach motion in the presence or absence of visuo-haptic feedback in real environments and IVEs within a participant's maximum arm reach. Our goal is to understand how physical reaching actions to the perceived location of targets in the presence or absence of visuo-haptic feedback differ between real and virtual viewing conditions.
Typically, participants reach to the perceived location of objects in the 3D environment to perform selection and manipulation actions during 3D interaction in applications such as virtual assembly or rehabilitation. In these tasks, participants typically have distorted perceptual information in the IVE as compared to the real world, in part due to technological limitations such as limited visual field of view, resolution, latency and jitter. In an empirical evaluation, we asked the following questions: i) how do the perceptual differences between the virtual and real world affect our ability to accurately reach to the locations of 3D objects, and ii) how do the motor responses of participants differ between the presence and absence of visual and haptic feedback? We examined factors such as the velocity and distance of physical reaching behavior between the real world and the IVE, both in the presence and absence of visuo-haptic information. The results suggest that physical reach responses vary systematically between real and virtual environments, especially in situations involving the presence or absence of visuo-haptic feedback. Our study provides a methodological framework for the analysis of reaching motions for selection and manipulation with novel 3D interaction metaphors, and for characterizing visuo-haptic versus non-visuo-haptic physical reaches in virtual and real-world situations. While research has demonstrated that self-avatars can enhance one's sense of presence and improve distance perception, the effects of self-avatar fidelity on near-field distance estimation have yet to be investigated. Thus, we investigated the effect of the visual fidelity of the self-avatar on the user's depth judgments, reach boundary perception and properties of physical reach motion. Previous research has demonstrated that a self-avatar representation of the user enhances the sense of presence [37] and that even a static notion of an avatar can improve distance estimation at far distances [59, 48]. In this study, performance with a virtual avatar was also compared to real-world performance. Three levels of fidelity were tested: 1) an immersive self-avatar with realistic limbs, 2) a low-fidelity self-avatar showing only joint locations, and 3) end-effector only. There were four primary hypotheses. First, we hypothesized that the mere existence of a self-avatar or end-effector position would calibrate users' interaction-space depth perception in an IVE; therefore, participants' distance judgments would improve after the calibration phase regardless of the self-avatar's visual fidelity. Second, the magnitude of the changes from pre-test to post-test would differ significantly based on the visual detail of the self-avatar presented to the participants (self-avatar vs. low-fidelity self-avatar and end-effector). Third, we predicted that distance estimation accuracy would be highest in the immersive self-avatar condition and lowest in the end-effector condition. Fourth, we predicted that the properties of physical reach responses would vary systematically between the different visual fidelity conditions. The results suggest that reach estimations become more accurate as the visual fidelity of the avatar increases, with accuracy for high-fidelity avatars approaching real-world performance as compared to the low-fidelity and end-effector conditions. There was also an effect of phase, in which the reach estimate became more accurate after receiving feedback in the calibration phase.
Overall, in all conditions reach estimations became more accurate after receiving feedback during a calibration phase. Lastly, we examined factors such as path length, time to complete the task, and the average velocity and acceleration of the physical reach motion, and compared all the IVE conditions with the real world. The results suggest that physical reach responses vary systematically between the VR viewing conditions and the real world.
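
    The three closed-loop feedback conditions of the first study amount to a multiplicative gain on the distance at which the visual stylus is rendered (80%, 100% or 120% of the physical reach distance). A minimal Python sketch of that mapping, with a hypothetical function name:

        def visual_stylus_distance(physical_distance, condition):
            """Distance at which the visual stylus is drawn for a given
            physical reach distance, per the three feedback conditions."""
            gains = {"Minus": 0.8,    # -20% gain: visual stylus appears nearer
                     "Neutral": 1.0,  # no gain: visual and physical stylus co-located
                     "Plus": 1.2}     # +20% gain: visual stylus appears farther
            return gains[condition] * physical_distance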

    A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation

    Get PDF
    The application of laser technologies in surgical interventions has been accepted in the clinical domain due to their atraumatic properties. In addition to the manual application of fibre-guided lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours has become established in ENT surgery. However, TLM requires many years of surgical training for tumour resection in order to preserve the function of adjacent organs and thus the patient's quality of life. The positioning of the microscopic laser applicator outside the patient can also impede a direct line of sight to the target area due to anatomical variability and can limit the working space. Further clinical challenges include positioning the laser focus on the tissue surface, imaging, planning and performing laser ablation, and motion of the target area during surgery. This dissertation aims to address the limitations of TLM through robotic approaches and intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no highly integrated platform for endoscopic delivery of focused laser radiation is available to date. Likewise, there are no known devices that incorporate scene information from endoscopic imaging into ablation planning and execution. For focusing the laser beam close to the target tissue, this work first presents miniaturised focusing optics that can be integrated into endoscopic systems. Experimental trials characterise the optical properties and the ablation performance. A robotic platform is realised for manipulation of the focusing optics. This is based on a variable-length continuum manipulator, which, together with a mechatronic actuation unit, enables movements of the endoscopic end effector in five degrees of freedom. The kinematic modelling and control of the robot are integrated into a modular framework that is evaluated experimentally. The manipulation of focused laser radiation also requires precise adjustment of the focal position on the tissue. For this purpose, visual, haptic and visual-haptic assistance functions are presented. These support the operator in setting an optimal working distance during teleoperation (one possible assistance law is sketched after this abstract). Advantages of visual-haptic assistance are demonstrated in a user study. The system performance and usability of the overall robotic system are assessed in an additional user study. Analogous to a clinical scenario, the subjects follow predefined target patterns with a laser spot; the mean positioning accuracy of the spot is 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser ablation. Experiments confirm a positive effect of the proposed automation concepts on non-contact laser surgery.
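
    The abstract states the purpose of the haptic assistance, keeping the applicator at the optimal working distance, but not its control law. A minimal Python sketch of one plausible law follows: a virtual spring about the focal distance with a small dead band. The function name, gain and dead band are assumptions, not the dissertation's implementation.

        def distance_assist_force(measured_distance, focal_distance, k=30.0, deadband=0.0005):
            """Assistive force along the approach axis [N] that nudges the
            operator's input device towards the optimal working distance [m]."""
            error = measured_distance - focal_distance
            if abs(error) < deadband:
                return 0.0           # within tolerance: no assistance
            return -k * error        # too far -> pull closer, too close -> push back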