14 research outputs found

    Co-design of forward-control and force-feedback methods for teleoperation of an unmanned aerial vehicle

    Full text link
The core hypothesis of this ongoing research project is that co-designing haptic-feedback and forward-control methods for shared-control teleoperation will enable the operator to more readily understand the shared-control algorithm, better enabling them to work collaboratively with the shared-control technology. This paper presents a novel method that can be used to co-design forward control and force feedback in unmanned aerial vehicle (UAV) teleoperation. In our method, a potential field is developed to quickly calculate the UAV's risk of collision online. We also create a simple proxy for the operator's confidence, based on the swiftness with which the operator sends commands to the UAV. We use these two factors to generate both a scale factor for a position-control scheme and the magnitude of the force feedback to the operator. Currently, this methodology is being implemented and refined in a 2D simulated environment. In the future, we will evaluate our methods with user-study experiments using a real UAV in a 3D environment. Accepted manuscript.
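The coupling described above can be sketched in a few lines. This is a minimal, hypothetical Python illustration of the idea, not the paper's implementation: the risk term, the confidence proxy, the influence radius `d_max`, and the gain constants are all assumptions made for the example.

```python
import numpy as np

def collision_risk(uav_pos, obstacles, d_max=2.0):
    """Repulsive potential-field style risk in [0, 1]: rises as the UAV
    nears the closest obstacle, zero beyond the influence radius d_max."""
    d = min(np.linalg.norm(uav_pos - o) for o in obstacles)
    if d >= d_max:
        return 0.0
    return (1.0 - d / d_max) ** 2

def operator_confidence(cmd_intervals, tau=0.5):
    """Proxy for confidence from command swiftness: shorter intervals
    between operator commands map to higher confidence in [0, 1]."""
    return float(np.exp(-np.mean(cmd_intervals) / tau))

def shared_control_gains(risk, confidence, k_force=5.0):
    """Couple the two factors: high risk shrinks the position-command
    scale factor and raises the force-feedback magnitude, while
    operator confidence counteracts the attenuation."""
    scale = (1.0 - risk) * (0.5 + 0.5 * confidence)
    force = k_force * risk * (1.0 - 0.5 * confidence)
    return scale, force
```

With zero risk and full confidence the commands pass through unscaled and no force is fed back; as risk grows, motion is attenuated and the operator feels a proportionally stronger cue.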

    Teleoperating a mobile manipulator and a free-flying camera from a single haptic device

    Get PDF
© 2016 IEEE. This paper presents a novel teleoperation system that allows the simultaneous and continuous command of a ground mobile manipulator and a free-flying camera, implemented using a UAV, from which the operator can monitor the task execution in real time. The proposed decoupled position and orientation workspace mapping allows a complex robot with an unbounded workspace to be teleoperated from a single haptic device with a bounded workspace. When the operator reaches the position and orientation boundaries of the haptic workspace, linear and angular velocity components are respectively added to the inputs of the mobile manipulator and the flying camera. A user study in a virtual environment has been conducted to evaluate the performance and the workload on the user before and after proper training. Analysis of the data shows that the system's complexity is not an obstacle to efficient performance. This is a first step towards the implementation of a teleoperation system with a real mobile manipulator and a low-cost quadrotor as the free-flying camera. Accepted version.
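The position-plus-velocity boundary behaviour described above can be sketched as follows. This is an illustrative Python approximation under assumed parameters (workspace radius `h_limit`, gains `k_pos` and `k_vel`), not the authors' mapping:

```python
import numpy as np

def rate_augmented_mapping(h_pos, h_limit=0.1, k_pos=5.0, k_vel=0.5):
    """Inside the haptic workspace the stylus position commands robot
    position; past the boundary, the overshoot commands a linear
    velocity, giving the bounded device an unbounded reach."""
    r = np.linalg.norm(h_pos)
    if r <= h_limit:
        return k_pos * h_pos, np.zeros(3)         # pure position command
    direction = h_pos / r
    clamped = direction * h_limit                  # point on the boundary
    overshoot = r - h_limit                        # distance past the boundary
    return k_pos * clamped, k_vel * overshoot * direction  # add velocity drift
```

The same scheme applied to orientation overshoot would yield the angular velocity component for the flying camera.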

    Hybrid Rugosity Mesostructures (HRMs) for fast and accurate rendering of fine haptic detail

    Get PDF
The haptic rendering of surface mesostructure (fine relief features) in dense triangle meshes requires special structures, equipment, and high sampling rates for detailed perception of rugged models. Low-cost approaches render haptic texture at the expense of fidelity of perception. We propose a faster method for surface haptic rendering using image-based Hybrid Rugosity Mesostructures (HRMs): paired maps of per-face heightfield displacements and normal maps, which are layered on top of a heavily decimated mesh, effectively adding greater surface detail than is actually present in the geometry. The haptic probe's force-response algorithm is modulated using the blended HRM coat to render dense surface features at much lower cost. The proposed method solves typical problems at edge crossings, concave foldings, and texture transitions. To validate the approach, a usability testbed framework was built to measure and compare experimental results of haptic rendering approaches on a common set of specially devised meshes, HRMs, and performance tests. Trial results of user-testing evaluations demonstrate the effectiveness of the proposed HRM technique, rendering accurate 3D surface detail at high sampling rates and yielding useful modeling and perception thresholds. Peer reviewed. Postprint (published version).
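The force-modulation idea, a heightfield displacing the contact surface while a normal map steers the force direction, can be sketched roughly as below. This is a hypothetical nearest-sample Python illustration (the stiffness `k`, the sampling, and the function names are assumptions; the paper's blended-coat algorithm is more involved):

```python
import numpy as np

def hrm_force(probe_pos, base_height, heightmap, normalmap, uv, k=800.0):
    """Penalty force modulated by an HRM sample: the heightfield
    displaces the contact surface, and the normal map steers the
    force direction (nearest-neighbour sampling for brevity)."""
    h, w = heightmap.shape
    i = int(uv[1] * (h - 1))                       # texel row from v
    j = int(uv[0] * (w - 1))                       # texel column from u
    surface = base_height + heightmap[i, j]        # displaced surface height
    depth = surface - probe_pos[2]                 # penetration depth
    if depth <= 0.0:
        return np.zeros(3)                         # no contact, no force
    n = normalmap[i, j] / np.linalg.norm(normalmap[i, j])
    return k * depth * n                           # spring force along map normal
```

Because only two texture lookups replace geometric collision queries against a dense mesh, the force loop can keep the kilohertz update rates haptics requires.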

    Motion Mappings for Continuous Bilateral Teleoperation

    Full text link
Mapping operator motions to a robot is a key problem in teleoperation. Due to differences between workspaces, such as object locations, it is particularly challenging to derive smooth motion mappings that fulfill different goals (e.g., picking objects with different poses on the two sides, or passing through key points). Indeed, most state-of-the-art methods rely on mode switches, leading to a discontinuous, low-transparency experience. In this paper, we propose a unified formulation for position, orientation, and velocity mappings based on the poses of objects of interest in the operator and robot workspaces. We apply it in the context of bilateral teleoperation. Two possible implementations of the proposed mappings are studied: an iterative approach based on locally weighted translations and rotations, and a neural-network approach. Evaluations are conducted both in simulation and using two torque-controlled Franka Emika Panda robots. Our results show that, despite longer training times, the neural-network approach provides faster mapping evaluations and lower interaction forces for the operator, which are crucial for continuous, real-time teleoperation. Comment: Accepted for publication in IEEE Robotics and Automation Letters (RA-L).
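The locally weighted idea, each pair of corresponding objects pulling nearby operator motions toward its own alignment, can be illustrated for the translation case. This is a one-pass, translation-only Python sketch under assumed Gaussian weights (`sigma` and the function are inventions for illustration; the paper's iterative formulation also handles rotations and velocities):

```python
import numpy as np

def locally_weighted_map(x, src_objects, dst_objects, sigma=0.2):
    """Map operator point x into the robot workspace: each object pair
    contributes the translation that would align it, weighted by the
    point's proximity to the source-side object."""
    weights, shift = [], np.zeros_like(x)
    for s, d in zip(src_objects, dst_objects):
        w = np.exp(-np.sum((x - s) ** 2) / (2.0 * sigma ** 2))
        weights.append(w)
        shift += w * (d - s)                      # translation aligning this pair
    total = sum(weights)
    return x + shift / total if total > 1e-9 else x
```

Near a source object the mapping reproduces that object's alignment exactly, and between objects it blends smoothly, which is what removes the need for discrete mode switches.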

    Haptic Zoom: An Interaction Model for Desktop Haptic Devices with Limited Workspace

    Get PDF
Haptic devices can be used to feel, through the sense of touch, what the user is watching in a virtual scene. Force-feedback devices provide kinesthetic information enabling the user to touch virtual objects. However, the most affordable devices of this type are desktop ones, whose limited workspace does not allow natural and convenient interaction with virtual scenes, owing to the difference in size between the scene and the workspace. In this paper, a new interaction model addressing this problem is proposed. It is called Haptic Zoom, and it is based on performing visual and haptic amplifications of regions of interest. These amplifications let the user choose between greater freedom of movement and accurate interaction with a specific element inside the scene. An evaluation has been carried out comparing this technique with two well-known desktop haptic-device techniques. Preliminary results showed that haptic zoom can be more useful than the other techniques for accuracy tasks. A. G. acknowledges an FPU fellowship provided by the Ministerio de Educación, Cultura y Deporte of Spain.
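The core trade-off, the same physical workspace covering a smaller virtual region when zoomed in, reduces to a simple remapping. The following Python one-liner is an illustrative assumption about how such a mapping could look, not the paper's model:

```python
import numpy as np

def haptic_zoom_map(device_pos, roi_center, zoom):
    """Map a desktop-device position into the scene: at zoom > 1 the
    whole device workspace covers a smaller region around the region
    of interest, so the same hand motion yields finer virtual motion."""
    return roi_center + device_pos / zoom
```

For example, at zoom = 4 a 10 cm device excursion spans only 2.5 cm of the scene around the region of interest, trading range for precision, exactly the choice the model leaves to the user.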

    Sistema y método de interacción en entornos virtuales utilizando dispositivos hápticos

    Get PDF
System and method for interaction in virtual environments. The method comprises: detecting a zoom order on an initial virtual scene; generating, by a graphics processing unit, a new virtual scene from the initial virtual scene with a magnification level modified according to the detected zoom order; mapping the workspace of a haptic device to the space represented by the new virtual scene; and rendering the new virtual scene. The zoom order can be issued by pressing a button on the haptic device or by voice command. The zoom order is an instruction that either progressively modifies the magnification level of the initial virtual scene or modifies it in a single step by a predefined value.

    A hybrid rugosity mesostructure (HRM) for rendering fine haptic detail

    Get PDF
The haptic rendering of surface mesostructure (fine relief features) in dense triangle meshes requires special structures, equipment, and high sampling rates for detailed perception of rugged models. Some approaches simulate haptic texture at a lower processing cost, but at the expense of fidelity of perception. We propose a better method for rendering fine surface detail using image-based Hybrid Rugosity Mesostructures (HRMs), composed of paired maps of piecewise heightfield displacements and corresponding normals, which are layered on top of a less complex mesh, adding greater surface detail than is actually present in the geometry. The core of the algorithm renders surface features by modulating the haptic probe's force response using a blended HRM coat. The proposed method solves typical problems arising at edge crossings, concave foldings, and texture-stitching transitions across edges. By establishing a common set of specially devised meshes, HRM mesostructures, and a battery of performance tests, we built a usability-testing framework that allows a fair and balanced experimental procedure for comparing haptic rendering approaches. The trial results and user-testing evaluations demonstrate the effectiveness of the proposed HRM technique in accurately rendering fine 3D surface detail at low processing cost, yielding useful modeling and perception thresholds. Postprint (published version).

    Dynamic Performance of Mobile Haptic Interfaces

    Full text link