
    Robotic arm controlled through vision and biomechanical sensors

    The project aims to design the control of a robotic arm capable of following the movements of a real hand. To that end, several technologies were combined: additive manufacturing (3D printing), Arduino board programming, and the integration and programming of vision and biomechanical sensors. The project was developed in three stages. First, 3D printing a robotic arm from a model acquired in CAD software and assembling all of its parts. Second, programming an Arduino Uno to operate the actuators (servomotors) that move the printed hand. Finally, developing code capable of receiving, interpreting and manipulating the data obtained by a motion-capture (mocap) device, communicating with other devices, and sending commands to the Arduino Uno controller.
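    As a minimal sketch of the second and third stages, the Arduino side might look like the following, assuming a simple serial protocol in which the host sends a servo index and a target angle per command; the pin assignments, servo count and protocol are illustrative assumptions, not details reported by the project.

```cpp
// Arduino sketch: drive servomotors from serial commands sent by the
// mocap host. Assumed protocol: "<servo index> <angle>", e.g. "2 90".
#include <Servo.h>

const int NUM_SERVOS = 5;                       // one per finger (assumption)
const int SERVO_PINS[NUM_SERVOS] = {3, 5, 6, 9, 10};
Servo servos[NUM_SERVOS];

void setup() {
  Serial.begin(115200);
  for (int i = 0; i < NUM_SERVOS; i++) {
    servos[i].attach(SERVO_PINS[i]);
  }
}

void loop() {
  if (Serial.available()) {
    int idx = Serial.parseInt();                // servo index
    int angle = Serial.parseInt();              // target angle in degrees
    if (idx >= 0 && idx < NUM_SERVOS) {
      servos[idx].write(constrain(angle, 0, 180));
    }
  }
}
```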

    Measurements by A LEAP-Based Virtual Glove for the hand rehabilitation

    Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist, implies high costs and stress for the patient, and relies on subjective evaluation of the therapy's effectiveness. Alternative approaches based on mechanical and tracking-based gloves can be very effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient specific and hand specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, can suffer from occlusions. In this paper, the implementation of a multi-sensor approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated, and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed and reported. These show that the VG operated in real time (60 fps), reduced occlusions, and managed the two LEAP sensors correctly, without any temporal or spatial discontinuity when switching from one sensor to the other. A video demonstrating the performance of the VG is presented in the Supplementary Materials. Results are promising, but further work is needed to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and to reduce occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots and for other VR applications.
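    A minimal sketch of the two-sensor handover idea: prefer whichever sensor currently tracks the hand, and blend when both do, so switching produces no discontinuity. It assumes each LEAP reports a palm position already calibrated into a shared coordinate frame plus a tracking confidence; the types and weighting scheme are illustrative assumptions, not the VG's actual implementation.

```cpp
// Fuse palm-position estimates from two orthogonal sensors,
// avoiding stepwise jumps when tracking hands over from one to the other.
#include <optional>

struct HandSample {
    double x, y, z;      // palm position in the shared frame (mm)
    double confidence;   // tracking confidence in [0, 1]
    bool   valid;        // false if this sensor lost the hand (occlusion)
};

std::optional<HandSample> fuse(const HandSample& a, const HandSample& b) {
    if (!a.valid && !b.valid) return std::nullopt;   // both occluded
    if (!a.valid) return b;                          // only b sees the hand
    if (!b.valid) return a;                          // only a sees the hand
    // Both sensors see the hand: blend by confidence so the output
    // changes smoothly rather than jumping at the handover point.
    double denom = a.confidence + b.confidence;
    double wa = (denom > 0.0) ? a.confidence / denom : 0.5;
    HandSample out;
    out.x = wa * a.x + (1.0 - wa) * b.x;
    out.y = wa * a.y + (1.0 - wa) * b.y;
    out.z = wa * a.z + (1.0 - wa) * b.z;
    out.confidence = (a.confidence > b.confidence) ? a.confidence : b.confidence;
    out.valid = true;
    return out;
}
```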

    Integrating virtual reality and augmented reality in a collaborative user interface

    Applications that adopt a collaborative system allow multiple users to interact with one another in the same virtual space, whether in Virtual Reality (VR) or Augmented Reality (AR). This paper aims to integrate the VR and AR spaces into a collaborative user interface that enables users to cooperate across different types of interfaces within a single shared space. Gesture interaction is proposed as the interaction technique in both virtual spaces, as it provides more natural interaction with virtual objects. The integration of the VR and AR spaces provides cross-discipline shared data interchange through a network protocol based on a client-server architecture.
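    A minimal sketch of the shared-data interchange such a client-server architecture implies: each client sends object updates to the server, which rebroadcasts them to the other clients so the VR and AR views stay consistent. The message layout and field names below are illustrative assumptions, not the paper's actual protocol.

```cpp
// Shared-object update message exchanged between VR and AR clients
// through a central server.
#include <cstdint>
#include <cstring>
#include <vector>

#pragma pack(push, 1)
struct ObjectUpdate {
    uint32_t objectId;     // which shared virtual object changed
    uint32_t senderId;     // which client (VR or AR) produced the update
    float    position[3];  // new position in the shared world frame
    float    rotation[4];  // orientation quaternion (x, y, z, w)
};
#pragma pack(pop)

// Serialize an update for the transport layer (e.g. a TCP socket).
std::vector<uint8_t> encode(const ObjectUpdate& u) {
    std::vector<uint8_t> buf(sizeof u);
    std::memcpy(buf.data(), &u, sizeof u);
    return buf;
}

// Deserialize an incoming buffer; returns false if it is too short.
bool decode(const std::vector<uint8_t>& buf, ObjectUpdate& out) {
    if (buf.size() < sizeof out) return false;
    std::memcpy(&out, buf.data(), sizeof out);
    return true;
}
```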

    A contactless identification system based on hand shape features

    This paper studies the viability of a contactless identification system based on hand features, with the objective of integrating this functionality into different services for smart spaces. The final identification solution relies on a commercial 3D sensor (the Leap Motion) for palm feature capture. To evaluate the significance of different hand features and the performance of different classification algorithms, 21 users contributed to a testing dataset. For each user, the morphology of each hand is captured through 52 features, which include bone lengths and widths, palm characteristics, and relative distances among the fingers, palm center and wrist. To obtain consistent samples and guarantee the best performance of the device, the data collection system includes sweet-spot control: this functionality guides users to place the hand in the best position and orientation with respect to the device. The selected classification strategies (nearest neighbor, support vector machine, multilayer perceptron, logistic regression and tree algorithms) were evaluated through the available Weka implementations. We found that relative distances sketching the hand pose are more significant than pure morphological features. On this feature set, the highest correctly classified instances (CCI) rate (>96%) is reached by the multilayer perceptron, although all the evaluated classifiers provide a CCI rate above 90%. Results also show how these algorithms perform when the number of users in the database changes, and their sensitivity to the number of training samples. Among the considered algorithms, several are accurate enough for non-critical, immediate-response applications.
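    A minimal sketch of the kind of relative-distance features the paper finds most significant, assuming a hand is delivered as a palm center, a wrist point and five fingertip positions; the exact 52-feature set is not reproduced here, and the landmark layout and feature choice are illustrative assumptions.

```cpp
// Compute pose-describing relative distances from hand landmarks:
// fingertip-to-palm-center, fingertip-to-wrist, and adjacent
// fingertip-to-fingertip gaps. A subset of features like these would
// feed the classifiers; bone lengths/widths are omitted for brevity.
#include <array>
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

double dist(const Point3& a, const Point3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

std::vector<double> relativeDistanceFeatures(
        const Point3& palmCenter,
        const Point3& wrist,
        const std::array<Point3, 5>& fingertips) {
    std::vector<double> f;
    for (const auto& tip : fingertips) {
        f.push_back(dist(tip, palmCenter));   // fingertip to palm center
        f.push_back(dist(tip, wrist));        // fingertip to wrist
    }
    for (int i = 0; i + 1 < 5; i++) {
        f.push_back(dist(fingertips[i], fingertips[i + 1]));  // adjacent gaps
    }
    return f;
}
```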

    Manipulation of virtual objects through a LeapMotion optical sensor

    The purpose of this paper is to present a gesture-based approach for remotely controlling and manipulating virtual objects in a virtual reality environment through a LeapMotion optical sensor. The interaction is performed by a separate software module installed on the same system on which the virtual reality application runs. The software architecture, implementation and user experience details are discussed and evaluated. In comparison to similar techniques, our solution has the advantages of being intuitive, of providing means for creating user interfaces without physical contact with the sensor, and of introducing minimal delay while performing control actions.
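    A minimal sketch of one way such gesture-based manipulation can work, assuming the classic Leap Motion C++ SDK (v2): a pinch gesture "grabs" the object, and the tracked palm position drives its translation. The grab threshold and scene scaling are arbitrary illustrative choices, not the paper's actual mapping.

```cpp
// Poll a Leap Motion controller and map palm position plus pinch
// strength onto a virtual object's position in the VR scene.
#include "Leap.h"

struct VirtualObject {
    float x = 0, y = 0, z = 0;   // object position in scene units
};

void updateObject(const Leap::Controller& controller, VirtualObject& obj) {
    const Leap::Frame frame = controller.frame();
    if (frame.hands().isEmpty()) return;              // no hand tracked

    const Leap::Hand hand = frame.hands()[0];
    if (hand.pinchStrength() > 0.8f) {                // pinch = grab gesture
        const Leap::Vector palm = hand.palmPosition();  // millimeters
        const float scale = 0.001f;                   // mm -> scene meters
        obj.x = palm.x * scale;
        obj.y = palm.y * scale;
        obj.z = palm.z * scale;
    }
}
```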