Implementation of an automated eye-in-hand scanning system using Best-Path planning
In this thesis we implemented an automated scanning system for 3D object reconstruction.
The system consists of a KUKA LWR 4+ arm with Microsoft Kinect cameras mounted
at its end effector, i.e., in an eye-in-hand configuration.
We implemented the system in ROS using the Kinect Fusion software with the extra features
added by R. Monica's previous work [16], together with the MoveIt! ROS libraries [29] to control the
robot with motion planning. To connect these nodes, we coded a suite in
ROS and MATLAB that makes them easy to operate and adds new features, such as an
original view planner that outperforms the commonly used Next-Best-View planner. The
suite includes a Graphical User Interface that allows new users to easily perform
reconstruction tasks.
The new view planner developed in this work, called the Best-Path planner, offers a new
approach based on a modified Dijkstra algorithm. Among its benefits, the Best-Path planner offers
an optimized way to scan objects, preventing the camera from re-crossing areas that
have already been scanned. Moreover, viewpoint location and orientation have been studied
in depth in order to obtain the most natural movements and the best results. The new
planner therefore makes the scanning procedure more robust, as it ensures trajectories
through these optimized viewpoints, so the camera is always looking towards the object
while maintaining the optimal sensing distance.
As this project is focused on its later use in the Intelligent Robotics Laboratory,
we uploaded all the source code to the Aalto GitLab repositories [37], together with installation
instructions and user guides covering the different features that the suite offers.
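The core Best-Path idea, penalizing routes that re-cross already-scanned regions, can be sketched as a small Dijkstra variant over a viewpoint graph. The graph representation, the `scanned` set, and the penalty value below are illustrative assumptions, not the thesis's actual implementation:

```python
import heapq

def best_path(graph, start, goal, scanned, revisit_penalty=10.0):
    """Dijkstra over a viewpoint graph; edges into already-scanned
    viewpoints incur an extra cost so the camera avoids re-crossing them.
    graph: {node: [(neighbor, edge_cost), ...]}"""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            cost = w + (revisit_penalty if v in scanned else 0.0)
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # walk predecessors back from the goal to recover the path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With a uniform penalty the planner simply detours around scanned viewpoints whenever an unscanned alternative is cheap enough, which is the behavior the abstract describes.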
Design and Development of a Robot Guided Rehabilitation Scheme for Upper Extremity Rehabilitation
To rehabilitate individuals with impaired upper-limb function, we have designed and developed a robot-guided rehabilitation scheme. A humanoid robot, NAO, was used for this purpose. NAO has 25 degrees of freedom; with its sensors and actuators, it can walk forward and backward, sit down and stand up, wave its hand, speak to an audience, feel touch, and recognize the person it is meeting. All these qualities have made NAO a perfect coach to guide subjects through rehabilitation exercises. To demonstrate rehabilitation exercises with NAO, a library of recommended rehabilitation exercises (NRL) involving shoulder (i.e., abduction/adduction, vertical flexion/extension, and internal/external rotation) and elbow (i.e., flexion/extension) joint movements was built in Choregraphe (a graphical programming interface). In experiments, NAO was maneuvered to instruct and demonstrate the exercises from the NRL. A more complex 'touch and play' game, representing a multi-joint movement exercise that NAO plays with the subject, was also developed. To develop the proposed tele-rehabilitation scheme, a kinematic model of the human upper extremity was developed based on modified Denavit-Hartenberg notation. A complete geometric solution was developed to find a unique inverse-kinematic solution for the human upper extremity from Kinect data. In the tele-rehabilitation scheme, a therapist can remotely tele-operate NAO in real time to instruct subjects in, and demonstrate, different arm-movement exercises; a Kinect sensor captures the tele-operator's kinematic data. Experimental results reveal that NAO can be tele-operated successfully to instruct and demonstrate these exercises. A control algorithm for the proposed robot-guided, supervised rehabilitation scheme was developed in MATLAB.
Experimental results show that NAO and the Kinect sensor can effectively be used to supervise and guide subjects in performing active rehabilitation exercises for shoulder and elbow joint movements.
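The geometric solution described above ultimately reduces to computing joint angles from three tracked Kinect joint positions. A minimal sketch of that step, under the assumption that shoulder, elbow, and wrist positions arrive as 3D points (the actual thesis solution covers the full modified-DH arm model, not just one angle):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g.
    shoulder-elbow-wrist gives the elbow flexion/extension angle.
    a, b, c: 3D points as (x, y, z) tuples from the skeleton tracker."""
    v1 = [a[i] - b[i] for i in range(3)]  # vector elbow -> shoulder
    v2 = [c[i] - b[i] for i in range(3)]  # vector elbow -> wrist
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))
```

In a tele-operation loop, angles computed this way per frame would be mapped onto the corresponding NAO joint commands.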
Tool for spatial and dynamic representation of artistic performances
This proposal aims to explore the use of available technologies for video representation of sets and performers, in order to support composition processes and artistic rehearsals, while focusing on representing the performer's body and its movements, and their relation to objects in the three-dimensional space of the performance.
The project's main goal is to design and develop a system that can spatially represent
the performer and their movements, through capture and reconstruction processes
using a camera device, and that can enhance the three-dimensional space where the
performance occurs by allowing interaction with virtual objects and by adding a video
component, either for documentary purposes or for live-performance effects (for example, applying video-mapping techniques to captured video or projecting it during a performance).
Learning Algorithm Design for Human-Robot Skill Transfer
In this research, we develop an intelligent learning scheme for human-robot skill transfer. Techniques adopted in the scheme include the Dynamic Movement Primitive (DMP) method with Dynamic Time Warping (DTW), the Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR), and Radial Basis Function Neural Networks (RBFNNs). A series of experiments is conducted on a Baxter robot, a NAO robot, and a KUKA iiwa robot to verify the effectiveness of the proposed design. During the design of the intelligent learning scheme, an online tracking system is developed to control the arm and head movement of the NAO robot using a Kinect sensor. The NAO robot is a humanoid robot with 5 degrees of freedom (DOF) per arm. The joint motions of the operator's head and arm are captured by a Kinect V2 sensor, and this information is then transferred into the workspace via forward and inverse kinematics. In addition, to improve tracking performance, a Kalman filter is employed to fuse motion signals from the operator sensed by the Kinect V2 sensor and a pair of MYO armbands, so as to teleoperate the Baxter robot. In this regard, a new strategy is developed using a vector approach to accomplish a specific motion-capture task. For instance, the arm motion of the operator is captured by a Kinect sensor and processed in software. Two MYO armbands with embedded inertial measurement units are worn by the operator to help the robots detect and replicate the operator's arm movements; the armbands help to recognize and calculate the precise velocity of the operator's arm motion. Additionally, a neural-network-based adaptive controller is designed and implemented on the Baxter robot to validate its teleoperation. Subsequently, an enhanced teaching interface has been developed for the robot using DMP and GMR.
Motion signals are collected from a human demonstrator via the Kinect V2 sensor, and the data is sent to a remote PC for teleoperating the Baxter robot. At this stage, DMP is utilized to model and generalize the movements. To learn from multiple demonstrations, DTW is used to preprocess the data recorded on the robot platform, and GMM is employed in the evaluation of DMP to generate multiple patterns after the teaching process is completed. Next, we apply the GMR algorithm to generate a synthesized trajectory that minimizes position errors in three-dimensional (3D) space. This approach has been tested by performing tasks on a KUKA iiwa and a Baxter robot, respectively. Finally, an optimized DMP is added to the teaching interface. A character-recombination technique based on DMP segmentation that uses verbal commands has also been developed and incorporated into the Baxter robot platform. To imitate the recorded motion signals produced by the demonstrator, the operator trains the Baxter robot by physically guiding it through the given task. This is repeated five times, and the generated training data set is utilized via the playback system. Subsequently, DTW is employed to preprocess the experimental data. DMP is chosen for modelling and overall movement control, and GMM is used to generate multiple patterns after the teaching process. Next, we employ the GMR algorithm to reduce position errors in 3D space once a synthesized trajectory has been generated. The Baxter robot, remotely controlled over the user datagram protocol (UDP) from a PC, records and reproduces every trajectory. Additionally, Dragon NaturallySpeaking software is adopted to transcribe the voice data. The proposed approach has been verified by having the Baxter robot perform a writing task in which it is taught to write a single character.
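The DMP at the heart of this scheme is a damped spring system pulled toward a goal, plus a learned forcing term over Gaussian basis functions. The following 1-D rollout is a minimal sketch with assumed gain values, not the thesis's tuned implementation:

```python
import numpy as np

def dmp_rollout(y0, goal, weights, dt=0.01, tau=1.0,
                alpha_y=25.0, beta_y=6.25, alpha_x=8.0):
    """Minimal discrete Dynamic Movement Primitive rollout (1-D).
    weights parametrize the forcing term over Gaussian basis functions;
    with zero weights the system is a critically damped pull to the goal."""
    n = len(weights)
    centers = np.exp(-alpha_x * np.linspace(0, 1, n))  # basis centers in phase space
    widths = n ** 1.5 / centers
    y, dy, x = float(y0), 0.0, 1.0                     # state and phase variable
    traj = [y]
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)     # basis activations
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        ddy = (alpha_y * (beta_y * (goal - y) - dy) + f) / tau
        dy += ddy * dt
        y += dy * dt
        x += (-alpha_x * x / tau) * dt                 # canonical system decay
        traj.append(y)
    return np.array(traj)
```

In the learning-from-demonstration setting, the weights would be fit (e.g. by locally weighted regression) to DTW-aligned demonstrations, and GMM/GMR would then blend multiple such fits, as the abstract describes.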
Remote sensing technologies for physiotherapy assessment
The paper presents a set of remote, unobtrusive sensing technologies that can be used to monitor upper- and lower-limb rehabilitation. The advantages of sensors based on microwave Doppler radar or infrared technologies for physiotherapy assessment are discussed. These technologies sense motion at a distance from the monitored subject, thus reducing the discomfort produced by some wearable technologies for limb-movement assessment. The microwave radar, which can easily be hidden in the environment behind nonmetallic parts, allows remote sensing of human motion and provides information on the characteristics and patterns of user movements. The infrared technologies - infrared LEDs in the Leap Motion, the infrared laser in the Kinect depth sensor, and infrared thermography - can be used to evaluate different movement parameters. The Leap Motion and Kinect sensors, though visible to users, ensure higher accuracy in detecting body-part movements at low computational load. These technologies are commonly used in virtual reality (VR) and augmented reality (AR) scenarios, in which the user's motion patterns and muscular activity can be analyzed. Thermography can be employed to evaluate muscular loading: muscular activity during movement training in physiotherapy can be estimated through skin-temperature measurements before and after physical training. Issues related to the considered remote sensing technologies, such as a VR serious game for motor rehabilitation, signal processing, and experimental results for microwave radar, infrared sensors, and thermography in physiotherapy sensing, are covered in the paper.
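The Doppler-radar processing mentioned above rests on one relation: a target moving at radial velocity v shifts a carrier f_c by f_d = 2 v f_c / c. A sketch of recovering limb velocity from a complex I/Q radar signal, with an assumed 24 GHz carrier (the paper's actual hardware and processing chain are not specified here):

```python
import numpy as np

def doppler_velocity(iq, fs, f_carrier=24e9, c=3e8):
    """Estimate the radial velocity of a moving limb from the dominant
    Doppler shift in a complex I/Q radar signal.
    iq: 1-D complex samples; fs: sample rate in Hz."""
    spectrum = np.fft.fft(iq * np.hanning(len(iq)))    # windowed spectrum
    freqs = np.fft.fftfreq(len(iq), d=1.0 / fs)
    f_d = freqs[np.argmax(np.abs(spectrum))]           # dominant Doppler line
    return f_d * c / (2.0 * f_carrier)                 # invert f_d = 2 v f_c / c
```

For rehabilitation monitoring, the sequence of such estimates over short windows yields the velocity pattern of the exercised limb.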
Intensive-care unit patients monitoring by computer vision system
Treballs Finals de Grau d'Enginyeria Informàtica, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2013, Advisor: Santi Seguí Mesquida.
In this project, we propose an automatic computer-vision system for patient monitoring in the
Intensive Care Unit (ICU). These patients require constant monitoring and, given the high costs
of the necessary equipment and staff, an automatic system would be helpful.
Depth-imaging technology has advanced dramatically over the last few years, finally reaching a consumer price point with the launch of the Kinect. Depth images are not affected by lighting conditions and give a clear view even without any light, so patients can be monitored 24 hours a day.
In this project, we worked on two parts of the object-detection system: the descriptor and
the classifier.
Concerning the descriptor, we analyzed the performance of one of the most widely used descriptors for object detection in RGB images, the Histogram of Oriented Gradients (HOG), and proposed a
descriptor designed specifically for depth images. We show that combining these two descriptors
increases system accuracy.
As for detection, we ran several tests. We analyzed the detection of patient body parts
separately, and used a model in which the patient is divided into multiple parts, each modeled with a set of templates, demonstrating that the use of such a model improves detection.
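HOG-style descriptors reduce to histograms of gradient orientations weighted by gradient magnitude. The single-cell sketch below illustrates that idea on a depth map and the concatenation of intensity and depth histograms; the thesis's actual descriptors use the full blocked, multi-cell HOG layout, so treat this as a simplified stand-in:

```python
import numpy as np

def depth_hog(image, bins=9):
    """One global histogram of unsigned gradient orientations, weighted
    by gradient magnitude - a single-cell HOG-style descriptor."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)        # L2-normalize

def combined_descriptor(gray, depth):
    """Concatenate intensity-gradient and depth-gradient histograms,
    mirroring the finding that combining both descriptors helps."""
    return np.concatenate([depth_hog(gray), depth_hog(depth)])
```

A classifier (e.g. a linear SVM, as is standard with HOG) would then be trained on these concatenated vectors per body-part template.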
Recognition of gestures through artificial intelligence techniques
Gesture recognition is the interpretation of sequences of human actions captured by
any type of sensor, whether touch-based or contact-free, such as a camera. It has advanced
greatly in recent decades due to the rise of Artificial Intelligence and the development of
ever more complex and precise sensors.
One example of these advances was the release and maintenance of an official Microsoft
Kinect SDK, with which developers could access the capabilities of this camera to create
more natural and intuitive user interfaces. This has also encouraged applications that go
beyond the entertainment industry, such as those that assist in healthcare or automate routine tasks.
For this reason, in this project we developed a set of tools for generating learning models
capable of recognizing custom gestures for the Kinect v2. The toolset was designed and
implemented to ease the complete recognition task for any gesture, starting with the capture
of training examples, continuing with the pre-processing and treatment of the data, and
ending with the generation of learning models through machine-learning techniques.
Finally, to evaluate the platform, an experiment with a simple gesture was proposed and
executed. The positive results motivate the use of the developed tools to incorporate gesture
recognizers into any application that uses the Kinect v2 sensor.
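The pipeline described (capture, preprocessing, then a learned recognizer) can be illustrated with a tiny nearest-neighbour classifier over joint-trajectory sequences, using dynamic time warping to absorb differences in gesture speed. The toolset's actual models and features are not specified here; this is a generic sketch:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two (T, D) joint sequences,
    tolerant to differences in how fast the gesture is performed."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """1-nearest-neighbour gesture classifier over labelled templates:
    {label: [sequence, ...]} - a stand-in for the toolset's trained models."""
    return min(((dtw_distance(query, seq), lab)
                for lab, seqs in templates.items() for seq in seqs))[1]
```

In a Kinect v2 application, `query` would be the joint-position sequence of the gesture just performed, and `templates` the preprocessed training examples captured with the toolset.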