LocaliseBot: Multi-view 3D object localisation with differentiable rendering for robot grasping
Robotic grasping typically follows five stages: object detection, object localisation, object pose estimation, grasp pose estimation, and grasp planning. We focus on object pose estimation. Our approach relies on three pieces of information: multiple views of the object, the camera's extrinsic parameters at those viewpoints, and 3D CAD models of the objects. The first step uses a standard deep learning backbone (FCN ResNet) to estimate the object label, a semantic segmentation, and a coarse estimate of the object pose with respect to the camera. Our novelty is a refinement module that starts from the coarse pose estimate and refines it by optimisation through differentiable rendering. This is a purely vision-based approach that avoids the need for other information such as point clouds or depth images. We evaluate our object pose estimation approach on the ShapeNet dataset and show improvements over the state of the art. We also show that the estimated object pose yields 99.65% grasp accuracy with the ground-truth grasp candidates on the Object Clutter Indoor Dataset (OCID) Grasp dataset, computed following standard practice.
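The refinement step can be pictured as a small optimisation loop. The sketch below is illustrative only: it assumes a toy differentiable projection of CAD model points instead of the paper's full differentiable renderer, uses a single view, and invents the helper names (project_points, refine_pose); the real system would accumulate the loss over multiple views and render the full model appearance.

    # Minimal sketch of pose refinement by optimisation through a differentiable
    # "renderer".  project_points is a hypothetical stand-in that differentiably
    # projects 3D model points with a pinhole camera, so gradients flow from an
    # image-space loss back to the pose parameters.
    import torch

    def project_points(points, rot_vec, trans, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
        """Rotate (axis-angle via Rodrigues), translate, and pinhole-project Nx3 points."""
        theta = torch.linalg.norm(rot_vec) + 1e-8
        k = rot_vec / theta
        K = torch.stack([
            torch.stack([torch.zeros_like(k[0]), -k[2], k[1]]),
            torch.stack([k[2], torch.zeros_like(k[0]), -k[0]]),
            torch.stack([-k[1], k[0], torch.zeros_like(k[0])]),
        ])
        R = torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)
        cam = points @ R.T + trans            # transform into the camera frame
        u = fx * cam[:, 0] / cam[:, 2] + cx   # pinhole projection
        v = fy * cam[:, 1] / cam[:, 2] + cy
        return torch.stack([u, v], dim=1)

    def refine_pose(model_points, observed_uv, coarse_rot, coarse_trans, steps=200, lr=1e-2):
        """Start from the coarse pose estimate and refine it by gradient descent."""
        rot = coarse_rot.clone().requires_grad_(True)
        trans = coarse_trans.clone().requires_grad_(True)
        opt = torch.optim.Adam([rot, trans], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.mean((project_points(model_points, rot, trans) - observed_uv) ** 2)
            loss.backward()
            opt.step()
        return rot.detach(), trans.detach()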
Maximizing Manipulation Capabilities of Persons with Disabilities Using a Smart 9-Degree-of-Freedom Wheelchair-Mounted Robotic Arm System
Physical and cognitive disabilities can make it difficult or impossible for a person to perform simple personal or job-related tasks. The primary objective of this research and development effort is to assist persons with physical disabilities in performing activities of daily living (ADL) using a smart 9-degree-of-freedom (DoF) modular wheelchair-mounted robotic arm system (WMRA).
Combining the wheelchair's 2-DoF mobility control and the robotic arm's 7-DoF manipulation control in a single control mechanism allows people with disabilities to do many ADL tasks that are otherwise hard or impossible to accomplish. Different optimization methods for redundancy resolution are explored and modified to fit the new system's combined mobility and manipulation control, and to implement singularity avoidance, obstacle avoidance, and other optimization criteria on the new system. The resulting control algorithm is tested in simulation using C++ and MATLAB code to resolve issues before testing on the physical system. The combined control is implemented on the newly designed robotic arm mounted on a modified power wheelchair and fitted with a custom-designed gripper.
The user interface is designed to be modular to accommodate any user preference, including a haptic device with force-sensing capability, a spaceball, a joystick, a keypad, a touch screen, head/foot switches, sip-and-puff devices, and the BCI 2000, which reads electrical activity from certain areas of the brain and converts it into control signals after conditioning.
Different sensors (such as a camera, proximity sensors, a laser range finder, and a force/torque sensor) can be mounted on the WMRA system for feedback and intelligent control. The user is able to control the WMRA system autonomously or via teleoperation; wireless Bluetooth is used for remote teleoperation when the user is not on the wheelchair. Pre-set ADL tasks are programmed for easy, semi-autonomous execution.
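As one illustration of redundancy resolution for a combined mobility-and-manipulation system, the sketch below uses a damped weighted pseudoinverse to map a desired 6-D end-effector velocity to the nine actuated rates (two wheelchair, seven arm). The Jacobian, weights, and damping term are assumptions for illustration, not the exact optimisation criteria implemented on the WMRA.

    # Minimal sketch of resolved-rate control for a combined 9-DoF system
    # (2 wheelchair DoF + 7 arm DoF) using a weighted pseudoinverse, one common
    # redundancy-resolution scheme.  All numbers here are placeholders.
    import numpy as np

    def weighted_pseudoinverse_rates(J, x_dot, weights):
        """Rates q_dot minimising q_dot^T W q_dot subject to J q_dot = x_dot."""
        W_inv = np.diag(1.0 / np.asarray(weights))    # penalise base motion more than arm motion
        JWJt = J @ W_inv @ J.T
        damped = JWJt + 1e-6 * np.eye(J.shape[0])     # small damping for singularity robustness
        return W_inv @ J.T @ np.linalg.solve(damped, x_dot)

    # Example: 6-D Cartesian task, 9 actuated rates (first 2 = wheelchair, last 7 = arm).
    rng = np.random.default_rng(0)
    J = rng.standard_normal((6, 9))                   # placeholder combined Jacobian
    x_dot = np.array([0.05, 0.0, 0.02, 0.0, 0.0, 0.1])  # desired end-effector twist
    weights = [10.0, 10.0] + [1.0] * 7                # prefer arm motion over base motion
    q_dot = weighted_pseudoinverse_rates(J, x_dot, weights)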
Control of a 9-DoF Wheelchair-Mounted Robotic Arm System
A wheelchair-mounted robotic arm (WMRA) system was designed and built to meet the needs of mobility-impaired persons with limitations of the upper extremities, and to exceed the capabilities of current devices of this type. The control of this 9-DoF system expands on conventional control methods and combines the 7-DoF robotic arm control with the 2-DoF power wheelchair control. The three degrees of redundancy are optimized to effectively perform activities of daily living (ADLs) and to overcome singularities, joint limits, and some workspace limitations. The control system is designed for teleoperated or autonomous coordinated Cartesian control, and it offers expandability for future research, such as voice or sip-and-puff control and sensor-assist functions.
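One common way to exploit redundant degrees of freedom for joint-limit avoidance is gradient projection into the Jacobian null space. The sketch below is a generic illustration under assumed joint limits and a placeholder Jacobian; the WMRA controller combines several optimisation criteria beyond this single example.

    # Minimal sketch of null-space (gradient-projection) control:
    #   q_dot = J+ x_dot + (I - J+ J) * k * grad(H)
    # where H rewards keeping joints near mid-range.  Values are illustrative.
    import numpy as np

    def redundancy_resolved_rates(J, x_dot, q, q_min, q_max, k=1.0):
        J_pinv = np.linalg.pinv(J)
        q_mid = 0.5 * (q_min + q_max)
        grad_H = -(q - q_mid) / (q_max - q_min) ** 2   # push joints toward mid-range
        null_proj = np.eye(J.shape[1]) - J_pinv @ J    # projector onto the Jacobian null space
        return J_pinv @ x_dot + null_proj @ (k * grad_H)  # task tracking + self-motion

    # Example with a 6-D task and 9 rates (3 redundant DoF).
    rng = np.random.default_rng(1)
    J = rng.standard_normal((6, 9))
    q = np.zeros(9)
    q_min, q_max = -np.pi * np.ones(9), np.pi * np.ones(9)
    x_dot = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
    q_dot = redundancy_resolved_rates(J, x_dot, q, q_min, q_max)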
Programming by demonstration using learning based approach: A Mini review
Wheelchair-mounted robotic arms are used in rehabilitation robotics to help physically impaired people perform ADL (activities of daily living) tasks. However, the dexterity of manipulation tasks makes teleoperation of the robotic arm challenging for the user, as it is difficult to control all degrees of freedom with a handheld joystick or a touch-screen device. PbD (programming by demonstration) allows the user to demonstrate the desired behavior and enables the system to learn from the demonstrations and adapt to a new environment; the learned model can then perform a new set of actions in a new environment. Learning from demonstration includes object identification and recognition, trajectory planning, obstacle avoidance, and adaptation to a new environment, wherever necessary. PbD using a learning-based approach learns the task through a model that captures its underlying structure; the model can be a probabilistic graphical model, a neural network, or a combination of both. PbD with learning generalizes to new situations because the robot learns a model rather than simply memorizing and imitating the demonstration, and it also enables efficient learning from a reduced number of demonstrations. This survey gives an overview of recent machine learning techniques used with PbD for dexterous manipulation tasks, enabling the robot to learn from demonstrations and apply the learned skills to new tasks and new environments.
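As a concrete illustration of a learning-based PbD pipeline, the sketch below fits a Gaussian mixture model over (time, position) samples from several demonstrations and reproduces a trajectory by Gaussian mixture regression. This is only one classic probabilistic approach, shown with invented data; the reviewed literature also covers neural-network and hybrid models.

    # Minimal sketch: learn a motion from demonstrations with a GMM over
    # [time, position] and reproduce it by conditioning position on time (GMR).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_demonstrations(demos, n_components=5):
        """demos: list of (T, D) position trajectories sampled on a common time grid."""
        data = np.vstack([
            np.hstack([np.linspace(0.0, 1.0, len(d))[:, None], d]) for d in demos
        ])                                              # rows are [t, x_1..x_D]
        return GaussianMixture(n_components=n_components, covariance_type="full").fit(data)

    def reproduce(gmm, times):
        """Gaussian mixture regression: expected position at each query time."""
        out = []
        for t in times:
            means, covs, priors = gmm.means_, gmm.covariances_, gmm.weights_
            # responsibility of each component for this time value
            h = np.array([
                p * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                for p, m, c in zip(priors, means, covs)
            ])
            h /= h.sum()
            # conditional mean of position given time, mixed by responsibility
            cond = [m[1:] + c[1:, 0] / c[0, 0] * (t - m[0]) for m, c in zip(means, covs)]
            out.append(np.sum(h[:, None] * np.array(cond), axis=0))
        return np.array(out)

    # Example: three noisy demonstrations of a 2-D reaching motion.
    t = np.linspace(0.0, 1.0, 100)
    demos = [np.stack([np.sin(t) + 0.01 * np.random.randn(100),
                       t ** 2 + 0.01 * np.random.randn(100)], axis=1) for _ in range(3)]
    model = fit_demonstrations(demos)
    trajectory = reproduce(model, t)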
Enhanced pilot engagement level using the brain machine interface (BMI) in live flight control
Critical battlefield missions require proficient physical and cognitive capabilities for situational awareness and active decision making. Current technologies allow pilots operating unmanned aircraft to use non-invasive electroencephalography (EEG)-based Brain-Machine Interfaces (BMIs) to capture brain signals and convert them into commands for flight control. This project uses a motor imagery-based BMI system to extract, filter, and condition the pilot's brain signal to control a flight target destination. Additional algorithms were developed to extract GPS data, map the target location to global coordinates using homogeneous transformation matrices, and autonomously navigate to and follow the selected target. Moreover, a graphical user interface (GUI) was developed to provide the user with visual feedback from the flight's onboard camera and to display commands that can be initiated using the EEG signal. A drone was used for testing and data collection, and the results show that the developed system was able to take the raw BMI data from the brain and filter it into (1) mental commands from the filtered EEG signal and (2) facial-expression commands from extracted EMG and EOG data; the latter was easier to use and more accurate in conveying the pilot's intentions. Pilot training was conducted for both control categories, and full control of the drone using brain signals, including reaching the target location, was achieved with an average error of 0.1%. The accuracy of control using the brain signal through the BMI was 57.89% at the start of the control session and decreased to 46.34% after 30 minutes of use.
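The mapping from an onboard target observation to global coordinates can be written as a single homogeneous transformation. The sketch below assumes a simplified yaw-only attitude and an ENU-style world frame, with invented helper names; it illustrates the idea rather than the project's exact GPS and coordinate conventions.

    # Minimal sketch of mapping a target seen in the drone's body frame to world
    # coordinates with a 4x4 homogeneous transformation matrix.
    import numpy as np

    def body_to_world_transform(position, yaw):
        """4x4 homogeneous transform from the drone body frame to the world frame."""
        c, s = np.cos(yaw), np.sin(yaw)
        T = np.eye(4)
        T[:3, :3] = np.array([[c, -s, 0.0],
                              [s,  c, 0.0],
                              [0.0, 0.0, 1.0]])   # rotation about the vertical axis
        T[:3, 3] = position                       # drone position (e.g. from GPS)
        return T

    def target_world_position(drone_position, drone_yaw, target_in_body):
        """Apply the homogeneous transform to a target point expressed in the body frame."""
        T = body_to_world_transform(np.asarray(drone_position), drone_yaw)
        p = np.append(np.asarray(target_in_body), 1.0)   # homogeneous coordinates
        return (T @ p)[:3]

    # Example: target 5 m ahead and 1 m right of a drone at (10, 20, 30) heading 90 degrees.
    goal = target_world_position([10.0, 20.0, 30.0], np.pi / 2, [5.0, -1.0, 0.0])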
THE DESIGN OF A DEXTEROUS GRIPPER TO BE USED FOR ACTIVITIES OF DAILY LIVING BY PEOPLE WITH UPPER-EXTREMITY DISABILITIES
A new robotic gripper was designed and constructed for Activities of Daily Living (ADL) to be used with the new Wheelchair-Mounted Robotic Arm developed at USF. Two aspects of the new gripper make it unique: the design of the paddles, and the design of the actuation mechanism, which produces parallel motion for effective gripping.