

By Panadda Marayong


An approach to motion guidance, called virtual fixtures, is applied to admittance-controlled human-machine cooperative systems, which are designed to help a human operator perform tasks that require precision and accuracy near human physical limits. Virtual fixtures create guidance either by preventing robot movement into restricted regions (Forbidden-region virtual fixtures) or by influencing its movement along desired paths (Guidance virtual fixtures). An implementation framework for vision-based virtual fixtures, with an adjustable guidance level, was developed for applications in ophthalmic surgery. Virtual fixtures were defined intraoperatively using a real-time workspace reconstruction obtained from a vision system and were implemented on a scaled-up retinal vein cannulation testbed.

Two human-factors studies were performed to address design considerations of such a human-machine system. The first study demonstrates that cooperative manipulation offers superior accuracy to telemanipulation in a Fitts' Law targeting task, although the two are comparable in task execution time. The second study shows that a high level of guidance improves performance in path-following tasks but worsens performance on tasks that require off-path motion. Gain selection criteria were developed to determine an appropriate guidance level.

Control methods were also developed to improve virtual fixture performance in the presence of robot compliance and human involuntary motion. To obtain an accurate estimate of the end-effector location, positions obtained at discrete intervals from cameras are fused with a robot dynamic model using a Kalman filter. Considering both robot compliance and hand dynamics, the control methods effectively achieve the desired end-effector position under Forbidden-region virtual fixtures and the desired velocity under Guidance virtual fixtures.
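The camera/model fusion described above can be sketched as a standard linear Kalman filter that predicts with a robot motion model and corrects with each discrete camera measurement. This is a minimal illustration only: the constant-velocity model, sampling interval, and noise covariances below are assumptions, not values from the thesis.

```python
import numpy as np

def kalman_position_estimate(z_cam, dt=0.033, q=1e-3, r=1e-2):
    """Fuse discrete camera position measurements with a simple
    constant-velocity robot model via a linear Kalman filter.

    z_cam : sequence of 1-D camera position measurements
    dt, q, r : illustrative sampling interval and noise levels
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
    H = np.array([[1.0, 0.0]])              # cameras observe position only
    Q = q * np.eye(2)                       # process (model) noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[z_cam[0]], [0.0]])       # initial state estimate
    P = np.eye(2)                           # initial state covariance
    estimates = []
    for z in z_cam:
        # predict with the robot model
        x = F @ x
        P = F @ P @ F.T + Q
        # correct with the camera measurement
        y = np.array([[z]]) - H @ x          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```

Between camera frames, the predict step alone propagates the model, which is what allows position updates at a higher rate than the vision system provides.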
An experiment on a one-degree-of-freedom compliant human-machine system demonstrates the efficacy of the proposed controllers. A compliance model of the JHU Eye Robot was developed to enable controller implementation on a higher degree-of-freedom human-machine cooperative system. The presented research provides key insights for virtual fixture design and implementation, particularly for fine manipulation tasks.
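The adjustable guidance level described in the abstract is commonly realized in admittance-controlled cooperative systems by attenuating the component of the user's applied force that lies off the preferred path. A minimal sketch under that assumption (the function name, gain values, and vectors are hypothetical, not taken from the thesis):

```python
import numpy as np

def guidance_vf_velocity(f, d, c=1.0, c_tau=0.3):
    """Map a user-applied force to a commanded velocity under a
    guidance virtual fixture (admittance control).

    f     : applied force vector, e.g. from a handle force sensor
    c     : overall admittance gain (illustrative value)
    d     : preferred (path) direction
    c_tau : off-path attenuation in [0, 1];
            0 gives a hard fixture, 1 gives free isotropic motion
    """
    d = d / np.linalg.norm(d)
    f_d = np.dot(f, d) * d        # force component along the path
    f_tau = f - f_d               # off-path (transverse) component
    return c * (f_d + c_tau * f_tau)
```

Varying `c_tau` trades off the two effects seen in the human-factors studies: strong guidance (small `c_tau`) aids path following, while free motion (large `c_tau`) is needed for tasks requiring off-path movement.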

Year: 2008
OAI identifier: oai:jscholarship.library.jhu.edu:1774.2/32564
Provided by: JScholarship
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://jhir.library.jhu.edu/ha...
