
    Visual servoing by partitioning degrees of freedom

    There are many design factors and choices when mounting a vision system for robot control. Such factors include the kinematic and dynamic characteristics of the robot's degrees of freedom (DOF), which determine what velocities and fields of view a camera can achieve. Another factor is that additional motion components (such as pan-tilt units) are often mounted on a robot and introduce synchronization problems. When a task does not require visually servoing every robot DOF, the designer must choose which ones to servo. Questions then arise as to what roles, if any, the remaining DOF play in the task. Without an analytical framework, the designer resorts to intuition and try-and-see implementations. This paper presents a frequency-based framework that identifies the parameters that factor into tracking. This framework gives design insight, which was then used to synthesize a control law that exploits the kinematic and dynamic attributes of each DOF. The resulting multi-input multi-output control law, which we call partitioning, defines an underlying joint coupling to servo camera motions. The net effect is that by employing both visual and kinematic feedback loops, a robot can quickly position and orient a camera in a large assembly workcell. Real-time experiments tracking people and robot hands are presented using a 5-DOF hybrid (3-DOF Cartesian gantry plus 2-DOF pan-tilt unit) robot.
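
    As a rough, illustrative sketch only (not the paper's control law), the partitioning idea can be expressed as two coupled loops: the agile pan-tilt DOF are servoed directly on the image error, while the slower gantry DOF are servoed kinematically to re-centre the pan-tilt unit. The function name, gains and the pan-to-x / tilt-to-y mapping below are assumptions:

    import numpy as np

    def partitioned_control(image_error, pan_tilt_angles, lambda_v=1.0, lambda_k=0.2):
        """Return (pan_tilt_rates, gantry_velocities).

        image_error     : 2-vector, target offset from the image centre
        pan_tilt_angles : 2-vector, current pan and tilt joint angles [rad]
        """
        # Visual loop: drive the image error to zero with the agile pan/tilt unit.
        pan_tilt_rates = -lambda_v * np.asarray(image_error, dtype=float)

        # Kinematic loop: move the gantry so the pan/tilt unit re-centres,
        # mapping pan -> x translation and tilt -> y translation (assumed geometry).
        gantry_velocities = np.zeros(3)
        gantry_velocities[0] = -lambda_k * pan_tilt_angles[0]
        gantry_velocities[1] = -lambda_k * pan_tilt_angles[1]
        return pan_tilt_rates, gantry_velocities

    # Example: target is offset from the image centre while the pan joint
    # is already deflected well away from the middle of its range.
    rates, vels = partitioned_control([0.05, -0.02], [0.6, 0.1])
    print(rates, vels)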

    Designing visually servoed tracking to augment camera teleoperators

    Robots now have far more impact on human life than they did ten years ago. Vacuum-cleaning robots are already well known. Making today's robots work unassisted requires an appropriate visual servoing architecture. In the past, much effort was directed towards designing controllers that rely exclusively on image data, yet most robots are still servoed kinematically using joint data. Visual servoing architectures have applications beyond robotics. Video cameras are often mounted on platforms that can move, such as rovers, booms, gantries and aircraft. People operate such platforms to capture desired views of a scene or a target. Operating such platforms while avoiding collisions with the environment and occlusions demands much skill. Visually servoing some degrees of freedom may reduce the operator's burden and improve tracking. We call this concept human-in-the-loop visual servoing. Human-in-the-loop systems involve an operator who manipulates a device for desired tasks based on feedback from the device and the environment. For example, devices like rovers, gantries and aircraft carry a video camera. The task is to maneuver the vehicle and position the camera to obtain desired fields of view. To overcome joint limits, avoid collisions and ensure occlusion-free views, these devices are typically equipped with redundant degrees of freedom. Tracking moving subjects with such systems is challenging and requires a highly skilled operator. In this approach, we use computer vision techniques to visually servo the camera. The net effect is that the operator focuses on safely manipulating the boom and dolly while computer control automatically servos the camera. Ph.D., Mechanical Engineering -- Drexel University, 200
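
    A minimal Python sketch of the human-in-the-loop split described above, assuming a three-axis joystick for the platform DOF and a simple proportional pixel-error loop for the camera pan-tilt (all names, gains and the DOF partition are illustrative, not the thesis implementation):

    import numpy as np

    def shared_control_step(joystick_cmd, target_px, image_center=(320, 240), gain=0.002):
        """Combine operator and vision commands into one joint-velocity vector.

        joystick_cmd : 3-vector of operator platform velocities (x, y, z)
        target_px    : (u, v) pixel coordinates of the tracked subject
        """
        platform_vel = np.asarray(joystick_cmd, dtype=float)           # human-controlled DOF
        pixel_error = np.asarray(target_px, float) - np.asarray(image_center, float)
        pan_tilt_vel = -gain * pixel_error                             # vision-controlled DOF
        return np.concatenate([platform_vel, pan_tilt_vel])

    # Operator drives the platform; the camera loop centres the subject.
    print(shared_control_step([0.1, 0.0, -0.05], (400, 200)))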

    Conceptual design study for an advanced cab and visual system, volume 2

    The performance, design, construction and testing requirements are defined for developing an advanced cab and visual system. The rotorcraft system integration simulator is composed of the advanced cab and visual system and the rotorcraft system motion generator, and is part of an existing simulation facility. User applications for the simulator include rotorcraft design development, product improvement, threat assessment, and accident investigation.

    Robotic Ultrasound Tomography and Collaborative Control

    Ultrasound computed tomography (USCT) offers quantitative anatomical tissue characterization for cancer detection, and has shown similar diagnostic power to MRI on ex vivo prostate tissue. While most USCT research and commercial development has focused on submerging target anatomy in a transducer-lined cylindrical water tank, this approach is not practical for imaging deep anatomy like the prostate, and an alternative acquisition system using aligned abdominal and endolumenal ultrasound probes is required. This work outlines a clinical workflow, calibration scheme, and motion framework for an innovative dual-robotic USCT acquisition system specific to in vivo prostate imaging: one arm wielding a linear abdominal probe, the other wielding a linear transrectal ultrasound (TRUS) probe. After a three-way calibration, the robotic system works to autonomously keep the abdominal probe collinear with the physician-rotated TRUS probe using a hybrid force-position convex contour tracking scheme, while impedance control enforces its gentle contact with the patient's pubic region for capturing the transmission ultrasound slices needed for limited-angle tomographic reconstruction. TRUS rotation was induced by joystick control for precision during testing; however, collaborative control via admittance control of hand forces presents a useful workflow option to the physician. An improved robot admittance control algorithm for transparent collaborative control utilizing Kalman filtering was developed and verified to smooth robot hand guidance. Such an improvement additionally has important implications for generally alleviating ultrasonographer musculoskeletal strain through cooperatively controlled robots. The resulting dual-robotic USCT system proved repeatable and sufficiently accurate for tomography based on pelvic phantom testing. Future steps in system verification and validation are discussed, as is incorporation into feasibility studies to test the potential and utility of the system for future prostate malignancy diagnosis and staging in vivo.
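
    The following is a minimal one-dimensional sketch of admittance-based hand guidance with a Kalman-filtered force input, in the spirit of the collaborative control described above; the random-walk filter model, admittance parameters and sample time are assumptions, not the dissertation's algorithm:

    import numpy as np

    def kalman_update(x, P, z, q=1e-3, r=0.05):
        """Scalar Kalman filter step for a slowly varying hand force (random-walk model)."""
        P = P + q                      # predict
        K = P / (P + r)                # gain
        x = x + K * (z - x)            # correct with the new force measurement z
        P = (1 - K) * P
        return x, P

    def admittance_velocity(force, velocity, mass=2.0, damping=15.0, dt=0.002):
        """Discretised admittance law  m*dv/dt + b*v = F  ->  commanded velocity."""
        dv = (force - damping * velocity) / mass
        return velocity + dv * dt

    # Simulate a noisy ~5 N push and watch the commanded velocity ramp up smoothly.
    x, P, v = 0.0, 1.0, 0.0
    rng = np.random.default_rng(0)
    for _ in range(500):
        z = 5.0 + rng.normal(scale=0.5)        # noisy force-sensor reading [N]
        x, P = kalman_update(x, P, z)
        v = admittance_velocity(x, v)
    print(round(v, 3), "m/s commanded from a filtered ~5 N push")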

    Force feedback in remote tele-manipulation

    PhD Thesis. It is becoming increasingly necessary to carry out manual operations in environments which are hazardous to humans, using remote manipulator systems that can extend the operator's reach. However, manual dexterity can become severely impaired due to the complex relationship that exists between the operator, the remote manipulator system and the task. Under such circumstances, the introduction of force feedback is considered a desirable feature, and is particularly important when attempting to carry out complex assembly operations. The dynamic interaction in the man-machine system can significantly influence performance, and in the past evaluation has been largely by comparative assessment. In this study, an experimental remote manipulator system, or tele-manipulator system, has been developed which consists of three electrically linked planar manipulator arms, each with three degrees of freedom. An articulated 'master' arm is used to control an identical 'slave' arm and, independently, a second kinematically and dynamically dissimilar slave arm. Fully resolved Generalized Control has been demonstrated using a high-speed computer to carry out the necessary position and force transformations between dissimilar master and slave arms in real time. Simulation of a one-degree-of-freedom master-slave system has also been carried out, which includes a simple model of the human operator and a task based upon a rigid stop. The results show good agreement with parallel experimental tests, and have provided a firm foundation for developing a fully resolved position/position control scheme, and a unique way of backdriving the master arm. Preliminary tests were based on a peg-in-hole transfer task, and have identified the effect of force reflection ratio on performance. More recently a novel crank-turning task has been developed to investigate the interaction of system parameters on overall performance. The results obtained from these experimental studies, backed up by simulation, demonstrate the potential of computer-augmented control of remote manipulator systems. The directions for future work include development of real-time control of tele-robotic systems and research into the overall man-machine interaction.
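
    As an illustration of the position/position bilateral scheme and the force reflection ratio studied above, a single-joint control cycle might be sketched as follows (the gains and the reflection ratio value are assumed, not taken from the thesis):

    def bilateral_step(q_master, q_slave, dq_m=0.0, dq_s=0.0,
                       kp=50.0, kd=2.0, reflection_ratio=0.5):
        """Return (slave_torque, master_feedback_torque) for one control cycle."""
        error = q_master - q_slave
        slave_torque = kp * error + kd * (dq_m - dq_s)        # slave tracks the master
        master_torque = -reflection_ratio * slave_torque      # reflected back to the operator
        return slave_torque, master_torque

    # Master leads the slave by 0.05 rad; the operator feels a scaled opposing torque.
    print(bilateral_step(q_master=0.30, q_slave=0.25))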

    A Distributed System for Robot Manipulator Control, NSF Grant ECS-11879 Fourth Report

    This is the fourth annual report, representing our last year's work under the current grant. This work was directed to the development of a distributed computer architecture to function as a force and motion server to a robot system. In the course of this work we developed a compliant contact sensor to provide for transitions between position and force control; developed an end-effector capable of securing a stable grasp on an object and a theory of grasping; developed and built a controller which minimizes control delays; explored parallel kinematics algorithms for the controller; developed a consistent approach to the definition of motion both in joint coordinates and in Cartesian coordinates; and developed a symbolic simplification software package to generate the dynamics equations of a manipulator such that the calculations may be split between background and foreground.
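
    The transition between position and force control signalled by the compliant contact sensor can be sketched, purely for illustration, as a mode switch on the measured contact force; the threshold, gains and force set-point below are assumptions, not the grant's controller:

    def control_mode_step(x, x_des, f_meas, f_des=2.0,
                          contact_threshold=0.2, kp_pos=5.0, kp_force=0.01):
        """Return (mode, velocity_command) for one servo cycle."""
        if abs(f_meas) < contact_threshold:          # free space: servo position
            return "position", kp_pos * (x_des - x)
        # compliant sensor reports contact: regulate force along that direction
        return "force", kp_force * (f_des - f_meas)

    print(control_mode_step(x=0.10, x_des=0.12, f_meas=0.0))   # free space
    print(control_mode_step(x=0.12, x_des=0.12, f_meas=1.1))   # in contact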

    A flexible access platform for robot-assisted minimally invasive surgery

    Advances in Minimally Invasive Surgery (MIS) are driven by the clinical demand to reduce the invasiveness of surgical procedures so that patients undergo less trauma and experience faster recoveries. These well-documented benefits of MIS have been achieved through parallel advances in the technology and instrumentation used during procedures. The new and evolving field of Flexible Access Surgery (FAS), where surgeons access the operative site through a single incision or a natural orifice incision, is being promoted as the next potential step in the evolution of surgery. In order to achieve similar levels of success and adoption as MIS, technology again has its role to play in developing new instruments to solve the unmet clinical challenges of FAS. As procedures become less invasive, these instruments should not just address the challenges presented by the complex access routes of FAS, but should also build on the recent advances in pre- and intraoperative imaging techniques to provide surgeons with new diagnostic and interventional decision-making capabilities. The main focus of this thesis is the development and applications of a flexible robotic device that is capable of providing controlled flexibility along curved pathways inside the body. The principal component of the device is its modular mechatronic joint design which utilises an embedded micromotor-tendon actuation scheme to provide independently addressable degrees of freedom and three internal working channels. Connecting multiple modules together allows a seven degree-of-freedom (DoF) flexible access platform to be constructed. The platform is intended for use as a research test-bed to explore engineering and surgical challenges of FAS. Navigation of the platform is realised using a handheld controller optimised for functionality and ergonomics, or in a "hands-free" manner via a gaze contingent control framework. Under this framework, the operator's gaze fixation point is used as feedback to close the servo control loop. The feasibility and potential of integrating multi-spectral imaging capabilities into flexible robotic devices is also demonstrated. A force adaptive servoing mechanism is developed to simplify the deployment and improve the consistency of probe-based optical imaging techniques by automatically controlling the contact force between the probe tip and target tissue. The thesis concludes with the description of two FAS case studies performed with the platform during in-vivo porcine experiments. These studies demonstrate the ability of the platform to perform large area explorations within the peritoneal cavity and to provide a stable base for the deployment of interventional instruments and imaging probes.
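
    The force adaptive servoing idea for probe deployment can be illustrated with a simple proportional-integral regulator that advances or retracts the probe so its measured tip force tracks a set-point; this sketch is an assumption-laden stand-in (1-DOF insertion model, made-up gains and set-point), not the platform's actual controller:

    class ProbeForceServo:
        def __init__(self, f_ref=0.3, kp=0.004, ki=0.02, dt=0.01):
            self.f_ref, self.kp, self.ki, self.dt = f_ref, kp, ki, dt
            self.integral = 0.0

        def step(self, f_measured):
            """Return insertion velocity [m/s]; positive advances the probe."""
            error = self.f_ref - f_measured
            self.integral += error * self.dt
            return self.kp * error + self.ki * self.integral

    servo = ProbeForceServo()
    print(servo.step(0.1))   # too little contact force -> advance the probe
    print(servo.step(0.5))   # too much force -> retract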

    Visual servo control on a humanoid robot

    Includes bibliographical references. This thesis deals with the control of a humanoid robot based on visual servoing. It seeks to confer a degree of autonomy on the robot in the achievement of tasks such as reaching a desired position, tracking and/or grasping an object. The autonomy of humanoid robots is considered crucial for the success of the numerous services that this kind of robot can render, given their ability to combine dexterity and mobility in structured, unstructured or even hazardous environments. To achieve this objective, a humanoid robot is fully modeled and the control of its locomotion, conditioned by postural balance and gait stability, is studied. The presented approach is formulated to account for all the joints of the biped robot. As a way to conform the reference commands from visual servoing to the discrete locomotion mode of the robot, this study exploits a reactive omnidirectional walking pattern generator and a visual task Jacobian redefined with respect to a floating base on the humanoid robot, instead of the stance foot. The redundancy problem stemming from the high number of degrees of freedom, coupled with the omnidirectional mobility of the robot, is handled within the task priority framework, thus allowing configuration-dependent sub-objectives such as improving reachability and manipulability and avoiding joint limits to be achieved. Beyond a kinematic formulation of visual servoing, this thesis explores a dynamic visual approach and proposes two new visual servoing laws. Lyapunov theory is used first to prove the stability and convergence of the visual closed loop, then to derive a robust adaptive controller for the combined robot-vision dynamics, yielding an ultimately uniformly bounded solution. Finally, all proposed schemes are validated in simulation and experimentally on the humanoid robot NAO.
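
    A minimal sketch of task-priority redundancy resolution of the kind mentioned above: the primary visual task is solved with a Jacobian pseudo-inverse, and a secondary joint-limit-avoidance gradient is projected into its null space so it cannot disturb the primary task. The Jacobian, gains and joint limits below are made up for illustration and are not the thesis controller:

    import numpy as np

    def task_priority_qdot(J, e, q, q_min, q_max, lam=1.0, k0=0.5):
        """q_dot = J^+ (-lam*e) + (I - J^+ J) * k0 * grad, with grad pushing joints mid-range."""
        J_pinv = np.linalg.pinv(J)
        primary = J_pinv @ (-lam * e)
        q_mid = 0.5 * (q_min + q_max)
        grad = -(q - q_mid) / (q_max - q_min) ** 2      # joint-limit-avoidance gradient
        null_proj = np.eye(J.shape[1]) - J_pinv @ J     # null-space projector of the visual task
        return primary + null_proj @ (k0 * grad)

    J = np.random.default_rng(1).normal(size=(2, 5))    # 2-D visual task, 5 joints (toy example)
    q = np.array([0.1, 1.2, -0.4, 0.0, 0.8])
    q_min, q_max = -np.pi * np.ones(5), np.pi * np.ones(5)
    print(task_priority_qdot(J, e=np.array([0.05, -0.02]), q=q, q_min=q_min, q_max=q_max))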

    Some NASA contributions to human factors engineering: A survey

    This survey presents NASA's contributions to the state of the art of human factors engineering, and indicates that these contributions have a variety of applications to nonaerospace activities. Emphasis is placed on contributions relative to man's sensory, motor, decision-making, and cognitive behavior and on applications that advance human factors technology.