Interactive Perception for Cluttered Environments
Robotics research tends to focus on either non-contact sensing or manipulation, but rarely both. This paper explores the benefits of combining the two by addressing the problem of extracting and classifying unknown objects in a cluttered environment, such as those found in recycling and service robot applications. In the proposed approach, a pile of objects lies on a flat background, and the goal of the robot is to sift through the pile and classify each object so that it can be studied further. One object should be removed at a time with minimal disturbance to the others. We propose an algorithm, based on graph-based segmentation and stereo matching, that automatically computes a grasp point enabling the objects to be removed one at a time. The algorithm then isolates each object and classifies it by color, shape, and flexibility. Experiments on a range of objects demonstrate the ability to classify each item through interaction and label it for further use and study.
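As one way to make the singulation step concrete, the sketch below assumes a segment label image and a stereo depth map are already available, and picks the centroid of the object nearest the camera as the grasp point. All names and the toy scene are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: choosing a grasp point from an already-segmented pile.
# Stands in for the paper's graph-based segmentation + stereo pipeline;
# "nearest the camera" is approximated here by the largest mean depth
# value per segment (larger = closer in this toy convention).
import numpy as np

def pick_grasp_point(labels, depth):
    """Return the (row, col) centroid of the topmost segment.

    labels: 2D int array, 0 = background, >0 = object segments
    depth:  2D float array of stereo height estimates
    """
    best_id, best_depth = None, -np.inf
    for seg_id in np.unique(labels):
        if seg_id == 0:
            continue
        mask = labels == seg_id
        d = depth[mask].mean()          # mean height of this object
        if d > best_depth:
            best_id, best_depth = seg_id, d
    rows, cols = np.nonzero(labels == best_id)
    return int(rows.mean()), int(cols.mean())

# Toy scene: object 2 rests on top of object 1
labels = np.zeros((6, 6), dtype=int)
labels[1:5, 1:3] = 1
labels[2:4, 3:5] = 2
depth = np.where(labels == 2, 2.0, np.where(labels == 1, 1.0, 0.0))
print(pick_grasp_point(labels, depth))  # → (2, 3), centroid of object 2
```

Grasping the topmost object first is what keeps disturbance to the rest of the pile minimal; a real system would also check that the grasp point is reachable and not occluded.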
GelSlim: A High-Resolution, Compact, Robust, and Calibrated Tactile-sensing Finger
This work describes the development of a high-resolution tactile-sensing
finger for robot grasping. This finger, inspired by previous GelSight sensing
techniques, features an integration that is slimmer, more robust, and with more
homogeneous output than previous vision-based tactile sensors. To achieve a
compact integration, we redesign the optical path from illumination source to
camera by combining light guides and an arrangement of mirror reflections. We
parameterize the optical path with geometric design variables and describe the
tradeoffs between the finger thickness, the depth of field of the camera, and
the size of the tactile sensing area. The sensor sustains the wear from
continuous use -- and abuse -- in grasping tasks by combining tougher materials
for the compliant soft gel, a textured fabric skin, a structurally rigid body,
and a calibration process that maintains homogeneous illumination and contrast
of the tactile images during use. Finally, we evaluate the sensor's durability
along four metrics that track the signal quality during more than 3000 grasping
experiments.
Comment: RA-L pre-print, 8 pages.
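The abstract frames the optical redesign as a tradeoff between finger thickness, camera depth of field, and sensing area. A minimal thin-lens depth-of-field calculation of the kind that enters such a tradeoff is sketched below; the formula is the standard hyperfocal approximation, and all numbers are illustrative assumptions, not the GelSlim design values.

```python
# Hedged sketch: thin-lens depth-of-field arithmetic relevant to trading
# finger thickness (optical path length) against in-focus gel area.
# Parameter values are made up for illustration, not taken from GelSlim.
def depth_of_field(f_mm, N, s_mm, c_mm=0.005):
    """Near/far focus limits for focal length f, f-number N,
    focus distance s, circle of confusion c (all in mm)."""
    H = f_mm ** 2 / (N * c_mm) + f_mm            # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if H > s_mm else float("inf")
    return near, far

# A short optical path (thin finger) forces a small focus distance,
# which shrinks the depth of field the gel surface must fit inside.
near, far = depth_of_field(f_mm=2.0, N=2.8, s_mm=30.0)
print(f"in focus from {near:.1f} mm to {far:.1f} mm")
```

Folding the path with mirrors, as the paper describes, is one way to lengthen the optical path (and hence relax this constraint) without thickening the finger.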
Autonomous Sweet Pepper Harvesting for Protected Cropping Systems
In this letter, we present a new robotic harvester (Harvey) that can
autonomously harvest sweet pepper in protected cropping environments. Our
approach combines effective vision algorithms with a novel end-effector design
to enable successful harvesting of sweet peppers. Initial field trials in
protected cropping environments, with two cultivars, demonstrate the efficacy of
this approach, achieving a 46% success rate for unmodified crops and 58% for
modified crops. Furthermore, for the more favourable cultivar we were able
to detach 90% of sweet peppers, indicating that improvements in grasping
success would translate into greatly improved harvesting performance.
Pushbroom Stereo for High-Speed Navigation in Cluttered Environments
We present a novel stereo vision algorithm that is capable of obstacle
detection on a mobile-CPU processor at 120 frames per second. Our system
performs a subset of standard block-matching stereo processing, searching only
for obstacles at a single depth. By using an onboard IMU and state-estimator,
we can recover the position of obstacles at all other depths, building and
updating a full depth-map at framerate.
Here, we describe both the algorithm and our implementation on a high-speed,
small UAV, flying at over 20 MPH (9 m/s) close to obstacles. The system
requires no external sensing or computation and is, to the best of our
knowledge, the first high-framerate stereo detection system running onboard a
small UAV.
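The single-depth search described above can be sketched as block matching at one fixed disparity: only pixels whose left/right blocks agree at that disparity are flagged as obstacles at that depth. The sketch below is illustrative, assuming a SAD cost and a simple texture check; the function names are invented, and the IMU-based propagation of detections to other depths is omitted.

```python
# Hedged sketch of single-disparity block matching, in the spirit of
# pushbroom stereo. Names, the SAD cost, and the texture gate are
# assumptions for illustration, not the authors' implementation.
import numpy as np

def single_disparity_obstacles(left, right, d, block=3, thresh=10.0):
    """Boolean mask: True where the two views agree at fixed disparity d."""
    h, w = left.shape
    mask = np.zeros((h, w), dtype=bool)
    r = block // 2
    for y in range(r, h - r):
        for x in range(r + d, w - r):
            lb = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            rb = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
            if lb.std() < 1.0:                   # skip textureless blocks
                continue
            if np.abs(lb - rb).mean() < thresh:  # SAD match at disparity d
                mask[y, x] = True
    return mask

# Toy pair: a bright square shifted by d=2 pixels between the views
left = np.zeros((12, 12)); left[4:8, 6:10] = 200.0
right = np.zeros((12, 12)); right[4:8, 4:8] = 200.0
obstacles = single_disparity_obstacles(left, right, d=2)
print(obstacles[5, 6])  # → True (square edge matches at disparity 2)
```

Searching a single disparity instead of the full range is what makes the 120 fps budget plausible on a mobile CPU: the cost per pixel drops from O(disparity range) to O(1).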
Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control
This paper provides an overview of the current state-of-the-art in selective
harvesting robots (SHRs) and their potential for addressing the challenges of
global food production. SHRs have the potential to increase productivity,
reduce labour costs, and minimise food waste by selectively harvesting only
ripe fruits and vegetables. The paper discusses the main components of SHRs,
including perception, grasping, cutting, motion planning, and control. It also
highlights the challenges in developing SHR technologies, particularly in the
areas of robot design, motion planning, and control. It also discusses the
potential benefits of integrating AI, soft robotics, and data-driven methods
to enhance the performance and robustness of SHR systems. Finally, it
identifies several open research questions and highlights the need for
further research and development to advance SHR technologies to meet the
challenges of global food production. Overall, this paper provides a starting
point for researchers and practitioners interested in developing SHRs.
Comment: Preprint, to appear in the Journal of Field Robotics.
Human-Robot Control Strategies for the NASA/DARPA Robonaut
The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human-rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.