96 research outputs found
Intelligent vision-based navigation system for mobile robot: A technological review
Vision systems are becoming increasingly important. As computing technology advances, they have been widely adopted in many industrial and service sectors. One critical application of vision systems is navigating mobile robots safely, which requires several technological elements. This article reviews recent research on intelligent vision-based navigation systems for mobile robots, covering the use of mobile robots in sectors such as manufacturing, warehousing, agriculture, outdoor navigation, and other services. Multiple intelligent algorithms used in developing robot vision systems are also reviewed.
An intelligent, free-flying robot
The ground-based demonstration of the extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) that have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and checkout; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.
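The seven autonomous steps above amount to a mission state machine. As a minimal illustration (the phase names and transition table below are assumptions for this sketch, not the Retriever's actual flight software):

```python
from enum import Enum, auto

class Phase(Enum):
    """Hypothetical mission phases mirroring the seven steps in the abstract."""
    CHECKOUT = auto()
    SEARCH = auto()
    RENDEZVOUS = auto()
    AVOID = auto()            # obstacle avoidance interleaves with rendezvous
    GRAPPLE = auto()
    TRANSFER = auto()
    RETURN_TO_BASE = auto()
    DONE = auto()

# Nominal forward transitions; obstacle avoidance re-enters rendezvous.
NEXT = {
    Phase.CHECKOUT: Phase.SEARCH,
    Phase.SEARCH: Phase.RENDEZVOUS,
    Phase.RENDEZVOUS: Phase.GRAPPLE,
    Phase.AVOID: Phase.RENDEZVOUS,
    Phase.GRAPPLE: Phase.TRANSFER,
    Phase.TRANSFER: Phase.RETURN_TO_BASE,
    Phase.RETURN_TO_BASE: Phase.DONE,
}

def run_mission(obstacle_during_rendezvous=False):
    """Step through the mission, optionally detouring for an obstacle."""
    phase, trace = Phase.CHECKOUT, []
    while phase is not Phase.DONE:
        trace.append(phase)
        if phase is Phase.RENDEZVOUS and obstacle_during_rendezvous:
            phase, obstacle_during_rendezvous = Phase.AVOID, False
        else:
            phase = NEXT[phase]
    return trace
```

Obstacle avoidance is modeled as a detour state that re-enters the rendezvous phase, matching the abstract's placement of step (4) between tracking and grappling.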
Survey of computer vision algorithms and applications for unmanned aerial vehicles
This paper presents a complete review of computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of relevant technologies for UAVs, such as component miniaturization, increased computational capabilities, and advances in computer vision techniques, has enabled important progress in UAV technologies and applications. In particular, computer vision technologies integrated in UAVs enable cutting-edge approaches to aerial perception difficulties, such as visual navigation algorithms, obstacle detection and avoidance, and aerial decision-making. These technologies have opened a wide spectrum of UAV applications beyond classic military and defense purposes. Unmanned Aerial Vehicles and Computer Vision are common topics in expert systems, and thanks to recent advances in perception technologies, modern intelligent applications have been developed to enhance autonomous UAV positioning or to avoid aerial collisions automatically, among others. The presented survey therefore covers artificial perception applications that represent important recent advances in the expert system field related to Unmanned Aerial Vehicles. The most significant advances in this field are presented, addressing fundamental technical problems such as visual odometry, obstacle detection, mapping, and localization. These works are analyzed based on their capabilities and potential utility, and the applications and UAVs are divided and categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).
Aerospace medicine and biology: A continuing bibliography with indexes (supplement 344)
This bibliography lists 125 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System during January 1989. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance.
Intelligent vision-based navigation system
This thesis presents a complete vision-based navigation system that can plan and follow an obstacle-avoiding path to a desired destination on the basis of an internal map updated with information gathered from its visual sensor.
For vision-based self-localization, the system uses new floor-edge-specific filters for detecting floor edges and their pose, a new algorithm for determining the orientation of the robot, and a new procedure for selecting the initial positions in the self-localization procedure. Self-localization is based on matching visually detected features with those stored in a prior map.
For planning, the system demonstrates for the first time a real-world application of the neural-resistive grid method to robot navigation. The neural-resistive grid is modified with a new connectivity scheme that allows the representation of the collision-free space of a robot with finite dimensions via divergent connections between the spatial memory layer and the neuro-resistive grid layer.
A new control system is proposed. It uses a Smith Predictor architecture that has been modified for navigation applications and for the intermittent delayed feedback typical of artificial vision. A receding horizon control strategy is implemented using Normalised Radial Basis Function nets as path encoders, to ensure continuous motion during the delay between measurements.
The system is tested in a simplified environment where an obstacle placed anywhere is detected visually and integrated into the path planning process. The results show the validity of the control concept and the crucial importance of a robust vision-based self-localization process.
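The receding-horizon controller described above encodes the planned path with Normalised Radial Basis Function nets so the robot can keep moving between delayed vision measurements. The thesis implementation is not reproduced here; a minimal sketch of such a path encoder (the basis count, widths, and least-squares fit are assumptions for illustration) could look like:

```python
import numpy as np

def nrbf_features(s, centers, width):
    """Normalised Gaussian RBF activations for path parameter s in [0, 1]."""
    phi = np.exp(-((np.atleast_1d(s)[:, None] - centers) ** 2) / (2 * width ** 2))
    return phi / phi.sum(axis=1, keepdims=True)  # normalise to a partition of unity

def fit_path_encoder(waypoints, n_basis=8, width=0.08):
    """Fit RBF weights so the net reproduces the planned waypoints."""
    s = np.linspace(0.0, 1.0, len(waypoints))
    Phi = nrbf_features(s, np.linspace(0, 1, n_basis), width)
    W, *_ = np.linalg.lstsq(Phi, np.asarray(waypoints, float), rcond=None)
    return W

def eval_path(W, s, n_basis=8, width=0.08):
    """Smooth pose along the encoded path, usable between delayed vision fixes."""
    return nrbf_features(s, np.linspace(0, 1, n_basis), width) @ W
```

Because the activations are normalised to sum to one, the encoded path varies smoothly in the path parameter, so the controller can query `eval_path` at any intermediate point while waiting for the next visual measurement.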
Design, Integration, and Field Evaluation of a Robotic Blossom Thinning System for Tree Fruit Crops
The US apple industry relies heavily on a semi-skilled manual labor force for essential field operations such as training, pruning, blossom and green fruit thinning, and harvesting. Blossom thinning is one of the crucial crop load management practices for achieving the desired crop load, fruit quality, and return bloom. While several techniques, such as chemical and mechanical thinning, are available for large-scale blossom thinning, these approaches often yield unpredictable thinning results and may damage the canopy, spurs, and leaf tissue. Hence, growers still depend on laborious and expensive manual hand blossom thinning for desired thinning outcomes. This research presents a robotic solution for blossom thinning in apple orchards using a computer vision system with artificial intelligence, a six-degrees-of-freedom robotic manipulator, and an electrically actuated miniature end-effector. The integrated robotic system was evaluated in a commercial apple orchard and showed promising results for targeted and selective blossom thinning. Two thinning approaches, center and boundary thinning, were investigated to evaluate the system's ability to remove varying proportions of flowers from apple flower clusters. During boundary thinning the end-effector was actuated around the cluster boundary, while center thinning involved end-effector actuation only at the cluster centroid for a fixed duration of 2 seconds. The boundary thinning approach thinned 67.2% of flowers from the targeted clusters with a cycle time of 9.0 seconds per cluster, whereas the center thinning approach thinned 59.4% of flowers with a cycle time of 7.2 seconds per cluster. When commercially adopted, the proposed system could help address problems faced by apple growers with current hand, chemical, and mechanical blossom thinning approaches.
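The reported cycle times translate directly into per-cluster throughput. A quick back-of-the-envelope comparison (note that this ignores travel and perception time between clusters, so real field rates would be lower):

```python
def throughput(cycle_s):
    """Clusters processed per hour at a given cycle time (seconds per cluster)."""
    return 3600.0 / cycle_s

# Figures reported in the abstract
boundary = {"thinned_pct": 67.2, "cycle_s": 9.0}
center   = {"thinned_pct": 59.4, "cycle_s": 7.2}

for name, m in [("boundary", boundary), ("center", center)]:
    rate = m["thinned_pct"] / m["cycle_s"]  # crude %-removed-per-second proxy
    print(f"{name}: {throughput(m['cycle_s']):.0f} clusters/h, "
          f"{rate:.2f} %/s removal proxy")
```

By this crude measure, center thinning processes more clusters per hour (500 vs. 400) but removes a smaller share of flowers per cluster, which matches the trade-off the abstract describes.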
MonoSLAM: Real-time single camera SLAM
Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles
Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems.
In this thesis, lidar-based obstacle detection and recognition in agricultural environments have been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles.
For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles.
The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
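The abstract mentions occupancy grid mapping to fuse detections globally along the vehicle path. A minimal log-odds grid, sketched below, shows the general technique (the cell size, sensor model `p_hit`, and 2-D layout are assumptions for illustration, not the thesis implementation):

```python
import numpy as np

def logodds(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    """Minimal log-odds occupancy grid for fusing point-wise obstacle detections."""
    def __init__(self, shape, cell_size=0.1):
        self.cell_size = cell_size
        self.L = np.zeros(shape)  # log-odds; 0 corresponds to p = 0.5 (unknown)

    def update(self, xy_points, p_hit=0.7):
        """Bayesian update: each detected point raises its cell's log-odds."""
        idx = (np.asarray(xy_points) / self.cell_size).astype(int)
        for i, j in idx:
            self.L[i, j] += logodds(p_hit)

    def probability(self):
        """Recover per-cell occupancy probabilities from the log-odds map."""
        return 1.0 / (1.0 + np.exp(-self.L))
```

Repeated detections of the same cell accumulate log-odds, so transient false positives stay near p = 0.5 while persistent obstacles saturate toward 1; a symmetric free-space update (p below 0.5) along each lidar ray would complete the classic algorithm.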