7,345 research outputs found
Proceedings of the 4th field robot event 2006, Stuttgart/Hohenheim, Germany, 23-24th June 2006
Very extensive report on the 4th Field Robot Event, held on 23 and 24 June 2006 in Stuttgart/Hohenheim, Germany.
Technology assessment of advanced automation for space missions
Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance and urgency: autonomous world-model-based information systems; learning and hypothesis formation; natural language and other man-machine communication; space manufacturing; teleoperators and robot systems; and computer science and technology.
Vision-Based Road Detection in Automotive Systems: A Real-Time Expectation-Driven Approach
The main aim of this work is the development of a vision-based road detection
system fast enough to cope with the difficult real-time constraints imposed by
moving vehicle applications. The hardware platform, a special-purpose massively
parallel system, has been chosen to minimize system production and operational
costs. This paper presents a novel approach to expectation-driven low-level
image segmentation, which can be mapped naturally onto mesh-connected massively
parallel SIMD architectures capable of handling hierarchical data structures.
The input image is assumed to contain a distorted version of a given template;
a multiresolution stretching process is used to reshape the original template
in accordance with the acquired image content, minimizing a potential function.
The distorted template is the process output.
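The abstract leaves the exact potential function and stretching procedure to the paper itself. As a rough illustration of the coarse-to-fine idea only, the following NumPy sketch locates a (shift-only, not fully deformable) template by minimizing a sum-of-squared-differences potential at low resolution and then refining at full resolution; all function names are illustrative and not taken from the paper.

```python
import numpy as np

def pool(img, f):
    # Average-pool by an integer factor f (one image-pyramid level).
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def ssd_search(image, template, ys, xs):
    # Exhaustive minimization of a sum-of-squared-differences "potential"
    # over the candidate offsets in ys x xs; returns the best (y, x).
    th, tw = template.shape
    best, best_cost = None, np.inf
    for y in ys:
        for x in xs:
            if y < 0 or x < 0 or y + th > image.shape[0] or x + tw > image.shape[1]:
                continue
            cost = float(np.sum((image[y:y + th, x:x + tw] - template) ** 2))
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best

def coarse_to_fine(image, template, level=4):
    # Full search at 1/level resolution, then local refinement at
    # full resolution around the scaled-up coarse estimate.
    small_img, small_tpl = pool(image, level), pool(template, level)
    y0, x0 = ssd_search(small_img, small_tpl,
                        range(small_img.shape[0]), range(small_img.shape[1]))
    y0, x0 = y0 * level, x0 * level
    return ssd_search(image, template,
                      range(y0 - level, y0 + level + 1),
                      range(x0 - level, x0 + level + 1))
```

The coarse pass keeps the search cheap; the fine pass only examines a small window, which is what makes the hierarchical mapping onto a mesh-connected SIMD machine attractive.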
NASA space station automation: AI-based technology review. Executive summary
Research and Development projects in automation technology for the Space Station are described. Artificial Intelligence (AI)-based technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics.
NASA space station automation: AI-based technology review
Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI)-based automation technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.
An intelligent, free-flying robot
The ground-based demonstration of the extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) that have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.
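The seven autonomous functions read as a nominal mission sequence, which can be sketched as a minimal state machine; the phase names below are illustrative paraphrases of the abstract, not identifiers from the project.

```python
from enum import Enum, auto
from typing import Optional

class Phase(Enum):
    # The seven autonomous phases listed in the EVA Retriever abstract.
    CHECKOUT = auto()    # (1) system activation and check-out
    SEARCH = auto()      # (2) search for and acquire the target
    RENDEZVOUS = auto()  # (3) plan/execute rendezvous while tracking
    AVOID = auto()       # (4) avoid stationary and moving obstacles
    GRAPPLE = auto()     # (5) reach for and grapple the target
    TRANSFER = auto()    # (6) return to transfer the object
    RETURN = auto()      # (7) return to base

SEQUENCE = list(Phase)  # Enum preserves definition order.

def next_phase(current: Phase) -> Optional[Phase]:
    """Advance the mission one nominal step; None once the robot is home."""
    i = SEQUENCE.index(current)
    return SEQUENCE[i + 1] if i + 1 < len(SEQUENCE) else None
```

A real flight sequencer would of course allow off-nominal transitions (e.g. re-entering SEARCH after losing the target); this sketch only encodes the nominal ordering stated in the abstract.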
Development of a Neural Network-Based Camera for Tomato Harvesting Robots
Automated tomato-harvesting robots have developed rapidly in recent years. Most designs have focused on positioning the end of the robotic arm using various methods, such as combining sensors with a vision system. This project concentrated on artificial intelligence via a neural network, in order to provide a better decision-making system for a tomato-harvesting robot. The objective of this study was to develop a three-degree-of-freedom (3-DOF) tomato-harvesting robotic system complete with a gripper and a motion program. Further objectives were the development of software for tomato pattern identification, determination of the X and Y coordinates from images captured by a web camera, and detection of the tomato and determination of its ripeness by neural-network decision making. The approach is to detect the desired object using a vision system attached to the cylindrical automation system and perform image analysis. These features serve as inputs to a neural network, which is trained with a set of predetermined ripe-tomato images. The output is a command for the harvester arm to move to the harvesting position. Position determination was done by converting pixel distances in the tomato image into metric distances (mm), whereas the depth of the tomato (z direction) was found by moving the actuator system towards the calculated tomato position until the object sensor detected the presence of the tomato. The AWIsoft07 software was developed to view the harvester vision, display the captured image analysis on the harvester vision, and display the numerical analysis output and the neural network output. The 3-DOF harvester system, equipped with a specially designed tomato gripper and named the AWI2007 Tomato Harvesting Robot, was developed to act on the data from the AWIsoft07 software. Several calibrations were made to ensure the accuracy of the AWI2007 Tomato Harvesting Robot. AWIsoft07 and the AWI2007 Tomato Harvesting Robot were subjected to several harvesting tests in a laboratory environment. The harvesting results demonstrate the capability of both the software and the harvester. The AWI2007 Tomato Harvesting Robot with camera vision was able to recognize tomato ripeness intelligently via neural-network analysis and move to the harvesting position. These results represent improvements over previous tomato-harvesting systems. Therefore, the neural network based on camera vision successfully performed as the artificial intelligence for the tomato-harvesting robotic system.
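The pixel-to-millimetre conversion described in the abstract is a simple scale-factor calibration. A hedged sketch of that step follows; the function and parameter names are hypothetical and not taken from AWIsoft07.

```python
def pixels_to_mm(px, ref_px, ref_mm):
    """Convert a pixel distance to millimetres using a calibration target
    of known physical size (ref_mm) that spans ref_px pixels in the image."""
    return px * (ref_mm / ref_px)

def tomato_xy_mm(cx_px, cy_px, origin_px, mm_per_px):
    """Map a detected tomato centroid (image pixels) to X/Y arm
    coordinates (mm) relative to a calibrated image origin."""
    return ((cx_px - origin_px[0]) * mm_per_px,
            (cy_px - origin_px[1]) * mm_per_px)
```

As in the abstract, the z coordinate is not computed this way at all: the actuator simply advances toward the computed X/Y position until a proximity sensor detects the tomato.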
Machine Vision for intelligent Semi-Autonomous Transport (MV-iSAT)
The primary focus was to develop a vision-based system suitable for the navigation and mapping of an indoor, single-floor environment. Devices incorporating an iSAT system could be used as ‘self-propelled’ shopping carts in high-end retail stores or as automated luggage-routing systems in airports. The primary design feature of this system is its Field Programmable Gate Array (FPGA) core, chosen for its strengths in parallelism and pipelining. Image processing has been successfully demonstrated in real time using FPGA hardware. Remote feedback and monitoring were broadcast to a host computer via a local area network. Deadlines as short as 40 ns have been met by a custom-built memory-based arbitration scheme. It is hoped that the iSAT platform will provide the basis for future work on advanced FPGA-based machine-vision algorithms for mobile robotics.
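The abstract does not detail the custom memory-based arbitration scheme. As one plausible single-cycle model, a round-robin arbiter over competing memory clients can be sketched as follows; this is purely illustrative and not the paper's design.

```python
def round_robin_grant(requests, last_granted):
    """Model one cycle of a round-robin arbiter over n memory clients.

    requests:     list of bools, one per client, True if requesting.
    last_granted: index of the client granted in the previous cycle.
    Returns the next granted index, or None if no client is requesting.
    """
    n = len(requests)
    # Scan clients starting just after the last grant, wrapping around,
    # so every requester is served within n cycles (bounded latency).
    for offset in range(1, n + 1):
        i = (last_granted + offset) % n
        if requests[i]:
            return i
    return None
```

The bounded-latency property of round-robin arbitration is the kind of guarantee needed to meet a hard per-cycle memory deadline; in hardware this loop would be a priority-rotate circuit, not sequential code.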
Large-scale environment mapping and immersive human-robot interaction for agricultural mobile robot teleoperation
Remote operation is a crucial solution to problems encountered in
agricultural machinery operations. However, traditional video streaming control
methods fall short in overcoming the challenges of single perspective views and
the inability to obtain 3D information. In light of these issues, our research
proposes a large-scale digital map reconstruction and immersive human-machine
remote control framework for agricultural scenarios. In our methodology, a DJI
unmanned aerial vehicle (UAV) was utilized for data collection, and a novel
video segmentation approach based on feature points was introduced. To tackle
texture richness variability, an enhanced Structure from Motion (SfM) using
superpixel segmentation was implemented. This method integrates the open
Multiple View Geometry (openMVG) framework along with Local Features from
Transformers (LoFTR). The enhanced SfM results in a point cloud map, which is
further processed through Multi-View Stereo (MVS) to generate a complete map
model. For control, a closed-loop system utilizing TCP for VR control and
positioning of agricultural machinery was introduced. Our system offers a fully
visual-based immersive control method, where upon connection to the local area
network, operators can utilize VR for immersive remote control. The proposed
method enhances both the robustness and convenience of the reconstruction
process, thereby significantly facilitating operators in acquiring more
comprehensive on-site information and engaging in immersive remote control
operations. The code is available at: https://github.com/LiuTao1126/Enhance-SF