4 research outputs found

    ROS-Based Unmanned Mobile Robot Platform for Agriculture

    No full text
    While demand for new high-tech solutions is rising rapidly, rural areas face difficulties such as population aging and decline. Autonomous mobile robots have therefore been emerging in the agricultural field. Worldwide, heavy investment is being made in the development of unmanned agricultural mobile robots, and modern farms expect such robots to increase productivity. However, existing mobile robot designs struggle in the agricultural work environment because of its varied and challenging conditions. Typical problems are space constraints in the agricultural work environment, the high computational complexity of the algorithms involved, and changes in the environment. To address these problems, this paper proposes a method for designing and operating a mobile robot platform for use in a greenhouse. We present a robot with two drive wheels and four casters that can operate on both paths and rails. In addition, we propose a multi-AI deep learning system to operate the robot, an algorithm to control it, and a VPN-based communication system for networking and security. The proposed method is expected to increase productivity and reduce labor costs in the agricultural work environment.
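
    As a concrete illustration of the ROS side of such a platform (a minimal sketch, not the authors' code), the snippet below publishes velocity commands to a two-wheel differential-drive base over the standard /cmd_vel Twist interface; the topic name, node name, and speed values are assumptions.

```python
# Minimal sketch: drive a differential-drive base forward via the common
# /cmd_vel Twist interface (topic, node name, and speeds are illustrative).
import rospy
from geometry_msgs.msg import Twist

def drive_forward(speed=0.2, duration=3.0):
    """Publish a constant forward velocity for `duration` seconds, then stop."""
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)            # 10 Hz control loop
    cmd = Twist()
    cmd.linear.x = speed             # m/s along the robot's forward axis
    end_time = rospy.Time.now() + rospy.Duration(duration)
    while not rospy.is_shutdown() and rospy.Time.now() < end_time:
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())             # zero velocity stops the robot

if __name__ == '__main__':
    rospy.init_node('greenhouse_drive_demo')
    drive_forward()
```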

    Distance Error Correction in Time-of-Flight Cameras Using Asynchronous Integration Time

    No full text
    A distance map captured with a time-of-flight (ToF) depth sensor suffers from fundamental problems such as ambiguous depth information on shiny or dark surfaces, optical noise, and mismatched boundaries. Severe depth errors occur on shiny and dark surfaces owing to excess reflection and excess absorption of light, respectively. Addressing this has been a challenge because of the inherent hardware limitations of ToF, which measures distance from the number of reflected photons. This study proposes a distance error correction method that uses three ToF sensors set to different integration times to resolve the ambiguity in depth information. First, the three ToF depth sensors are installed horizontally and set to different integration times to capture distance maps. After the amplitude maps and error regions are estimated from the amount of received light, the estimated error regions are refined by exploiting accurate depth information from the neighboring depth sensors that use different integration times. Moreover, we propose a new optical noise reduction filter that accounts for depth distributions biased toward one side. Experimental results verified that the proposed method overcomes these drawbacks of ToF cameras and provides enhanced distance maps.
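
    As a rough sketch of this per-pixel selection idea (not the paper's exact algorithm), the code below keeps, for each pixel, the depth value from the first sensor whose amplitude falls within a valid range and falls back to the middle integration time elsewhere; the amplitude thresholds and the assumption that the three maps are already registered are illustrative.

```python
# Minimal sketch: fuse depth maps from three ToF captures at different
# integration times by preferring pixels with reliable amplitude.
import numpy as np

def fuse_depth(depths, amplitudes, amp_min=50.0, amp_max=2000.0):
    """depths, amplitudes: lists of HxW arrays, one pair per sensor, ordered by
    increasing integration time and assumed pre-registered. Returns a fused map."""
    fused = np.full_like(depths[0], np.nan, dtype=np.float64)
    for depth, amp in zip(depths, amplitudes):
        # Accept a pixel only if it is neither too dark nor saturated
        # and has not already been filled by an earlier (shorter) exposure.
        valid = (amp > amp_min) & (amp < amp_max) & np.isnan(fused)
        fused[valid] = depth[valid]
    # Where no sensor was reliable, fall back to the middle integration time.
    fallback = np.isnan(fused)
    fused[fallback] = depths[len(depths) // 2][fallback]
    return fused
```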

    Real-Time Hand Gesture Spotting and Recognition Using RGB-D Camera and 3D Convolutional Neural Network

    No full text
    Hand gestures are a natural method of interaction between humans and computers; we use them to express meaning and thoughts in everyday conversation. Gesture-based interfaces appear in many applications across a variety of fields, such as smartphones, televisions (TVs), and video gaming. With advances in technology, hand gesture recognition is becoming an increasingly promising and attractive technique for human–computer interaction. In this paper, we propose a novel method for real-time fingertip detection and hand gesture recognition using an RGB-D camera and a 3D convolutional neural network (3DCNN). The system accurately and robustly extracts fingertip locations and recognizes gestures in real time. We demonstrate the accuracy and robustness of the interface by evaluating hand gesture recognition across a variety of gestures. In addition, we develop a tool for manipulating computer programs to show the potential of hand gesture recognition. The experimental results show that our system achieves a high level of hand gesture recognition accuracy, making it a promising approach to gesture-based human–computer interaction in the future.
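
    To make the 3DCNN component concrete, the sketch below shows a minimal PyTorch 3D convolutional classifier over short video clips; the layer sizes, clip length, input channels, and number of gesture classes are assumptions rather than the paper's architecture.

```python
# Minimal sketch: a small 3D CNN that classifies short depth/RGB clips
# into gesture classes (architecture details are illustrative).
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                     # halve time and space
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),             # global pooling over T, H, W
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                        # x: (batch, channels, frames, H, W)
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Example: classify a batch of two 16-frame 64x64 single-channel clips.
logits = Gesture3DCNN()(torch.randn(2, 1, 16, 64, 64))
print(logits.shape)                              # torch.Size([2, 10])
```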