
    Actuators and sensors for application in agricultural robots: A review

    In recent years, with the rapid development of science and technology, agricultural robots have gradually begun to replace humans in various agricultural operations, changing traditional agricultural production methods. This not only reduces labor input but also improves production efficiency, contributing to the development of smart agriculture. This paper reviews the core technologies used by agricultural robots in non-structured environments, covering the technological progress of drive systems, control strategies, end-effectors, robotic arms, environmental perception, and other related systems. This research shows that in a non-structured agricultural environment, by using cameras, light detection and ranging (LiDAR), ultrasonic sensors, and satellite navigation equipment, and by integrating sensing, transmission, control, and operation, different types of actuators can be innovatively designed and developed to advance agricultural robots and to meet the delicate and complex handling requirements of agricultural products as operational objects, so that better productivity and standardization of agriculture can be achieved. In summary, agricultural production is developing toward a data-driven, standardized, and unmanned approach, with smart agriculture supported by actuator-driven agricultural robots. The paper concludes with a summary of the main existing technologies and challenges in the development of actuators for agricultural robots, and an outlook on the primary development directions of agricultural robots in the near future.

    Immersive Teleoperation of the Eye Gaze of Social Robots: Assessing Gaze-Contingent Control of Vergence, Yaw and Pitch of Robotic Eyes

    This paper presents a new teleoperation system – called stereo gaze-contingent steering (SGCS) – able to seamlessly control the vergence, yaw and pitch of the eyes of a humanoid robot – here an iCub robot – from the actual gaze direction of a remote pilot. The video streams captured by the cameras embedded in the mobile eyes of the iCub are fed into an HTC Vive® head-mounted display equipped with an SMI® binocular eye-tracker. The SGCS achieves an effective coupling between the eye-tracked gaze of the pilot and the robot's eye movements. SGCS both ensures a faithful reproduction of the pilot's eye movements – which is a prerequisite for the readability of the robot's gaze patterns by its interlocutor – and maintains the pilot's oculomotor visual cues – which avoids fatigue and sickness due to sensorimotor conflicts. We here assess the precision of this servo-control by asking several pilots to gaze towards known objects positioned in the remote environment. We demonstrate that vergence can be controlled with precision similar to that of the eyes' azimuth and elevation. This system opens the way for robot-mediated human interactions in the personal space, notably when objects in the shared working space are involved.
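    The geometry behind such gaze-contingent control can be illustrated with a short sketch. Given a fixation point in a head-centred frame and an interocular baseline (both values below are illustrative assumptions, not parameters taken from the paper), the shared yaw and pitch of the gaze and the vergence between the two lines of sight follow from basic trigonometry:

```python
import math

def gaze_angles(target, baseline=0.068):
    """Illustrative geometry for gaze-contingent eye control.

    target: fixation point (x, y, z) in a head-centred frame
    (x right, y up, z forward, metres). baseline: assumed
    interocular distance. Returns the cyclopean yaw and pitch
    of the gaze and the vergence angle between the two eyes'
    lines of sight, all in radians.
    """
    x, y, z = target
    yaw = math.atan2(x, z)                   # azimuth of the cyclopean gaze
    pitch = math.atan2(y, math.hypot(x, z))  # elevation
    half = baseline / 2.0
    # Horizontal angle of each eye's line of sight toward the target.
    left = math.atan2(x + half, z)
    right = math.atan2(x - half, z)
    vergence = left - right                  # larger for nearer targets
    return yaw, pitch, vergence

# A target straight ahead at 0.5 m: zero yaw and pitch, positive vergence.
yaw, pitch, verg = gaze_angles((0.0, 0.0, 0.5))
```

    A remote pilot's eye-tracker readings would play the role of `target` here; the sketch only shows how a single fixation point maps onto three controllable eye angles.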

    Review of robotic technology for strawberry production

    With an increasing world population in need of food and a limited amount of land for cultivation, higher efficiency in agricultural production, especially of fruits and vegetables, is increasingly required. The success of agricultural production in the marketplace depends on its quality and cost. The cost of labor for crop production, harvesting, and post-harvesting operations is a major portion of the overall production cost, especially for specialty crops such as strawberry. As a result, a multitude of automation technologies involving semi-autonomous and autonomous robots have been utilized, with the aim of minimizing labor costs and operation time to achieve a considerable improvement in farming efficiency and economic performance. Research and technologies for weed control, harvesting, hauling, sorting, grading, and/or packing have been generally reviewed for fruits and vegetables, yet no review has been conducted thus far specifically for robotic technology used in strawberry production. In this article, studies on strawberry robotics and their associated automation technologies are reviewed in terms of mechanical subsystems (e.g., traveling unit, handling unit, storage unit) and electronic subsystems (e.g., sensors, computer, communication, and control). Additionally, robotic technologies used in different stages of strawberry production operations are reviewed, and the robot designs for strawberry management are categorized by purpose and environment.

    Semi-autonomous wheelchair developed using a unique camera system configuration biologically inspired by equine vision

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using cameras in a configuration modeled on the vision system of a horse. This new camera configuration utilizes stereoscopic vision for 3-dimensional (3D) depth perception and mapping ahead of the wheelchair, combined with a spherical camera system for 360 degrees of monocular vision. This unique combination allows static components of an unknown environment to be mapped and any surrounding dynamic obstacles to be detected during real-time autonomous navigation, minimizing blind spots and preventing accidental collisions with people or obstacles. This novel vision system, combined with shared control strategies, provides intelligent assistive guidance during wheelchair navigation and can accompany any hands-free wheelchair control technology. In trials leading up to experiments with patients at the Royal Rehabilitation Centre (RRC) in Ryde, results have demonstrated the effectiveness of this system in assisting the user to navigate safely within the RRC whilst avoiding potential collisions. © 2011 IEEE
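    The stereoscopic depth perception such a system relies on reduces, in the idealized rectified pinhole case, to the classic relation Z = f·B/d. A minimal sketch, where the focal length and baseline are assumed illustrative values rather than the wheelchair's actual camera parameters:

```python
def stereo_depth(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Classic rectified stereo depth: Z = f * B / d.

    disparity_px: horizontal pixel offset of the same scene point
    between the left and right images. focal_px (pixels) and
    baseline_m (metres) are illustrative camera parameters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Larger disparity means the obstacle is closer to the wheelchair.
near = stereo_depth(84.0)   # 1.0 m ahead
far = stereo_depth(42.0)    # 2.0 m ahead
```

    In practice a stereo pipeline computes a dense disparity map and applies this relation per pixel to build the 3D occupancy map ahead of the chair; the spherical camera then covers the monocular 360-degree surround.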

    Ability of head-mounted display technology to improve mobility in people with low vision: a systematic review

    Purpose: The purpose of this study was to undertake a systematic literature review on how vision enhancements, implemented using head-mounted displays (HMDs), can improve mobility, orientation, and associated aspects of visual function in people with low vision. Methods: The databases Medline, CINAHL, Scopus, and Web of Science were searched for potentially relevant studies. Publications from all years until November 2018 were identified based on predefined inclusion and exclusion criteria. The data were tabulated and synthesized to produce a systematic review. Results: The search identified 28 relevant papers describing the performance of vision enhancement techniques on mobility and associated visual tasks. Simplifying visual scenes improved obstacle detection and object recognition but decreased walking speed. Minification techniques increased the size of the visual field by 3 to 5 times and improved visual search performance; however, the impact of minification on mobility has not been studied extensively. Clinical trials with commercially available devices recorded poor results relative to conventional aids. Conclusions: The effects of current vision enhancements using HMDs are mixed: they appear to reduce mobility efficiency but improve obstacle detection and object recognition. The review highlights the lack of controlled studies with robust study designs. To support the evidence base, well-designed trials with larger sample sizes that represent different types of impairments and real-life scenarios are required. Future work should focus on identifying the needs of people with different types of vision impairment and providing targeted enhancements. Translational Relevance: This literature review examines the evidence regarding the ability of HMD technology to improve mobility in people with sight loss.

    Global-referenced navigation grids for off-road vehicles and environments

    The presence of automation and information technology in agricultural environments seems no longer questionable; smart spraying, variable rate fertilizing, and automatic guidance are becoming usual management tools in modern farms. Yet such techniques are still in their infancy and offer a lively hotbed for innovation. In particular, significant research efforts are being directed toward vehicle navigation and awareness in off-road environments. However, the majority of solutions being developed are based either on occupancy grids referenced with odometry and dead-reckoning, or on GPS waypoint following, but never on both. Yet navigation in off-road environments benefits greatly from both approaches: perception data effectively condensed in regular grids, and global references for every cell of the grid. This research proposes a framework to build globally referenced navigation grids by combining three-dimensional stereo vision with satellite-based global positioning. The construction process entails the in-field recording of perceptual information plus the geodetic coordinates of the vehicle at every image acquisition position, in addition to other basic data such as velocity, heading, and GPS quality indices. The creation of local grids occurs in real time right after the stereo images have been captured by the vehicle in the field, but the final assembly of universal grids takes place after the acquisition phase is finished. Vehicle-fixed individual grids are then superposed onto the global grid, transferring original perception data to universal cells expressed in Local Tangent Plane coordinates. Global referencing allows the discontinuous appendage of data, so navigation grids can be completed and updated over time across multiple mapping sessions. This methodology was validated in a commercial vineyard, where several universal grids of the crops were generated. Vine rows were correctly reconstructed, although some difficulties appeared around the headland turns as a consequence of unreliable heading estimations. Navigation information conveyed through globally referenced regular grids turned out to be a powerful tool for upcoming practical implementations within agricultural robotics. (C) 2011 Elsevier B.V. All rights reserved.
    The author would like to thank Juan Jose Pena Suarez and Montano Perez Teruel for their assistance in the preparation of the prototype vehicle, Veronica Saiz Rubio for her help during most of the field experiments, Ratul Banerjee for his contribution in the development of software, and Luis Gil-Orozco Esteve for granting permission to perform multiple tests in the vineyards of his winery Finca Ardal. Gratitude is also extended to the Spanish Ministry of Science and Innovation for funding this research through project AGL2009-11731.
    Rovira Más, F. (2011). Global-referenced navigation grids for off-road vehicles and environments. Robotics and Autonomous Systems, 60(2), 278–287. https://doi.org/10.1016/j.robot.2011.11.007
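    The Local Tangent Plane coordinates mentioned above are conventionally obtained by converting geodetic positions to Earth-centred, Earth-fixed (ECEF) coordinates and then rotating into an east-north-up (ENU) frame at a reference point. A minimal sketch using standard WGS-84 constants (the function names are illustrative, not from the paper):

```python
import math

# Standard WGS-84 ellipsoid constants
A = 6378137.0           # semi-major axis (m)
E2 = 6.69437999014e-3   # first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Convert geodetic coordinates (radians, metres) to ECEF (metres)."""
    s, c = math.sin(lat), math.cos(lat)
    n = A / math.sqrt(1.0 - E2 * s * s)   # prime vertical radius of curvature
    return ((n + h) * c * math.cos(lon),
            (n + h) * c * math.sin(lon),
            (n * (1.0 - E2) + h) * s)

def ecef_to_enu(p, ref_lat, ref_lon, ref_h):
    """Express ECEF point p in the Local Tangent Plane (ENU) at the reference."""
    x0, y0, z0 = geodetic_to_ecef(ref_lat, ref_lon, ref_h)
    dx, dy, dz = p[0] - x0, p[1] - y0, p[2] - z0
    sl, cl = math.sin(ref_lat), math.cos(ref_lat)
    so, co = math.sin(ref_lon), math.cos(ref_lon)
    east = -so * dx + co * dy
    north = -sl * co * dx - sl * so * dy + cl * dz
    up = cl * co * dx + cl * so * dy + sl * dz
    return east, north, up
```

    With a conversion of this kind, every cell of a vehicle-fixed local grid captured at a GPS-tagged pose can be re-expressed in one shared tangent-plane frame, which is what makes appending data across separate mapping sessions possible.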

    Application of augmented reality and robotic technology in broadcasting: A survey

    As an innovative technique, Augmented Reality (AR) has been gradually deployed in the broadcast, videography, and cinematography industries. Virtual graphics generated by AR are dynamic and are overlaid on the real environment, so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models on a broadcasting scene in order to enhance the performance of broadcasting. Recently, advanced robotic technologies have been deployed in camera shooting systems to create robotic cameramen, so that the performance of AR broadcasting can be further improved, which is highlighted in this paper.