273 research outputs found

    Carnegie Mellon Team Tartan: Mission-level Robustness with Rapidly Deployed Autonomous Aerial Vehicles in the MBZIRC 2020

    For robotics systems to be used in high-risk, real-world situations, they have to be quickly deployable and robust to environmental changes, under-performing hardware, and mission subtask failures. Robots are often designed to consider a single sequence of mission events, with complex algorithms lowering individual subtask failure rates under some critical constraints. Our approach is to leverage common techniques in vision and control and encode robustness into the mission structure through outcome monitoring and recovery strategies, aided by a system infrastructure that allows for quick mission deployments under tight time constraints and no central communication. We also detail lessons in rapid field robotics development and testing. Systems were developed and evaluated through real-robot experiments at an outdoor test site in Pittsburgh, Pennsylvania, USA, as well as in the 2020 Mohamed Bin Zayed International Robotics Challenge. All competition trials were completed in fully autonomous mode without RTK-GPS. Our system earned 4th place in Challenge 2 and 7th place in the Grand Challenge, with achievements including popping five balloons (Challenge 1), successfully picking and placing a block (Challenge 2), and dispensing the most water of all teams autonomously with a UAV onto an outdoor, real fire (Challenge 3). Comment: 28 pages, 26 figures. To appear in Field Robotics, Special Issue on MBZIRC 2020.
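The mission structure described above, outcome monitoring plus per-subtask recovery, can be sketched in a few lines. This is an illustrative reconstruction, not the team's code; the subtask interface, the retry limit, and the recovery-lookup scheme are all assumptions.

```python
def run_mission(subtasks, recoveries, max_retries=2):
    """Run each subtask in sequence; on failure, invoke its recovery
    strategy (if any) and retry, so one subtask failure does not end
    the mission. subtasks is a list of (name, callable) pairs, where
    the callable returns True on success; recoveries maps a subtask
    name to a recovery callable (e.g. re-approach, re-detect target)."""
    log = []
    for name, task in subtasks:
        attempts = 0
        while True:
            if task():                      # outcome monitoring: did it succeed?
                log.append((name, "ok"))
                break
            attempts += 1
            if attempts > max_retries:
                log.append((name, "failed"))
                break                       # give up on this subtask; mission continues
            recovery = recoveries.get(name)
            if recovery:
                recovery()
            log.append((name, "retry"))
    return log
```

The key design point is that robustness lives in the mission structure (monitor, recover, retry, degrade gracefully) rather than in any single subtask algorithm.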

    Autonomous Agriculture Robot for Smart Farming

    This project aims to develop and demonstrate an intelligent ground robot capable of conducting semi-autonomous farm operations on different low-height vegetable crops, referred to as the Agriculture Application Robot (AAR). The AAR is a lightweight, solar-electric-powered robot that uses intelligent perception to detect and classify plants and their characteristics. The system also has a robotic arm for an autonomous weed-cutting process. The robot can deliver fertilizer, insecticide, herbicide, and other sprayed fluids to targets such as crops, weeds, and other pests. Besides, it provides information for future research into higher-level tasks such as yield estimation and crop and soil health monitoring. We present the design of the robot and the associated experiments, which show promising results in real-world environments. Comment: 12 pages, 8 figures; code available. To be published.

    Design and Control of Smart Robotic Systems for Life-Science Applications: Biological Sampling and Strawberry Harvesting

    This thesis aims to contribute knowledge to support full automation in life-science applications, covering the design, development, control, and integration of robotic systems for sample preparation and strawberry harvesting, and is divided into two parts. Part I covers the development of robotic systems for the preparation of fungal samples for Fourier-transform infrared (FTIR) spectroscopy. The first step in this part developed a fully automated robot for homogenization of fungal samples using ultrasonication. The platform was constructed from a modified inexpensive 3D printer, equipped with a camera to distinguish sample wells from blank wells. Machine vision was also used to quantify the fungal homogenization process via model fitting, showing that homogeneity level as a function of ultrasonication time is well fitted by exponential decay equations. Moreover, a feedback control strategy was proposed that used the standard deviation of local homogeneity values to determine the ultrasonication termination time. The second step extended the first into a fully automated robot for the whole preparation process of fungal samples for FTIR spectroscopy, adding a newly designed centrifuge and liquid-handling module for sample washing, concentration, and spotting. The new system used machine vision with deep learning to identify the labware settings, freeing users from inputting the labware information manually. Part II of the thesis deals with robotic strawberry harvesting and can be further divided into three stages. i) The first stage designed a novel cable-driven gripper with sensing capabilities, which has high tolerance to positional errors and can reduce picking time with a storage container. The gripper uses fingers to form a closed space that opens to capture a fruit and closes to push the stem to the cutting area. 
Equipped with internal sensors, the gripper is able to guide the robotic arm to correct positional errors introduced by the vision system, improving robustness. The gripper and a detection method based on color thresholding were integrated into a complete system for strawberry harvesting. ii) The second stage introduced improvements and updates to the first, with the main focus on addressing the challenges of unstructured environments through a light-adaptive color-thresholding method for vision and a novel obstacle-separation algorithm for manipulation. At this stage, the new fully integrated dual-manipulator strawberry-harvesting system was capable of picking strawberries continuously in polytunnels. The main scientific contribution of this stage is the novel obstacle-separation path-planning algorithm, which is fundamentally different from traditional path planning, where obstacles are typically avoided. The algorithm uses the gripper to push surrounding obstacles aside from an entrance, thus clearing the way for it to swallow the target strawberry. Improvements were also made to the gripper, the arm, and the control. iii) The third stage improved the obstacle-separation method by introducing a zig-zag push for both horizontal and upward directions and a novel dragging operation to separate upper obstacles from the target. The zig-zag push helps the gripper capture a target because the generated shaking motion breaks the static contact force between the target and obstacles. The dragging operation addresses the issue of mis-capturing obstacles located above the target: the gripper drags the target to a place with fewer obstacles and then pushes back to move the obstacles aside for further detachment. 
The separation paths are determined by the number and distribution of obstacles, based on the downsampled point cloud in the region of interest. Norwegian University of Life Sciences.
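The two quantitative ideas in Part I, exponential decay of inhomogeneity with sonication time and a stop rule based on the standard deviation of local homogeneity values, can be sketched as follows. Function names, parameter values, and the threshold are illustrative assumptions, not values from the thesis.

```python
import math
import statistics

def inhomogeneity(t, h0=1.0, k=0.5):
    """Exponential-decay model: remaining inhomogeneity after t seconds
    of ultrasonication, with initial level h0 and decay rate k."""
    return h0 * math.exp(-k * t)

def should_stop(local_values, threshold=0.05):
    """Feedback rule: terminate sonication once the local homogeneity
    values measured across the sample well agree closely, i.e. their
    sample standard deviation falls below a threshold."""
    return statistics.stdev(local_values) < threshold
```

In a closed-loop run, the camera would supply `local_values` after each sonication burst, and the controller would keep sonicating until `should_stop` returns True.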

    Mobile Robots - Past, Present and Future


    3D reconstruction in underwater environment using CAD model alignment with images

    Subsea assets need to be regularly inspected, maintained, and repaired. These operations are typically performed using a Remotely Operated Vehicle (ROV) controlled by a pilot stationed on a ship. To make operations safer and cheaper, it would be desirable to control the ROVs from land, avoiding the need to hire a ship and crew. As part of these operations, ROVs need to perform high-precision actions such as turning valves, which may be hard to perform in this remote setting due to latency. A semi-autonomous vehicle capable of performing high-precision tasks could facilitate the transition to fully remote operations, where people stay on land. Developing such a system requires a robust perception model capable of segmenting the assets of interest. Additionally, it is important to fuse that information with 3D models of those same assets in order to gain a spatial perception of the environment. This fusion may be useful, in the future, for planning the actions needed to interact with a given asset. The main goal of this work is to implement a model that: 1) segments different subsea assets of interest, such as valves; and 2) fuses the segmentation information with 3D models of those same assets.

    Autonomous Scene Understanding, Motion Planning, and Task Execution for Geometrically Adaptive Robotized Construction Work

    The construction industry suffers from such problems as high cost, poor quality, prolonged duration, and substandard safety. Robots have the potential to help alleviate such problems by becoming construction co-workers, yet they are seldom found operating on today’s construction sites. This is primarily due to the industry’s unstructured nature, substantial scale, and loose tolerances, which present additional challenges for robot operation. To help construction robots overcome such challenges and begin functioning as useful partners in human-robot construction teams, this research focuses on advancing two fundamental capabilities: enabling a robot to determine where it is located as it moves about a construction site, and enabling it to determine the actual pose and geometry of its workpieces so it can adapt its work plan and perform work. Specifically, this research first explores the use of a camera-marker sensor system for construction robot localization. To provide a mobile construction robot with the ability to estimate its own pose, a camera-marker sensor system was developed that is affordable, reconfigurable, and functional in GNSS-denied locations, such as urban areas and indoors. Excavation was used as a case study construction activity, where bucket tooth pose served as the key point of interest. The sensor system underwent several iterations of design and testing, and was found capable of estimating bucket tooth position with centimeter-level accuracy. This research also explores a framework to enable a construction robot to leverage its sensors and Building Information Model (BIM) to perceive and autonomously model the actual pose and geometry of its workpieces. Autonomous motion planning and execution methods were also developed and incorporated into the adaptive framework to enable a robot to adapt its work plan to the circumstances it encounters and perform work. 
The adaptive framework was implemented on a real robot and evaluated using joint filling as a case study construction task. The robot was found capable of identifying the true pose and geometry of a construction joint with an accuracy of 0.11 millimeters and 1.1 degrees. The robot also demonstrated the ability to autonomously adapt its work plan and successfully fill the joint. In all, this research is expected to serve as a basis for enabling robots to function more effectively in challenging construction environments. In particular, this work focuses on enabling robots to operate with greater functionality and versatility using methods that are generalizable to a range of construction activities. This research establishes the foundational blocks needed for humans and robots to leverage their respective strengths and function together as effective construction partners, which will lead to ubiquitous collaborative human-robot teams operating on actual construction sites, and ultimately bring the industry closer to realizing the extensive benefits of robotics. PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/149785/1/klundeen_1.pd
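The core of camera-marker localization is transform composition: given a marker's surveyed pose in the site frame and the camera's observation of the marker relative to itself, the camera (robot) pose in the site frame follows. A minimal 2D sketch under that assumption (the actual system is 3D, and all poses here are hypothetical):

```python
import math

def compose(a, b):
    """Compose two SE(2) poses a then b, each given as (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Invert an SE(2) pose: compose(p, invert(p)) is the identity."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) - y * math.cos(t),
            -t)

def camera_pose_in_site(marker_in_site, marker_in_camera):
    """T_site_camera = T_site_marker composed with inverse(T_camera_marker)."""
    return compose(marker_in_site, invert(marker_in_camera))
```

For example, a marker surveyed at x = 5 m in the site frame and observed 2 m directly ahead of the camera places the camera at x = 3 m.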

    Process automation for analytical measurements providing highly precise sample preparation in life science applications

    Laboratories providing life-science applications will gain improved analysis efficiency and reliability by automating sample pretreatment. However, commercially available automated systems are mainly suited to the standardized MTP format used for biological assays, whereas automating analytical sample pretreatment is still an unsolved challenge. The purpose of this presentation is therefore the design, realization, and evaluation of an automated system that provides multistep analytical sample pretreatment with high flexibility for easy upgrading and performance adaptation.

    Towards industrial internet of things: crankshaft monitoring, traceability and tracking using RFID

    The large number of requirements and opportunities for automatic identification in manufacturing domains such as automotive and electronics has accelerated the demand for item-level tracking using radio-frequency identification (RFID) technology. End-users are interested in implementing automatic identification systems capable of ensuring full component process history, traceability, and tracking, preventing costly downtime to rectify processing defects and product recalls. The research outlined in this paper investigates the feasibility of implementing an RFID system for the manufacturing and assembly of crankshafts. The proposed solution involves the attachment of bolts with embedded RFID functionality and the fitting of a reader antenna to an overhead gantry that spans the production line and reads and writes production data to the tags. The manufacturing, assembly, and service data captured through RFID tags and stored on a local server could further be integrated with higher-level business applications, facilitating seamless integration within the factory.
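The item-level traceability idea, each gantry read/write point appending a process record to the tag so the component carries its own history, can be sketched as follows. Station names and record fields are assumptions for illustration, not from the paper.

```python
class RfidTag:
    """Writable tag memory carried by a component (e.g. a crankshaft bolt)."""

    def __init__(self, tag_id):
        self.tag_id = tag_id
        self.records = []          # process history stored on the tag itself

    def write(self, station, status):
        """Append one process record at a read/write point on the line."""
        self.records.append({"station": station, "status": status})

    def history(self):
        """Full component process history, in production order."""
        return [(r["station"], r["status"]) for r in self.records]

def trace_defect(tag):
    """Recall triage: return stations whose recorded status was not 'ok'."""
    return [r["station"] for r in tag.records if r["status"] != "ok"]
```

Keeping the history on the tag (rather than only on a server) is what lets a component be traced even when it moves between factories or into service.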