
    Design and control of smart robotic systems for life-science applications: biological sample preparation and strawberry harvesting

    This thesis aims to contribute knowledge to support full automation in life-science applications, covering the design, development, control and integration of robotic systems for sample preparation and strawberry harvesting, and is divided into two parts. Part I presents the development of robotic systems for the preparation of fungal samples for Fourier transform infrared (FTIR) spectroscopy. The first step in this part developed a fully automated robot for homogenization of fungal samples using ultrasonication. The platform was constructed from a modified inexpensive 3D printer, equipped with a camera to distinguish sample wells from blank wells. Machine vision was also used to quantify the fungal homogenization process using model fitting, suggesting that the relation between homogeneity level and ultrasonication time is well fitted by exponential decay equations. Moreover, a feedback control strategy was proposed that used the standard deviation of local homogeneity values to determine the ultrasonication termination time. The second step extended the first to a fully automated robot for the whole preparation process of fungal samples for FTIR spectroscopy by adding a newly designed centrifuge and liquid-handling module for sample washing, concentration and spotting. The new system used machine vision with deep learning to identify the labware settings, freeing users from entering the labware information manually. Part II of the thesis deals with robotic strawberry harvesting. This part can be further divided into three stages. i) The first stage designed a novel cable-driven gripper with sensing capabilities, which has a high tolerance to positional errors and can reduce picking time with a storage container. The gripper uses fingers to form a closed space that can open to capture a fruit and close to push the stem to the cutting area. 
Equipped with internal sensors, the gripper is able to control a robotic arm to correct for positional errors introduced by the vision system, improving robustness. The gripper and a detection method based on color thresholding were integrated into a complete system for strawberry harvesting. ii) The second stage introduced improvements and updates to the first stage, with the main focus on addressing the challenges of unstructured environments by introducing a light-adaptive color-thresholding method for vision and a novel obstacle-separation algorithm for manipulation. At this stage, the new fully integrated strawberry-harvesting system with dual manipulators was capable of picking strawberries continuously in polytunnels. The main scientific contribution of this stage is the novel obstacle-separation path-planning algorithm, which is fundamentally different from traditional path planning, where obstacles are typically avoided. The algorithm uses the gripper to push aside surrounding obstacles from an entrance, thus clearing the way for it to swallow the target strawberry. Improvements were also made to the gripper, the arm, and the control. iii) The third stage improved the obstacle-separation method by introducing a zig-zag push for both horizontal and upward directions and a novel dragging operation to separate upper obstacles from the target. The zig-zag push helps the gripper capture a target because the generated shaking motion can break the static contact force between the target and obstacles. The dragging operation addresses the issue of mis-capturing obstacles located above the target: the gripper drags the target to a place with fewer obstacles and then pushes back to move the obstacles aside for further detachment. 
The separation paths are determined by the number and distribution of obstacles, based on the downsampled point cloud in the region of interest.
Norwegian University of Life Sciences
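The homogenization monitoring described in Part I (an exponential-decay fit of homogeneity against sonication time, plus a stopping rule based on the standard deviation of local homogeneity values) can be sketched in a few lines. All names, the log-linear fitting shortcut, and the threshold value are illustrative assumptions, not the thesis's implementation:

```python
import math
import statistics

def fit_decay_rate(times, values, v_inf=0.0):
    """Fit v(t) = v_inf + (v(0) - v_inf) * exp(-k * t) and return k.

    Uses a log-linear least-squares shortcut through the origin;
    a toy stand-in for the thesis's model fitting.
    """
    v0 = values[0]
    pts = [(t, math.log((v - v_inf) / (v0 - v_inf)))
           for t, v in zip(times, values)
           if (v - v_inf) / (v0 - v_inf) > 0]
    return -sum(t * y for t, y in pts) / sum(t * t for t, _ in pts)

def should_stop(local_homogeneity, sigma_max=0.05):
    """Terminate ultrasonication once local homogeneity values
    agree to within sigma_max (an assumed threshold)."""
    return statistics.stdev(local_homogeneity) < sigma_max
```

Here `fit_decay_rate` recovers a decay constant from sampled frames, and `should_stop` plays the role of the feedback termination criterion.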

    Reasoning and understanding grasp affordances for robot manipulation

    This doctoral research focuses on developing new methods that enable an artificial agent to grasp and manipulate objects autonomously. More specifically, we use the concept of affordances to learn and generalise robot grasping and manipulation techniques. [75] defined affordances as the ability of an agent to perform a certain action with an object in a given environment. In robotics, affordances define the possibility of an agent performing actions with an object. Therefore, by understanding the relation between actions, objects and the effects of these actions, the agent understands the task at hand, giving the robot the potential to bridge perception to action. The significance of affordances in robotics has been studied from varied perspectives, such as psychology and the cognitive sciences. Many efforts have been made to pragmatically employ the concept of affordances, as it offers an artificial agent the potential to perform tasks autonomously. We start by reviewing and finding common ground amongst the different strategies that use affordances for robotic tasks. We build on the identified common ground to provide guidance on including the concept of affordances as a medium to boost the autonomy of an artificial agent. To this end, we outline common design choices for building an affordance relation and their implications for the generalisation capabilities of the agent when facing previously unseen scenarios. Based on our exhaustive review, we conclude that prior research on object affordance detection is effective; however, among other issues, it has the following technical gaps: (i) the methods are limited to a single object ↔ affordance hypothesis, (ii) they cannot guarantee task completion or any level of performance for the manipulation task performed alone, and (iii) neither can they do so in collaboration with other agents. In this research thesis, we propose solutions to these technical challenges. 
In an incremental fashion, we start by addressing the limited generalisation capabilities of the then state-of-the-art methods by strengthening the perception-to-action connection through the construction of a Knowledge Base (KB). We then leverage the information encapsulated in the KB to design and implement a reasoning and understanding method based on a statistical relational learner (SRL) that allows us to cope with uncertainty in testing environments and thus improve generalisation capabilities in affordance-aware manipulation tasks. The KB in conjunction with our SRL forms the basis of our designed solutions that guarantee task completion when the robot is performing a task alone as well as in collaboration with other agents. We finally expose and discuss a range of interesting avenues that have the potential to advance the capabilities of a robotic agent through the use of the concept of affordances for manipulation tasks. A summary of the contributions of this thesis can be found at: https://bit.ly/grasp_affordance_reasonin
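Gap (i) above, the restriction to a single object ↔ affordance hypothesis, is easiest to see with a toy knowledge base in which one object maps to several candidate actions. The schema and all entries below are invented for illustration and do not reflect the thesis's actual KB:

```python
from collections import namedtuple

# An affordance relation links an object, an action, and the
# expected effect of applying that action to the object.
Affordance = namedtuple("Affordance", ["obj", "action", "effect"])

class AffordanceKB:
    """Toy knowledge base allowing multiple affordance hypotheses per object."""

    def __init__(self):
        self.relations = []

    def add(self, obj, action, effect):
        self.relations.append(Affordance(obj, action, effect))

    def actions_for(self, obj):
        # Unlike a single object <-> affordance hypothesis, one object
        # may support several candidate actions.
        return [r.action for r in self.relations if r.obj == obj]

kb = AffordanceKB()
kb.add("mug", "grasp-handle", "held")
kb.add("mug", "push", "moved")
kb.add("knife", "grasp-blade-safe", "held")
```

A reasoning layer (such as the SRL mentioned above) would then rank these candidate actions under uncertainty rather than commit to a single hypothesis.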

    Collaborative and Cooperative Robotics Applications using Visual Perception

    The objective of this Thesis is to develop novel integrated strategies for collaborative and cooperative robotic applications. Commonly, industrial robots operate in structured environments and in work cells separated from human operators. Nowadays, collaborative robots are able to share the workspace and collaborate with humans or other robots to perform complex tasks. These robots often operate in an unstructured environment, where they need sensors and algorithms to obtain information about environmental changes. Advanced vision and control techniques have been analyzed to evaluate their performance and their applicability to industrial tasks. Then, selected techniques have been applied for the first time in an industrial context. A Peg-in-Hole task was chosen as the first case study: it has been extensively studied but still remains challenging, since it requires accuracy both in the determination of the hole poses and in the robot positioning. Two solutions have been developed and tested. Experimental results are discussed to highlight the advantages and disadvantages of each technique. Grasping partially known objects in unstructured environments is one of the most challenging issues in robotics. It is a complex task that requires addressing multiple subproblems, including object localization and grasp pose detection. Some vision techniques have also been analyzed for this class of problems, and one of them has been adapted for use in industrial scenarios. Moreover, as a second case study, a robot-to-robot object handover task in a partially structured environment, in the absence of explicit communication between the robots, has been developed and validated. Finally, the two case studies have been integrated into two real industrial setups to demonstrate the applicability of the strategies to solving industrial problems.

    Robotic Picking of Tangle-prone Materials (with Applications to Agriculture).

    The picking of one or more objects from an unsorted pile continues to be non-trivial for robotic systems. This is especially so when the pile consists of individual items that tangle with one another, causing more to be picked out than desired. One of the key features of such tangle-prone materials (e.g., herbs, salads) is the presence of protrusions (e.g., leaves) extending out from the main body of items in the pile. This thesis explores the issue of picking excess mass due to entanglement, as occurs in bins of tangle-prone materials (TPs), especially in the context of a one-shot mass-constrained robotic bin-picking task. Specifically, it proposes a human-inspired entanglement reduction method for making the picking of TPs more predictable. The primary approach is to directly counter entanglement through pile interaction, with the aim of reducing it to a level where the picked mass is predictable, instead of avoiding entanglement by picking from collision- or entanglement-free points or regions. Taking this perspective, several contributions are presented that (i) improve the understanding of the phenomenon of entanglement and (ii) reduce the picking error (PE) by effectively countering entanglement in a TP pile. First, the thesis studies the mechanics of a variety of TPs, improving the understanding of the phenomenon of entanglement as observed in TP bins. It reports experiments with a real robot in which picking TPs with different protrusion lengths (PLs) results in up to a 76% increase in picked-mass variance, suggesting that PL is an informative feature in the design of picking strategies. Moreover, to counter the inherent entanglement in a TP pile, it proposes a new Spread-and-Pick (SnP) approach that significantly reduces entanglement, making picking more consistent. 
Compared to prior approaches that seek to pick from a tangle-free point in the pile, the proposed method results in a decrease in PE of up to 51% and shows good generalisation to previously unseen TPs.
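The core Spread-and-Pick idea, interacting with the pile to reduce entanglement until the picked mass becomes predictable, can be sketched as a simple loop. The entanglement estimate, threshold, budget, and error metric below are illustrative placeholders, not the thesis's actual SnP algorithm or PE definition:

```python
def picking_error(picked_mass, target_mass):
    """Relative error of a one-shot mass-constrained pick (assumed metric)."""
    return abs(picked_mass - target_mass) / target_mass

def spread_and_pick(pile, spread, pick, entanglement, eps=0.1, max_spreads=5):
    """Spread the pile until estimated entanglement drops below eps
    (or a spread budget is exhausted), then perform the one-shot pick."""
    for _ in range(max_spreads):
        if entanglement(pile) <= eps:
            break
        pile = spread(pile)
    return pick(pile)
```

The key design choice is that spreading directly counters entanglement before the pick, rather than searching for an entanglement-free grasp point.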

    Tactile Perception And Visuotactile Integration For Robotic Exploration

    As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves, in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, touch suffers from a vicious cycle of immature sensor technology: industry demand stays low, so there is little incentive to make the sensors that exist in research labs easy to manufacture and marketable. Second, the situation stems from a fear of making contact with the environment, which is avoided in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work on interactive perception and contact-rich manipulation is on the rise. Good reasons are steering the manipulation and locomotion communities' attention towards deliberate physical interaction with the environment prior to, during, and after a task. We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in environments where the ambient light or the objects therein do not lend themselves to vision, such as darkness, smoky or dusty rooms in search and rescue, underwater settings, transparent and reflective objects, and retrieving items inside a bag. Even in normal lighting conditions, during a manipulation task, the target object and fingers are usually occluded from view by the gripper. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. 
As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. Then, we investigate making the tactile modality, which is local and slow by nature, more efficient for the task by predicting the most cost-effective moves using active exploration. To combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
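The active-exploration step, choosing the most cost-effective next probe, can be illustrated by a greedy criterion that trades predicted information against travel cost. The scoring function, weight, and data layout are assumptions for illustration, not the thesis's learned cost model:

```python
import math

def next_probe(candidates, current, uncertainty, travel_weight=0.5):
    """Greedily pick the probe location with the best trade-off between
    predicted uncertainty there and the cost of moving the sensor there."""
    def score(c):
        return uncertainty[c] - travel_weight * math.dist(current, c)
    return max(candidates, key=score)
```

Under this toy criterion, a nearby moderately uncertain spot can beat a distant, slightly more uncertain one, which is the essence of cost-effective tactile exploration.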

    Contemporary Robotics

    This book is a collection of 18 chapters written by internationally recognized experts and well-known professionals of the field. The chapters contribute to diverse facets of contemporary robotics and autonomous systems. The volume is organized in four thematic parts according to the main subjects, covering recent advances in contemporary robotics. The first thematic part of the book is devoted to theoretical issues. This includes the development of algorithms for automatic trajectory generation using a redundancy resolution scheme, intelligent algorithms for robotic grasping, a modelling approach for reactive mode handling in flexible manufacturing, and the design of an advanced controller for robot manipulators. The second part of the book deals with different aspects of robot calibration and sensing. This includes geometric and threshold calibration of a multiple robotic line-vision system, robot-based inline 2D/3D quality monitoring using imaging and laser triangulation, and a study on prospective polymer composite materials for flexible tactile sensors. The third part addresses issues of mobile robots and multi-agent systems, including SLAM for mobile robots based on fusion of odometry and visual data, configuration of a localization system by a team of mobile robots, development of a generic real-time motion controller for differential mobile robots, control of the fuel cells of mobile robots, modelling of omnidirectional wheeled robots, building of a hunter-hybrid tracking environment, as well as the design of cooperative control in a distributed population-based multi-agent approach. The fourth part presents recent approaches and results in humanoid and bio-inspired robotics. 
It deals with the design of adaptive control of anthropomorphic biped gait, dynamics-based simulation for humanoid robot walking, a controller for the perceptual motor control dynamics of humans, and a biomimetic approach to controlling mechatronic structures using smart materials.

    Active Attention for Target Detection and Recognition in Robot Vision

    In this thesis, we address problems in building an efficient and reliable target detection and recognition system for robot applications, where the vision module is only one component of the overall system executing the task. The different modules interact with each other to achieve the goal. In this interaction, the role of vision is not only to recognize but also to select what and where to process. In other words, attention is an essential process for efficient task execution. We introduce attention mechanisms into the recognition system that serve the overall system at different levels of integration, and formulate four problems as follows. At the most basic level of integration, attention interacts with vision only. We consider the problem of detecting a target in an input image using a trained binary classifier of the target, and formulate target detection as a sampling process. The goal is to localize the windows containing targets in the image, and attention controls which part of the image to process next. We observe that the detector's response scores of sampling windows fade gradually from the peak-response window in the detection area, and approximate this scoring pattern with an exponential decay function. Exploiting this property, we propose an active sampling procedure to efficiently detect the target while avoiding an exhaustive and expensive search over all possible window locations. With more knowledge about the target, we describe the target as template graphs over segmented surfaces. Constraint functions are also defined to find node and edge matchings between an input scene graph and the target's template graphs. We propose to introduce recognition early into the traditional candidate-proposal process to achieve fast and reliable detection performance. Target detection then becomes finding subgraphs of the segmented input scene graph that match the template graphs. 
In this problem, attention provides the order of constraints when checking the graph matching, and a reasonable sequence can help filter out negatives early, thus reducing computational time. We put forward a sub-optimal checking order and prove that it has bounded time cost compared to the optimal checking sequence, which is not obtainable in polynomial time. Experiments on rigid and non-rigid object detection validate our pipeline. With more freedom in control, we allow the robot to actively choose another viewpoint if the current view cannot deliver a reliable detection and recognition result. We develop a practical viewpoint control system and apply it to two human-robot interaction applications, where the detection task becomes more challenging due to the additional randomness introduced by the human. Attention here represents an active process of deciding the location of the camera. Our viewpoint selection module not only considers the viewing-condition constraints of the vision algorithms but also incorporates the low-level robot kinematics to guarantee the reachability of the desired viewpoint. By selecting viewpoints quickly using a linear-time-cost score function, the system can deliver a smooth user interaction experience. Additionally, we provide a learning-from-human-demonstration method to obtain score function parameters that better serve the task's preferences. Finally, when recognition results from multiple sources under different environmental factors are available, attention means deciding how to fuse the observations to obtain reliable output. We consider the problem of object recognition in 3D using an ensemble of attribute-based classifiers. We propose two new concepts to improve classification in practical situations, and show their implementation in an approach for recognition from point-cloud data. 
First, we study the impact of the distance between the camera and the object, and propose an approach that accounts for the classifier's accuracy at different distances, incorporating distance into the decision making. Second, to avoid the difficulties arising from the lack of representative training examples when learning the optimal threshold, we set two threshold values in our attribute classifier, instead of just one, to distinguish a positive, a negative and an uncertainty class. We prove the theoretical correctness of this approach for an active agent that can observe the object multiple times.
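The two-threshold attribute classifier admits a compact sketch: instead of a single cut-off, scores falling between two thresholds are deferred to an explicit uncertainty class, which the active agent resolves by observing the object again. The threshold values here are illustrative, not those learned in the thesis:

```python
def classify(score, t_neg=0.3, t_pos=0.7):
    """Three-way decision with two thresholds: scores above t_pos are
    positive, below t_neg are negative, and anything in between is
    declared 'uncertain', triggering another observation."""
    if score >= t_pos:
        return "positive"
    if score <= t_neg:
        return "negative"
    return "uncertain"
```

With a single threshold, borderline scores would be forced into a hard decision; the middle band is what lets repeated observations drive the error down.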

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? In order to reply to these questions properly and efficiently, it is essential to establish a bidirectional coupling between external stimuli and internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.

    Automatic summary generation by selective analysis

    Thesis digitized by the Direction des bibliothèques de l'Université de Montréal