
    Visual articulated tracking in cluttered environments

    This thesis is concerned with the state estimation of an articulated robotic manipulator during interaction with its environment. Traditionally, robot state estimation has relied on proprioceptive sensors as the single source of information about the internal state. In this thesis, we are motivated to shift the focus from proprioceptive to exteroceptive sensing, which can provide a holistic interpretation of the entire manipulation scene. When visually observing grasping tasks, the tracked manipulator is subject to visual distractions caused by the background, the manipulated object, and occlusions from other objects present in the environment. The aim of this thesis is to investigate and develop methods for the robust visual state estimation of articulated kinematic chains in cluttered environments that suffer from partial occlusions. To make these methods widely applicable to a variety of kinematic setups and unseen environments, we intentionally refrain from using prior information about the internal state of the articulated kinematic chain, and we do not explicitly model visual distractions such as the background and manipulated objects in the environment. We approach this problem with model-fitting methods, in which an articulated model is associated with the observed data using discriminative information. We explore model-fitting objectives that are robust to occlusions and unseen environments, methods to generate synthetic training data for data-driven discriminative methods, and robust optimisers to minimise the tracking objective. This thesis contributes (1) an automatic colour and depth image synthesis pipeline for data-driven learning without depending on a real articulated robot; (2) a training strategy for discriminative model-fitting objectives with an implicit representation of objects; (3) a tracking objective that is able to track occluded parts of a kinematic chain; and finally (4) a robust multi-hypothesis optimiser. These contributions are evaluated on two robotic platforms in different environments and with different manipulated and occluding objects. We demonstrate that our image synthesis pipeline generalises well to colour and depth observations of the real robot without requiring real ground-truth labelled images. While this synthesis approach introduces a visual simulation-to-reality gap, the combination of our robust tracking objective and optimiser enables stable tracking of an occluded end-effector during manipulation tasks.
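
    The combination of a robust, occlusion-aware model-fitting objective with a multi-hypothesis optimiser can be illustrated with a minimal sketch, which is not the thesis's actual pipeline: a toy two-link planar chain is fitted to noisy keypoint observations, a Huber penalty limits the influence of outliers, occluded keypoints contribute no cost, and several random restarts stand in for the multi-hypothesis search. All function names, link lengths, and noise levels below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def forward_kinematics(joint_angles, link_lengths=(0.4, 0.3)):
    """Keypoint positions (joint centres) of a planar two-link chain."""
    theta1, theta2 = joint_angles
    base = np.zeros(2)
    elbow = base + link_lengths[0] * np.array([np.cos(theta1), np.sin(theta1)])
    wrist = elbow + link_lengths[1] * np.array([np.cos(theta1 + theta2),
                                                np.sin(theta1 + theta2)])
    return np.stack([base, elbow, wrist])

def huber(r, delta=0.02):
    """Robust penalty: quadratic near zero, linear for large residuals."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta))

def tracking_objective(joint_angles, observed, visible):
    """Sum of robust residuals over keypoints that are not occluded."""
    predicted = forward_kinematics(joint_angles)
    residuals = np.linalg.norm(predicted - observed, axis=1)
    return float(np.sum(huber(residuals) * visible))

# Synthetic "observation": true pose plus noise, with the wrist occluded.
rng = np.random.default_rng(0)
true_angles = np.array([0.8, -0.5])
observed = forward_kinematics(true_angles) + 0.01 * rng.standard_normal((3, 2))
visible = np.array([1.0, 1.0, 0.0])   # occluded keypoints carry no cost

# Multi-hypothesis optimisation: several random restarts, keep the best fit.
hypotheses = [rng.uniform(-np.pi, np.pi, size=2) for _ in range(8)]
results = [minimize(tracking_objective, h, args=(observed, visible),
                    method="Nelder-Mead") for h in hypotheses]
best = min(results, key=lambda r: r.fun)
print("estimated joint angles:", best.x)
```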

    The emergence of active perception - seeking conceptual foundations

    The aim of this thesis is to explain the emergence of active perception. It takes an interdisciplinary approach, by providing the necessary conceptual foundations for active perception research - the key notions that bridge the conceptual gaps remaining in understanding emergent behaviours of active perception in the context of robotic implementations. On the one hand, the autonomous agent approach to mobile robotics claims that perception is active. On the other hand, while explanations of emergence have been extensively pursued in Artificial Life, these explanations have not yet successfully accounted for active perception.

    The main question dealt with in this thesis is how active perception systems, as behaviour-based autonomous systems, are capable of providing relatively optimal perceptual guidance in response to environmental challenges, which are somewhat unpredictable. The answer is: task-level emergence on the grounds of intricately combined computational strategies, but this notion needs further explanation.

    To study the computational strategies undertaken in active perception research, the thesis surveys twelve implementations. On the basis of the surveyed implementations, discussions in this thesis show that the perceptual task executed in support of bodily actions does not arise from the intentionality of a homunculus, but is identified automatically on the basis of the dynamic small modules of particular robotic architectures. The identified tasks are accomplished by quasi-functional modules and quasi-action modules, which maintain transformations of perceptual inputs, compute critical variables, and provide guidance of sensory-motor movements to the most relevant positions for fetching further needed information. Given the nature of these modules, active perception emerges in a different fashion from the global behaviour seen in other autonomous agent research.

    The quasi-functional modules and quasi-action modules cooperate by estimating the internal cohesion of various sources of information in support of the envisaged task. Specifically, such modules basically reflect various computational facilities for a species to single out the most important characteristics of its ecological niche. These facilities help to achieve internal cohesion, by maintaining a stepwise evaluation over the previously computed information, the required task, and the most relevant features presented in the environment.

    Apart from the above exposition of active perception, the process of task-level emergence is understood with certain principles extracted from four models of the origin of life. First, the fundamental structure of active perception is identified as stepwise computation. Second, stepwise computation is promoted from baseline to elaborate patterns, i.e. from a simple system to a combinatory system. Third, a core requirement for all stepwise computational processes is the comparison between collected and needed information, in order to ensure the contribution to the required task. Interestingly, this point indicates that active perception has an inherent pragmatist dimension.

    The understanding of emergence in the present thesis goes beyond the distinction between external processes and internal representations, which some current philosophers argue is required to explain emergence. The additional factors are links of various knowledge sources, in which the role of conceptual foundations is two-fold. On the one hand, those conceptual foundations elucidate how various knowledge sources can be linked. On the other, they make possible an interdisciplinary view of emergence. Given this two-fold role, this thesis shows the unity of task-level emergence. Thus, the thesis demonstrates a cooperation between science and philosophy for the purpose of understanding the integrity of emergent cognitive phenomena.

    Identifying relevant feature-action associations for grasping unmodelled objects

    Action affordance learning based on visual sensory information is a crucial problem within the development of cognitive agents. In this paper, we present a method for learning action affordances based on basic visual features, which can vary in their granularity, order of combination and semantic content. The method is provided with a large and structured set of visual features, motivated by the visual hierarchy in primates, and finds relevant feature-action associations automatically. We apply our method in a simulated environment on three different object sets for the case of grasp affordance learning. When presented with novel objects, we achieve a success probability of 0.90 for box objects, 0.80 for round objects and up to 0.75 for open objects. In this work, we demonstrate, in particular, the effect of choosing appropriate feature representations. We demonstrate a significant performance improvement by increasing the complexity of the perceptual representation, and thereby present important insights into how the design of the feature space influences the actual learning problem.
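
    One simple way to realise feature-action association learning of this kind is to treat it as estimating a grasp-success probability from a visual feature vector. The sketch below uses a plain logistic regression on synthetic features as a stand-in; this is an illustrative assumption, not the paper's method, which works with a structured hierarchy of visual features rather than random descriptors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: each row is a visual feature descriptor extracted at a
# candidate grasp, and the label records whether executing that grasp in
# simulation succeeded.
rng = np.random.default_rng(1)
n_samples, n_features = 500, 12
features = rng.standard_normal((n_samples, n_features))
true_weights = rng.standard_normal(n_features)
success = (features @ true_weights + 0.5 * rng.standard_normal(n_samples)) > 0

# Learn the feature-action association as a success-probability model.
model = LogisticRegression(max_iter=1000).fit(features, success)

# Rank novel grasp candidates by predicted success probability.
candidates = rng.standard_normal((10, n_features))
p_success = model.predict_proba(candidates)[:, 1]
best_grasp = int(np.argmax(p_success))
print(f"best candidate: {best_grasp}, predicted success {p_success[best_grasp]:.2f}")
```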

    Industrial Robotics

    This book covers a wide range of topics relating to advanced industrial robotics, sensors and automation technologies. Although highly technical and complex in nature, the papers presented in this book represent some of the latest cutting-edge technologies and advancements in industrial robotics technology. Topics include networking, properties of manipulators, forward and inverse robot arm kinematics, motion path-planning, machine vision and many other practical topics too numerous to list here. The authors and editor of this book wish to inspire people, especially young ones, to get involved with robotic and mechatronic engineering technology and to develop new and exciting practical applications, perhaps using the ideas and concepts presented herein.

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They are organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Proceedings of the NASA Conference on Space Telerobotics, volume 4

    Papers presented at the NASA Conference on Space Telerobotics are compiled. The theme of the conference was man-machine collaboration in space. The conference provided a forum for researchers and engineers to exchange ideas on the research and development required for the application of telerobotic technology to the space systems planned for the 1990s and beyond. Volume 4 contains papers related to the following subject areas: manipulator control; telemanipulation; flight experiments (systems and simulators); sensor-based planning; robot kinematics, dynamics, and control; robot task planning and assembly; and research activities at the NASA Langley Research Center.

    Visual motion estimation and tracking of rigid bodies by physical simulation

    This thesis applies knowledge of the physical dynamics of objects to estimating object motion from vision when estimation from vision alone fails. It differentiates itself from existing physics-based vision by building in robustness to situations where existing visual estimation tends to fail: fast motion, blur, glare, distractors, and partial or full occlusion. A real-time physics simulator is incorporated into a stochastic framework by adding several different models of how noise is injected into the dynamics. Several different algorithms are proposed and experimentally validated on two problems: motion estimation and object tracking. The performance of visual motion estimation from colour histograms of a ball moving in two dimensions is improved considerably when a physics simulator is integrated into a MAP procedure involving non-linear optimisation and RANSAC-like methods. Process noise or initial-condition noise in conjunction with physics-based dynamics results in improved robustness on hard visual problems. A particle filter applied to the task of full 6D visual tracking of the pose of an object being pushed by a robot in a table-top environment is improved on difficult visual problems by incorporating a simulator as a dynamics model and injecting noise as forces into the simulator.
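
    The idea of using a physics simulator as the dynamics model of a particle filter, with noise injected as forces, can be sketched as follows. The "simulator" here is a deliberately trivial stand-in (a bouncing point mass), and the measurement model, noise magnitudes, and particle count are illustrative assumptions rather than the thesis's actual setup.

```python
import numpy as np

rng = np.random.default_rng(2)
DT, GRAVITY, N_PARTICLES = 0.05, np.array([0.0, -9.81]), 200

def simulate_step(state, force):
    """Stand-in for a rigid-body physics engine step; state = [x, y, vx, vy].
    Noise enters the filter as a random force, not as additive state noise."""
    pos, vel = state[:2], state[2:]
    vel = vel + (GRAVITY + force) * DT
    pos = pos + vel * DT
    if pos[1] < 0.0:                      # bounce on the table plane
        pos[1], vel[1] = 0.0, -0.7 * vel[1]
    return np.concatenate([pos, vel])

def likelihood(observed_pos, state, sigma=0.05):
    """Gaussian image-measurement model on position only."""
    d = observed_pos - state[:2]
    return np.exp(-0.5 * d @ d / sigma**2)

def filter_step(particles, observed_pos):
    """One predict/update/resample cycle; observed_pos is None when occluded."""
    forces = 2.0 * rng.standard_normal((N_PARTICLES, 2))      # force noise
    particles = np.array([simulate_step(p, f) for p, f in zip(particles, forces)])
    if observed_pos is None:              # occlusion: rely on dynamics alone
        return particles
    weights = np.array([likelihood(observed_pos, p) for p in particles])
    weights += 1e-12                      # guard against all-zero weights
    weights = weights / weights.sum()
    idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
    return particles[idx]

# Initialise particles around a rough initial estimate, then track through one
# visible frame and one fully occluded frame.
particles = np.tile([0.0, 1.0, 1.0, 0.0], (N_PARTICLES, 1))
particles += 0.1 * rng.standard_normal(particles.shape)
particles = filter_step(particles, observed_pos=np.array([0.05, 0.97]))
particles = filter_step(particles, observed_pos=None)
print("pose estimate:", particles.mean(axis=0)[:2])
```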
