Biomimetic Algorithms for Coordinated Motion: Theory and Implementation
Drawing inspiration from flight behavior in biological settings (e.g.
territorial battles in dragonflies, and flocking in starlings), this paper
demonstrates two strategies for coverage and flocking. Using earlier
theoretical studies on mutual motion camouflage, an appropriate steering
control law for area coverage has been implemented in a laboratory test-bed
equipped with wheeled mobile robots and a Vicon high-speed motion capture
system. The same test-bed is also used to demonstrate another strategy (based
on local information), termed topological velocity alignment, which serves to
make agents move in the same direction. The present work illustrates the
applicability of biological inspiration in the design of multi-agent robotic
collectives.
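Topological velocity alignment, as described above, steers each agent toward the average heading of a fixed number of nearest neighbours (nearest by rank rather than by a metric radius). The sketch below is a minimal illustration of one such update, assuming a simple k-nearest-neighbour rule and a steering gain of our own choosing, not the paper's actual control law:

```python
import math

def topological_alignment_step(positions, headings, k=2, gain=0.5):
    """One illustrative update of topological velocity alignment:
    each agent steers toward the mean heading of its k nearest
    neighbours (nearest by rank, not by a fixed sensing radius)."""
    new_headings = []
    for i, (pi, hi) in enumerate(zip(positions, headings)):
        # Rank the other agents by distance and keep the k nearest.
        neighbours = sorted(
            (j for j in range(len(positions)) if j != i),
            key=lambda j: math.dist(pi, positions[j]),
        )[:k]
        # Average neighbour headings as a unit-vector sum (avoids wrap-around).
        sx = sum(math.cos(headings[j]) for j in neighbours)
        sy = sum(math.sin(headings[j]) for j in neighbours)
        target = math.atan2(sy, sx)
        # Turn a fraction of the wrapped angular error toward the target.
        err = math.atan2(math.sin(target - hi), math.cos(target - hi))
        new_headings.append(hi + gain * err)
    return new_headings
```

Iterating this update on a connected group drives all headings toward a common direction, which is the "move in the same direction" behaviour the abstract refers to.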
Hybrid Imitative Planning with Geometric and Predictive Costs in Off-road Environments
Geometric methods for solving open-world off-road navigation tasks, by
learning occupancy and metric maps, provide good generalization but can be
brittle in outdoor environments that violate their assumptions (e.g., tall
grass). Learning-based methods can directly learn collision-free behavior from
raw observations, but are difficult to integrate with standard geometry-based
pipelines. This creates an unfortunate conflict -- either use learning and lose
out on well-understood geometric navigational components, or do not use it, in
favor of extensively hand-tuned geometry-based cost maps. In this work, we
reject this dichotomy by designing the learning and non-learning-based
components in such a way that they can be effectively combined in a
self-supervised manner. Both components contribute to a planning criterion: the
learned component contributes predicted traversability as rewards, while the
geometric component contributes obstacle cost information. We instantiate and
comparatively evaluate our system in both in-distribution and
out-of-distribution environments, showing that this approach inherits
complementary gains from the learned and geometric components and significantly
outperforms either of them. Videos of our results are hosted at
https://sites.google.com/view/hybrid-imitative-plannin
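The combined planning criterion described above (learned traversability contributing rewards, geometry contributing obstacle costs) can be pictured as a toy trajectory scorer. The function names, weights, and candidate format here are our assumptions for illustration, not the paper's interface:

```python
def plan(candidates, traversability, obstacle_cost, w_r=1.0, w_c=1.0):
    """Score each candidate trajectory with a learned traversability
    reward minus a geometric obstacle cost, and return the best one.
    A sketch of the hybrid criterion, not the paper's implementation."""
    def score(traj):
        return sum(w_r * traversability(s) - w_c * obstacle_cost(s)
                   for s in traj)
    return max(candidates, key=score)
```

With a learned model that scores tall grass as traversable and a geometric map that marks it as an obstacle, the two terms pull in opposite directions, and the weighted sum arbitrates between them.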
Search Methods for Mobile Manipulator Performance Measurement
Mobile manipulators are a potential solution to the increasing need for additional flexibility and mobility in industrial robotics applications. However, they tend to lack the accuracy and precision achieved by fixed manipulators, especially in scenarios where both the manipulator and the autonomous vehicle move simultaneously. This thesis analyzes the problem of dynamically evaluating the positioning error of mobile manipulators. In particular, it investigates the use of Bayesian methods to predict the position of the end-effector in the presence of uncertainty propagated from the mobile platform. Simulations and real-world experiments are carried out to test the proposed method against a deterministic approach. These experiments are carried out on two mobile manipulators - a proof-of-concept research platform and an industrial mobile manipulator - using ROS and Gazebo. The precision of the mobile manipulator is evaluated through its ability to intercept retroreflective markers using a photoelectric sensor attached to the end-effector. Compared to the deterministic search approach, we observed improved interception capability with comparable search times, thereby enabling the effective performance measurement of the mobile manipulator.
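One way to read the prediction step above is as propagating mobile-base pose uncertainty through the arm's kinematics to obtain a distribution over end-effector positions. Below is a Monte Carlo sketch assuming a planar base with Gaussian pose noise and a fixed arm offset; all names and noise models are illustrative, not the thesis's actual formulation:

```python
import math
import random

def predict_ee_position(base_mean, base_std, arm_offset, n=5000, seed=0):
    """Sample base poses (x, y, heading) from independent Gaussians and
    push each through a rigid transform of the arm offset, yielding the
    mean and per-axis variance of the end-effector position."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        bx = rng.gauss(base_mean[0], base_std[0])
        by = rng.gauss(base_mean[1], base_std[1])
        th = rng.gauss(base_mean[2], base_std[2])
        # Rigid transform of the arm offset into the world frame.
        ex = bx + arm_offset[0] * math.cos(th) - arm_offset[1] * math.sin(th)
        ey = by + arm_offset[0] * math.sin(th) + arm_offset[1] * math.cos(th)
        xs.append(ex)
        ys.append(ey)
    mean = (sum(xs) / n, sum(ys) / n)
    var = (sum((x - mean[0]) ** 2 for x in xs) / n,
           sum((y - mean[1]) ** 2 for y in ys) / n)
    return mean, var
```

A search strategy can then target the predicted mean while sizing its search region from the predicted variance, which is the intuition behind preferring a Bayesian prediction over a deterministic one.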
The emergence of active perception - seeking conceptual foundations
The aim of this thesis is to explain the emergence of active perception. It takes an interdisciplinary approach, by providing the necessary conceptual foundations for active perception research - the key notions that bridge the conceptual gaps remaining in understanding emergent behaviours of active perception in the context of robotic implementations. On the one hand, the autonomous agent approach to mobile robotics claims that perception is active. On the other hand, while explanations of emergence have been extensively pursued in Artificial Life, these explanations have not yet successfully accounted for active perception.
The main question dealt with in this thesis is how active perception systems, as behaviour-based autonomous systems, are capable of providing relatively optimal perceptual guidance in response to environmental challenges, which are somewhat unpredictable. The answer is: task-level emergence on grounds of complicatedly combined computational strategies, but this notion needs further explanation.
To study the computational strategies undertaken in active perception research, the thesis surveys twelve implementations. On the basis of the surveyed implementations, discussions in this thesis show that the perceptual task executed in support of bodily actions does not arise from the intentionality of a homunculus, but is identified automatically on the basis of the dynamic small modules of particular robotic architectures. The identified tasks are accomplished by quasi-functional modules and quasi-action modules, which maintain transformations of perceptual inputs, compute critical variables, and provide guidance of sensory-motor movements to the most relevant positions for fetching further needed information.
Given the nature of these modules, active perception emerges in a different fashion from the global behaviour seen in other autonomous agent research. The quasi-functional modules and quasi-action modules cooperate by estimating the internal cohesion of various sources of information in support of the envisaged task. Specifically, such modules basically reflect various computational facilities for a species to single out the most important characteristics of its ecological niche. These facilities help to achieve internal cohesion, by maintaining a stepwise evaluation over the previously computed information, the required task, and the most relevant features presented in the environment.
Apart from the above exposition of active perception, the process of task-level emergence is understood with certain principles extracted from four models of life origin. First, the fundamental structure of active perception is identified as the stepwise computation. Second, stepwise computation is promoted from baseline to elaborate patterns, i.e. from a simple system to a combinatory system. Third, a core requirement for all stepwise computational processes is the comparison between collected and needed information in order to ensure the contribution to the required task. Interestingly, this point indicates that active perception has an inherent pragmatist dimension.
The understanding of emergence in the present thesis goes beyond the distinction between external processes and internal representations, which some current philosophers argue is required to explain emergence. The additional factors are links of various knowledge sources, in which the role of conceptual foundations is two-fold. On the one hand, those conceptual foundations elucidate how various knowledge sources can be linked. On the other, they make possible an interdisciplinary view of emergence. Given this two-fold role, this thesis shows the unity of task-level emergence.
Thus, the thesis demonstrates a cooperation between science and philosophy for the purpose of understanding the integrity of emergent cognitive phenomena.
Achieving reliability using behavioural modules in a robotic assembly system
The research in this thesis looks at improving the reliability of robotic assembly while still retaining the flexibility to change the system to cope with different assemblies. The lack of a truly flexible robotic assembly system presents a problem which current systems have yet to overcome. An experimental system has been designed and implemented to demonstrate the ideas presented in this work. Runs of this system have also been performed to test and assess the scheme which has been developed.
The Behaviour-based SOMASS system looks at decomposing the task into modular units, called Behavioural Modules, which reliably perform the assembly task by using variation-reducing strategies. The thesis work looks at expanding this framework to produce a system which relaxes the constraints of complete reliability within a Behavioural Module by embedding these in a reliable system architecture. This means that Behavioural Modules do not have to guarantee to successfully perform their given task but instead can perform it adequately, with occasional failures dealt with by the appropriate introduction of alternative actions.
To do this, the concepts of Exit States, the Ideal Execution Path, and Alternative Execution Paths have been described. The Exit State of a Behavioural Module gives an indication of the control path which has actually been taken during its execution. This information, along with appropriate information available to the execution system (such as sensor and planner data), allows the Ideal Execution Path and Alternative Execution Paths to be defined. These show, respectively, the best control path through the system (as determined by the system designer) and alternative control routes which can be taken when necessary.
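One possible reading of Exit States and execution paths in code: each Behavioural Module returns an Exit State, the Ideal Execution Path is followed while modules report success, and a recovery table supplies an Alternative Execution Path on failure. The module and recovery names below are invented for illustration; the thesis's actual architecture is not specified at this level of detail:

```python
def run_assembly(modules, recoveries):
    """Run Behavioural Modules along the Ideal Execution Path, switching
    to an Alternative Execution Path when an Exit State signals failure.
    A sketch of the control scheme, not the SOMASS implementation."""
    trace = []
    for name, module in modules:
        exit_state = module()              # Exit State reported by the module
        trace.append((name, exit_state))
        if exit_state != "ok":
            # Consult the table of alternative actions for this failure.
            recovery = recoveries.get((name, exit_state))
            if recovery is None:
                return trace, False        # no alternative path defined: abort
            trace.append((recovery.__name__, recovery()))
    return trace, True
```

The point of the scheme is visible in the signature: a module may fail with a labelled Exit State without aborting the assembly, so long as the surrounding architecture maps that (module, state) pair to an alternative action.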
Telelocomotion—remotely operated legged robots
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. Teleoperated systems enable human control of robotic proxies and are particularly amenable to inaccessible environments unsuitable for autonomy. Examples include emergency response, underwater manipulation, and robot-assisted minimally invasive surgery. However, teleoperation architectures have been predominantly employed in manipulation tasks, and are thus only useful when the robot is within reach of the task. This work introduces the idea of extending teleoperation to enable online human remote control of legged robots, or telelocomotion, to traverse challenging terrain. Traversing unpredictable terrain remains a challenge for autonomous legged locomotion, as demonstrated by robots commonly falling in high-profile robotics contests. Telelocomotion can reduce the risk of mission failure by leveraging the high-level understanding of human operators to command the gaits of legged robots in real time. In this work, a haptic telelocomotion interface was developed. Two within-user studies validate the proof-of-concept interface: (i) the first compared basic interfaces with the haptic interface for control of a simulated hexapedal robot at various levels of traversal complexity; (ii) the second presented a physical implementation and investigated the efficacy of the proposed haptic virtual fixtures. The results are promising for the use of haptic feedback in telelocomotion for complex traversal tasks.
Human-Inspired Multi-Agent Navigation using Knowledge Distillation
Despite significant advancements in the field of multi-agent navigation,
agents still lack the sophistication and intelligence that humans exhibit in
multi-agent settings. In this paper, we propose a framework for learning a
human-like general collision avoidance policy for agent-agent interactions in
fully decentralized, multi-agent environments. Our approach uses knowledge
distillation with reinforcement learning to shape the reward function based on
expert policies extracted from human trajectory demonstrations through behavior
cloning. We show that agents trained with our approach can take human-like
trajectories in collision avoidance and goal-directed steering tasks not
provided by the demonstrations, outperforming the experts as well as
learning-based agents trained without knowledge distillation.
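The reward-shaping idea above can be sketched as an environment reward plus a penalty for deviating from the action a behaviour-cloned expert would take in the same state. The quadratic penalty and the beta weight are our simplifying assumptions, not the paper's exact shaping term:

```python
def shaped_reward(task_reward, agent_action, expert_action, beta=0.5):
    """Distillation-style reward shaping (sketch): keep the task reward
    but subtract a weighted squared distance between the agent's action
    and the action suggested by the behaviour-cloned expert policy."""
    mismatch = sum((a - e) ** 2 for a, e in zip(agent_action, expert_action))
    return task_reward - beta * mismatch
```

Training with such a term nudges the reinforcement learner toward human-like actions where the expert is informative, while the task reward still drives behaviour in states the demonstrations never covered.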
Improving the safety and efficiency of rail yard operations using robotics
Significant efforts have been expended by the railroad industry to make operations safer and more efficient through the intelligent use of sensor data. This work proposes to take the technology one step further to use this data for the control of physical systems designed to automate hazardous railroad operations, particularly those that require humans to interact with moving trains. To accomplish this, application-specific requirements must be established to design self-contained machine vision and robotic solutions to eliminate the risks associated with existing manual operations. Present-day rail yard operations have been identified as good candidates to begin development. Manual uncoupling, in particular, of rolling stock in classification yards has been investigated. To automate this process, an intelligent robotic system must be able to detect, track, approach, contact, and manipulate constrained objects on equipment in motion. This work presents multiple prototypes capable of autonomously uncoupling full-scale freight cars using feedback from the surrounding environment. Geometric image processing algorithms and machine learning techniques were implemented to accurately identify cylindrical objects in point clouds generated in real time. Unique methods fusing velocity and vision data were developed to synchronize a pair of moving rigid bodies in real time. Multiple custom end-effectors with in-built compliance and fault tolerance were designed, fabricated, and tested for grasping and manipulating cylindrical objects. Finally, an event-driven robotic control application was developed to safely and reliably uncouple freight cars using data from 3D cameras, velocity sensors, force/torque transducers, and intelligent end-effector tooling. Experimental results in a lab setting confirm that modern robotic and sensing hardware can be used to reliably separate pairs of rolling stock moving at up to two miles per hour.
Additionally, subcomponents of the autonomous pin-pulling system (APPS) were designed to be modular to the point where they could be used to automate other hazardous, labor-intensive tasks found in U.S. classification yards. Overall, this work supports the deployment of autonomous robotic systems in semi-unstructured yard environments to increase the safety and efficiency of rail operations.
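The fusion of velocity and vision data for synchronizing with a moving freight car can be pictured as feedforward-plus-feedback tracking: command the target's estimated velocity and correct with a vision-measured position error. The one-dimensional form and the gain below are illustrative only; the thesis's actual fusion method is not specified here:

```python
def sync_command(robot_pos, coupler_pos, coupler_vel, kp=1.0):
    """Velocity command for tracking a moving coupler (1-D sketch):
    feedforward the estimated coupler velocity and add a proportional
    correction on the vision-measured position error."""
    return coupler_vel + kp * (coupler_pos - robot_pos)
```

Once the position error is driven to zero, the command reduces to the coupler's own velocity, which is what keeps the two rigid bodies synchronized while the end-effector makes contact.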