MATLAB toolboxes: robotics and vision for students and teachers
In this column, Dr. Peter Corke of CSIRO, Australia, gives us a description of MATLAB Toolboxes he has developed. He has been passionately developing tools to enable students and teachers to better understand the theoretical concepts behind classical robotics and computer vision through easy and intuitive simulation and visualization. The results of this labor of love have been packaged as MATLAB Toolboxes: the Robotics Toolbox and the Vision Toolbox. – Daniela Rus, RAS Education Cochair
Multiparton Interactions with an x-dependent Proton Size
Theoretical arguments, supported by other indirect evidence, suggest that the
wave function of high-x partons should be narrower than that of low-x ones. In
this article, we present a modification to the variable impact parameter
framework of Pythia 8 to model this effect. In particular, a Gaussian hadronic
matter profile is introduced, with a width dependent on the x value of the
constituent being probed. Results are compared against the default single- and
double-Gaussian profiles, as well as an intermediate overlap function.
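The x-dependent width described above can be sketched as a simple profile function. The logarithmic width form and the parameter values `a0`, `a1` below are illustrative assumptions for this sketch, not the tuned Pythia 8 values:

```python
import math

def matter_profile(r, x, a0=0.5, a1=0.15):
    """Gaussian hadronic matter profile whose width depends on the
    momentum fraction x of the probed constituent.

    a0, a1 are hypothetical parameters; the width grows as x decreases,
    so high-x partons see a narrower distribution.
    """
    a = a0 * (1.0 + a1 * math.log(1.0 / x))  # x-dependent Gaussian width
    norm = 1.0 / (math.pi * a * a)           # normalises the 2D Gaussian
    return norm * math.exp(-(r * r) / (a * a))
```

With these assumed parameters, the profile is more sharply peaked for a high-x parton than for a low-x one, which is the qualitative effect the abstract describes.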
Multiparton Interactions and Rescattering
The concept of multiple partonic interactions in hadronic events is vital for
the understanding of both minimum-bias and underlying-event physics. The area
is rather little studied, however, and current models offer a far from complete
coverage, even of the effects we know ought to be there. In this article we
address one such topic, namely that of rescattering, where an already scattered
parton is allowed to take part in another subsequent scattering. A framework
for rescattering is introduced for the Pythia 8 event generator and fully
integrated with normal multiparton interactions and initial- and final-state
radiation. Using this model, the effects on event structure are studied, and
distributions are shown both for minimum-bias and jet events.
Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter
Camera viewpoint selection is an important aspect of visual grasp detection,
especially in clutter where many occlusions are present. Where other approaches
use a static camera position or fixed data collection routines, our Multi-View
Picking (MVP) controller uses an active perception approach to choose
informative viewpoints based directly on a distribution of grasp pose estimates
in real time, reducing uncertainty in the grasp poses caused by clutter and
occlusions. In trials of grasping 20 objects from clutter, our MVP controller
achieves 80% grasp success, outperforming a single-viewpoint grasp detector by
12%. We also show that our approach is both more accurate and more efficient
than approaches which consider multiple fixed viewpoints.
Comment: ICRA 2019. Video: https://youtu.be/Vn3vSPKlaEk Code:
https://github.com/dougsm/mvp_gras
Practical application of pseudospectral optimization to robot path planning
To obtain minimum time or minimum energy trajectories for robots it is necessary to employ planning methods which adequately consider the platform's dynamic properties. A variety of sampling, graph-based or local receding-horizon optimisation methods have previously been proposed. These typically use simplified kinodynamic models to avoid the significant computational burden of solving this problem in a high-dimensional state-space. In this paper we investigate solutions from the class of pseudospectral optimisation methods which have grown in favour amongst the optimal control community in recent years. These methods have high computational efficiency and rapid convergence properties. We present a practical application of such an approach to the robot path planning problem to provide a trajectory considering the robot's dynamic properties. We extend the existing literature by augmenting the path constraints with sensed obstacles rather than predefined analytical functions to enable real-world application.
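At the heart of pseudospectral methods is collocation at Chebyshev-Gauss-Lobatto nodes with a spectral differentiation matrix, so that trajectory derivatives become matrix-vector products. A minimal sketch of that building block follows, using the standard construction rather than the authors' specific solver:

```python
import math

def cheb_diff_matrix(N):
    """Chebyshev-Gauss-Lobatto nodes on [-1, 1] and the spectral
    differentiation matrix D, so that (D @ f) approximates f' at the
    nodes (exact for polynomials up to degree N)."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):
        # "negative sum" trick for the diagonal: each row sums to zero,
        # which improves numerical accuracy.
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return x, D
```

In a pseudospectral optimiser, the dynamics constraint at each node is written as `D @ state ≈ f(state, control)`, turning the optimal control problem into a finite nonlinear program; that reformulation is the source of the rapid convergence the abstract mentions.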
Modelling Local Deep Convolutional Neural Network Features to Improve Fine-Grained Image Classification
We propose a local modelling approach using deep convolutional neural
networks (CNNs) for fine-grained image classification. Recently, deep CNNs
trained from large datasets have considerably improved the performance of
object recognition. However, to date there has been limited work using these
deep CNNs as local feature extractors. This partly stems from CNNs having
internal representations which are high dimensional, thereby making such
representations difficult to model using stochastic models. To overcome this
issue, we propose to reduce the dimensionality of one of the internal fully
connected layers, in conjunction with layer-restricted retraining to avoid
retraining the entire network. The distribution of low-dimensional features
obtained from the modified layer is then modelled using a Gaussian mixture
model. Comparative experiments show that considerable performance improvements
can be achieved on the challenging Fish and UEC FOOD-100 datasets.
Comment: 5 pages, three figures
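The modelling step can be sketched as scoring a low-dimensional CNN feature under a Gaussian mixture model. The parameters below are assumed to be already fitted, and the diagonal covariance is a simplifying assumption for this sketch:

```python
import math

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of feature vector x under a diagonal-covariance
    Gaussian mixture model with the given (pre-fitted) parameters."""
    log_terms = []
    for w, mu, var in zip(weights, means, variances):
        # log-density of one diagonal Gaussian component
        log_pdf = sum(
            -0.5 * (math.log(2 * math.pi * v) + (xi - mi) ** 2 / v)
            for xi, mi, v in zip(x, mu, var)
        )
        log_terms.append(math.log(w) + log_pdf)
    # log-sum-exp over components for numerical stability
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))
```

Reducing the layer's dimensionality first is what makes fitting such a mixture tractable; in the high-dimensional original representation the component covariances would be badly underdetermined.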
A generic implementation framework for stereo matching algorithms
Traditional area-based matching techniques make use of similarity metrics such as the Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD) and Normalised Cross Correlation (NCC). Non-parametric matching algorithms such as the rank and census rely on the relative ordering of pixel values rather than the pixels themselves as a similarity measure. Both traditional area-based and non-parametric stereo matching techniques have an algorithmic structure which is amenable to fast hardware realisation. This investigation undertakes a performance assessment of these two families of algorithms for robustness to radiometric distortion and random noise. A generic implementation framework is presented for the stereo matching problem and the relative hardware requirements for the various metrics investigated.
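The three traditional similarity metrics are straightforward to state over a pair of matching windows. A minimal sketch, with windows flattened to plain lists for illustration:

```python
import math

def sad(a, b):
    """Sum of Absolute Differences between two equal-size windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def ssd(a, b):
    """Sum of Squared Differences between two equal-size windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ncc(a, b):
    """Normalised Cross Correlation: mean-subtracted, variance-normalised."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den
```

Note that NCC is unchanged under a uniform gain applied to one window, which is why it is more robust to radiometric distortion than SAD or SSD, at the cost of a heavier hardware footprint (multiplies and a square root rather than adds).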
Virtual fences for controlling cows
We describe a moving virtual fence algorithm for herding cows. Each animal in the herd is given a smart collar consisting of a GPS receiver, PDA, wireless networking and a sound amplifier. Using the GPS, the animal's location can be verified relative to the fence boundary. When the animal approaches the perimeter, it is presented with a sound stimulus that encourages it to move away from the boundary. We have developed the virtual fence control algorithm for moving a herd. We present simulation results and data from experiments with 8 cows equipped with smart collars.
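The collar's boundary check can be sketched as a standard ray-casting point-in-polygon test on the GPS fix. Representing the fence as a polygon of local coordinates is an assumption for this illustration:

```python
def inside_fence(point, fence):
    """Ray-casting point-in-polygon test: is the GPS fix `point`
    (x, y) inside the virtual fence, given as a list of polygon
    vertices? Crossing an odd number of edges means inside."""
    x, y = point
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

In a moving-fence scheme the polygon vertices are advanced over time; the collar would trigger the sound stimulus when this test fails (or when the fix comes within some buffer distance of an edge).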
Subset Feature Learning for Fine-Grained Category Classification
Fine-grained categorisation has been a challenging problem due to small
inter-class variation, large intra-class variation and low number of training
images. We propose a learning system which first clusters visually similar
classes and then learns deep convolutional neural network features specific to
each subset. Experiments on the popular fine-grained Caltech-UCSD bird dataset
show that the proposed method outperforms recent fine-grained categorisation
methods under the most difficult setting: no bounding boxes are presented at
test time. It achieves a mean accuracy of 77.5%, compared to the previous best
performance of 73.2%. We also show that progressive transfer learning allows us
to first learn domain-generic features (for bird classification) which can then
be adapted to a specific set of bird classes, yielding improvements in accuracy.
Robot Navigation in Unseen Spaces using an Abstract Map
Human navigation in built environments depends on symbolic spatial
information which has unrealised potential to enhance robot navigation
capabilities. Information sources such as labels, signs, maps, planners, spoken
directions, and navigational gestures communicate a wealth of spatial
information to the navigators of built environments; a wealth of information
that robots typically ignore. We present a robot navigation system that uses
the same symbolic spatial information employed by humans to purposefully
navigate in unseen built environments with a level of performance comparable to
humans. The navigation system uses a novel data structure called the abstract
map to imagine malleable spatial models for unseen spaces from spatial symbols.
Sensorimotor perceptions from a robot are then employed to provide purposeful
navigation to symbolic goal locations in the unseen environment. We show how a
dynamic system can be used to create malleable spatial models for the abstract
map, and provide an open source implementation to encourage future work in the
area of symbolic navigation. Symbolic navigation performance of humans and a
robot is evaluated in a real-world built environment. The paper concludes with
a qualitative analysis of human navigation strategies, providing further
insights into how the symbolic navigation capabilities of robots in unseen
built environments can be improved in the future.
Comment: 15 pages, published in IEEE Transactions on Cognitive and
Developmental Systems (http://doi.org/10.1109/TCDS.2020.2993855), see
https://btalb.github.io/abstract_map/ for access to software