Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal
Model-free reinforcement learning has recently been shown to be effective at
learning navigation policies from complex image input. However, these
algorithms tend to require large amounts of interaction with the environment,
which can be prohibitively costly to obtain on robots in the real world. We
present an approach for efficiently learning goal-directed navigation policies
on a mobile robot, from only a single coverage traversal of recorded data. The
navigation agent learns an effective policy over a diverse action space in a
large heterogeneous environment consisting of more than 2km of travel, through
buildings and outdoor regions that collectively exhibit large variations in
visual appearance, self-similarity, and connectivity. We compare pretrained
visual encoders that enable precomputation of visual embeddings to achieve a
throughput of tens of thousands of transitions per second at training time on a
commodity desktop computer, allowing agents to learn from millions of
trajectories of experience in a matter of hours. We propose multiple forms of
computationally efficient stochastic augmentation to enable the learned policy
to generalise beyond these precomputed embeddings, and demonstrate successful
deployment of the learned policy on the real robot without fine-tuning,
despite environmental appearance differences at test time. The dataset and
code required to reproduce these results and apply the technique to other
datasets and robots are made publicly available at
rl-navigation.github.io/deployable
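
A minimal sketch of the precompute-then-augment pattern the abstract
describes, in Python (the encoder stand-in, dimensions, and the specific
noise/dropout augmentations are illustrative assumptions, not the authors'
released code):

    import numpy as np

    rng = np.random.default_rng(0)

    def precompute_embeddings(frames, encoder):
        """Run the (frozen) pretrained encoder once over every recorded frame."""
        return np.stack([encoder(f) for f in frames])

    def augment(batch, noise_std=0.05, drop_p=0.1):
        """Cheap stochastic augmentation applied to cached embeddings at train time."""
        noisy = batch + rng.normal(0.0, noise_std, size=batch.shape)
        mask = rng.random(batch.shape) > drop_p   # random feature dropout
        return noisy * mask

    # Toy usage: a fake 128-d encoder over 1000 recorded frames.
    frames = [rng.random((8, 8)) for _ in range(1000)]
    encoder = lambda f: rng.standard_normal(128)  # stand-in for a real CNN
    cache = precompute_embeddings(frames, encoder)
    idx = rng.integers(0, len(cache), size=256)   # sample a transition batch
    batch = augment(cache[idx])                   # feed `batch` to the RL update

Because the encoder runs only once per recorded frame, each training batch
reduces to array indexing plus cheap noise, which is what makes throughputs
of tens of thousands of transitions per second feasible on a desktop.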
An adaptive appearance-based map for long-term topological localization of mobile robots
This work considers a mobile service robot that uses an appearance-based representation of its workplace as a map, where the current view and the map are used to estimate the current position in the environment. Due to the nature of real-world environments such as houses and offices, where the appearance keeps changing, the internal representation may become out of date after some time. To solve this problem, the robot needs to be able to adapt its internal representation continually to the changes in the environment. This paper presents a method for creating an adaptive map for long-term appearance-based localization of a mobile robot using long-term and short-term memory concepts, with omni-directional vision as the external sensor.
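
The abstract does not give the update rules, but the long-term/short-term
memory idea can be sketched as two reference representations per map node
adapting at different rates (a speculative illustration; all names and
rates below are assumptions, not the paper's method):

    import numpy as np

    class AdaptiveNode:
        def __init__(self, view, stm_rate=0.5, ltm_rate=0.05):
            self.stm = view.astype(float)  # short-term memory: tracks recent appearance
            self.ltm = view.astype(float)  # long-term memory: stable appearance
            self.stm_rate, self.ltm_rate = stm_rate, ltm_rate

        def update(self, view):
            # STM adapts quickly to the current observation...
            self.stm += self.stm_rate * (view - self.stm)
            # ...and LTM slowly absorbs whatever the STM has settled on.
            self.ltm += self.ltm_rate * (self.stm - self.ltm)

        def match(self, view):
            # Localisation score against the stable representation.
            return -np.linalg.norm(view - self.ltm)

    node = AdaptiveNode(np.zeros(16))
    for _ in range(50):
        node.update(np.ones(16))        # the place's appearance has changed
    print(node.match(np.ones(16)))      # score improves as the map adapts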
Object Edge Contour Localisation Based on HexBinary Feature Matching
This paper addresses the issue of localising object edge contours in
cluttered backgrounds to support robotics tasks such as grasping and
manipulation, and also to improve the potential perceptual capabilities of
robot vision systems. Our approach is based on coarse-to-fine matching of a
new recursively constructed hierarchical, dense, edge-localised descriptor,
the HexBinary, based on the HexHoG descriptor structure first proposed in
[1]. Since Binary String image descriptors [2]–[5] require much lower
computational resources, but provide similar or even better matching
performance than Histogram of Oriented Gradients (HoG) descriptors, we have
replaced the HoG base descriptor fields used in HexHoG with Binary Strings
generated from first- and second-order polar derivative approximations. The
ALOI [6] dataset is used to evaluate the HexBinary descriptors, which we
demonstrate to achieve superior performance to that of HexHoG [1] for pose
refinement. The validation of our object contour localisation system shows
promising results, correctly labelling ~86% of edgel positions and
mis-labelling only ~3%.
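
The HexBinary construction itself is detailed in the paper, but the
computational advantage of binary string descriptors that motivates it is
generic: matching reduces to XOR plus popcount (Hamming distance). A small
self-contained illustration, not the paper's code:

    import numpy as np

    def hamming(a, b):
        """Hamming distance between two packed binary descriptors (uint8 arrays)."""
        return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

    def match(query, database):
        """Brute-force nearest descriptor; a coarse-to-fine search would prune this."""
        dists = [hamming(query, d) for d in database]
        i = int(np.argmin(dists))
        return i, dists[i]

    rng = np.random.default_rng(1)
    db = [rng.integers(0, 256, 32, dtype=np.uint8) for _ in range(1000)]  # 256-bit descriptors
    q = db[42].copy()
    q[0] ^= 0b00000101              # flip two bits to simulate noise
    print(match(q, db))             # expect (42, 2): recovers the source descriptor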
Commodities and Linkages: Industrialisation in Sub-Saharan Africa
In a complementary Discussion Paper (MMCP DP 12 2011) we set out the reasons why we believe that there is extensive scope for linkage development into and out of SSA's commodities sectors. In this Discussion Paper, we present the findings of our detailed empirical enquiry into the determinants of the breadth and depth of linkages in eight SSA countries (Angola, Botswana, Gabon, Ghana, Nigeria, South Africa, Tanzania, and Zambia) and six sectors (copper, diamonds, gold, oil and gas, mining services and timber). We conclude from this detailed research that the extent of linkages varies as a consequence of three factors which intrinsically affect their progress: the passage of time, the complexity of the sector and the level of capabilities in the domestic economy. However, beyond this we identify four sets of related factors which determine the nature and pace of linkage development. The first is the structure of ownership, both in lead commodity-producing firms and in their suppliers and domestic customers. The second is the nature and quality of both hard infrastructure (for example, roads and ports) and soft infrastructure (for example, the efficiency of customs clearance). The third is the availability of skills and the structure and orientation of the National System of Innovation in the domestic economy. The fourth, and overwhelmingly important, contextual factor is policy. This reflects policy towards the commodity sector itself, and policy which affects the three contextual drivers, namely ownership, infrastructure and capabilities. As a result of this comparative analysis we provide an explanation of why linkage development was progressive in some economies (such as Botswana) and regressive in others (such as Tanzania). This cluster of factors also explains why the breadth and depth of linkages is relatively advanced in some countries (such as South Africa), and at a very nascent stage in others (such as Angola).
Smart Specialisation Strategies for Supporting Europe 2020 Vision. Looking at the American Experience: the Case of the Boston Area
These reflections aim to highlight the crucial challenge that European Regions are called to face in applying the "Research and Innovation Strategies for Smart Specialization" policy to pursue the virtuous implementation of EU Cohesion Policy and the "Europe 2020" agenda. The original cultural style of the "US Smart Specialization model", supported by "cluster theory" and the "innovation paradigm", represents a significant lesson from the Boston area.
Historical collaborative geocoding
The latest developments in digital technology have provided large data sets
that can increasingly easily be accessed and used. These data sets often
contain indirect localisation information, such as historical addresses.
Historical geocoding is the process of transforming this indirect
localisation information into direct localisation that can be placed on a
map, which enables spatial analysis and cross-referencing. Many efficient
geocoders exist for current addresses, but they do not deal with the
temporal aspect and are based on a strict hierarchy (..., city, street,
house number) that is hard or impossible to use with historical data.
Indeed, historical data are full of uncertainties (temporal aspect, semantic
aspect, spatial precision, confidence in the historical source, ...) that
cannot be resolved, as there is no way to go back in time to check. We
propose an open source, open data, extensible solution for geocoding that is
based on building gazetteers composed of geohistorical objects extracted
from historical topographical maps. Once the gazetteers are available,
geocoding a historical address is a matter of finding the geohistorical
object in the gazetteers that best matches the historical address. The
matching criteria are customisable and include several dimensions (fuzzy
semantic, fuzzy temporal, scale, spatial precision, ...). As the goal is to
facilitate historical work, we also propose web-based user interfaces that
help geocode (one address or batch mode) and display results over current or
historical topographical maps, so that they can be checked and
collaboratively edited. The system is tested on the city of Paris for the
19th and 20th centuries; it shows a high return rate and is fast enough to
be used interactively.
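
A hedged sketch of the gazetteer-matching idea (the scoring functions,
weights, and toy gazetteer entries below are illustrative assumptions, not
the project's actual criteria or data):

    from difflib import SequenceMatcher

    def name_score(query, candidate):
        """Fuzzy semantic similarity between address strings, in [0, 1]."""
        return SequenceMatcher(None, query.lower(), candidate.lower()).ratio()

    def temporal_score(query_year, start, end):
        """1.0 inside the object's validity interval, decaying outside it."""
        if start <= query_year <= end:
            return 1.0
        return 1.0 / (1.0 + min(abs(query_year - start), abs(query_year - end)))

    def geocode(address, year, gazetteer, w_name=0.7, w_time=0.3):
        """Return the geohistorical object best matching the historical address."""
        scored = [(w_name * name_score(address, g["name"])
                   + w_time * temporal_score(year, g["start"], g["end"]), g)
                  for g in gazetteer]
        return max(scored, key=lambda s: s[0])

    # Made-up toy entries; real gazetteers come from historical maps.
    gazetteer = [
        {"name": "Rue de la Chaussee d'Antin", "start": 1791, "end": 1900, "xy": (2.333, 48.872)},
        {"name": "Rue Mirabeau",               "start": 1791, "end": 1793, "xy": (2.333, 48.872)},
    ]
    print(geocode("rue Mirabeau", 1792, gazetteer))

Further dimensions from the abstract (scale, spatial precision) would enter
the same weighted score as additional terms.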
A Review of Codebook Models in Patch-Based Visual Object Recognition
The codebook model-based approach, while ignoring any structural aspect in vision, nonetheless provides state-of-the-art performance on current datasets. The key role of a visual codebook is to provide a way to map low-level features into a fixed-length vector in histogram space, to which standard classifiers can be directly applied. The discriminative power of such a visual codebook determines the quality of the codebook model, whereas the size of the codebook controls the complexity of the model. Thus, the construction of a codebook is an important step, which is usually done by cluster analysis. However, clustering is a process that retains regions of high density in a distribution, and it follows that the resulting codebook need not have discriminant properties. This is also recognised as a computational bottleneck of such systems. In our recent work, we proposed a resource-allocating codebook that constructs a discriminant codebook in a one-pass design procedure, slightly outperforming more traditional approaches at drastically reduced computing times. In this review we survey several approaches that have been proposed over the last decade, covering their use of feature detectors, descriptors, codebook construction schemes, choices of classifier for recognising objects, and the datasets used to evaluate the proposed methods.
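
In miniature, the codebook pipeline the review surveys: cluster local
descriptors into codewords, then encode each image as a fixed-length
histogram of nearest codewords for a standard classifier. Plain Lloyd's
k-means below stands in for any particular construction scheme (including
the authors' one-pass method); data and dimensions are illustrative:

    import numpy as np

    rng = np.random.default_rng(2)

    def kmeans(X, k, iters=20):
        """Basic Lloyd's k-means: returns k cluster centres (the codebook)."""
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.stack([X[labels == j].mean(0) if (labels == j).any()
                                else centers[j] for j in range(k)])
        return centers

    def encode(descriptors, codebook):
        """Map a variable-size set of local descriptors to a fixed-length histogram."""
        words = np.argmin(((descriptors[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
        hist = np.bincount(words, minlength=len(codebook)).astype(float)
        return hist / hist.sum()

    all_desc = rng.random((5000, 16))          # stand-in for SIFT-like patch descriptors
    codebook = kmeans(all_desc, k=64)
    image_desc = rng.random((300, 16))         # descriptors from one image
    print(encode(image_desc, codebook).shape)  # -> (64,) fixed-length vector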