Learning to See the Wood for the Trees: Deep Laser Localization in Urban and Natural Environments on a CPU
Localization in challenging, natural environments such as forests or
woodlands is an important capability for many applications, from guiding a robot
navigating along a forest trail to monitoring vegetation growth with handheld
sensors. In this work we explore laser-based localization in both urban and
natural environments, which is suitable for online applications. We propose a
deep learning approach capable of learning meaningful descriptors directly from
3D point clouds by comparing triplets (anchor, positive and negative examples).
The approach learns a feature-space representation for a set of segmented point
clouds that are matched between current and previous observations. Our
learning method is tailored towards loop closure detection resulting in a small
model which can be deployed using only a CPU. The proposed learning method
would allow the full pipeline to run on robots with limited computational
payload such as drones, quadrupeds or UGVs.
Comment: Accepted for publication at RA-L/ICRA 2019. More info: https://ori.ox.ac.uk/esm-localizatio
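The triplet comparison described in the abstract can be sketched with a standard triplet margin loss. This is a minimal illustration on toy descriptors, not the paper's actual network or training setup; the margin value and the 2-D descriptors are assumptions for demonstration only.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss: pull the anchor's descriptor toward the positive
    example and push it away from the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to matching segment
    d_neg = np.linalg.norm(anchor - negative)  # distance to non-matching segment
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D descriptors; real descriptors would come from the learned network.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # near the anchor
n = np.array([5.0, 0.0])   # far from the anchor
loss = triplet_margin_loss(a, p, n)  # -> 0.0, margin already satisfied
```

Minimizing this loss over many such triplets is what shapes the descriptor space so that matching segments land close together, which is what loop closure detection needs.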
Education Workforce Initiative: Initial Research
The purpose of this initial research is to offer evidenced possibilities in the key areas of education workforce roles, recruitment, training, deployment and leadership, along with suggested areas for further research to inform innovation in the design and strengthening of the public sector education workforce. The examples described were identified through the process outlined in the methodology section of this report. We recognise, however, that separating examples from their context is problematic: effective innovations are highly sensitive to context, and uncritical transfer of initiatives is rarely successful.
The research aims to support the Education Workforce Initiative (EWI) in moving forward with engaging education leaders and other key actors in radical thinking around the design and strengthening of the education workforce to meet the demands of the 21st century. EWI policy recommendations will be drawn from a number of country level workforce reform activities and research activity associated with the production of an Education Workforce Report (EWR). This research has informed the key questions, approach and structure of the EWR as outlined in the Education Workforce Report Proposal.
Issues pertaining to teaching and learning in primary and secondary education are at the centre of the research reported here; the focus is on moving towards schools as safe places where all children/ young people are able to engage in meaningful activity. The majority of the evidence shared here relates to teachers and school leaders; evidence on learning support staff, district officials and the wider education workforce is scant. Many of the issues examined are also pertinent to the early childhood care and education sector but these are being examined in depth by the Early Childhood Workforce Initiative. Resourcing for the Education Workforce was out of scope of this initial research but the EC recognises, as outlined in the Learning Generation Report, that provision of additional finance is a critical factor in achieving a sustainable, strong and well-motivated education workforce, particularly but not exclusively, in low and middle income countries. The next stage of EWI work will consider the relative costs of current initiatives and modelling of the cost implications of proposed reforms.
EWI aims to complement the work on teacher policy design and teacher career frameworks (including salary structures) being undertaken by other bodies and institutions such as Education International, the International Task Force on Teachers for 2030 and the Teachers’ Alliance, most particularly by bringing a focus on school and district leadership, the role of Education Support Professionals (ESPs) and inter-agency working
Change of Scenery: Unsupervised LiDAR Change Detection for Mobile Robots
This paper presents a fully unsupervised deep change detection approach for
mobile robots with 3D LiDAR. In unstructured environments, it is infeasible to
define a closed set of semantic classes. Instead, semantic segmentation is
reformulated as binary change detection. We develop a neural network,
RangeNetCD, that uses an existing point-cloud map and a live LiDAR scan to
detect scene changes with respect to the map. Using a novel loss function,
existing point-cloud semantic segmentation networks can be trained to perform
change detection without any labels or assumptions about local semantics. We
demonstrate the performance of this approach on data from challenging terrains;
mean intersection over union (mIoU) scores range between 67.4% and 82.2%
depending on the amount of environmental structure. This outperforms the
geometric baseline used in all experiments. The neural network runs at more than
10 Hz and is integrated into a robot's autonomy stack to allow safe navigation
around obstacles that intersect the planned path. In addition, a novel method
for the rapid automated acquisition of per-point ground-truth labels is
described. Covering changed parts of the scene with retroreflective materials
and applying a threshold filter to the intensity channel of the LiDAR allows
for quantitative evaluation of the change detector.
Comment: 7 pages (6 content, 1 references), 7 figures. Submitted to the 2024 IEEE International Conference on Robotics and Automation (ICRA)
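The retroreflective labeling trick described above lends itself to a very small sketch. The threshold value, array shapes and synthetic intensities below are illustrative assumptions, not the paper's settings; the binary mean IoU here is the standard two-class definition matching the metric the abstract reports.

```python
import numpy as np

def label_changes_by_intensity(intensity, threshold=0.8):
    """Per-point ground-truth labels: retroreflective covers on changed
    objects return high LiDAR intensity, so a simple threshold on the
    intensity channel separates changed from unchanged points."""
    return intensity > threshold

def binary_miou(pred, gt):
    """Mean IoU over the two classes (changed / unchanged)."""
    ious = []
    for cls in (True, False):
        inter = np.logical_and(pred == cls, gt == cls).sum()
        union = np.logical_or(pred == cls, gt == cls).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

intensity = np.array([0.1, 0.95, 0.2, 0.9])   # synthetic intensity returns
gt = np.array([False, True, False, True])     # true change labels
pred = label_changes_by_intensity(intensity)
score = binary_miou(pred, gt)                 # -> 1.0, perfect agreement
```

In practice `pred` would come from the change-detection network and `gt` from the thresholded intensity channel, giving the quantitative evaluation the abstract describes.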
Leadership for Learning Improvement in Urban Schools
Examines urban school leaders' efforts to improve the quality of teaching and learning by supporting progress for diverse students, sharing leadership work, and aligning resources. Analyzes school environments and coordination of various leadership roles
Visual road following using intrinsic images
We present a real-time visual road-following method for mobile robots in outdoor environments. The approach combines an image-processing method that retrieves illumination-invariant images with an efficient path-following algorithm. The method allows a mobile robot to autonomously navigate along pathways of different types in adverse lighting conditions using monocular vision
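The abstract does not specify which illumination-invariant (intrinsic image) transform is used. One common single-channel formulation maps log-chromaticity so that blackbody illumination changes cancel; the `alpha` value below is camera-dependent and assumed here for illustration.

```python
import numpy as np

def illumination_invariant(rgb, alpha=0.48):
    """Map an RGB image (floats in (0, 1]) to a single channel that is
    approximately invariant to blackbody illumination. `alpha` depends
    on the camera's spectral response; 0.48 is an assumed value."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)

# A gray pixel maps to ~0.5 regardless of its brightness, since the
# log terms cancel when r == g == b.
gray = np.array([[[0.3, 0.3, 0.3]]])
inv = illumination_invariant(gray)
```

Running a road-edge or path detector on this channel instead of raw RGB is what makes the following behavior robust to shadows and changing daylight.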
Robot Mapping and Navigation in Real-World Environments
Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to completing their mission. Essential capabilities for such robots are mapping, localization and navigation. Solving all of these tasks robustly is substantially difficult because the components are usually interconnected: a robot that starts without any knowledge about the environment must simultaneously build a map, localize itself in it, analyze the surroundings and plan a path to efficiently explore an unknown environment. In addition to these interconnections, the tasks depend heavily on the sensors used by the robot and on the type of environment in which it operates. For example, an RGB camera can be used in an outdoor scene for computing visual odometry or detecting dynamic objects, but becomes less useful in an environment that does not have enough light for cameras to operate. The software that controls the behavior of the robot must seamlessly process all the data coming from different sensors. This often leads to systems that are tailored to a particular robot and a particular set of sensors. In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that work seamlessly with different types of sensors, both in indoor and outdoor environments. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that operates robustly in both indoor and outdoor environments equally well and thus extends the application areas of mobile robots.
The techniques presented in this thesis are designed to work with both RGBD and LiDAR sensors without adaptation to individual sensor models, by using a range-image representation, and aim to provide methods for navigation and scene interpretation in both static and dynamic environments. For a static world, we present a number of approaches that address the core components of a typical robot navigation pipeline. At the core of building a consistent map of the environment with a mobile robot lies point cloud matching. To this end, we present a method for photometric point cloud matching that treats RGBD and LiDAR sensors in a uniform fashion and is able to accurately register point clouds at the frame rate of the sensor. This method serves as a building block for the rest of the mapping pipeline. In addition to the matching algorithm, we present a method for traversability analysis of the currently observed terrain in order to guide an autonomous robot to the safe parts of the surrounding environment. One source of danger when navigating difficult-to-access sites is that the robot may fail to build a correct map of the environment. This dramatically impacts the ability of an autonomous robot to navigate towards its goal robustly, so it is important for the robot to be able to detect these situations and to find its way home without relying on any kind of map. To address this challenge, we present a method for analyzing the quality of the map that the robot has built to date, and for safely returning the robot to its starting point if the map is found to be inconsistent. Scenes in dynamic environments are vastly different from those in static ones: objects can be moving, making static traversability estimates insufficient. With the approaches developed in this thesis, we aim to identify distinct objects and track them to aid navigation and scene understanding.
We target these challenges by providing a method for clustering a scene captured with a LiDAR scanner, together with a similarity measure between clustered objects that aids tracking performance. All methods presented in this thesis support real-time robot operation, rely on RGBD or LiDAR sensors, and have been tested on real robots in real-world environments and on real-world datasets. All approaches have been published in peer-reviewed conference papers and journal articles. In addition, most of the presented contributions have been released publicly as open-source software
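The uniform treatment of RGBD and LiDAR rests on the range-image representation mentioned above. A minimal spherical-projection sketch follows; the image size and vertical field of view are assumed values for illustration, not the thesis's actual parameters, and points are assumed to have nonzero range.

```python
import numpy as np

def project_to_range_image(points, h=64, w=1024,
                           fov_up=np.deg2rad(15.0),
                           fov_down=np.deg2rad(-15.0)):
    """Project 3-D points (N x 3, nonzero range) onto a spherical
    range image whose pixels store the measured range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                 # horizontal angle
    pitch = np.arcsin(z / rng)             # vertical angle
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int)
    v = np.clip(v, 0, h - 1)
    img = np.zeros((h, w), dtype=float)
    img[v, u] = rng                        # last point per pixel wins
    return img

pts = np.array([[1.0, 0.0, 0.0]])  # a single point straight ahead
img = project_to_range_image(pts)  # range 1.0 lands at the image center row
```

Once both RGBD and LiDAR data are in this 2-D form, the same matching, segmentation and clustering machinery can run on either sensor without per-sensor code paths.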