A membrane parallel rapidly-exploring random tree algorithm for robotic motion planning
Authors
Miguel A. Martínez-Del-Amor
Ferrante Neri
Ignacio Pérez-Hurtado
Mario J. Pérez-Jiménez
Gexiang Zhang
Publication date
27 February 2020
Publisher
IOS Press
Abstract
© 2020 IOS Press and the authors. All rights reserved. In recent years, incremental sampling-based motion planning algorithms have been widely used to solve robot motion planning problems in high-dimensional configuration spaces. In particular, the Rapidly-exploring Random Tree (RRT) algorithm and its asymptotically optimal counterpart, RRT*, are popular in real-life applications due to their desirable properties. Such algorithms are inherently iterative, but certain modules, such as the collision-checking procedure, can be parallelized to provide significant speedups over sequential implementations. In this paper, the RRT and RRT* algorithms are adapted to a bio-inspired computational framework called Membrane Computing, whose models of computation, known as P systems, run in a non-deterministic and massively parallel way. A large number of robotic applications currently use a variant of P systems called Enzymatic Numerical P systems (ENPS) for reactive control, but the framework lacks solutions for motion planning. The novel models in this work have been designed using the ENPS framework. To test and validate the ENPS models for RRT and RRT*, we present two ad hoc implementations able to emulate the computation of the models using OpenMP and CUDA. Finally, we show the speedup of our solutions with respect to sequential baseline implementations. The results show speedups of up to 6x using OpenMP with 8 cores against the sequential implementation, and up to 24x using CUDA against the best multi-threading configuration.
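To make the baseline algorithm concrete, the following is a minimal sketch of the classic sequential RRT loop the abstract refers to: sample a random configuration, steer the nearest tree node toward it, and keep the new edge only if the collision checker accepts it. This is not the paper's ENPS model or its OpenMP/CUDA implementation; the 2-D workspace bounds, the function name `rrt`, and the `is_free` collision-checker parameter are all assumptions made for illustration.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=2000, seed=0):
    """Minimal 2-D RRT sketch in an assumed [0, 10] x [0, 10] workspace.
    `is_free(a, b)` is a user-supplied collision checker for the edge
    a -> b (the module the abstract singles out as parallelizable)."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        sample = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
        # Nearest-neighbour scan; sequential here, but this per-node work
        # is the kind of step a data-parallel (e.g. GPU) variant maps over
        # all tree nodes at once.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        if d <= step:
            new = sample  # sample is within one step: adopt it directly
        else:
            # Steer from `near` toward `sample` by at most `step`.
            new = (near[0] + step * (sample[0] - near[0]) / d,
                   near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(near, new):  # reject edges in collision
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            path, k = [], len(nodes) - 1  # walk parents back to the root
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Obstacle-free workspace, purely for illustration.
path = rrt((0.0, 0.0), (9.0, 9.0), is_free=lambda a, b: True)
```

RRT* extends this loop by rewiring nearby nodes through the new node whenever that shortens their path cost, which is what yields asymptotic optimality.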
Available Versions
Repository@Nottingham
oai:nottingham-repository.work...
Last time updated on 27/03/2020