Optimisation of Product Recovery Options in End-of-Life Product Disassembly by Robots
In a circular economy, product recovery strategies such as reuse, recycling, and remanufacturing play an important role at the end of a product's life. A sustainability model was developed to solve the sequence-dependent robotic disassembly line balancing problem. This research aimed to assess the viability of the model, which was optimised with the Multi-Objective Bees Algorithm in a robotic disassembly setting. Two industrial gear pumps were used as case studies. Four objectives were set: maximising profit, energy savings, and emissions reductions, and minimising line imbalance. Several product recovery scenarios were developed to find the best recovery plan for each component, with the Bees Algorithm, an efficient metaheuristic, used to search for the best solution. The robotic disassembly plans were generated and assigned to robotic workstations simultaneously. Applying the proposed sustainability model to end-of-life industrial gear pumps demonstrates its applicability to real-world problems. The Multi-Objective Bees Algorithm found the best scenario for product recovery by assigning each component to recycling, reuse, remanufacturing, or disposal, and its performance was consistent across all sustainable strategies. This study addresses the issues that arise with product recovery options for end-of-life products and provides optimal solutions through case studies.
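As a rough illustration of the kind of optimiser named above, the following is a minimal single-objective sketch of the basic Bees Algorithm (random scouts, elite sites, recruited neighbourhood search with a shrinking patch) on a toy function. All parameters, the objective, and the function names are illustrative; the study itself uses a multi-objective variant over four recovery objectives.

```python
import random

def bees_algorithm(f, bounds, n_scouts=20, n_elite=4, n_recruits=10,
                   patch=0.5, iterations=100):
    """Basic Bees Algorithm: random scouts explore the search space, the
    best ('elite') sites are searched locally by recruited bees, and the
    neighbourhood patch shrinks over time. Minimizes f over box bounds."""
    rand_point = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    scouts = [rand_point() for _ in range(n_scouts)]
    for it in range(iterations):
        scouts.sort(key=f)
        new_scouts = []
        size = patch * (1 - it / iterations)  # shrinking neighbourhood
        for site in scouts[:n_elite]:
            # recruit bees around each elite site, keep the best found
            neigh = [[min(max(x + random.uniform(-size, size), lo), hi)
                      for x, (lo, hi) in zip(site, bounds)]
                     for _ in range(n_recruits)]
            new_scouts.append(min(neigh + [site], key=f))
        # remaining scouts keep searching randomly (global exploration)
        new_scouts += [rand_point() for _ in range(n_scouts - n_elite)]
        scouts = new_scouts
    return min(scouts, key=f)

# toy objective standing in for a (negated) recovery profit function
sphere = lambda x: sum(v * v for v in x)
best = bees_algorithm(sphere, [(-5, 5)] * 2)
print(best)  # converges near the optimum [0, 0]
```

The elite-site neighbourhood search is what distinguishes the Bees Algorithm from plain random search: most of the evaluation budget is spent refining the few most promising regions.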
Human–machine network through bio‑inspired decentralized swarm intelligence and heterogeneous teaming in SAR operations
Disaster management has always been a struggle: conditions change unpredictably and chaotic events demand real-time adaptation. Highly optimised missions and robust systems mitigate the effects of uncertainty and improve notoriously poor success rates. This paper presents a hybrid human–machine system that combines the fast responsiveness of UAVs with two robust, decentralised, and scalable bio-inspired techniques. The Cloud-Sharing Network (CSN) and the Pseudo-Central Network (PCN), based on bacterial and honeybee behaviours respectively, are presented and applied to Search and Rescue (SAR) operations. A post-earthquake scenario is proposed in which a heterogeneous fleet of UAVs cooperates with human rescue teams to detect and locate victims distributed across the map. Monte Carlo simulations test both approaches against state-of-the-art metrics. The two hybrid, bio-inspired schemes deal with critical scouting stages, poor communication environments, and high uncertainty levels in disaster relief operations. Role heterogeneity, path optimisation, and a hive data-sharing structure give PCN efficient performance in task allocation and communications. The Cloud-Sharing Network gains strength when the number of allocated agents per victim and per square metre is high, allowing fast data transmission. Potential applications of these algorithms extend beyond SAR to surveillance, geophysical mapping, security, and planetary exploration.
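The Monte Carlo evaluation described above can be sketched in miniature (Python; the random-walk scouting policy, map size, and sensor range are illustrative placeholders, not the CSN/PCN coordination schemes of the paper): victims are placed at random, a small UAV fleet sweeps the map, and each trial reports how long full detection takes.

```python
import random

def run_trial(n_uavs=5, n_victims=10, size=100, sensor_r=5, max_steps=2000):
    """One Monte Carlo trial: UAVs random-walk over a square map and mark
    a victim as found when within sensor range. Returns the step count at
    which every victim was located (or max_steps if time ran out)."""
    victims = [(random.uniform(0, size), random.uniform(0, size))
               for _ in range(n_victims)]
    uavs = [(random.uniform(0, size), random.uniform(0, size))
            for _ in range(n_uavs)]
    found = set()
    for step in range(max_steps):
        # each UAV takes a small random step, clamped to the map
        uavs = [(min(max(x + random.uniform(-2, 2), 0), size),
                 min(max(y + random.uniform(-2, 2), 0), size))
                for x, y in uavs]
        for i, (vx, vy) in enumerate(victims):
            if i not in found and any(
                    (x - vx) ** 2 + (y - vy) ** 2 <= sensor_r ** 2
                    for x, y in uavs):
                found.add(i)
        if len(found) == n_victims:
            return step
    return max_steps

# average completion time over repeated randomized trials
times = [run_trial() for _ in range(20)]
print(sum(times) / len(times))
```

Swapping the random-walk policy for a coordinated one (shared cloud map, or hive-style task allocation) and re-running the same trials is exactly how the two networks would be compared under common metrics.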
Living IoT: A Flying Wireless Platform on Live Insects
Sensor networks with devices capable of moving could enable applications
ranging from precision irrigation to environmental sensing. Using mechanical
drones to move sensors, however, severely limits operation time since flight
time is limited by the energy density of current battery technology. We explore
an alternative, biology-based solution: integrate sensing, computing and
communication functionalities onto live flying insects to create a mobile IoT
platform.
Such an approach takes advantage of these tiny, highly efficient biological
insects, which are ubiquitous in many outdoor ecosystems, to essentially provide
mobility for free. Doing so, however, requires addressing key technical
challenges of power, size, weight and self-localization in order for the
insects to perform location-dependent sensing operations as they carry our IoT
payload through the environment. We develop and deploy on bumblebees a platform
that includes backscatter communication, low-power
self-localization hardware, sensors, and a power source. We show that our
platform is capable of sensing, backscattering data at 1 kbps when the insects
are back at the hive, and localizing itself up to distances of 80 m from the
access points, all within a total weight budget of 102 mg.

Comment: Co-primary authors: Vikram Iyer, Rajalakshmi Nandakumar, Anran Wang. In Proceedings of MobiCom, ACM, New York, NY, USA, 15 pages, 201
A face-off: classical and heuristic-based path planning approaches
Robot path planning is the computational problem of finding a valid sequence of configurations to move a robot from an initial position to a final destination. Several classical and heuristic-based methods exist to solve it. This paper compares the performance of a classical potential-field method, the Lyapunov-based Control Scheme (LbCS), with those of the standard and stepping-ahead Firefly Algorithms. The comparison is based on optimal path distance and time. The results show that the stepping-ahead Firefly Algorithm finds a shorter path in less time than the Lyapunov-based method. The LbCS also inherently suffers from the local minima problem when the start, target, and obstacle centre coordinates are collinear; the Firefly Algorithm overcomes this because the diversification of the fireflies helps them escape local minima.
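The local-minimum failure mode mentioned above can be reproduced with a generic artificial potential field planner (a minimal sketch, not the LbCS itself; all gains and radii are illustrative): gradient descent on an attractive goal potential plus a repulsive obstacle potential stalls when start, obstacle, and goal are collinear, because the two forces cancel on the line.

```python
import math

def apf_step(pos, goal, obstacle, k_att=1.0, k_rep=100.0, rho0=3.0, lr=0.05):
    """One gradient-descent step on an artificial potential field:
    attractive pull toward the goal plus a repulsive push away from the
    obstacle when inside its influence radius rho0."""
    gx = k_att * (pos[0] - goal[0])
    gy = k_att * (pos[1] - goal[1])
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if 1e-9 < d < rho0:
        c = k_rep * (1 / d - 1 / rho0) / d ** 3
        gx -= c * dx
        gy -= c * dy
    return (pos[0] - lr * gx, pos[1] - lr * gy)

def plan(start, goal, obstacle, steps=2000, tol=0.1):
    pos = start
    for _ in range(steps):
        if math.hypot(pos[0] - goal[0], pos[1] - goal[1]) < tol:
            return pos, True
        pos = apf_step(pos, goal, obstacle)
    return pos, False  # never reached the goal: likely a local minimum

# collinear start, obstacle, goal: repulsion cancels attraction on the
# line and the planner stalls in front of the obstacle
print(plan((0.0, 0.0), (10.0, 0.0), (5.0, 0.0)))
# offset obstacle: the robot slides around it and reaches the goal
print(plan((0.0, 0.0), (10.0, 0.0), (5.0, 2.0)))
```

A population-based method such as the Firefly Algorithm sidesteps this because its candidate solutions are scattered off the degenerate line, so at least some of them see a non-cancelling gradient.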
Global Localization based on Evolutionary Optimization Algorithms for Indoor and Underground Environments
International Mention in the doctoral degree.

A fully autonomous robot is defined by its capability to sense, understand and move
within the environment to perform a specific task. These qualities are included within
the concept of navigation. However, the most fundamental among them is localization:
the capacity of the system to know its position relative to its surroundings.
Therefore, the localization problem can be defined as the search for the robot's coordinates
and rotation angles within a known environment. In this thesis, the particular case
of Global Localization is addressed, when no information about the initial position
is known, and the robot relies only on its sensors. This work aims to develop several
tools that allow the system to localize itself in the two most common geometric map
representations: occupancy maps and point clouds. The former divide the space
into equally sized cells coded with a binary value distinguishing between free
and occupied space. Point clouds define obstacles and environment features as a
sparse set of points in space, commonly measured with a laser sensor.
In this work, various algorithms are presented to search for that position through
laser measurements only, in contrast with more usual methods that combine external
information with the robot's own motion information (odometry). The system is
therefore capable of finding its position in indoor environments without external
positioning and without the uncertainty (drift) that motion sensors typically induce.
Our solution implements various stochastic optimization algorithms, or metaheuristics,
specifically the bio-inspired family commonly known as Evolutionary Algorithms.
Inspired by natural phenomena, these algorithms
are based on the evolution of a series of particles or population members towards a
solution through the optimization of a cost or fitness function that defines the problem.
The implemented algorithms are Differential Evolution, Particle Swarm Optimization,
and Invasive Weed Optimization, which try to mimic the behavior of evolution
through mutation, the movement of swarms or flocks of animals, and the colonizing
behavior of invasive species of plants, respectively. The implementations
address the need to parameterize these algorithms for a search space as wide as
a complete three-dimensional map, requiring strongly exploratory behavior, and to
define the convergence conditions that terminate the search, since the process is
a recursive estimation in which the optimum is unknown beforehand. They search for
the optimal localization by comparing the laser measurements taken at the real
position with those predicted for each candidate particle in the known map. The
cost function evaluates this similarity between real and estimated measurements and,
therefore, is the function that defines the problem to optimize.
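The search loop described above can be sketched end to end in a toy setting (Python; the grid map, ray-marching scan model, and all Differential Evolution parameters are illustrative stand-ins, not those of the thesis): simulated scans from candidate poses are compared against the scan at the true pose, and Differential Evolution minimizes the mismatch over (x, y, θ).

```python
import math, random

# toy 10x10 occupancy grid: 1 = occupied, 0 = free (walled room + pillar)
GRID = [[1] * 10] + [[1] + [0] * 8 + [1] for _ in range(8)] + [[1] * 10]
GRID[4][6] = 1  # a pillar that breaks the room's symmetry

def cast(x, y, ang, step=0.1, rng=12.0):
    """March a ray from (x, y) until it enters an occupied cell."""
    d = 0.0
    while d < rng:
        cx, cy = int(x + d * math.cos(ang)), int(y + d * math.sin(ang))
        if not (0 <= cx < 10 and 0 <= cy < 10) or GRID[cy][cx]:
            return d
        d += step
    return rng

def scan(pose, n_beams=12):
    """Simulated laser scan: ranges along evenly spaced bearings."""
    x, y, th = pose
    return [cast(x, y, th + i * 2 * math.pi / n_beams) for i in range(n_beams)]

def cost(pose, real):
    """Mean squared error between the real scan and the scan predicted at
    a candidate pose -- the fitness the evolutionary search minimizes."""
    return sum((a - b) ** 2 for a, b in zip(scan(pose), real)) / len(real)

def differential_evolution(real, pop=20, gens=40, F=0.7, CR=0.9):
    lo, hi = [1.0, 1.0, 0.0], [9.0, 9.0, 2 * math.pi]
    P = [[random.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = random.sample(P[:i] + P[i + 1:], 3)
            trial = [min(max(a[k] + F * (b[k] - c[k]), lo[k]), hi[k])
                     if random.random() < CR else P[i][k] for k in range(3)]
            if cost(trial, real) < cost(P[i], real):  # greedy selection
                P[i] = trial
    return min(P, key=lambda p: cost(p, real))

true_pose = (3.0, 5.0, 0.4)
estimate = differential_evolution(scan(true_pose))
print(estimate)  # should land near the true pose
```

Particle Swarm Optimization or Invasive Weed Optimization would slot into the same structure: only the population-update rule changes, while the scan-comparison cost function stays identical.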
The common approach in localization or mapping using laser sensors is to establish
the mean square error or the absolute error between laser measurements as an
optimization function. In this work, a different perspective is introduced by benefiting
from statistical distance or divergences, utilized to describe the similarity between
probability distributions. By modeling the laser sensor as a probability distribution
over the measured distance, the algorithm can benefit from the asymmetries provided
by these divergences to favor or penalize different situations: the algorithm
evaluates how the laser scans differ, not only how much. The results obtained in different
maps, simulated and real, prove that the Global Localization issue is successfully
solved through these methods, both in position and orientation. The implementation
of divergence-based weighted cost functions gives the localization filters great
robustness and accuracy, and an excellent response to different sources and levels
of noise, whether from the sensor measurements, the environment, or the presence of obstacles that are not
registered in the map.

Doctoral Programme in Electrical Engineering, Electronics and Automation, Universidad Carlos III de Madrid. Committee: President: Fabio Bonsignorio; Secretary: María Dolores Blanco Rojas; Vocal: Alberto Brunete Gonzále
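The asymmetry argument made in the abstract can be shown concretely (Python; the Gaussian beam model and its noise parameters are illustrative assumptions, not the thesis's exact model): modeling each beam as a Gaussian around the measured range with range-dependent noise, a KL-divergence cost distinguishes over- from under-estimated ranges, whereas the mean squared error cannot.

```python
import math

def kl_gauss(mu_p, sig_p, mu_q, sig_q):
    """KL(P || Q) for 1-D Gaussians; asymmetric whenever variances differ."""
    return (math.log(sig_q / sig_p)
            + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sig_q ** 2) - 0.5)

def beam_sigma(z, base=0.05, k=0.02):
    """Illustrative noise model: uncertainty grows with measured range."""
    return base + k * z

def divergence_cost(real_scan, est_scan):
    """Sum of per-beam KL divergences from the real measurement model to
    the one predicted at the candidate pose. Unlike the mean squared error,
    over- and under-estimating a range are penalized differently."""
    return sum(kl_gauss(r, beam_sigma(r), e, beam_sigma(e))
               for r, e in zip(real_scan, est_scan))

def mse_cost(real_scan, est_scan):
    return sum((r - e) ** 2 for r, e in zip(real_scan, est_scan)) / len(real_scan)

real = [2.0, 2.5, 3.0]
short = [1.5, 2.0, 2.5]   # candidate sees obstacles closer than measured
long_ = [2.5, 3.0, 3.5]   # candidate sees obstacles farther than measured
print(mse_cost(real, short), mse_cost(real, long_))                # equal
print(divergence_cost(real, short), divergence_cost(real, long_))  # differ
```

This is the practical payoff of the divergence-based cost: a candidate pose that predicts ranges shorter than measured (e.g. an unmapped obstacle in front of the sensor) can be penalized differently from one that predicts ranges longer than measured.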
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
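For context, the de-facto standard formulation referred to above is maximum a posteriori estimation over a factor graph; a common way to write it (notation illustrative) is:

```latex
% MAP estimation of states/map X given measurements Z
X^{*} = \operatorname*{argmax}_{X} \; p(X \mid Z)
      = \operatorname*{argmax}_{X} \; p(X) \prod_{k} p(z_{k} \mid X_{k})

% with Gaussian noise models this reduces to nonlinear least squares,
% where h_k is the measurement model and \Omega_k the information matrix
X^{*} = \operatorname*{argmin}_{X} \sum_{k}
        \bigl\lVert h_{k}(X_{k}) - z_{k} \bigr\rVert^{2}_{\Omega_{k}}
```

Modern SLAM back-ends solve this least-squares problem iteratively, exploiting the sparsity of the factor graph for scalability.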