SwarmLab: a Matlab Drone Swarm Simulator
Among the available solutions for drone swarm simulation, we identified a gap in simulation frameworks that allow easy algorithm prototyping, tuning, debugging, and performance analysis without requiring the user to interface with multiple programming languages. We present SwarmLab, a simulator written entirely in Matlab that aims to establish standardized processes and metrics for quantifying the performance and robustness of swarm algorithms, with a particular focus on drones. We showcase the functionalities of SwarmLab
by comparing two state-of-the-art algorithms for the navigation of aerial
swarms in cluttered environments, Olfati-Saber's and Vasarhelyi's. We analyze
the variability of the inter-agent distances and agents' speeds during flight.
We also study several of the proposed performance metrics, namely order, inter- and extra-agent safety, union, and connectivity. While Olfati-Saber's approach
results in a faster crossing of the obstacle field, Vasarhelyi's approach
allows the agents to fly smoother trajectories, without oscillations. We
believe that SwarmLab is relevant to both the biological and robotics research communities, as well as to education, since it allows fast algorithm development, the automatic collection of simulated data, and the systematic analysis of swarming behaviors with performance metrics inherited from the state of the art.
Comment: Accepted to the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020).
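As a concrete illustration of the order metric mentioned above, the following minimal Python sketch computes the commonly used pairwise velocity-alignment definition (equal to 1 when all agents move in the same direction); SwarmLab itself is implemented in Matlab, and its exact metric definitions may differ.

```python
import numpy as np

def order_metric(velocities):
    """Mean pairwise alignment of agent velocities (1 = fully ordered swarm)."""
    v = np.asarray(velocities, dtype=float)
    unit = v / np.clip(np.linalg.norm(v, axis=1, keepdims=True), 1e-9, None)
    n = len(unit)
    cos = unit @ unit.T                     # pairwise cosine similarities
    return (cos.sum() - n) / (n * (n - 1))  # exclude the n self-pairs

# Example: three agents flying in roughly the same direction
print(order_metric([[1.0, 0.1, 0.0], [0.9, 0.0, 0.0], [1.0, -0.1, 0.0]]))
```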
Data-Driven Analytic Differentiation via High Gain Observers and Gaussian Process Priors
This paper tackles the problem of modeling an unknown function, and its first derivatives, from scattered and poor-quality data. The
considered setting embraces a large number of use cases addressed in the
literature and fits especially well in the context of control barrier
functions, where high-order derivatives of the safe set are required to
preserve the safety of the controlled system. The approach builds on a cascade
of high-gain observers and a set of Gaussian process regressors trained on the
observers' data. The proposed structure allows for high robustness against
measurement noise and flexibility with respect to the employed sampling law.
Unlike previous approaches in the field, where a large number of samples is required to correctly fit the unknown function derivatives, here we assume access only to a small window of samples that slides in time. The paper presents performance bounds on the attained regression error and numerical simulations showing how the proposed method outperforms previous approaches.
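To give a sense of how derivatives can be extracted from a short window of noisy samples with a Gaussian process prior, here is a minimal Python sketch using scikit-learn and the analytic derivative of an RBF-kernel posterior mean; the high-gain observer cascade of the paper is not reproduced, and the window size, kernel, and test signal are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 25)                            # small sliding window
y = np.sin(3 * t) + 0.05 * rng.standard_normal(t.size)   # noisy samples of f(t) = sin(3t)

kernel = RBF(length_scale=0.3) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel).fit(t[:, None], y)

# Analytic derivative of the posterior mean for an RBF kernel:
# d/dt* k(t*, t_i) = -(t* - t_i) / l^2 * k(t*, t_i)
l = gp.kernel_.k1.length_scale
t_star = 1.0
k_star = np.exp(-0.5 * ((t_star - t) / l) ** 2)
dk_star = -(t_star - t) / l**2 * k_star
print("estimated f'(1.0):", dk_star @ gp.alpha_, "  true value:", 3 * np.cos(3.0))
```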
Hand-worn Haptic Interface for Drone Teleoperation
Drone teleoperation is usually accomplished using remote radio controllers,
devices that can be hard to master for inexperienced users. Moreover, the
limited amount of information fed back to the user about the robot's state, often restricted to vision, can represent a bottleneck for operation in several
conditions. In this work, we present a wearable interface for drone
teleoperation and its evaluation through a user study. The two main features of
the proposed system are a data glove that allows the user to control the drone trajectory by hand motion and a haptic system that augments the user's awareness
of the environment surrounding the robot. This interface can be employed for
the operation of robotic systems in line of sight (LoS) by inexperienced
operators and allows them to safely perform tasks common in inspection and search-and-rescue missions, such as approaching walls and crossing narrow passages under limited visibility. In addition to the design and implementation of the wearable interface, we systematically assessed the effectiveness of the system through three user studies (n = 36), evaluating the users' learning curve and their ability to perform tasks with
limited visibility. We validated our ideas in both a simulated and a real-world
environment. Our results demonstrate that the proposed system can improve
teleoperation performance in different cases compared to standard remote
controllers, making it a viable alternative to standard Human-Robot Interfaces.
Comment: Accepted at the IEEE International Conference on Robotics and Automation (ICRA) 202
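To illustrate the kind of mapping a hand-worn interface of this type relies on, below is a hypothetical Python sketch that turns hand tilt into a velocity setpoint and obstacle distance into vibration intensity; the gains, thresholds, and linear mappings are illustrative assumptions rather than the design used in the paper.

```python
import numpy as np

def hand_to_velocity(roll, pitch, gain=1.5):
    """Map hand roll/pitch angles (rad) to a lateral/forward velocity setpoint (m/s).
    The linear mapping and gain are illustrative assumptions."""
    return np.array([gain * pitch, gain * roll])

def haptic_intensity(obstacle_distance, d_min=0.5, d_max=3.0):
    """Vibration intensity in [0, 1], growing as the drone approaches an obstacle."""
    x = (d_max - obstacle_distance) / (d_max - d_min)
    return float(np.clip(x, 0.0, 1.0))

print(hand_to_velocity(roll=0.1, pitch=0.2), haptic_intensity(obstacle_distance=1.2))
```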
The influence of limited visual sensing on the Reynolds flocking algorithm
The interest in multi-drone systems has flourished in the last decade, and their application is promising in many fields. We believe that, in order to make drone swarms fly smoothly and reliably in real-world scenarios, a first intermediate step is needed: analyzing the effects of limited sensing on the behavior of the swarm. In nature, the sensory modality most often used for achieving flocking is vision. In this work, we study how the reduction of the field of view and the orientation of the visual sensors affect the performance of the Reynolds flocking algorithm used to control the swarm. To quantify the impact of limited visual sensing, we introduce several metrics: (i) order, (ii) safety, (iii) union, and (iv) connectivity. As nature suggests, our results confirm that lateral vision is essential for coordinating the movements of the individuals. Moreover, the analysis we provide will simplify the tuning of the Reynolds flocking algorithm, which is crucial for real-world deployment and, especially for aerial swarms, depends on the envisioned application. We obtain the results presented in this paper through extensive Monte Carlo simulations, complemented by genetic algorithm optimization.
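As an illustration of how a restricted field of view enters such an analysis, the following minimal Python sketch selects the neighbors an agent can actually see before the Reynolds rules are applied; the field-of-view angle, sensing range, and sensor orientation (here aligned with the agent's heading) are illustrative assumptions, not the configurations evaluated in the paper.

```python
import numpy as np

def visible_neighbors(positions, velocities, i, fov_deg=270.0, max_range=10.0):
    """Return the indices of agents that agent i can see with a limited field of
    view centred on its heading. fov_deg and max_range are illustrative values."""
    p = np.asarray(positions, dtype=float)
    v = np.asarray(velocities, dtype=float)
    heading = v[i] / (np.linalg.norm(v[i]) + 1e-9)
    seen = []
    for j in range(len(p)):
        if j == i:
            continue
        offset = p[j] - p[i]
        dist = np.linalg.norm(offset)
        if dist > max_range:
            continue
        cos_angle = offset @ heading / (dist + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        if angle <= fov_deg / 2.0:
            seen.append(j)
    return seen

# Example: agent 0 heads along +x and, with a 90-degree FOV, only sees the neighbor ahead
print(visible_neighbors([[0, 0], [3, 0], [-3, 0]], [[1, 0], [1, 0], [1, 0]], i=0, fov_deg=90.0))
```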
UWB-based system for UAV Localization in GNSS-Denied Environments: Characterization and Dataset
Small unmanned aerial vehicles (UAVs) have penetrated multiple domains over the past years. In GNSS-denied or indoor environments, aerial robots require a
robust and stable localization system, often with external feedback, in order
to fly safely. Motion capture systems are typically utilized indoors when
accurate localization is needed. However, these systems are expensive and most
require a fixed setup. Recently, visual-inertial odometry and similar methods
have advanced to a point where autonomous UAVs can rely on them for
localization. The main limitations in this case come from the environment, as well as from long-term autonomy, where error accumulates if loop closure cannot be performed efficiently. For instance, low visibility due to dust or smoke in post-disaster scenarios might render odometry methods inapplicable. In this paper, we study and characterize an ultra-wideband (UWB)
system for navigation and localization of aerial robots indoors based on
Decawave's DWM1001 UWB node. The system is portable, inexpensive, and can be entirely battery powered. We show the viability of this system for
autonomous flight of UAVs, and provide open-source methods and data that enable
its widespread application even with movable anchor systems. We characterize
the accuracy based on the position of the UAV with respect to the anchors, its
altitude and speed, and the distribution of the anchors in space. Finally, we
analyze the accuracy of the self-calibration of the anchors' positions.
Comment: Accepted to the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020).
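For intuition on how a position estimate is obtained from such UWB ranges, here is a generic Python multilateration sketch based on nonlinear least squares; the anchor layout and range values are made up for the example, and this is not the solver used by Decawave's nodes or in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed anchor layout (m) and illustrative UWB range measurements (m)
anchors = np.array([[0.0, 0.0, 0.5], [5.0, 0.0, 0.5],
                    [5.0, 5.0, 2.0], [0.0, 5.0, 2.0]])
ranges = np.array([3.3, 4.0, 4.0, 3.3])

def residuals(p):
    """Difference between predicted anchor distances and measured ranges."""
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([2.5, 2.5, 1.0])).x
print("Estimated UAV position:", estimate)
```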
Integration of Earth observation, aerosol and people evacuation modelling for preparedness and emergency response
N/A