345 research outputs found

    Audio-based Localization for Swarms of Micro Air Vehicles

    Localization is one of the key challenges that must be addressed before truly autonomous MAV teams can be designed. In this paper, we present a cooperative method to address the localization problem for a team of MAVs, where individuals obtain their position by perceiving a sound-emitting beacon MAV that flies relative to a reference point in the environment. For this purpose, an on-board audio-based localization system is proposed that allows individuals to measure the relative bearing to the beacon robot and, furthermore, to localize themselves and the beacon robot simultaneously, without the need for a communication network. Our method is based on coherence testing among the signals of a small on-board microphone array to obtain the relative bearing measurements, and on an estimator that fuses these measurements with sensory information about the motion of the robot over time to robustly estimate the MAV positions. The proposed method is evaluated both in simulation and in real-world experiments.
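    The bearing measurement described in this abstract can be illustrated for a single microphone pair: the inter-microphone time delay is read from the cross-correlation peak (a simplified stand-in for the coherence test the paper uses), and the far-field geometry gives the bearing via tau = d·cos(theta)/c. The function name, test signal, and parameter values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bearing_from_pair(sig_a, sig_b, fs, mic_dist, c=343.0):
    """Estimate the bearing (degrees) of a sound source from the time delay
    between two microphones, taken at the cross-correlation peak."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # samples by which sig_a lags sig_b
    tau = lag / fs                              # delay in seconds
    cos_theta = np.clip(c * tau / mic_dist, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Synthetic chirp arriving 7 samples later at mic A than at mic B.
fs, d = 48_000, 0.10                            # sample rate (Hz), mic spacing (m)
t = np.arange(0, 0.02, 1 / fs)
src = np.sin(2 * np.pi * (500 + 20_000 * t) * t)  # rising chirp
delay = 7
mic_b = src
mic_a = np.concatenate([np.zeros(delay), src[:-delay]])  # A hears it later
theta = bearing_from_pair(mic_a, mic_b, fs, d)
```

    With the microphones 10 cm apart, a 7-sample delay at 48 kHz corresponds to a bearing of roughly 60 degrees; a zero delay places the source at broadside (90 degrees).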

    2007 Annual Report of the Graduate School of Engineering and Management, Air Force Institute of Technology

    The Graduate School's Annual Report highlights research focus areas, new academic programs, faculty accomplishments and news, and provides top-level sponsor-funded research data and information.

    Dronevision: An Experimental 3D Testbed for Flying Light Specks

    Today's robotic laboratories for drones are housed in a large room. At times, they are the size of a warehouse. These spaces are typically equipped with permanent devices to localize the drones, e.g., Vicon infrared cameras. Significant time is invested to fine-tune the localization apparatus to compute and control the position of the drones. One may use these laboratories to develop a 3D multimedia system with miniature-sized drones configured with light sources. As an alternative, this brave new idea paper envisions shrinking these room-sized laboratories to the size of a cube or cuboid that sits on a desk and costs less than 10K dollars. The resulting Dronevision (DV) will be the size of a 1990s television. In addition to light sources, its Flying Light Specks (FLSs) will be network-enabled drones with storage and processing capability to implement decentralized algorithms. The DV will include a localization technique to expedite development of 3D displays. It will act as a haptic interface for a user to interact with and manipulate the 3D virtual illuminations. It will empower an experimenter to design, implement, test, debug, and maintain software and hardware that realize novel algorithms in the comfort of their office, without having to reserve a laboratory. In addition to enhancing productivity, it will improve the safety of the experimenter by minimizing the likelihood of accidents. This paper introduces the concept of a DV, the research agenda one may pursue using this device, and our plans to realize one.

    Audio-based Relative Positioning System for Multiple Micro Air Vehicle Systems

    Employing a group of independently controlled flying micro air vehicles (MAVs) for aerial coverage missions, instead of a single flying robot, increases the robustness and efficiency of the missions. Designing a group of MAVs requires addressing new challenges, such as inter-robot collision avoidance and formation control, where each individual's knowledge about the relative location of its local group members is essential. A relative positioning system for a MAV needs to satisfy severe constraints in terms of size, weight, processing power, power consumption, three-dimensional coverage, and price. In this paper, we present an on-board audio-based system that is capable of providing individuals with relative positioning information about their neighbouring sound-emitting MAVs. We propose a method based on coherence testing among the signals of a small on-board microphone array to obtain relative bearing measurements, and a particle filter estimator to fuse these measurements with information about the motion of the robots over time to obtain the desired relative location estimates. A method based on the fractional Fourier transform (FrFT) is used to identify and extract the sounds of simultaneously chirping robots in the neighbourhood. Furthermore, we evaluate the proposed method in a real-world experiment with three simultaneously flying micro air vehicles.
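    The particle filter fusion step can be sketched as follows: each particle is a hypothesized relative position, and its weight is updated by the likelihood of the bearing reported by the microphone array. This is a minimal sketch under an assumed Gaussian bearing-noise model; the variable names and noise level are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_bearing_update(particles, weights, measured_bearing, sigma=np.radians(5)):
    """One measurement update: reweight candidate relative positions (N, 2)
    by how well their predicted bearing matches the measured bearing (rad)."""
    predicted = np.arctan2(particles[:, 1], particles[:, 0])
    err = np.angle(np.exp(1j * (predicted - measured_bearing)))  # wrap to [-pi, pi]
    weights = weights * np.exp(-0.5 * (err / sigma) ** 2)        # Gaussian likelihood
    return weights / weights.sum()

# Particles split between two hypotheses; a 45-degree bearing should favour (1, 1).
particles = np.vstack([rng.normal([1.0, 1.0], 0.05, (500, 2)),
                       rng.normal([1.0, -1.0], 0.05, (500, 2))])
weights = np.full(len(particles), 1 / len(particles))
weights = pf_bearing_update(particles, weights, np.radians(45))
estimate = weights @ particles   # weighted-mean relative position
```

    In the full estimator, this update would alternate with a motion-model prediction step and periodic resampling.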

    Self-Supervised Learning of Visual Robot Localization Using LED State Prediction as a Pretext Task

    We propose a novel self-supervised approach for learning to visually localize robots equipped with controllable LEDs. We rely on a few training samples labeled with position ground truth and many training samples in which only the LED state is known, whose collection is cheap. We show that using LED state prediction as a pretext task significantly helps to learn the visual localization end task. The resulting model does not require knowledge of LED states during inference. We instantiate the approach to visual relative localization of nano-quadrotors: experimental results show that using our pretext task significantly improves localization accuracy (from 68.3% to 76.2%) and outperforms alternative strategies, such as a supervised baseline, model pre-training, and an autoencoding pretext task. We deploy our model aboard a 27-g Crazyflie nano-drone, running at 21 fps, in a position-tracking task of a peer nano-drone. Our approach, relying on position labels for only 300 images, yields a mean tracking error of 4.2 cm versus 11.9 cm of a supervised baseline model trained without our pretext task. Videos and code of the proposed approach are available at https://github.com/idsia-robotics/leds-as-pretex
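    The training objective implied by the abstract, position regression on the few labeled samples plus LED-state prediction on all samples, can be sketched as a combined loss. This is a hypothetical NumPy formulation; the paper's actual model architecture, loss weighting, and LED-state encoding may differ.

```python
import numpy as np

def joint_loss(pos_pred, pos_true, led_logit, led_true, has_pos, w_pretext=1.0):
    """Combined loss: position MSE on the labeled subset plus binary
    cross-entropy on the cheap, always-available LED-state labels."""
    # MSE only where a position label exists (has_pos is a 0/1 mask).
    pos_err = ((pos_pred - pos_true) ** 2).sum(axis=1)
    pos_loss = (has_pos * pos_err).sum() / max(has_pos.sum(), 1)
    # BCE on the LED state: the pretext task supervising every sample.
    p = 1.0 / (1.0 + np.exp(-led_logit))
    bce = -(led_true * np.log(p) + (1 - led_true) * np.log(1 - p)).mean()
    return pos_loss + w_pretext * bce
```

    During inference only the position head is used, so the deployed model needs no knowledge of the LED states, as the abstract notes.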

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in robot swarms with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control could be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking, foraging) with learning-based control strategies (using artificial neural networks) for the adaptive control of multiple robots. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking control for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and to guarantee flocking and navigation, a reward function is defined that combines global flocking maintenance, a mutual reward, and a collision penalty.
We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy, using actor-critic networks and a global state space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walk to control the communication between a team of robots with swarming behavior for musical creation.
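    The shaped reward described above, flocking maintenance plus a collision penalty, can be sketched per UAV as follows; the constants and exact shaping terms are assumptions for illustration, not the thesis's tuned reward.

```python
import numpy as np

def flocking_reward(pos_i, neighbors, d_safe=0.5, d_ref=2.0):
    """Illustrative per-UAV reward: keep a reference spacing to the flock
    (cohesion term), with a hard penalty for each near-collision."""
    dists = np.linalg.norm(neighbors - pos_i, axis=1)
    cohesion = -np.abs(dists - d_ref).mean()   # zero when spacing is ideal
    collision = -10.0 * np.sum(dists < d_safe) # large penalty per near-miss
    return cohesion + collision
```

    In the DDPG setup, each UAV would receive this local reward during centralized training while executing its shared policy on local observations only.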

    Particle Swarm Optimization Based Source Seeking

    Signal source seeking using autonomous vehicles is a complex problem. The complexity increases manifold when the signal intensities captured by physical on-board sensors are noisy and unreliable. Added to the fact that signal strength decays with distance, noisy environments make it extremely difficult to describe and model a decay function. This paper addresses our work on seeking the maximum signal strength of a continuous electromagnetic signal source with mobile robots, using Particle Swarm Optimization (PSO). A one-to-one correspondence between swarm members in the PSO and physical mobile robots is established, and the positions of the robots are iteratively updated as the PSO algorithm proceeds. Since physical robots are responsive to swarm position updates, modifications were required to implement the interaction between the real robots and the PSO algorithm. The modifications necessary to implement PSO on mobile robots, and strategies to adapt to real-life environments with obstacles and collision objects, are presented in this paper. Our findings are also validated using experimental testbeds.
    Comment: 13 pages, 12 figures
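    The canonical PSO update that the paper maps onto physical robot positions can be sketched as follows; the inertia and acceleration gains are common textbook defaults, not the paper's tuned values, and the quadratic field is a stand-in for the noisy electromagnetic source.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update: inertia + cognitive + social terms."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Stand-in "signal strength" field peaked at (3, -1); robots seek its maximum.
def signal(p):
    return -np.sum((p - np.array([3.0, -1.0])) ** 2, axis=-1)

x = rng.uniform(-5.0, 5.0, (20, 2))   # robot positions double as PSO particles
v = np.zeros_like(x)
pbest, pval = x.copy(), signal(x)
for _ in range(60):
    x, v = pso_step(x, v, pbest, pbest[np.argmax(pval)])
    val = signal(x)
    better = val > pval
    pbest[better], pval[better] = x[better], val[better]
best = pbest[np.argmax(pval)]         # strongest signal found by the swarm
```

    On physical robots, each position update becomes a waypoint command, which is where the paper's modifications (obstacle handling, collision avoidance) come in.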

    On-Board Relative Bearing Estimation for Teams of Drones Using Sound

    In a team of autonomous drones, individual knowledge about the relative location of teammates is essential. Existing relative positioning solutions for teams of small drones mostly rely on external systems, such as motion-tracking cameras or GPS satellites, that might not always be accessible. In this letter, we describe an on-board solution for measuring the 3-D relative direction between drones using sound as the main source of information. First, we describe a method to measure the directions of other robots by perceiving their engine sounds in the absence of self-engine noise. We then extend the method to use active acoustic signaling to obtain the relative directions in the presence of self-engine noise, to increase the detection range, and to discriminate the identity of robots. The methods are evaluated in real-world experiments, and a fully autonomous leader-following behavior is demonstrated with two drones using the proposed system.