3 research outputs found

    Secure and Decentralized Swarm Behavior with Autonomous Agents for Smart Cities

    Unmanned Aerial Vehicles (UAVs), commonly referred to as drones, have reached consumer adoption for hobby and business use. Drone applications, such as infrastructure technology, security mechanisms, and resource delivery, are just the starting point. More complex tasks are possible through the use of UAV swarms. These tasks increase the potential impact that drones will have on smart cities: modern cities that have fully adopted technology to enhance daily operations as well as the welfare of their citizens. Smart cities not only consist of static mesh networks of sensors but can contain dynamic elements as well, including both ground- and air-based autonomous vehicles. Networked computational devices require paramount security to ensure the safety of a city. To achieve such high levels of security, services rely on secure-by-design protocols that are impervious to security threats. Given their large numbers of sensors, autonomous vehicles, and other advancements, smart cities necessitate this level of security. The SHARKS protocol (Secure, Heterogeneous, Autonomous, and Rotational Knowledge for Swarms) provides this kind of security while enabling new applications for UAV swarm technology. By enabling drones to circle a target without centralized control or designated lead agents, the SHARKS protocol achieves organized movement among agents without creating a central point for attackers to target. Through comparisons of the protocol's stability in different settings, experiments demonstrate the efficiency and capacity of the SHARKS protocol.
    Comment: 8 pages, 1 figure, 1 chart, 8 tables
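    The abstract describes leaderless circling around a target. The SHARKS protocol itself is not specified here, so the following is only a minimal sketch of the general idea, decentralized orbiting from purely local rules; the gains and separation distance are arbitrary illustrative choices.

```python
import numpy as np

def circling_velocity(pos, target, neighbors,
                      radius=10.0, sep=3.0, k_r=0.5, k_t=1.0, k_s=0.8):
    """Velocity command for one agent: hold a ring of `radius` around
    `target`, orbit tangentially, and keep `sep` away from neighbors.
    Every agent runs the same local rule, so there is no leader and no
    central node to attack (the property the abstract highlights).
    Hypothetical gains and structure; not the SHARKS protocol itself."""
    offset = pos - target
    dist = np.linalg.norm(offset) + 1e-9
    radial = offset / dist
    # Radial correction toward the desired ring.
    v = -k_r * (dist - radius) * radial
    # Tangential component (radial rotated 90 degrees) drives the orbit.
    v += k_t * np.array([-radial[1], radial[0]])
    # Short-range repulsion from nearby neighbors avoids collisions.
    for n in neighbors:
        d = pos - n
        nd = np.linalg.norm(d) + 1e-9
        if nd < sep:
            v += k_s * d / nd**2
    return v
```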

    Vision-based multirotor following using synthetic learning techniques

    Deep- and reinforcement-learning techniques increasingly require large sets of real data to achieve stable convergence and generalization in the context of image-recognition, object-detection, or motion-control strategies. On this subject, the research community lacks robust approaches for overcoming the unavailability of extensive real-world data by means of realistic synthetic information and domain-adaptation techniques. In this work, synthetic-learning strategies were used for the vision-based autonomous following of a non-cooperative multirotor. The complete maneuver was learned from synthetic images and high-dimensional, low-level continuous robot states, using deep- and reinforcement-learning techniques for object detection and motion control, respectively. A novel motion-control strategy for object following is introduced in which the camera gimbal movement is coupled with the multirotor motion during the following maneuver. The results confirm that the present framework can be used to deploy a vision-based task in real flight using synthetic data. It was extensively validated in both simulated and real-flight scenarios, providing satisfactory results (following a multirotor at up to 1.3 m/s in simulation and 0.3 m/s in real flights).
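    The key control idea in this abstract is coupling gimbal movement with multirotor motion. The paper's actual controller is learned, so the sketch below only illustrates the coupling concept with hand-written proportional terms: the gimbal centres the target in the image, and the body then yaws and accelerates so the gimbal can relax toward a nominal pose. All names, gains, and camera parameters here are assumptions.

```python
import numpy as np

def coupled_follow(err_px, gimbal_pitch, gimbal_yaw,
                   fov=np.radians(90.0), img_width=640,
                   k_g=0.8, k_yaw=1.2, k_fwd=2.0,
                   nominal_pitch=np.radians(-20.0)):
    """Illustrative proportional coupling (not the paper's learned policy).
    err_px: (x, y) pixel offset of the target from the image centre.
    Returns gimbal rates plus body yaw rate and forward speed."""
    # Pixel error -> angular error under a simple pinhole assumption.
    rad_per_px = fov / img_width
    gimbal_yaw_rate = k_g * err_px[0] * rad_per_px
    gimbal_pitch_rate = -k_g * err_px[1] * rad_per_px
    # Coupling: the body chases the gimbal. Yaw follows the gimbal's yaw
    # offset; forward speed grows as the target pulls the gimbal up above
    # its nominal downward pitch (i.e. the target is far ahead).
    body_yaw_rate = k_yaw * gimbal_yaw
    forward_speed = k_fwd * max(0.0, gimbal_pitch - nominal_pitch)
    return gimbal_yaw_rate, gimbal_pitch_rate, body_yaw_rate, forward_speed
```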

    Computer vision in target pursuit using a UAV

    Research in target pursuit using Unmanned Aerial Vehicles (UAVs) has gained attention in recent years, primarily due to the decreasing cost of and increasing demand for small UAVs in many sectors. In computer vision, target pursuit is a complex problem, as it involves solving many sub-problems typically concerned with detecting, tracking, and following the object of interest. At present, the majority of existing methods are developed in computer simulation under the assumption of ideal environmental conditions, while the remaining few practical methods mainly track and follow simple, monochromatic objects with very little texture variance. Current research on this topic lacks practical vision-based approaches. The aim of this research is therefore to fill that gap by developing a real-time algorithm capable of continuously following a person given only a photo as input.

    Because this research treats the whole procedure as an autonomous system, the drone is activated automatically upon receiving a photo of a person over Wi-Fi, meaning the whole system can be triggered by simply emailing a single photo from any device, anywhere. This is done by first implementing image fetching to automatically connect to Wi-Fi, download the image, and decode it. Human detection is then performed to extract a template from the upper body of the person, and the intended target is acquired using both human detection and template matching. Finally, target pursuit is achieved by tracking the template continuously while sending motion commands to the drone.

    In the target pursuit system, detection is mainly accomplished using a proposed human-detection method capable of robustly detecting, extracting, and segmenting the human body figure from the background without prior training; this involves detecting the face, head, and shoulders separately, mainly using gradient maps. Tracking is mainly accomplished using a proposed generic, non-learning template-matching method that combines intensity template matching with a colour histogram model and employs a three-tier system for template management. A flight controller is also developed; it supports three types of control (keyboard, mouse, and text messages), and the drone is programmed with three different modes: standby, sentry, and search.

    To improve the detection and tracking of coloured objects, this research also proposes several colour-related methods. One of them is a colour model for colour detection consisting of three components: hue, purity, and brightness, where hue represents the colour angle, purity represents the colourfulness, and brightness represents the intensity. The model can be represented in three different geometric shapes (sphere, hemisphere, and cylinder), each of which has two variations.

    Experimental results have shown that the target pursuit algorithm is capable of robustly identifying and following the target person given only a photo as input, as evidenced by the live tracking and mapping of intended targets wearing different clothing in both indoor and outdoor environments. Additionally, the various methods developed in this research could enhance the performance of practical vision-based applications, especially in object detection and tracking.
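    The hue/purity/brightness colour model is described only at the level of "colour angle, colourfulness, intensity", so the formulas below are an assumed HSV-like construction on a colour-opponent plane, not the thesis's own definitions.

```python
import numpy as np

def hue_purity_brightness(r, g, b):
    """Map an RGB pixel (floats in [0, 1]) to the three components named in
    the abstract. Illustrative formulas only: brightness as mean intensity,
    hue as the angle on an opponent-colour plane, purity as the distance
    from the grey axis (chroma)."""
    brightness = (r + g + b) / 3.0
    # Project onto a colour-opponent plane; greys map to the origin.
    alpha = r - 0.5 * (g + b)
    beta = (np.sqrt(3.0) / 2.0) * (g - b)
    hue = np.degrees(np.arctan2(beta, alpha)) % 360.0  # colour angle
    purity = float(np.hypot(alpha, beta))              # colourfulness
    return hue, purity, brightness

# Example: a saturated red pixel has hue ~0 degrees and high purity.
print(hue_purity_brightness(1.0, 0.0, 0.0))
```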