
    Detection of Cattle Using Drones and Convolutional Neural Networks

    Multirotor drones have been one of the most important technological advances of the last decade. Their mechanics are simple compared to those of other types of drone, and their flight capabilities are greater; for example, they can take off vertically. These capabilities have therefore brought progress to many professional activities. Moreover, advances in computing and telecommunications have broadened the range of activities in which drones may be used. Currently, artificial intelligence and information analysis are the main areas of research in the field of computing. The case study presented in this article applied artificial intelligence techniques to the analysis of information captured by drones. More specifically, the camera installed on the drone took images which were later analyzed using Convolutional Neural Networks (CNNs) to identify the objects captured in them. In this research, a CNN was trained to detect cattle; however, the same training process could be followed to develop a CNN for the detection of any other object. This article describes the design of the platform for real-time analysis of information and its performance in the detection of cattle.
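    The core operation such a CNN applies to each drone image can be sketched as a single convolution-plus-ReLU layer. The kernel and the 4x4 toy "image" below are illustrative, not part of the paper's model; a real detector stacks many such layers and runs on a deep-learning framework.

```python
# A single convolution-plus-ReLU layer, the basic building block of a CNN,
# sketched in pure Python. The kernel and toy image are illustrative only.

def conv_relu(image, kernel):
    """Valid-mode 2-D cross-correlation followed by a ReLU activation."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = sum(image[r + i][c + j] * kernel[i][j]
                      for i in range(kh) for j in range(kw))
            row.append(max(acc, 0))  # ReLU: keep positive responses only
        out.append(row)
    return out

# A vertical-edge kernel fires on the dark-to-bright boundary of the toy image.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1],
          [-1, 1]]
features = conv_relu(image, kernel)
```

    Training adjusts the kernel weights so that such responses come to indicate cattle rather than hand-picked edges.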

    Underwater Fish Detection with Weak Multi-Domain Supervision

    Given a sufficiently large training dataset, it is relatively easy to train a modern convolutional neural network (CNN) as a required image classifier. However, for the task of fish classification and/or fish detection, if a CNN was trained to detect or classify particular fish species in particular background habitats, the same CNN exhibits much lower accuracy when applied to new/unseen fish species and/or habitats. Therefore, in practice, the CNN needs to be continuously fine-tuned to improve its classification accuracy and handle new project-specific fish species or habitats. In this work we present a labelling-efficient method of training a CNN-based fish detector (the Xception CNN was used as the base) on a relatively small number (4,000) of project-domain underwater fish/no-fish images from 20 different habitats. Additionally, 17,000 known-negative (that is, fish-free) general-domain (VOC2012) above-water images were used. Two publicly available fish-domain datasets supplied an additional 27,000 above-water and underwater positive/fish images. Using this multi-domain collection of images, the trained Xception-based binary (fish/no-fish) classifier achieved 0.17% false positives and 0.61% false negatives on the project's 20,000 negative and 16,000 positive holdout test images, respectively. The area under the ROC curve (AUC) was 99.94%. Published in the 2019 International Joint Conference on Neural Networks (IJCNN-2019), Budapest, Hungary, July 14-19, 2019, https://www.ijcnn.org/ , https://ieeexplore.ieee.org/document/885190
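    The reported holdout error rates are simple per-class ratios. The raw counts used below are back-calculated from the published percentages and holdout sizes (34/20,000 = 0.17%; 98/16,000 is approximately 0.61%), so they are illustrative, not taken from the paper.

```python
# False-positive and false-negative rates as per-class ratios. The counts
# are back-calculated from the published percentages, not from the paper.

def false_positive_rate(false_positives, negatives):
    """Fraction of fish-free images wrongly flagged as containing fish."""
    return false_positives / negatives

def false_negative_rate(false_negatives, positives):
    """Fraction of fish images the classifier missed."""
    return false_negatives / positives

fpr = false_positive_rate(34, 20_000)   # 0.17% on the negative holdout
fnr = false_negative_rate(98, 16_000)   # about 0.61% on the positive holdout
```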

    Auto-Encoder Learning-Based UAV Communications for Livestock Management

    Advances in computing and telecommunication have broadened the applications of drones beyond military surveillance to other fields, such as agriculture. Livestock farming using unmanned aerial vehicle (UAV) systems requires surveillance and monitoring of animals on relatively large farmland. A reliable communication system between UAVs and the ground control station (GCS) is necessary to achieve this. This paper describes learning-based communication strategies and techniques that enable interaction and data exchange between UAVs and a GCS. We propose a deep auto-encoder UAV design framework for end-to-end communications. Simulation results show that the auto-encoder learns joint transmitter (UAV) and receiver (GCS) mapping functions for various communication strategies, such as QPSK, 8PSK, 16PSK and 16QAM, without prior knowledge.
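    As a fixed baseline for the transmitter/receiver mappings the auto-encoder learns, the conventional Gray-coded QPSK scheme can be sketched in a few lines. This sketch is illustrative and not from the paper.

```python
import math

# Conventional Gray-coded QPSK: two bits per symbol, unit-energy
# constellation points. The auto-encoder in the paper learns mappings
# like this end-to-end instead of using a fixed table.
QPSK = {
    (0, 0): complex(1, 1) / math.sqrt(2),
    (0, 1): complex(-1, 1) / math.sqrt(2),
    (1, 1): complex(-1, -1) / math.sqrt(2),
    (1, 0): complex(1, -1) / math.sqrt(2),
}

def modulate(bits):
    """Transmitter-side mapping: an even-length bit list to QPSK symbols."""
    return [QPSK[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def demodulate(symbols):
    """Receiver-side mapping: minimum-distance (hard-decision) demapping."""
    out = []
    for s in symbols:
        bits, _ = min(QPSK.items(), key=lambda kv: abs(s - kv[1]))
        out.extend(bits)
    return out
```

    Gray coding means adjacent constellation points differ in one bit, so a small channel error corrupts at most one bit per symbol.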

    Surveying Areas in Developing Regions Through Context Aware Drone Mobility.

    Developing regions are often characterized by large areas that are poorly reachable or explored. The mapping of these regions and the census of roaming populations in these areas are often difficult and sporadic. In this paper we put forward an approach to aid area surveying which relies on autonomous drone mobility. In particular, we illustrate the two main components of the approach: an efficient on-device object detection component, built on Convolutional Neural Networks, capable of detecting human settlements and animals on the ground with acceptable performance (latency and accuracy), and a path planning component, informed by the object detection module, which exploits Artificial Potential Fields to dynamically adapt the flight in order to gather useful information about the environment while keeping optimal flight paths. We report some initial performance results of the on-board visual perception module and describe our experimental platform based on a fixed-wing aircraft. The project was partially funded through an Institutional GCRF EPSRC grant.
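    The Artificial Potential Field idea can be sketched in 2-D: the goal exerts an attractive force, obstacles inside an influence radius repel, and the vehicle moves a fixed step along the combined force. The gains, radii, and waypoints below are illustrative choices, not parameters from the paper.

```python
import math

# One Artificial Potential Field planning step (2-D sketch; parameters
# are illustrative, not from the paper).

def apf_step(pos, goal, obstacles, step=0.1,
             k_att=1.0, k_rep=1.0, influence=2.0):
    """Return `pos` advanced one step along the combined force direction."""
    px, py = pos
    # Attractive force grows linearly with distance to the goal.
    fx = k_att * (goal[0] - px)
    fy = k_att * (goal[1] - py)
    # Classic repulsive term for each obstacle within the influence radius.
    for ox, oy in obstacles:
        d = math.hypot(px - ox, py - oy)
        if 0 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += mag * (px - ox) / d
            fy += mag * (py - oy) / d
    norm = math.hypot(fx, fy) or 1.0
    return (px + step * fx / norm, py + step * fy / norm)

# The vehicle steers from the origin toward (5, 0), bending around an
# obstacle that sits just off the direct line.
pos = (0.0, 0.0)
for _ in range(200):
    pos = apf_step(pos, goal=(5.0, 0.0), obstacles=[(2.5, 0.3)])
```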

    Convolutional neural network-based real-time object detection and tracking for Parrot AR Drone 2

    Recent advancements in the field of Artificial Intelligence (AI) have made it possible to create autonomous devices, robots, and machines characterized in particular by the ability to make decisions and perform tasks without human mediation. One such device, the Unmanned Aerial Vehicle (UAV) or drone, is widely used to perform tasks like surveillance, search and rescue, object detection and target tracking, parcel delivery (recently started by Amazon), and many more. The sensitivity of these tasks demands that drones be efficient and reliable. To this end, this paper presents an approach for a drone to detect and track a target object, moving or still. The Parrot AR Drone 2 is used for this application. A Convolutional Neural Network (CNN) is used for object detection and target tracking. The object detection results show that the CNN detects and classifies objects with a high level of accuracy (98%). For real-time tracking, the tracking algorithm responds faster than conventionally used approaches, efficiently tracking the detected object without losing it from sight. Calculations based on several iterations show that the efficiency achieved for target tracking is 96.5%.
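    A standard way to score whether a detected or tracked bounding box matches the ground truth is Intersection-over-Union (IoU); the paper does not publish its matching criterion, so the metric and the boxes below are illustrative.

```python
# Intersection-over-Union (IoU) of two axis-aligned bounding boxes, the
# usual overlap score for judging detection/tracking quality. The boxes
# below are illustrative; the paper's matching criterion is unspecified.

def iou(a, b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

truth = (0.0, 0.0, 10.0, 10.0)
tracked = (5.0, 5.0, 15.0, 15.0)
score = iou(truth, tracked)  # 25 / 175, i.e. one-seventh overlap
```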

    Improving animal monitoring using small unmanned aircraft systems (sUAS) and deep learning networks

    In recent years, small unmanned aircraft systems (sUAS) have been used widely to monitor animals because of their customizability, ease of operation, ability to access difficult-to-navigate places, and potential to minimize disturbance to animals. Automatic identification and classification of animals in images acquired by a sUAS may solve critical problems such as monitoring large, high-traffic areas for animals to prevent collisions, such as animal-aircraft collisions at airports. In this research we demonstrate automated identification of four animal species using deep learning classification models trained on sUAS-collected images. We used a sUAS mounted with visible-spectrum cameras to capture 1288 images of four different animal species: cattle (Bos taurus), horses (Equus caballus), Canada Geese (Branta canadensis), and white-tailed deer (Odocoileus virginianus). We chose these animals because they were readily accessible, white-tailed deer and Canada Geese are considered aviation hazards, and all four are easily identifiable within aerial imagery. A four-class classification problem involving these species was developed from the acquired data using deep learning neural networks. We studied the performance of two deep neural network models, convolutional neural networks (CNN) and deep residual networks (ResNet). Results indicate that the ResNet model with 18 layers, ResNet 18, may be an effective algorithm for classifying between animals while using a relatively small number of training samples. The best ResNet architecture produced a 99.18% overall accuracy (OA) in animal identification and a Kappa statistic of 0.98. The highest OA and Kappa produced by the CNN were 84.55% and 0.79, respectively. These findings suggest that ResNet is effective at distinguishing among the four species tested and shows promise for classifying larger datasets of more diverse animals.
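    The two reported metrics, overall accuracy and the Kappa statistic, both come from a confusion matrix. The 4x4 matrix below (cattle, horses, Canada Geese, white-tailed deer) is made up for illustration; only the metric definitions are standard.

```python
# Overall accuracy (OA) and Cohen's kappa from a confusion matrix.
# The counts below are invented for illustration, not from the study.

def oa_and_kappa(cm):
    """Return (overall accuracy, Cohen's kappa) for a square confusion matrix."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    # Chance agreement from the row (actual) and column (predicted) marginals.
    expected = sum(sum(cm[i]) * sum(row[i] for row in cm)
                   for i in range(len(cm))) / n ** 2
    return observed, (observed - expected) / (1 - expected)

cm = [[48, 1, 1, 0],   # cattle
      [2, 46, 1, 1],   # horses
      [0, 1, 49, 0],   # Canada Geese
      [1, 0, 0, 49]]   # white-tailed deer
oa, kappa = oa_and_kappa(cm)
```

    Kappa discounts the agreement expected by chance, which is why it is reported alongside raw accuracy for multi-class problems.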