
    Weed Classification for Site-Specific Weed Management Using an Automated Stereo Computer-Vision Machine-Learning System in Rice Fields

    Site-specific weed management and the selective application of herbicides are eco-friendly techniques that remain challenging to implement, especially for densely cultivated crops such as rice. This study aimed to develop a stereo vision system for distinguishing between rice plants and weeds, and for further discriminating two types of weeds in a rice field, using artificial neural networks (ANNs) and two metaheuristic algorithms. For this purpose, stereo videos were recorded across the rice field, and the different channels were extracted and decomposed into their constituent frames. Next, after pre-processing and segmentation of the frames, green plants were extracted from the background. For accurate discrimination of the rice and weeds, a total of 302 color, shape, and texture features were identified. Two metaheuristic algorithms, namely particle swarm optimization (PSO) and the bee algorithm (BA), were used to optimize the neural network, selecting the most effective features and classifying the different types of weeds, respectively. Comparing the proposed classification method with the K-nearest neighbors (KNN) classifier, the proposed ANN-BA classifier reached accuracies of 88.74% and 87.96% for the right and left channels, respectively, over the test set. When either the arithmetic or the geometric mean of the two channels was taken as the basis, the accuracies increased to 92.02% and 90.7%, respectively, over the test set. The KNN classifier, by contrast, suffered from more misclassifications than the proposed ANN-BA classifier, giving overall accuracies of 76.62% and 85.59% for the right and left channel data, respectively, and 85.84% and 84.07% for the arithmetic and geometric mean values, respectively.
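
    The reported gain from fusing the two stereo channels through arithmetic or geometric means can be illustrated with a short sketch. The snippet below is a minimal, hypothetical fusion step, assuming each channel's classifier outputs per-class probabilities (e.g. rice, weed type 1, weed type 2); the function name and example values are illustrative, not taken from the paper.

        import numpy as np

        def fuse_channels(p_left, p_right, method="arithmetic"):
            """Fuse class-probability arrays from the left and right stereo channels.

            p_left, p_right: arrays of shape (n_samples, n_classes), e.g. the
            per-channel classifier outputs (illustrative assumption).
            """
            if method == "arithmetic":
                fused = (p_left + p_right) / 2.0
            elif method == "geometric":
                fused = np.sqrt(p_left * p_right)
            else:
                raise ValueError("method must be 'arithmetic' or 'geometric'")
            return fused.argmax(axis=1)  # predicted class index per sample

        # Toy example with three classes (rice, weed type 1, weed type 2).
        p_left = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
        p_right = np.array([[0.4, 0.4, 0.2], [0.1, 0.7, 0.2]])
        print(fuse_channels(p_left, p_right, method="geometric"))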

    Cellular Automata Applications in Shortest Path Problem

    Cellular Automata (CAs) are computational models that capture the essential features of systems in which global behavior emerges from the collective effect of simple components that interact locally. During the last decades, CAs have been used extensively to mimic several natural processes and systems and to find good solutions to many complex, hard-to-solve computer science and engineering problems. Among them, the shortest path problem is one of the most pronounced and highly studied problems that scientists have been trying to tackle with a plethora of methodologies and even unconventional approaches. The proposed solutions are mainly justified by their ability to provide a correct solution with a better time complexity than the renowned Dijkstra's algorithm. Although the suggested algorithms vary widely in algorithmic complexity, spanning from simplistic graph-traversal algorithms to complex nature-inspired and bio-mimicking algorithms, in this chapter we focus on the successful application of CAs to the shortest path problem as found in various disciplines such as computer science, swarm robotics, computer networks, decision science, and the biomimicking of biological organisms' behaviour. In particular, an introduction to the first CA-based algorithm tackling the shortest path problem is provided in detail. After a short presentation of shortest path algorithms arising from the relaxation of CA principles, the application of the CA-based shortest path definition to the coordinated motion of swarm robots is also introduced. Moreover, the CA-based application of shortest path finding in computer networks is presented in brief. Finally, a CA that exactly models the behavior of a biological organism, namely Physarum, finding the minimum-length path between two points in a labyrinth, is given. Comment: To appear in the book: Adamatzky, A. (Ed.) Shortest path solvers. From software to wetware. Springer, 201
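
    As a concrete illustration of the CA style of shortest-path computation surveyed in this chapter, the sketch below propagates a distance wavefront over a grid of cells using synchronous updates and a von Neumann neighborhood. It is a minimal toy under the assumption of unit-cost free cells and impassable obstacle cells, not a reproduction of any specific algorithm from the chapter.

        import numpy as np

        def ca_wavefront(grid, start):
            """Synchronous CA update: each free cell takes min(neighbor distance) + 1.

            grid: 2D array with 0 = free cell, 1 = obstacle; start: (row, col).
            Returns the array of shortest distances from start (np.inf if unreachable).
            """
            rows, cols = grid.shape
            dist = np.full(grid.shape, np.inf)
            dist[start] = 0
            changed = True
            while changed:                      # iterate until the wavefront stabilizes
                changed = False
                new = dist.copy()
                for r in range(rows):
                    for c in range(cols):
                        if grid[r, c] == 1:     # obstacles never update
                            continue
                        # von Neumann neighborhood: up, down, left, right
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            nr, nc = r + dr, c + dc
                            if 0 <= nr < rows and 0 <= nc < cols and dist[nr, nc] + 1 < new[r, c]:
                                new[r, c] = dist[nr, nc] + 1
                                changed = True
                dist = new
            return dist

        grid = np.array([[0, 0, 0, 0],
                         [1, 1, 0, 1],
                         [0, 0, 0, 0]])
        print(ca_wavefront(grid, (0, 0)))

    A shortest path can then be recovered by walking from the goal cell to ever-smaller distance values, which is the usual way such CA distance fields are used.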

    Accuracy vs. Energy: An Assessment of Bee Object Inference in Videos From On-Hive Video Loggers With YOLOv3, YOLOv4-Tiny, and YOLOv7-Tiny

    A continuing trend in precision apiculture is to use computer vision methods to quantify characteristics of bee traffic in managed colonies at the hive's entrance. Since traffic at the hive's entrance is a contributing factor to the hive's productivity and health, we assessed the potential of three open-source convolutional network models, YOLOv3, YOLOv4-tiny, and YOLOv7-tiny, to quantify omnidirectional traffic in videos from on-hive video loggers on regular, unmodified one- and two-super Langstroth hives, and compared their accuracies, energy efficacies, and operational energy footprints. We trained and tested the models with a 70/30 split on a dataset of 23,173 flying bees manually labeled in 5819 images from 10 randomly selected videos, and manually evaluated the trained models on 3600 images from 120 randomly selected videos from different apiaries, years, and queen races. We designed a new energy efficacy metric as a ratio of performance units per energy unit required to make a model operational in a continuous hive monitoring data pipeline. In terms of accuracy, YOLOv3 ranked first, YOLOv7-tiny second, and YOLOv4-tiny third. All models underestimated the true amount of traffic due to false negatives. YOLOv3 was the only model with no false positives, but it had the lowest energy efficacy and the highest operational energy footprint in a deployed hive monitoring data pipeline. YOLOv7-tiny had the highest energy efficacy and the lowest operational energy footprint in the same pipeline. Consequently, YOLOv7-tiny is a model worth considering for training on larger bee datasets if a primary objective is the discovery of non-invasive computer vision models of traffic quantification with higher energy efficacies and lower operational energy footprints.
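
    The abstract describes the energy efficacy metric only as a ratio of performance units per energy unit needed to keep a model operational; the exact definition is not given here. The sketch below is one hypothetical reading of such a ratio (detection performance divided by operational energy), with all model names kept but every number purely illustrative.

        def energy_efficacy(performance, energy_wh):
            """Hypothetical efficacy: performance units per watt-hour of operational energy."""
            return performance / energy_wh

        # Illustrative values only, not the paper's measurements.
        models = {
            "YOLOv3":      {"performance": 0.90, "energy_wh": 12.0},
            "YOLOv4-tiny": {"performance": 0.80, "energy_wh": 3.0},
            "YOLOv7-tiny": {"performance": 0.85, "energy_wh": 2.5},
        }
        for name, m in models.items():
            print(name, round(energy_efficacy(m["performance"], m["energy_wh"]), 3))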

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Biocidal Activity of Phyto-Derivative Products Used on Phototrophic Biofilms Growing on Stone Surfaces of the Domus Aurea in Rome (Italy)

    Hypogean or enclosed monuments are important cultural heritage sites that can suffer biodegradation. Many of the stone walls of the prestigious Domus Aurea are overwhelmed by dense biofilms and therefore need intervention. Room 93 was chosen as a study site with the aim of testing the efficacy of phyto-derivatives as new biocides. Laboratory studies were performed comparing the effects of liquorice leaf extract (Glycyrrhiza glabra L.), lavender essential oil (Lavandula angustifolia Mill.), and a combination of both. In situ studies were also performed to test the effect of liquorice. The results were compared with those of the commonly used synthetic biocide benzalkonium chloride. The effects on the biofilms were assessed by microscopy along with chlorophyll fluorescence analysis. The phototrophs in the biofilms were identified morphologically, while the heterotrophs were identified by culture analysis and 16S gene sequencing. Results showed that the mixed liquorice/lavender solution was the most effective at inhibiting the photosynthetic activity of the biofilms in the laboratory tests, while in situ the effect of liquorice was particularly encouraging as an efficient and low-invasive biocide. The results demonstrate a high potential biocidal efficacy of the phyto-derivatives, but also highlight the need to develop an efficient application regime.

    DeepWings©: automatic wing geometric morphometrics classification of honey bee (Apis mellifera) subspecies using deep learning for detecting landmarks

    Honey bee classification by wing geometric morphometrics entails a first step of manually annotating 19 landmarks at the forewing vein junctions. This is a time-consuming and error-prone endeavor, with implications for classification accuracy. Herein, we developed a software tool called DeepWings© that overcomes this constraint in wing geometric morphometrics classification by automatically detecting the 19 landmarks on digital images of the right forewing. We used a database containing 7634 forewing images, including 1864 analyzed by F. Ruttner in the original delineation of 26 honey bee subspecies, to tune a convolutional neural network as a wing detector, a deep learning U-Net as a landmarks segmenter, and a support vector machine as a subspecies classifier. The implemented MobileNet wing detector achieved a mAP of 0.975, and the landmarks segmenter detected the 19 landmarks with 91.8% accuracy and an average positional precision of 0.943 relative to the manually annotated landmarks. The subspecies classifier, in turn, presented an average accuracy of 86.6% for the 26 subspecies and 95.8% for a subset of five important subspecies. The final implementation of the system showed good speed performance, requiring only 14 s to process 10 images. DeepWings© is very user-friendly and is the first fully automated software, offered as a free Web service, for honey bee classification from wing geometric morphometrics. DeepWings© can be used for honey bee breeding, conservation, and even scientific purposes, as it provides the coordinates of the landmarks in Excel format, facilitating the work of research teams using classical identification approaches and alternative analytical tools. Financial support was provided through the program COMPETE 2020 - POCI (Programa Operacional para a Competividade e Internacionalização) and by Portuguese funds through FCT (Fundação para a Ciência e a Tecnologia) in the framework of the project BeeHappy (POCI-01-0145-FEDER-029871). FCT provided financial support by national funds (FCT/MCTES) to CIMO (UIDB/00690/2020).
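
    To make the last stage of this pipeline concrete, the sketch below trains a support vector machine on flattened landmark coordinates (19 landmarks with x and y, i.e. 38 features per wing), assuming the landmarks have already been produced by the detector and segmenter. The synthetic arrays are placeholders, not the DeepWings dataset.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        # Placeholder data: n wings, each with 19 (x, y) landmarks flattened to 38 features.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 38))       # stand-in for extracted landmark coordinates
        y = rng.integers(0, 5, size=500)     # stand-in for labels of 5 subspecies

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
        print("test accuracy:", clf.score(X_test, y_test))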

    New Approach for Detecting and Tracking a Moving Object

    This article presents the implementation of a tracking system for a moving target using a fixed camera. The objective of this work is to detect a moving object and locate its position. In image processing, tracking moving objects in a known or unknown environment is a widely studied problem, based on invariance properties of the objects of interest; the invariance can concern the geometry of the scene or of the objects. The proposed approach is composed of several steps. The first is the extraction of points of interest in the current image. These points are then tracked in the following image using optical flow computation techniques. After this step, the static points are removed so as to focus on moving objects, that is to say, only the characteristic points belonging to moving objects are retained. Next, to detect moving targets in the video frames, the background is extracted from the successive images; in our approach, a method based on the average value of every pixel has been developed for modeling the background. The last step before tracking the moving object is segmentation, which identifies each moving object by using the characteristic points obtained in the previous steps.
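
    The pipeline described above (interest-point extraction, optical flow tracking, removal of static points, per-pixel average background model, segmentation) can be sketched with standard OpenCV calls as below. The input file name, thresholds, and parameter values are illustrative assumptions, not the paper's settings.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("video.mp4")        # hypothetical input video
        ok, frame = cap.read()
        prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        background = prev_gray.astype(np.float32)  # running-average background model
        points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

        while ok:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            # Update the per-pixel average background model.
            cv2.accumulateWeighted(gray.astype(np.float32), background, 0.05)

            # Track the points of interest with pyramidal Lucas-Kanade optical flow.
            new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
            moved = np.linalg.norm(new_points - points, axis=2).ravel() > 1.0  # drop static points
            points = new_points[(status.ravel() == 1) & moved].reshape(-1, 1, 2)

            # Foreground mask from the difference to the background model; candidate
            # moving regions to segment into individual objects.
            fg = cv2.absdiff(gray, cv2.convertScaleAbs(background))
            _, mask = cv2.threshold(fg, 25, 255, cv2.THRESH_BINARY)

            prev_gray = gray
            if len(points) < 50:  # re-detect when too few tracked points remain
                points = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=7)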

    Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the use of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained, and the need to correct image distortion slows down the computation of image parameters, which decreases the performance of control algorithms. In this paper, a new approach for correcting several sources of visual distortion in images in only one computing step is proposed. The goal of this system is the computation of the tilt angle of an object transported by a robot, minimizing the image's inherent errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of neuro-fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid. The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU.
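
    The paper's filter is built with neuro-fuzzy learning; as a hedged stand-in (not the authors' method), the sketch below fits a small MLP regressor that maps a raw, distortion-affected angle reading to the corrected tilt angle from a dataset of experimental pairs, applying the correction in a single step at run time. The synthetic data stands in for the real measurements.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic stand-in for experimental (raw measured angle, true tilt angle) pairs.
        rng = np.random.default_rng(0)
        true_angle = rng.uniform(-30.0, 30.0, size=1000)
        raw_angle = true_angle + 0.1 * true_angle**2 / 30.0 + rng.normal(0.0, 0.5, size=1000)  # distorted reading

        # Small regression network as a substitute for the learned fuzzy filter.
        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
        model.fit(raw_angle.reshape(-1, 1), true_angle)

        # One corrective mapping applied in a single processing step.
        print(model.predict(np.array([[12.3]])))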