10,065 research outputs found

    ABS-FishCount: An Agent-Based Simulator of Underwater Sensors for Measuring the Amount of Fish

    Get PDF
    [EN] Underwater sensors provide one of the possibilities to explore oceans, seas, rivers, fish farms and dams, which together cover most of our planet's area. Simulators can be helpful for testing and discovering possible strategies before implementing them in real underwater sensors, which speeds up the development of research theories so that these can be implemented later. In this context, the current work presents an agent-based simulator for defining and testing strategies for measuring the amount of fish by means of underwater sensors. The approach is illustrated with the definition and assessment of two strategies for measuring fish: one corresponds to a simple control mechanism, while the other is an experimental strategy that includes an implicit coordination mechanism. The experimental strategy showed a statistically significant reduction in errors over the control one, with a large Cohen's d effect size of 2.55.

    This work acknowledges the research project Desarrollo Colaborativo de Soluciones AAL (TIN2014-57028-R), funded by the Spanish Ministry of Economy and Competitiveness, and the program Estancias de movilidad en el extranjero José Castillejo para jóvenes doctores (CAS17/00005), funded by the Spanish Ministry of Education, Culture and Sport. We also acknowledge support from Universidad de Zaragoza, Fundación Bancaria Ibercaja and Fundación CAI through the Programa Ibercaja-CAI de Estancias de Investigación (IT24/16); the research project Construcción de un framework para agilizar el desarrollo de aplicaciones móviles en el ámbito de la salud (JIUZ-2017-TEC-03), funded by the University of Zaragoza and Foundation Ibercaja; the Organismo Autónomo Programas Educativos Europeos (2013-1-CZ1-GRU06-14277); the project Sensores vestibles y tecnología móvil como apoyo en la formación y práctica de mindfulness: prototipo previo aplicado a bienestar (UZ2017-TEC-02), funded by the University of Zaragoza; and the Fondo Social Europeo and the Departamento de Tecnología y Universidad del Gobierno de Aragón for their joint support (Ref-T81).

    García-Magariño, I.; Lacuesta Gilabert, R.; Lloret, J. (2017). ABS-FishCount: An Agent-Based Simulator of Underwater Sensors for Measuring the Amount of Fish. Sensors, 17(11), 1-19. https://doi.org/10.3390/s17112606
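
    The headline result above is a Cohen's d of 2.55. For reference, here is a minimal sketch of how such an effect size can be computed from two samples of per-run counting errors; the function and the error values are illustrative, not taken from the paper:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: difference of means over the pooled standard deviation."""
    na, nb = len(a), len(b)
    # Pooled standard deviation from the unbiased sample variances.
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

# Hypothetical per-run fish-count errors for the two strategies.
control_errors = np.array([12.1, 10.8, 11.5, 13.0, 12.4])
experimental_errors = np.array([4.2, 3.9, 5.1, 4.6, 4.4])
print(f"Cohen's d: {cohens_d(control_errors, experimental_errors):.2f}")
```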

    Using mobile-based augmented reality and object detection for real-time Abalone growth monitoring

    Get PDF
    Abalone are becoming increasingly popular for human consumption. Whilst their popularity has risen, measuring the number and size distribution of abalone at various stages of growth in existing farms remains a significant challenge. Current abalone stock management techniques rely on manual inspection, which is time consuming, causes stress to the animal, and results in mediocre data quality. To rectify this, we propose a novel mobile-based tool that combines object detection and augmented reality for the real-time counting and measuring of abalone, and that is both network and location independent. We applied our portable handset tool to both measure and count abalone at various growth stages, and performed an extended measuring evaluation to assess the robustness of the proposed approach. Our experimental results revealed that the proposed tool greatly outperforms traditional approaches: it successfully counted up to 15 abalone at various life stages with above 95% accuracy, and significantly decreased the time taken to measure abalone while still maintaining accuracy within a maximum error of 2.5% of the abalone's actual size.
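
    As a rough illustration of the counting-and-measuring idea described above, here is a minimal sketch under the assumption that the AR layer supplies a millimetres-per-pixel scale for the detection plane; the class, function, values, and the longer-side heuristic are all hypothetical, not the paper's method:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x1: float  # bounding-box corners in pixels
    y1: float
    x2: float
    y2: float
    confidence: float

def count_and_measure(detections, mm_per_pixel, min_conf=0.5):
    """Count confident detections and estimate each shell length in mm."""
    kept = [d for d in detections if d.confidence >= min_conf]
    # Use the longer bounding-box side as a proxy for shell length.
    lengths_mm = [max(d.x2 - d.x1, d.y2 - d.y1) * mm_per_pixel for d in kept]
    return len(kept), lengths_mm

# Hypothetical detections and an AR-derived scale of 0.42 mm per pixel.
dets = [Detection(10, 20, 210, 120, 0.91), Detection(250, 40, 430, 160, 0.88)]
count, lengths = count_and_measure(dets, mm_per_pixel=0.42)
print(count, [round(length, 1) for length in lengths])
```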

    Sustainable Palm Tree Farming: Leveraging IoT and Multi-Modal Data for Early Detection and Mapping of Red Palm Weevil

    Full text link
    The Red Palm Weevil (RPW) is a highly destructive insect causing economic losses and impacting palm tree farming worldwide. This paper proposes an innovative approach to sustainable palm tree farming that utilizes advanced technologies for the early detection and management of RPW. Our approach combines computer vision, deep learning (DL), the Internet of Things (IoT), and geospatial data to detect and classify RPW-infested palm trees effectively. The main phases are: (1) DL classification using sound data from IoT devices; (2) palm tree detection using YOLOv8 on UAV images; and (3) RPW mapping using geospatial data. Our custom DL model achieves 100% precision and recall in detecting and localizing infested palm trees. Integrating geospatial data enables the creation of a comprehensive RPW distribution map for efficient monitoring and targeted management strategies. This technology-driven approach benefits agricultural authorities, farmers, and researchers in managing RPW infestations and safeguarding the productivity of palm tree plantations.
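
    For phase (2), detection runs YOLOv8 on UAV images. A minimal inference sketch with the Ultralytics API follows; the weights file and image path are placeholders, and the training configuration is not described in the abstract:

```python
from ultralytics import YOLO

# Load a YOLOv8 model; "rpw_palms.pt" stands in for custom-trained weights.
model = YOLO("rpw_palms.pt")

# Run detection on one UAV image and report palm-tree bounding boxes.
results = model("uav_orchard_tile.jpg", conf=0.5)
for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{cls_name}: ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
          f"conf={float(box.conf):.2f}")
```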

    Fruit Detection and Classification using YOLO Models

    Get PDF
    Computer vision and deep learning techniques have become prevalent in multiple domains such as healthcare, technology, and agriculture. Computer vision techniques like object detection are widely used in agriculture to reduce the effort required and make farming more efficient for farmers. The applications of deep learning in agriculture include leaf disease detection and weather forecasting, and the most recent applications include object detection of fruits and vegetables, which can be combined with robotics for automated yield production and harvesting. The proposed article describes one such application of fruit detection using various YOLO (You Only Look Once) models. The study encompasses four fruit classes, namely Chiku, Mango, Mosambi, and Tomato. YOLOv3, YOLOv4, and YOLOv8 models were trained on a customized dataset collected from Indian farms and fruit gardens. The real-time images were collected, pre-processed, and annotated using online labeling tools. A total of 1200 images were used in the complete training process. Basic preprocessing was performed on these images, and the inbuilt augmentation techniques supported by the above-mentioned models were used. Training was applied on the custom dataset for all classes. In this experiment, the F1 scores obtained were: YOLOv3 (Chiku 82%, Mango 91%, Mosambi 87%, Tomato 77%), YOLOv4 (Chiku 89%, Mango 98%, Mosambi 95%, Tomato 91%), and YOLOv8 (Chiku 90%, Mango 75%, Mosambi 82%, Tomato 84%). Among these models, YOLOv4 with two layers gives the highest F1 score for all the classes.
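
    The per-class figures above are F1 scores. For reference, a minimal sketch of how F1 is derived from precision and recall; the detection counts are illustrative, not from the study:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative detection counts for one fruit class.
tp, fp, fn = 91, 9, 9  # true positives, false positives, false negatives
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.2f} recall={recall:.2f} "
      f"F1={f1_score(precision, recall):.2f}")
```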

    WIND TURBINE TOWER DETECTION USING FEATURE DESCRIPTORS AND DEEP LEARNING

    Get PDF
    Wind Turbine Towers (WTTs) are the main structures of wind farms. They are costly devices that must be thoroughly inspected according to maintenance plans. Today, machine vision techniques along with unmanned aerial vehicles (UAVs) enable fast, easy, and intelligent visual inspection of these structures. Our work is aimed at developing a vision-based system to perform nondestructive tests (NDTs) on wind turbines using UAVs. In order to navigate the flying machine toward the wind turbine tower and reliably land on it, the exact position of the wind turbine and its tower must be detected. We employ several strong computer vision approaches, such as the Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Features from Accelerated Segment Test (FAST), Brute-Force matching, and the Fast Library for Approximate Nearest Neighbors (FLANN), to detect the WTT. Then, to increase the reliability of the system, we apply the ResNet, MobileNet, ShuffleNet, EffNet, and SqueezeNet pre-trained classifiers to verify whether a detected object is indeed a turbine tower. This intelligent monitoring system has auto-navigation ability and can be used for future goals including intelligent fault diagnosis and maintenance. The simulation results show that the accuracy of the proposed model is 89.4% on the WTT detection problem and 97.74% on the verification (classification) problem.
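
    As an illustration of the SIFT-plus-FLANN matching step named above, here is a minimal OpenCV sketch; the image paths and the ratio-test threshold are placeholders, not the paper's exact pipeline:

```python
import cv2

# Template of a known tower and an incoming UAV frame (placeholder paths).
template = cv2.imread("wtt_template.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("uav_frame.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute descriptors in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(frame, None)

# FLANN matching with a KD-tree index, then Lowe's ratio test.
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} good matches; a high count suggests a tower is present")
```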

    Agricultural Object Detection with You Look Only Once (YOLO) Algorithm: A Bibliometric and Systematic Literature Review

    Full text link
    Vision is a major component in several digital technologies and tools used in agriculture. The object detector You Look Only Once (YOLO) has gained popularity in agriculture in a relatively short span due to its state-of-the-art performance. YOLO offers real-time detection with good accuracy and is implemented in various agricultural tasks, including monitoring, surveillance, sensing, automation, and robotics. The research and application of YOLO in agriculture are accelerating rapidly but are fragmented and multidisciplinary. Moreover, the performance characteristics (i.e., accuracy, speed, computation) of the object detector influence the rate of technology implementation and adoption in agriculture. Thus, the study aims to collect extensive literature to document and critically evaluate the advances and application of YOLO for agricultural object recognition. First, we conducted a bibliometric review of 257 articles to understand the scholarly landscape of YOLO in the agricultural domain. Second, we conducted a systematic review of 30 articles to identify current knowledge, gaps, and modifications in YOLO for specific agricultural tasks. The study critically assesses and summarizes the information on YOLO's end-to-end learning approach, including data acquisition, processing, network modification, integration, and deployment. We also discuss task-specific YOLO algorithm modification and integration to meet agricultural object- or environment-specific challenges. In general, YOLO-integrated digital tools and technologies show the potential for real-time, automated monitoring, surveillance, and object handling to reduce labor, production cost, and environmental impact while maximizing resource efficiency. The study provides detailed documentation and significantly advances the existing knowledge on applying YOLO in agriculture, which can greatly benefit the scientific community.

    A Review of the Challenges of Using Deep Learning Algorithms to Support Decision-Making in Agricultural Activities

    Get PDF
    Deep Learning has been successfully applied to image recognition, speech recognition, and natural language processing in recent years. Therefore, there has been an incentive to apply it in other fields as well. The field of agriculture is one of the most important fields in which the application of deep learning still needs to be explored, as it has a direct impact on human well-being. In particular, there is a need to explore how deep learning models can be used as a tool for optimal planting, land use, yield improvement, production/disease/pest control, and other activities. The vast amount of data received from sensors in smart farms makes it possible to use deep learning as a model for decision-making in this field. In agriculture, no two environments are exactly alike, which makes testing, validating, and successfully implementing such technologies much more complex than in most other industries. This paper reviews some recent scientific developments in the field of deep learning that have been applied to agriculture, and highlights some challenges and potential solutions using deep learning algorithms in agriculture. The results in this paper indicate that by employing new methods from deep learning, higher performance in terms of accuracy and lower inference time can be achieved, and the models can be made useful in real-world applications. Finally, some opportunities for future research in this area are suggested.

    This work is supported by the R&D Project BioDAgro - Sistema operacional inteligente de informação e suporte à decisão em AgroBiodiversidade, project PD20-00011, promoted by Fundação La Caixa and Fundação para a Ciência e a Tecnologia, taking place at the C-MAST Centre for Mechanical and Aerospace Sciences and Technology, Department of Electromechanical Engineering of the University of Beira Interior, Covilhã, Portugal.

    Data-Driven Air Quality and Environmental Evaluation for Cattle Farms

    Get PDF
    The expansion of agricultural practices and the raising of animals are key contributors to air pollution. Cattle farms emit hazardous gases, so we developed a cattle farm air pollution analyzer that counts the number of cattle and provides comprehensive statistics on different air pollutant concentrations, classified by severity, over various time periods. The modeling was performed in two stages: the first stage focused on object detection using satellite images of farms to identify and count the cattle; the second stage predicted the next-hour concentrations of the seven cattle farm air pollutants considered. The output from the second stage was then visualized by severity, and analytics were performed on the historical data. The visualization illustrates the relationship between cattle count and air pollutants, an important factor for analyzing pollutant concentration trends. We proposed the models Detectron2, YOLOv4, RetinaNet, and YOLOv5 for the first stage, and LSTM (single/multi lag), CNN-LSTM, and Bi-LSTM for the second stage. YOLOv5 performed best in stage one with an average precision of 0.916 and a recall of 0.912, with the average precision and recall for all models being above 0.87. For stage two, CNN-LSTM performed well with an MAE of 3.511 and a MAPE of 0.016, while a stacked model had an MAE of 5.010 and a MAPE of 0.023.
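
    As an illustration of the stage-two CNN-LSTM, here is a minimal Keras sketch that predicts next-hour concentrations of seven pollutants from a 24-hour window; the layer sizes and the synthetic stand-in data are illustrative, not the paper's configuration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

LOOKBACK, N_POLLUTANTS = 24, 7  # 24 hourly readings of 7 pollutants

# Conv1D extracts local temporal patterns; the LSTM models the sequence.
model = keras.Sequential([
    layers.Input(shape=(LOOKBACK, N_POLLUTANTS)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),
    layers.Dense(N_POLLUTANTS),  # next-hour concentration per pollutant
])
model.compile(optimizer="adam", loss="mae")

# Train on random data shaped like the real hourly sensor history.
X = np.random.rand(256, LOOKBACK, N_POLLUTANTS).astype("float32")
y = np.random.rand(256, N_POLLUTANTS).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0).shape)  # (1, 7)
```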

    AI-Augmented Monitoring and Management by Image Analysis for Object Detection and Counting

    Get PDF
    Counting the number of objects in images has become an increasingly important topic in different applications, such as crowd counting, cell microscopy image analysis in biomedical imaging, and horticulture monitoring and prediction. Many studies have worked on automatic object counting with Convolutional Neural Networks (CNNs). This research aims to shed more light on the applications of deep learning models for counting objects in images of different places, such as growing fields, classrooms, and streets. We will study how a CNN predicts the number of objects and measure the accuracy of trained models with different training parameters using the evaluation metrics mAP and RMSE. The performance of object detection and counting using a CNN, YOLOv5, will be analyzed. The model will be trained on the Global Wheat Head Detection 2021 dataset for crop counting and on the COCO dataset for counting labeled objects. The performance of the optimized model on crowd counting will be tested with pictures taken on the Texas A&M University campus.
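
    As an illustration of counting with YOLOv5 and scoring the counts with RMSE, here is a minimal sketch using the public torch.hub entry point; the image paths, confidence threshold, and ground-truth counts are hypothetical (custom wheat-head weights would be loaded via the 'custom' model with a path argument):

```python
import math
import torch

# Load a pretrained YOLOv5 model via torch.hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", trust_repo=True)
model.conf = 0.4  # confidence threshold for counting

def count_objects(image_path):
    """Return the number of detections in one image."""
    results = model(image_path)
    return len(results.xyxy[0])  # rows of (x1, y1, x2, y2, conf, cls)

# Hypothetical evaluation against manual ground-truth counts.
images = ["field_001.jpg", "field_002.jpg"]
true_counts = [42, 38]
pred_counts = [count_objects(p) for p in images]
rmse = math.sqrt(sum((t - p) ** 2
                     for t, p in zip(true_counts, pred_counts)) / len(images))
print(f"predicted={pred_counts} RMSE={rmse:.2f}")
```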