
    Design and Development of a Broiler Mortality Removal Robot

    Manual collection of broiler mortality is time-consuming, unpleasant, and laborious. The objectives of this research were: (1) to design and fabricate a broiler mortality removal robot from commercially available components to automatically collect dead birds; (2) to compare and evaluate deep learning models and image processing algorithms for detecting and locating dead birds; and (3) to examine the detection and mortality pickup performance of the robot under different light intensities. The robot consisted of a two-finger gripper, a robot arm, a camera mounted on the robot arm, and a computer controller. The robot arm was mounted on a table, and 64 Ross 708 broilers between 7 and 14 days of age were used for robot development and evaluation. The broiler shank was the target anatomical part for detection and mortality pickup. Deep learning models and image processing algorithms embedded in the vision system provided the location and orientation of the shank of interest, so that the gripper could approach and position itself for precise pickup. Light intensities of 10, 20, 30, 40, 50, 60, 70, and 1000 lux were evaluated. Results indicated that the deep learning model "You Only Look Once" (YOLO) V4 detected and located shanks more accurately and efficiently than YOLO V3. Higher light intensities improved deep learning detection, image processing orientation identification, and final pickup performance. The final success rate for picking up dead birds was 90.0% at the 1000-lux light intensity. In conclusion, the developed system is a helpful tool towards automating broiler mortality removal from commercial housing, and it contributes to further development of an integrated autonomous set of solutions to improve production and resource-use efficiency in commercial broiler production, as well as to improve the well-being of workers.
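    As a rough illustration of the detection step described above, the sketch below shows how a trained YOLO v4 network might be queried for shank bounding boxes using OpenCV's DNN module and reduced to pickup target points. The file names, thresholds, and center-point targeting are assumptions for illustration; the paper's orientation estimation and arm control are not reproduced here.

        # Minimal sketch: YOLO v4 shank detection with OpenCV DNN (assumed file names).
        import cv2

        net = cv2.dnn.readNetFromDarknet("yolov4-shank.cfg", "yolov4-shank.weights")
        model = cv2.dnn_DetectionModel(net)
        model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

        def detect_shanks(frame, conf_thresh=0.5, nms_thresh=0.4):
            """Return (x, y, score) image-space targets for the gripper."""
            class_ids, scores, boxes = model.detect(frame, conf_thresh, nms_thresh)
            targets = []
            for (x, y, w, h), score in zip(boxes, scores):
                targets.append((x + w // 2, y + h // 2, float(score)))  # box center
            return targets

        frame = cv2.imread("pen_image.jpg")   # hypothetical input frame
        print(detect_shanks(frame))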

    Nondestructive Chicken Egg Fertility Detection Using CNN-Transfer Learning Algorithms

    This study explores the application of CNN transfer learning for nondestructive chicken egg fertility detection. Four models, VGG16, ResNet50, InceptionNet, and MobileNet, were trained and evaluated on a dataset of augmented images. The training results demonstrated that all models achieved high accuracy, indicating their ability to learn and classify the fertility state of chicken eggs. However, when evaluated on the testing set, variations in accuracy and performance were observed. VGG16 achieved a high accuracy of 0.9803 on the testing set but had challenges in accurately detecting fertile eggs, as indicated by a NaN sensitivity value. ResNet50 also achieved an accuracy of 0.98 but struggled to identify fertile and non-fertile eggs, as suggested by NaN values for sensitivity and specificity. In contrast, InceptionNet demonstrated excellent performance, with an accuracy of 0.9804, a sensitivity of 1 for detecting fertile eggs, and a specificity of 0.9615 for identifying non-fertile eggs. MobileNet achieved an accuracy of 0.9804 on the testing set; however, it faced challenges in accurately classifying the fertility status of chicken eggs, as indicated by NaN values for both sensitivity and specificity. While the models showed promise during training, variations in accuracy and performance were observed during testing. InceptionNet exhibited the best overall performance, accurately classifying fertile and non-fertile eggs. Further optimization and fine-tuning of the models are necessary to address the limitations in accurately detecting fertile and non-fertile eggs. This study highlights the potential of CNN transfer learning for nondestructive fertility detection and emphasizes the need for further research to enhance the models' capabilities and ensure accurate classification.
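    A minimal sketch of the kind of transfer-learning setup described above, using tf.keras with MobileNet as a frozen ImageNet-pretrained base and a binary fertile/non-fertile head. The input size, dropout rate, and training configuration are assumptions for illustration, not the study's exact settings.

        # Hedged sketch: CNN transfer learning for binary egg-fertility classification.
        import tensorflow as tf

        base = tf.keras.applications.MobileNet(
            weights="imagenet", include_top=False, input_shape=(224, 224, 3))
        base.trainable = False                         # freeze pretrained features

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # fertile vs. non-fertile
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])

        # train_ds / val_ds would be tf.data datasets of augmented egg images:
        # model.fit(train_ds, validation_data=val_ds, epochs=20)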

    Developing and applying precision animal farming tools for poultry behavior monitoring

    Appropriate measurement of broiler behaviors is critical to optimize broiler production efficiency and improve precision management strategies. However, the performance of different precision tools for measuring broiler behaviors of interest remains unclear. This dissertation systematically developed and evaluated a radio frequency identification (RFID) system, image processing, and deep learning for automatically detecting and analyzing broiler behaviors. Different behaviors (i.e., feeding, drinking, stretching, restricted feeding) of broilers under representative management practices were then measured using the developed precision tools. The broilers were Ross 708 birds at 4-8 weeks of age. The major findings show that the RFID system achieved high performance (over 90% accuracy) for continuously tracking feeding and drinking behaviors of individual broilers after it was customized and modified through tag sensitivity testing, power adjustment, radio wave shielding, and assessment of interference from add-ons. The image processing algorithms combined with a machine learning model were customized and adjusted to the experimental conditions and achieved 85% sensitivity, specificity, and accuracy for detecting the number of birds at the feeder and drinkers. After adjusting the labeling method and tuning hyperparameters, the faster region-based convolutional neural network (faster R-CNN) reached over 86% precision, recall, specificity, and accuracy for detecting broiler stretching behaviors. In the comprehensive algorithms, the faster R-CNN showed over 92% precision, recall, and F1 score for detecting the feeder, eating birds, and birds around the feeder. The bird trackers had a 3.2% error rate for tracking individual birds around the feeder. The support vector machine behavior classifier achieved over 92% performance for classifying walking birds. An image processing model was also developed to detect birds with restricted feeder access. Broilers showed different behavioral responses to different sessions of the day, bird ages, environments, diets, and allocated resources. Reducing stocking density, increasing feeder space, and applying poultry-specific light spectrum and intensity were beneficial for birds to perform behaviors such as feeding, drinking, and stretching, while an antibiotics-free diet reduced bird feeding time. In conclusion, the developed tools are useful for automated broiler behavior monitoring, and the measured behavior responses provide insights into precision management of welfare-oriented broiler production.
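    As an illustration of the detection component, the sketch below runs a Faster R-CNN detector over a single pen image with torchvision and keeps high-scoring boxes, the kind of output a downstream tracker or bird counter would consume. The pretrained COCO weights, file name, and the 0.8 score threshold are assumptions; the dissertation's models were fine-tuned on broiler, feeder, and drinker classes.

        # Hedged sketch: Faster R-CNN inference on one pen frame (torchvision).
        import torch
        import torchvision
        from torchvision.transforms.functional import to_tensor
        from PIL import Image

        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        model.eval()

        image = Image.open("pen_frame.jpg").convert("RGB")   # hypothetical frame
        with torch.no_grad():
            preds = model([to_tensor(image)])[0]

        keep = preds["scores"] > 0.8
        boxes = preds["boxes"][keep]     # candidate detections for counting/tracking
        labels = preds["labels"][keep]
        print(boxes.shape[0], "detections above threshold")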

    Precision Poultry Farming

    This book presents the latest advances in applications of continuous, objective, and automated sensing technologies and computer tools for sustainable and efficient poultry production, and it offers solutions to the poultry industry to address challenges in poultry management, the environment, nutrition, automation and robotics, health, welfare assessment, behavior monitoring, waste management, etc. The reader will find original research papers that address, on a global scale, the sustainability and efficiency of the poultry industry and explore the above-mentioned areas through applications of precision poultry farming (PPF) solutions in poultry meat and egg production.

    AGI for Agriculture

    Artificial General Intelligence (AGI) is poised to revolutionize a variety of sectors, including healthcare, finance, transportation, and education. Within healthcare, AGI is being utilized to analyze clinical medical notes, recognize patterns in patient data, and aid in patient management. Agriculture is another critical sector that impacts the lives of individuals worldwide. It serves as a foundation for providing food, fiber, and fuel, yet it faces several challenges, such as climate change, soil degradation, water scarcity, and food security. AGI has the potential to tackle these issues by enhancing crop yields, reducing waste, and promoting sustainable farming practices. It can also help farmers make informed decisions by leveraging real-time data, leading to more efficient and effective farm management. This paper delves into potential future applications of AGI in agriculture, such as agricultural image processing, natural language processing (NLP), robotics, knowledge graphs, and infrastructure, and their impact on precision livestock and precision crop farming. By leveraging the power of AGI, these emerging technologies can provide farmers with actionable insights, allowing for optimized decision-making and increased productivity. The transformative potential of AGI in agriculture is vast, and this paper aims to highlight its potential to revolutionize the industry.

    Advances in Sensors, Big Data and Machine Learning in Intelligent Animal Farming

    Animal production (e.g., milk, meat, and eggs) provides valuable protein for humans and animals. However, animal production faces several challenges worldwide, such as environmental impacts and animal welfare and health concerns. In animal farming operations, accurate and efficient monitoring of animal information and behavior can help analyze the health and welfare status of animals and identify sick or abnormal individuals at an early stage to reduce economic losses and protect animal welfare. In recent years, there has been growing interest in animal welfare. At present, sensors, big data, machine learning, and artificial intelligence are used to improve management efficiency, reduce production costs, and enhance animal welfare. Although these technologies still have challenges and limitations, their application and exploration on animal farms will greatly promote the intelligent management of farms. Therefore, this Special Issue collects original papers with novel contributions based on technologies such as sensors, big data, machine learning, and artificial intelligence to study animal behavior monitoring and recognition, environmental monitoring, health evaluation, etc., and to promote intelligent and accurate animal farm management.

    On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks

    A Brain-Computer Interface (BCI) is a system that provides a communication and control medium between human cortical signals and external devices, with the primary aim of assisting, or being used by, patients who suffer from a neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, complications in collecting large quantities of real-world data, the rigid experimentation protocol, and the challenges of accurate signal decoding, especially in making a system work in real time. Hence, the core purpose of this work is to investigate improving the applicability and usability of BCI systems while preserving signal decoding accuracy. Recent advances in Deep Neural Networks (DNN) provide the possibility for signal processing to automatically learn the best representation of a signal, contributing to improved performance even with a noisy input signal. Subsequently, this thesis focuses on the use of novel DNN-based approaches for tackling some of the key underlying constraints within the area of BCI. For example, recent technological improvements in acquisition hardware have made it possible to eliminate the pre-existing rigid experimentation procedure, albeit resulting in noisier signal capture. However, through the use of a DNN-based model, it is possible to preserve the accuracy of the predictions from the decoded signals. Moreover, this research demonstrates that by leveraging DNN-based image and signal understanding, it is feasible to facilitate real-time BCI applications in a natural environment. Additionally, the capability of DNNs to generate realistic synthetic data is shown to be a potential solution for reducing the requirement for costly data collection. Work is also performed in addressing the well-known issues regarding subject bias in BCI models by generating data with reduced subject-specific features. The overall contribution of this thesis is to address the key fundamental limitations of BCI systems: the unyielding traditional experimentation procedure, the mandatory extended calibration stage, and sustaining accurate signal decoding in real time. These limitations lead to a fragile BCI system that is demanding to use and only suited for deployment in a controlled laboratory. Overall, the contributions of this research aim to improve the robustness of BCI systems and enable new applications for use in the real world.
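    For readers unfamiliar with DNN-based EEG decoding, the sketch below shows a compact convolutional decoder of the general kind such work builds on (shallow ConvNet / EEGNet-style models). The channel count, window length, layer sizes, and class count are assumptions for illustration and are not taken from the thesis.

        # Hedged sketch: a small 1-D convolutional EEG classifier in PyTorch.
        import torch
        import torch.nn as nn

        class EEGConvNet(nn.Module):
            def __init__(self, n_channels=22, n_classes=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=25, padding=12),  # temporal filters
                    nn.BatchNorm1d(32),
                    nn.ELU(),
                    nn.AvgPool1d(4),
                    nn.Conv1d(32, 64, kernel_size=11, padding=5),
                    nn.BatchNorm1d(64),
                    nn.ELU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.classifier = nn.Linear(64, n_classes)

            def forward(self, x):            # x: (batch, channels, samples)
                return self.classifier(self.features(x).squeeze(-1))

        windows = torch.randn(8, 22, 500)    # 8 EEG windows, 22 channels, 500 samples
        logits = EEGConvNet()(windows)       # class scores per window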

    In Situ Real-Time Zooplankton Detection and Classification

    Zooplankton plays a key role in Earth's ecosystem, occurring in oceans and rivers in great quantity and diversity, which makes it an important and rather common topic of scientific study. It serves as prey for many large living beings, such as fish and whales, and helps to keep the food chain stable by acting not only as prey to other animals but also as a consumer of phytoplankton, the main producers of oxygen on the planet. Zooplankton are also good indicators of environmental changes, such as global warming or rapid fluctuations of carbon dioxide in the atmosphere, since their abundance and existence depend on many environmental factors that indicate such changes. It is important not only to study the numbers of zooplankton in the water masses but also to know which species make up those numbers, as different species can provide information about different environmental attributes. In this thesis, a possible solution for the in situ, real-time zooplankton detection and classification problem is proposed using a portable deep learning approach based on CNNs (Convolutional Neural Networks) deployed on INESC TEC's MarinEye system. The proposed solution makes use of two different CNNs, one for the detection problem and another for the classification problem, running in MarinEye's plankton imaging system, and portability is guaranteed by the use of the Movidius™ Neural Compute Stick as the deep learning engine on the hardware side. The software was implemented as a ROS node, which not only guarantees portability but also facilitates communication between the imaging system and other MarinEye modules.
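    A minimal sketch of the imaging-side ROS node described above: subscribe to a camera topic, convert each message to an OpenCV image, and hand it to the detection and classification networks (which on the real system run on the Movidius Neural Compute Stick). The topic name and the commented-out inference helpers are hypothetical placeholders, not MarinEye's actual interfaces.

        # Hedged sketch: ROS node that receives frames for plankton detection/classification.
        import rospy
        from sensor_msgs.msg import Image
        from cv_bridge import CvBridge

        bridge = CvBridge()

        def on_frame(msg):
            frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            # detections = detect_zooplankton(frame)               # hypothetical CNN call
            # species = [classify(frame, box) for box in detections]
            rospy.loginfo("frame received: %dx%d", msg.width, msg.height)

        rospy.init_node("zooplankton_detector")
        rospy.Subscriber("/marineye/plankton_camera/image_raw", Image, on_frame)
        rospy.spin()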

    The Hidden Forms and Functions of Courtship in the Brown-Headed Cowbird (Molothrus ater)

    Reproductive fitness is the result of complex interacting processes; however, our understanding of reproduction is often limited to a few static male traits. While conspicuous male traits are very well studied, female behavior has received far less attention. Furthermore, conspicuous male traits often fail to predict reproductive success, suggesting that there are other important aspects of sexual behavior; what these factors are and how they interact remains largely unknown. Brown-headed cowbirds (Molothrus ater) breed readily in captivity, and their copulatory behavior can be evoked under carefully controlled experimental conditions. By pairing many years of behavioral observations with careful analysis and quantification of behavior, I identified several mechanisms guiding courtship and reproduction. Female copulation is directly evoked by male song, and the strength of the copulatory display reflects signal strength. Interestingly, the copulatory display is mediated by the variable behavioral state of the female, suggesting that song alone is insufficient to elicit copulation. Flocks also display measurable cohesion in the timing of their behavior, transitioning as a group between singing to males and singing to females, and the strength of this group cohesion predicts reproductive success for both individuals and the group as a whole. This work shows that reproductive fitness is far richer than just the quality of male signals and provides a platform to understand the rich complexity of animal courtship.