A review on the application of computer vision and machine learning in the tea industry
Tea is rich in polyphenols, vitamins, and protein, offering both health benefits and a pleasant taste. As a result, tea has become the second most popular beverage in the world after water, making it essential to improve its yield and quality. In this paper, we review the application of computer vision and machine learning in the tea industry over the last decade, covering three crucial stages: cultivation, harvesting, and processing. We found that many advanced artificial intelligence algorithms and sensor technologies have been applied in the tea industry, resulting in vision-based tea harvesting equipment and disease detection methods. However, these applications focus on the identification of tea buds, the detection of several common diseases, and the classification of tea products. The current applications therefore have clear limitations and are insufficient for the intelligent and sustainable development of the tea field. Fruitful developments in technologies related to UAVs, vision navigation, soft robotics, and sensors have the potential to provide new opportunities for vision-based tea harvesting machines, intelligent tea garden management, and multimodal tea processing monitoring. Research and development combining computer vision and machine learning is therefore undoubtedly a future trend in the tea industry.
Agricultural Structures and Mechanization
In our globalized world, the need to produce quality and safe food has increased exponentially in recent decades to meet the growing demands of the world population. This expectation is being met by acting at multiple levels, but mainly through the introduction of new technologies in the agricultural and agri-food sectors. In this context, agricultural, livestock, and agro-industrial buildings and agrarian infrastructure are being built on the basis of sophisticated designs that integrate environmental, landscape, and occupational safety, new construction materials, new facilities, and mechanization with state-of-the-art automatic systems, using calculation models and computer programs. It is necessary to promote research and the dissemination of results in the field of mechanization and agricultural structures, specifically with regard to farm building and rural landscape, land and water use and the environment, power and machinery, information systems and precision farming, processing and post-harvest technology and logistics, energy and non-food production technology, systems engineering and management, and fruit and vegetable cultivation systems. This Special Issue focuses on the role that mechanization and agricultural structures play in the sustained production of high-quality food over time. For this reason, it publishes highly interdisciplinary quality studies from disparate research fields including agriculture, engineering design, calculation and modeling, landscaping, environmentalism, and even ergonomics and occupational risk prevention.
Estimation of Botanical Composition in Mixed Clover-Grass Fields Using Machine Learning-Based Image Analysis
This study aims to provide an effective image analysis method for clover detection and botanical composition (BC) estimation in clover-grass mixture fields. Three transfer learning methods, namely, fine-tuned DeepLab V3+, SegNet, and fully convolutional network-8s (FCN-8s), were utilized to detect clover fractions (on an area basis). The detected clover fraction (CFdetected), together with auxiliary variables, viz., measured clover height (H-clover) and grass height (H-grass), were used to build multiple linear regression (MLR) and back propagation neural network (BPNN) models for BC estimation. A total of 347 clover-grass images were used to build the estimation models for clover fraction and BC. Of the 347 samples, 226 images were augmented to 904 images for training, 25 were selected for validation, and the remaining 96 samples were used as an independent dataset for testing. Testing results showed that the intersection-over-union (IoU) values based on DeepLab V3+, SegNet, and FCN-8s were 0.73, 0.57, and 0.60, respectively. The root mean square error (RMSE) values for the three transfer learning methods were 8.5, 10.6, and 10.0%. Subsequently, models based on BPNN and MLR were built to estimate BC, using either CFdetected alone or CFdetected, grass height, and clover height together. Results showed that BPNN was generally superior to MLR in terms of estimating BC. The BPNN model using only CFdetected had an RMSE of 8.7%. In contrast, the BPNN model using all three variables (CFdetected, H-clover, and H-grass) as inputs had an RMSE of 6.6%, implying that DeepLab V3+ together with BPNN can provide a good estimation of BC and offers a promising method for improving forage management.
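As an illustration (not code from the paper), the two evaluation metrics reported above, intersection-over-union for the segmentation masks and root mean square error for the BC estimates, can be sketched in a few lines of NumPy. The function names here are hypothetical:

```python
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(intersection / union) if union else 0.0

def rmse(pred: np.ndarray, true: np.ndarray) -> float:
    """Root mean square error; with percentage inputs the result is in
    percentage points, as reported in the study."""
    return float(np.sqrt(np.mean((pred - true) ** 2)))
```

For example, a predicted mask covering two pixels where only one overlaps the ground truth gives an IoU of 0.5, since the intersection (1 pixel) is half the union (2 pixels).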
Tender Tea Shoots Recognition and Positioning for Picking Robot Using Improved YOLO-V3 Model
To recognize the tender shoots of high-quality tea and to determine the picking points accurately and quickly, this paper proposes a method for recognizing the picking points of tender tea shoots with an improved YOLO-v3 deep convolutional neural network algorithm. This method realizes end-to-end target detection and the recognition of different postures of high-quality tea shoots, considering both efficiency and accuracy. First, in order to predict the category and position of tender tea shoots, an image pyramid structure is used to obtain feature maps of tea shoots at different scales. A residual network block structure is added to the downsampling part, and the fully connected part is replaced by a 1 × 1 convolution operation at the end, ensuring accurate identification while simplifying the network structure. The K-means method is used to cluster the dimensions of the target boxes. Finally, an image dataset of picking points for high-quality tea shoots is built. The accuracy of the trained model on the verification set is over 90%, which is much higher than the detection accuracy of existing research methods.
Funding: Natural Science Foundation of Shandong Province under Grant ZR2019MEE102; Key Research and Development Program of Shandong Province under Grant 2018GNC112007; Project of Shandong Province Higher Educational Science and Technology Program under Grant J18KA015
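The K-means clustering of target-box dimensions mentioned above is a standard step in the YOLO family: anchor box widths and heights are clustered with a 1 − IoU distance rather than Euclidean distance, so that large and small boxes are treated fairly. A minimal NumPy sketch of this idea follows; the paper does not publish its implementation, so the function names and the mean-based centroid update here are assumptions:

```python
import numpy as np

def iou_wh(wh: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between boxes and anchors given only (width, height),
    assuming all boxes share a common top-left corner."""
    inter = (np.minimum(wh[:, None, 0], anchors[None, :, 0])
             * np.minimum(wh[:, None, 1], anchors[None, :, 1]))
    box_area = wh[:, 0] * wh[:, 1]
    anchor_area = anchors[:, 0] * anchors[:, 1]
    union = box_area[:, None] + anchor_area[None, :] - inter
    return inter / union

def kmeans_anchors(wh: np.ndarray, k: int, iters: int = 100,
                   seed: int = 0) -> np.ndarray:
    """Cluster (width, height) pairs into k anchors using 1 - IoU
    as the distance, as done for YOLO anchor selection."""
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each box to the anchor with the highest IoU.
        assignments = np.argmax(iou_wh(wh, anchors), axis=1)
        new_anchors = np.array([
            wh[assignments == j].mean(axis=0) if np.any(assignments == j)
            else anchors[j]  # keep an empty cluster's anchor unchanged
            for j in range(k)
        ])
        if np.allclose(new_anchors, anchors):
            break
        anchors = new_anchors
    return anchors
```

In practice the resulting k anchors replace YOLO-v3's default COCO-derived priors, so the network's box predictions start closer to the typical shapes of tea shoots in the training images.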