
    Applications of Image Processing in Viticulture: A Review

    The production of high-quality grapes for wine making is challenging. Significant progress has been made in the automated prediction of harvest yields from images, but the analysis of images to predict the quality of the harvest has yet to be fully addressed. The quality of wine produced depends in part on the quality of the grapes harvested and therefore on the presence of disease in the vineyard. There is potential for automated early detection of disease in grape crops through the development of accurate image-processing techniques. This paper presents a review of current research, highlighting the key challenges for geo-computation (image processing, computer vision and data mining techniques) in informing the management of vineyards and for in-field image capture and analysis. An exploration of potential applications for the knowledge generated by imaging techniques is then presented. This discussion is driven by the current interest in the effect of rapid and dramatic climate change on the production of wine and focuses on how this information might be utilized to inform the design and validation of accurate predictive models.

    DQLRFMG: Design of an Augmented Fusion of Deep Q Learning with Logistic Regression and Deep Forests for Multivariate Classification and Grading of Fruits

    Accurate categorization and grading of fruits are essential in numerous fields, including agriculture, food processing, and distribution. This paper addresses the need for an advanced model capable of classifying and grading fruits more effectively than existing methods. Traditional approaches are limited by their lower precision, accuracy, recall, and area under the curve (AUC), and by higher delay. To overcome these obstacles, the proposed model combines the capabilities of Deep Q Learning (DQL) for classification and Logistic Regression (LR) with Deep Forests for the fruit grading process. Three distinct datasets were used to evaluate the model: the Kaggle Fruits 360 Dataset, the FRont Experimental System for High throughput plant phenotyping Datasets, and ImageNet samples. In multiple respects, comparative analysis demonstrates that the proposed model outperforms existing methods. Specifically, it achieves a 4.9% improvement in precision, 5.5% improvement in accuracy, 4.5% improvement in recall, 3.9% improvement in AUC, and an 8.5% reduction in delay levels. By combining the strengths of both DQL and LR with Deep Forests, the proposed model achieves its superior performance. DQL, a reinforcement learning technique, provides the ability to learn and make decisions based on feedback from the environment. By combining DQL and LR, the classification accuracy is improved, allowing for the precise identification of fruit varieties including Mango, Apple, Papaya, etc. In addition, Deep Forests, a novel framework for ensemble learning, is utilized for fruit grading. Deep Forests uses decision trees to effectively capture complex patterns in the data, allowing for dependable and robust fruit grading. Experimental findings indicate that the combination of DQL and LR with Deep Forests yields remarkable performance improvements in fruit classification and grading tasks. Improvements in precision, accuracy, recall, AUC, and delay indicate the model's superiority over existing methods. This research contributes to the field of fruit classification and grading by developing a sophisticated model that can support a variety of applications in the agriculture, food processing, and distribution industries.
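
The abstract does not include code; as a hedged illustration of the reinforcement-learning component it names, here is a minimal tabular Q-learning update (the "deep" variant replaces the table with a neural network). The states-as-feature-clusters, actions-as-fruit-classes setup and all names here are hypothetical, not taken from the paper.

```python
import numpy as np

# Toy Q-learning for a classification-as-RL setup: states are feature-cluster
# indices, actions are fruit classes, reward is +1 for a correct label.
# (Hypothetical framing; the paper's DQL model learns from environment feedback
# with a deep network instead of this table.)
N_STATES, N_CLASSES = 4, 3
ALPHA, GAMMA = 0.5, 0.9

def q_update(Q, s, a, r, s_next):
    """One temporal-difference update of the Q-table."""
    td_target = r + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (td_target - Q[s, a])
    return Q

Q = np.zeros((N_STATES, N_CLASSES))
# A correct prediction (reward +1) raises the value of that state/class pair.
Q = q_update(Q, s=0, a=1, r=1.0, s_next=0)
print(Q[0, 1])  # 0.5 after a single update from zero initialisation
```

Repeating such updates over labeled examples drives the table (or network) toward preferring the correct class in each state.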

    Structured Light-Based 3D Reconstruction System for Plants

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
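
The reported leaf-detection scores follow the standard precision/recall definitions; as a quick reference, here is a minimal sketch with hypothetical detection counts (illustration only, not the paper's data) that happen to reproduce similar values.

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical leaf-detection counts: 89 true positives, 11 false
# positives (spurious detections), 3 false negatives (missed leaves).
p, r = precision_recall(tp=89, fp=11, fn=3)
print(round(p, 2), round(r, 2))  # 0.89 0.97
```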

    Cashew dataset generation using augmentation and RaLSGAN and a transfer learning based tinyML approach towards disease detection

    Cashew is one of the most extensively consumed nuts in the world, and it is also known as a cash crop. A tree may generate a substantial yield within a few months and has a lifetime of around 70 to 80 years. Yet, in addition to the benefits, there are certain constraints to its cultivation. Apart from parasites and algae, anthracnose is the most common disease affecting the trees. The dense structure of the cashew tree makes diagnosing the disease more difficult than in short crops. Hence, we present a dataset that exclusively consists of healthy and diseased cashew leaves and fruits. The dataset is enhanced by adding RGB color transformations to highlight diseased regions, photometric and geometric augmentations, and RaLSGAN to enlarge the initial collection of images and boost performance in real-time situations when working with a constrained dataset. Further, transfer learning is used to test the classification efficiency of the dataset using algorithms such as MobileNet and Inception. TensorFlow Lite is utilized to deploy these algorithms for real-time disease diagnosis using drones. Several post-training optimization strategies are utilized, and their memory sizes are compared. They have proven their effectiveness by delivering high accuracy (up to 99%) and a decrease in memory and latency, making them ideal for use in applications with limited resources.
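
The abstract names RaLSGAN (relativistic average least-squares GAN, Jolicoeur-Martineau, 2018) for dataset enlargement but gives no code; here is a minimal NumPy sketch of the RaLSGAN discriminator and generator losses computed from raw discriminator logits. The logit values are hypothetical, and this shows only the loss, not the full training loop.

```python
import numpy as np

def ralsgan_losses(d_real, d_fake):
    """Relativistic average least-squares GAN losses: each side's logit is
    judged relative to the *average* logit of the opposite side."""
    mr, mf = d_real.mean(), d_fake.mean()
    loss_d = ((d_real - mf - 1) ** 2).mean() + ((d_fake - mr + 1) ** 2).mean()
    loss_g = ((d_fake - mr - 1) ** 2).mean() + ((d_real - mf + 1) ** 2).mean()
    return loss_d, loss_g

# Hypothetical logits: a discriminator that already separates real from fake.
loss_d, loss_g = ralsgan_losses(np.array([2.0, 1.0]), np.array([-1.0, -2.0]))
```

In training, `loss_d` and `loss_g` would be minimized alternately by the discriminator and generator, with the generator pushing fake logits above the average real logit.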

    Computer Vision Algorithms For An Automated Harvester

    Image classification and segmentation are the two main components of the 3D vision system of a harvesting robot. Regarding the first part, the vision system aids in the real-time identification of contaminated areas of the farm based on the damage identified using the robot’s camera. To solve the identification problem, a fast and non-destructive method, the Support Vector Machine (SVM), is applied to improve the recognition accuracy and efficiency of the robot. Initially, a median filter is applied to remove the inherent noise in the colored image. SIFT features of the image are then extracted and computed, forming a vector, which is then quantized into visual words. Finally, the histogram of the frequency of each element in the visual vocabulary is created and fed into an SVM classifier, which categorizes the mushrooms as either class one or class two. Our preliminary results for image classification were promising, and the experiments carried out on the data set highlight fast computation time and a high rate of accuracy, reaching over 90% using this method, which can be employed in real-life scenarios. As pertains to image segmentation, on the other hand, the vision system aids in real-time identification of mushrooms, but a stiff challenge is encountered in robot vision as the irregularly spaced mushrooms of uneven sizes often occlude each other due to the nature of mushroom growth in the growing environment. We address the issue of mushroom segmentation by following a multi-step process: the images are first segmented in HSV color space to locate the area of interest, and then both the image gradient information from the area of interest and Hough transform methods are used to locate the center position and perimeter of each individual mushroom in the XY plane. Afterwards, the depth map information given by a Microsoft Kinect is employed to estimate the Z-depth of each individual mushroom, which is then used to measure the distance between the robot end effector and the center coordinate of each mushroom. We tested this algorithm under various environmental conditions, and our segmentation results indicate this method provides sufficient computational speed and accuracy.
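
The SIFT-to-visual-word quantization step described above can be sketched as follows: each local descriptor is assigned to its nearest vocabulary centroid and the normalised word-frequency histogram becomes the SVM input. This is a hedged toy sketch with a made-up 2-D vocabulary (real SIFT descriptors are 128-D, and the vocabulary would come from k-means over training descriptors).

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Quantise local descriptors to their nearest visual word and return
    the normalised word-frequency histogram fed to the classifier."""
    # Pairwise squared distances, shape (n_descriptors, n_words).
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                      # nearest-word index per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy 2-D descriptors and a 3-word vocabulary (hypothetical values).
vocab = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
desc = np.array([[0.1, 0.0], [0.9, 1.1], [5.2, 4.9], [0.0, 0.1]])
hist = bovw_histogram(desc, vocab)
print(hist)  # two descriptors fall on word 0, one each on words 1 and 2
```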

    Fruit ripeness classification: A survey

    Fruit is a key crop in worldwide agriculture, feeding millions of people. The standard supply chain of fruit products involves quality checks to guarantee freshness, taste, and, most of all, safety. An important factor that determines fruit quality is its stage of ripening. This is usually classified manually by field experts, making it a labor-intensive and error-prone process. Thus, there is an arising need for automation in fruit ripeness classification. Many automatic methods have been proposed that employ a variety of feature descriptors for the food item to be graded. Machine learning and deep learning techniques dominate the top-performing methods. Furthermore, deep learning can operate on raw data and thus relieve users from having to compute complex engineered features, which are often crop-specific. In this survey, we review the latest methods proposed in the literature to automate fruit ripeness classification, highlighting the most common feature descriptors they operate on.

    Artificial intelligence approach for tomato detection and mass estimation in precision agriculture

    Funding: This study was carried out with the support of the “Research Program for Agricultural Science & Technology Development” (Project No. PJ013891012020), National Institute of Agricultural Sciences, Rural Development Administration, Republic of Korea.
    Application of computer vision and robotics in agriculture requires sufficient knowledge and understanding of the physical properties of the object of interest. Yield monitoring is an example where these properties affect the quantified estimation of yield mass. In this study, we propose an image-processing and artificial intelligence-based system using multi-class detection with instance-wise segmentation of fruits in an image that can further estimate dimensions and mass. We analyze a tomato image dataset with mass and dimension values collected using a calibrated vision system and accurate measuring devices. After successful detection and instance-wise segmentation, we extract the real-world dimensions of the fruit. Our characterization results exhibited a significantly high correlation between dimensions and mass, indicating that artificial intelligence algorithms can effectively capture this complex physical relation to estimate the final mass. We also compare different artificial intelligence algorithms to show that the computed mass agrees well with the actual mass. Detection and segmentation results show an average mask intersection over union of 96.05%, mean average precision of 92.28%, detection accuracy of 99.02%, and precision of 99.7%. The mean absolute percentage error for mass estimation was 7.09% for 77 test samples using a bagged ensemble tree regressor. This approach could be applied to other computer vision and robotic applications, such as sizing and packaging systems and automated harvesting, or to other measuring instruments.
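
The mass-estimation error above is reported as a mean absolute percentage error (MAPE); for reference, a minimal sketch of the metric with hypothetical masses in grams (illustration only, not the paper's data):

```python
def mape(actual, predicted):
    """Mean absolute percentage error: mean of |a - p| / a, as a percentage."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical tomato masses (grams) and model predictions.
print(round(mape([100.0, 200.0], [90.0, 210.0]), 1))  # 7.5
```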

    Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns

    We introduce Deep Thermal Imaging, a new approach for close-range automatic recognition of materials that enhances the understanding people and ubiquitous technologies have of their proximal environment. Our approach uses a low-cost mobile thermal camera integrated into a smartphone to capture thermal textures. A deep neural network classifies these textures into material types. This approach works effectively without the need for ambient light sources or direct contact with materials. Furthermore, the use of a deep learning network removes the need to handcraft the set of features for different materials. We evaluated the performance of the system by training it to recognise 32 material types in both indoor and outdoor environments. Our approach produced recognition accuracies above 98% in 14,860 images of 15 indoor materials and above 89% in 26,584 images of 17 outdoor materials. We conclude by discussing its potential for real-time use in HCI applications and future directions.
    Comment: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.