    On the efficacy of handcrafted and deep features for seed image classification

    Computer vision techniques have become important in agriculture and plant sciences due to their wide variety of applications. In particular, the analysis of seeds can provide meaningful information on their evolution, the history of agriculture, the domestication of plants, and knowledge of diets in ancient times. This work proposes an exhaustive comparison of several different types of features in the context of multiclass seed classification, leveraging two public plant seed data sets to classify their families or species. In detail, we studied possible optimisations of five traditional machine learning classifiers trained with seven different categories of handcrafted features. We also fine-tuned several well-known convolutional neural networks (CNNs) and the recently proposed SeedNet to determine whether and to what extent their deep features may be advantageous over handcrafted features. The experimental results demonstrated that CNN features are appropriate to the task and representative of the multiclass scenario; in particular, SeedNet achieved a mean F-measure of at least 96%. Nevertheless, in several cases the handcrafted features performed well enough to be considered a valid alternative: the Ensemble strategy combined with all the handcrafted features achieved a mean F-measure of at least 90.93%, with a considerably lower computation time. We consider these results an excellent preliminary step towards an automatic seed recognition and classification framework.
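
    As an illustrative sketch only (not the authors' pipeline), the deep-feature side of this comparison can be reproduced in spirit by extracting penultimate-layer CNN features and training a shallow classifier on them. The ResNet-18 backbone, the file paths, and the Random Forest classifier below are assumptions standing in for SeedNet and the paper's exact classifier settings.

```python
# Sketch: deep features from a pretrained CNN feeding a shallow classifier.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.ensemble import RandomForestClassifier

# Stand-in backbone; the paper fine-tunes several CNNs and SeedNet.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # keep the 512-d penultimate features
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(image_path: str) -> torch.Tensor:
    """Return a 512-d deep feature vector for one seed image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)

# Hypothetical image paths and labels; replace with the actual seed data sets.
# X = torch.stack([deep_features(p) for p in image_paths]).numpy()
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```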

    SAMMI: Segment Anything Model for Malaria Identification

    Malaria, a life-threatening disease caused by the Plasmodium parasite, is a pressing global health challenge, and timely detection is critical for effective treatment. This paper introduces a novel computer-aided diagnosis system for detecting Plasmodium parasites in blood smear images, aiming to enhance automation and accessibility in comprehensive screening scenarios. Our approach integrates the Segment Anything Model for precise unsupervised parasite detection and then employs a deep learning framework, combining convolutional neural networks and a Vision Transformer, to accurately classify malaria-infected cells. We rigorously evaluate our system on the public IML dataset and compare its performance against various off-the-shelf object detectors. The results underscore the efficacy of our method, demonstrating superior accuracy in detecting and classifying malaria-infected cells. This computer-aided diagnosis system provides a reliable and near real-time solution for malaria diagnosis, offering significant potential for widespread adoption in healthcare settings. By automating the diagnosis process and ensuring high accuracy, it can contribute to timely interventions, thereby advancing the fight against malaria globally.
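
    A minimal sketch of the unsupervised detection step, assuming the public segment-anything package and a downloaded ViT-B checkpoint; the checkpoint path, the input image name, and the downstream classifier are placeholders, and the paper's actual CNN/Vision Transformer classification stage is not reproduced here.

```python
# Sketch: candidate parasite regions via the Segment Anything Model,
# followed by per-region classification with a stand-in classifier.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Assumes a locally downloaded ViT-B SAM checkpoint (path is hypothetical).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("blood_smear.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)      # one dict per candidate region

# Crop each candidate and pass it to an infected-vs-healthy classifier.
for m in masks:
    x, y, w, h = m["bbox"]                  # bounding box in XYWH format
    crop = image[y:y + h, x:x + w]
    # prediction = classifier(crop)         # classifier is hypothetical here
```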

    A Shallow Learning Investigation for COVID-19 Classification

    COVID-19, an infectious coronavirus disease, triggered a pandemic that resulted in countless deaths. Since its inception, clinical institutions have used computed tomography as a screening method supplemental to reverse transcription-polymerase chain reaction. Deep learning approaches have shown promising results in addressing the problem; however, less computationally expensive techniques, such as those based on handcrafted descriptors and shallow classifiers, may be equally capable of detecting COVID-19 from medical images of patients. This work proposes an initial investigation of several handcrafted descriptors, well known in the computer vision literature, that have already been exploited for similar tasks. The goal is to discriminate tomographic images belonging to three classes, COVID-19, pneumonia, and normal conditions, drawn from a large public dataset. The results show that kNN and ensemble classifiers trained with texture descriptors perform remarkably well on this task, reaching an accuracy and F-measure of 93.05% and 89.63%, respectively. Although this approach did not exceed the state of the art, it achieved satisfactory performance with only 36 features, leaving room for remarkable improvements from a computational complexity perspective.
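
    As a hedged sketch of the general recipe (not the paper's exact 36-feature descriptor set), grey-level co-occurrence texture features can be paired with a kNN classifier roughly as follows; the property list, neighbour count, and data arrays are assumptions.

```python
# Sketch: GLCM texture descriptor + kNN for three-class CT classification.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(gray_slice: np.ndarray) -> np.ndarray:
    """Haralick-style texture descriptor for one 8-bit grayscale CT slice."""
    glcm = graycomatrix(gray_slice, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical slices and labels (COVID-19 / pneumonia / normal).
# X = np.vstack([glcm_features(s) for s in slices])
# knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
```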

    On The Potential of Image Moments for Medical Diagnosis

    Medical imaging is widely used for diagnosis and for postoperative or post-therapy monitoring. The ever-increasing number of images produced has encouraged the introduction of automated methods to assist doctors and pathologists. In recent years, especially after the advent of convolutional neural networks, many researchers have focused on this approach, considering it the method of choice for diagnosis since it can classify images directly. However, many diagnostic systems still rely on handcrafted features to improve interpretability and limit resource consumption. In this work, we focused on orthogonal moments, first by providing an overview and taxonomy of their macro-categories and then by analysing their classification performance on very different medical tasks represented by four public benchmark data sets. The results confirmed that convolutional neural networks achieved excellent performance on all tasks. Despite comprising far fewer features than those extracted by the networks, orthogonal moments proved competitive with them, showing comparable and, in some cases, better performance. In addition, the Cartesian and harmonic categories provided a very low standard deviation, proving their robustness in medical diagnostic tasks. Given the performance obtained and the low variation of the results, we strongly believe that the integration of the studied orthogonal moments can lead to more robust and reliable diagnostic systems. Finally, since they have been shown to be effective on both magnetic resonance and computed tomography images, they can be easily extended to other imaging techniques.
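
    For illustration only, one family of orthogonal moments (Zernike) can be computed with the mahotas library as shown below; the radius and degree values are assumptions, and the paper benchmarks several moment families rather than this single descriptor.

```python
# Sketch: Zernike orthogonal moments as a compact, rotation-invariant descriptor.
import mahotas
import numpy as np

def zernike_descriptor(gray: np.ndarray, radius: int = 64, degree: int = 8) -> np.ndarray:
    """Zernike moment magnitudes up to `degree` inside a disc of `radius` pixels."""
    return mahotas.features.zernike_moments(gray, radius, degree=degree)

# Hypothetical usage on one grayscale medical image:
# descriptor = zernike_descriptor(image)   # ~25 values for degree 8
# Stacked descriptors can then feed any shallow classifier.
```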

    A deep architecture based on attention mechanisms for effective end-to-end detection of early and mature malaria parasites

    Malaria is a severe infectious disease caused by the Plasmodium parasite. Early and accurate detection of this disease is crucial to reducing the number of deaths it causes. However, the current method of detecting malaria parasites involves manual examination of blood smears, a time-consuming and labor-intensive process performed mainly by skilled hematologists, especially in underdeveloped countries. To address this problem, we have developed two deep learning-based systems, YOLO-SPAM and YOLO-SPAM++, which can detect the parasites responsible for malaria at an early stage. Our evaluation of these systems on two public datasets of malaria parasite images, MP-IDB and IML, shows that they outperform the current state of the art with more than 11M fewer parameters than the baseline YOLOv5m6. YOLO-SPAM++ demonstrated a substantial 10% improvement over YOLO-SPAM and up to 20% over the best-performing baseline in preliminary experiments conducted on the Plasmodium falciparum species of MP-IDB. On the other hand, YOLO-SPAM showed slightly better results than YOLO-SPAM++ on subsets without tiny parasites, while YOLO-SPAM++ performed better on subsets with tiny parasites, with precision values up to 94%. Further cross-species generalization validations, conducted by merging training sets of various species within MP-IDB, showed that YOLO-SPAM++ consistently outperformed YOLOv5 and YOLO-SPAM across all species, emphasizing its superior performance and precision in detecting tiny parasites. These architectures can be integrated into computer-aided diagnosis systems to create more reliable and robust tools for the early detection of malaria.
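
    The YOLO-SPAM and YOLO-SPAM++ weights are the authors' models, so only the detection baseline can be sketched here: a stock YOLOv5m6 checkpoint loaded via torch.hub, with a hypothetical input image name and confidence threshold.

```python
# Sketch of the YOLOv5m6 baseline only (not the YOLO-SPAM architectures).
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5m6", pretrained=True)
model.conf = 0.25                       # confidence threshold (assumed value)

results = model("blood_smear.jpg")      # path, URL, or numpy image
boxes = results.xyxy[0]                 # tensor: x1, y1, x2, y2, conf, class
print(boxes)
```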

    Understanding cheese ripeness: An artificial intelligence-based approach for hierarchical classification

    Within the contemporary dairy industry, effective monitoring of cheese ripeness constitutes a critical yet challenging task. This paper proposes the first public dataset of cheese wheel images depicting various products at distinct stages of ripening, and introduces an innovative hybrid approach that integrates machine learning and computer vision techniques to automate the detection of cheese ripeness. By leveraging both deep learning and shallow learning techniques, the proposed method aims to overcome the limitations of conventional assessment methodologies, providing automation, precision, and consistency in the evaluation of cheese ripeness. It adopts a hierarchical scheme for the simultaneous classification of distinct cheese types and ripeness levels, presenting a comprehensive solution to enhance the efficiency of the cheese production process. By employing a lightweight hierarchical feature aggregation methodology, this investigation navigates the intricate landscape of preprocessing steps, feature selection, and diverse classifiers. We report a noteworthy result, attaining a best F-measure of 0.991 by merging features extracted from EfficientNet and DarkNet-53, paving the way to concretely address the complexity inherent in cheese quality assessment.
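
    A minimal sketch of feature-level fusion in this spirit is given below. EfficientNet-B0 is available in torchvision, but DarkNet-53 is not, so ResNet-50 stands in as the second backbone purely for illustration; the fused vector then feeds the hierarchical classifiers, which are not shown.

```python
# Sketch: concatenating features from two CNN backbones for classification.
import torch
import torchvision.models as models

eff = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
eff.classifier = torch.nn.Identity()            # 1280-d pooled features
res = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
res.fc = torch.nn.Identity()                    # 2048-d pooled features
eff.eval(); res.eval()

@torch.no_grad()
def fused_features(batch: torch.Tensor) -> torch.Tensor:
    """Concatenate both backbones' features for each cheese-wheel image."""
    return torch.cat([eff(batch), res(batch)], dim=1)   # 3328-d fused vector

# A hierarchical scheme can then train one classifier on the fused features
# to predict cheese type, and one per type to predict the ripeness level.
```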

    Automatic Monitoring Cheese Ripeness Using Computer Vision and Artificial Intelligence

    Ripening is a very important process that contributes to cheese quality, as its characteristics are determined by the biochemical changes that occur during this period. Therefore, monitoring ripening time is a fundamental task for marketing a quality product in a timely manner. However, it is difficult to accurately determine the degree of cheese ripeness, and although some scientific methods have been proposed in the literature, the conventional methods adopted in dairy industries are typically based on visual and weight control. This study proposes a novel approach aimed at automatically monitoring cheese ripening based on the analysis of cheese images acquired with a photo camera. Both computer vision and machine learning techniques have been used for this task. The study is based on a dataset of 195 images, specifically collected from an Italian dairy industry, which represent Pecorino cheese wheels at four degrees of ripeness. All stages consist of 50 images, except the one labeled 'day 18', which has 45. These images were processed with image processing techniques and then classified according to the degree of ripening, i.e., 18, 22, 24, and 30 days. A 5-fold cross-validation strategy was used to empirically evaluate the performance of the models, with each training fold augmented online. This strategy allowed us to use 624 images for training while leaving 39 original images per fold for testing. Experimental results demonstrate the validity of the approach, showing good performance for most of the trained models.
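
    The evaluation protocol follows directly from the counts above: each fold holds out 39 originals, leaving 195 - 39 = 156 training images that are augmented online by a factor of four to give 624 samples. A hedged sketch of that split (with a trivial placeholder augmentation, not the paper's transformations) is shown below.

```python
# Sketch: 5-fold cross-validation with online augmentation of training folds.
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Hypothetical arrays: 195 images and their ripening labels (18/22/24/30 days).
images = np.zeros((195, 224, 224, 3), dtype=np.uint8)
labels = np.array([18] * 45 + [22] * 50 + [24] * 50 + [30] * 50)

def augment(batch: np.ndarray, factor: int = 4) -> np.ndarray:
    """Stand-in online augmentation (flips/rotations would go here)."""
    return np.concatenate([batch] * factor, axis=0)

for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True).split(images, labels):
    train_images = augment(images[train_idx])     # 156 -> 624 samples
    test_images = images[test_idx]                # 39 held-out originals
    # ... train a model on train_images, evaluate on test_images ...
```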

    Stem/progenitor cells in fetuses and newborns: overview of immunohistochemical markers

    The microanatomy of the vast majority of human organs at birth is characterized by marked differences from adult organs with respect to architecture and the cell types detectable on histology. In preterm neonates, these differences are even more evident, due to the lower level of organ maturation and to ongoing cell differentiation. One of the most remarkable findings in preterm tissues is the presence of large numbers of stem/progenitor cells in multiple organs, including kidney, brain, heart, adrenals, and lungs. In other organs, such as the liver, the completely different composition of cell types in preterm infants is mainly related to the different function of the liver during gestation, which is largely devoted to hematopoiesis, a function taken over by the bone marrow after birth. Our preliminary studies showed that the antigens expressed by stem/progenitor cells differ significantly from one organ to the next. Moreover, within each developing human tissue, reactivity for different stem cell markers also changes during gestation, according to the multiple differentiation steps encountered by each progenitor during development. A better knowledge of stem/progenitor cells in preterm infants will allow neonatologists to boost preterm organ maturation, favoring the differentiation of the multiple cell types that characterize each organ in at-term neonates.

    Enhancing H2 production rate in PGM-free photoelectrochemical cells by glycerol photo-oxidation

    The photo-oxidation of glycerol was carried out using TiO2 nanotube (TiO2 NT) photoanodes and Ni foam as the cathode for the hydrogen evolution reaction. The photoanodes were prepared by anodizing Ti foils and titanium felt and then annealing them in air. They were tested in acidic aqueous solution with and without the addition of glycerol. When glycerol was present, the hydrogen production rate increased and allowed the simultaneous production of high-value-added partial oxidation compounds, i.e. 1,3-dihydroxyacetone (DHA) and glyceraldehyde (GA). The highest H2 evolution and partial oxidation compound production rates were obtained using home-prepared TiO2 NTs synthesized on Ti fiber felt as the photoanode, with an irradiated area of 90 cm2. These photoanodes were found to be highly stable from both a mechanical and a chemical point of view, so they can be reused after a simple cleaning step.