
    Red Blood Cell Segmentation with Overlapping Cell Separation and Classification on Imbalanced Dataset

    Automated red blood cell (RBC) classification on blood smear images helps hematologists analyze RBC lab results at reduced time and cost. However, overlapping cells can cause incorrect predictions, so they must be separated into multiple single RBCs before classification. In multi-class deep learning for medical imaging, class imbalance is common because normal samples always outnumber rare disease samples. This paper presents a new method to segment and classify RBCs from blood smear images, specifically tackling cell overlapping and data imbalance. Focusing on overlapping cell separation, our segmentation process first estimates ellipses to represent RBCs: the method detects concave points and then fits ellipses using direct ellipse fitting. The accuracy on 20 blood smear images was 0.889. Classification requires balanced training datasets, but some RBC types are rare; the imbalance ratio of this dataset was 34.538 across 12 RBC classes from 20,875 individual RBC samples. Machine learning for RBC classification on such an imbalanced dataset is hence more challenging than in many other applications. We analyzed techniques to deal with this problem. The best accuracy and F1-score were 0.921 and 0.8679, respectively, using EfficientNet-B1 with augmentation. Experimental results showed that the weight balancing technique with augmentation can address imbalance by improving the F1-score on minority classes, while data augmentation significantly improved overall classification performance. Comment: This work has been submitted to Heliyon for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
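The weight balancing technique mentioned in the abstract can be sketched as inverse-frequency class weights. This is a minimal, hypothetical form for illustration; the paper's exact weighting scheme may differ:

```python
import numpy as np

def balanced_class_weights(labels, n_classes):
    """Inverse-frequency class weights, normalized so the weighted
    sample count equals the total sample count. Rare classes receive
    larger weights, so the loss penalizes their errors more."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)

# toy imbalanced dataset: class 0 dominates (90 : 9 : 1)
labels = np.array([0] * 90 + [1] * 9 + [2] * 1)
weights = balanced_class_weights(labels, 3)
```

Such weights are typically passed to the training loss (e.g. a weighted cross-entropy), so minority-class mistakes contribute more to the gradient.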

    Scene Text Detection with Polygon Offsetting and Border Augmentation

    Scene text localization is a very crucial step in the issue of scene text recognition. The major challenges—such as how there are various sizes, shapes, unpredictable orientations, a wide range of colors and styles, occlusion, and local and global illumination variations—make the problem different from generic object detection. Unlike existing scene text localization methods, here we present a segmentation-based text detector which can detect an arbitrary shaped scene text by using polygon offsetting, combined with the border augmentation. This technique better distinguishes contiguous and arbitrary shaped text instances from nearby non-text regions. The quantitative experimental results on public benchmarks, ICDAR2015, ICDAR2017-MLT, ICDAR2019-MLT, and Total-Text datasets demonstrate the performance and robustness of our proposed method, compared to previous approaches which have been proposed

    Background modeling and subtraction by codebook construction

    We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks, which represent a compressed form of the background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.
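A per-pixel codebook can be sketched as a set of codewords, each covering a range of observed values; a new observation either updates a matching codeword or spawns a new one. This is a heavily simplified grayscale sketch of the idea, not the paper's full color/brightness model:

```python
class PixelCodebook:
    """Simplified single-pixel codebook: each codeword stores a
    [min, max] intensity range; a value within eps of a codeword's
    range matches it (the real algorithm also models color distortion
    and brightness bounds)."""

    def __init__(self, eps=10.0):
        self.codewords = []  # list of [lo, hi] intensity ranges
        self.eps = eps

    def train(self, value):
        # grow a matching codeword, or create a new one
        for cw in self.codewords:
            if cw[0] - self.eps <= value <= cw[1] + self.eps:
                cw[0] = min(cw[0], value)
                cw[1] = max(cw[1], value)
                return
        self.codewords.append([value, value])

    def is_foreground(self, value):
        # foreground = value matches no background codeword
        return not any(cw[0] - self.eps <= value <= cw[1] + self.eps
                       for cw in self.codewords)
```

Because a pixel can hold several codewords, multimodal backgrounds (e.g. a flickering light alternating between two intensities) are represented without a parametric mixture.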

    Parasitic Egg Detection and Classification in Low-Cost Microscopic Images Using Transfer Learning

    Intestinal parasitic infection leads to several morbidities in humans worldwide, especially in tropical countries. The traditional diagnosis usually relies on manual analysis of microscopic images, which is prone to human error due to the morphological similarity of different parasitic eggs and the abundance of impurities in a sample. Many studies have developed automatic systems for parasite egg detection to reduce human workload. However, they work with high-quality microscopes, which unfortunately remain unaffordable in some rural areas. Our work thus exploits the benefit of a low-cost USB microscope. This instrument, however, provides poor-quality images due to its limited magnification (10×), causing difficulty in parasite detection and species classification. In this paper, we propose a CNN-based technique using a transfer learning strategy to enhance the efficiency of automatic parasite classification in poor-quality microscopic images. A patch-based technique with a sliding window is employed to search for the location of the eggs. Two networks, AlexNet and ResNet50, are examined with a trade-off between architecture size and classification performance. The results show that our proposed framework outperforms state-of-the-art object recognition methods. Our system, combined with the final decision from an expert, may improve real faecal examination with low-cost microscopes.
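The patch-based sliding window search can be sketched as scanning fixed-size crops over the image; each crop would then be scored by the CNN classifier (AlexNet or ResNet50 in the paper). Patch size and stride here are illustrative assumptions:

```python
import numpy as np

def sliding_patches(image, patch=64, stride=32):
    """Yield (y, x, crop) tuples scanning a fixed-size window over
    the image; each crop is a candidate region for the classifier."""
    h, w = image.shape[:2]
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            yield y, x, image[y:y + patch, x:x + patch]

# a 128x128 image with 64px patches and 32px stride gives a 3x3 grid
frame = np.zeros((128, 128), dtype=np.uint8)
```

In practice the per-patch class scores are aggregated (e.g. thresholded and merged) into detected egg locations; overlapping strides trade extra computation for better localization.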

    Parasitic Egg Detection and Classification in Microscopic Images

    Parasitic infections have been recognised by the WHO as one of the most significant causes of illness. Most infected persons shed cysts or eggs into their living environment and unwittingly transmit parasites to other individuals. Diagnosis of intestinal parasites is usually based on direct laboratory examination, whose capacity is obviously limited. Aiming to automate routine faecal examination for parasitic diseases, this challenge gathers experts in the field to develop robust automated methods to detect and classify eggs of parasitic worms in a variety of microscopic images. Participants will work with a large-scale dataset containing 11 types of parasitic eggs from faecal smear samples; these are the main interest because they cause major diseases and illness in developing countries. We are open to any techniques for parasitic egg recognition, ranging from conventional approaches based on statistical models to deep learning. Finally, the organisers expect new collaborations to come out of the challenge.

    Multimodal Augmented Reality – Augmenting Auditory-Tactile Feedback to Change the Perception of Thickness

    With vision being a primary sense of humans, we often first estimate the physical properties of objects by looking at them. However, when in doubt, for example about the material they are made of or its structure, it is natural to apply other senses, such as haptics, by touching them. Aiming at the ultimate goal of a full-sensory augmented reality experience, we present an initial study focusing on multimodal feedback when tapping an object to estimate the thickness of its material. Our results indicate that we can change the perceived thickness of stiff objects by modulating acoustic stimuli. For flexible objects, which have a more distinctive tactile characteristic, adding vibratory responses when tapping on thick objects can make people perceive them as thin. We also identified that in the latter case, adding congruent acoustic stimuli does not further enhance the illusion but worsens it.