
    An ultra-compact particle size analyser using a CMOS image sensor and machine learning

    Light scattering is a fundamental property that can be exploited to create essential devices such as particle analysers. The most common particle size analyser relies on measuring the angle-dependent diffracted light from a sample illuminated by a laser beam. Compared to other non-light-based counterparts, such a laser diffraction scheme offers precision, but it does so at the expense of size, complexity and cost. In this paper, we introduce the concept of a new particle size analyser in a collimated beam configuration using a consumer electronic camera and machine learning. The key novelty is a small form factor angular spatial filter that allows for the collection of light scattered by the particles up to predefined discrete angles. The filter is combined with a light-emitting diode and a complementary metal-oxide-semiconductor image sensor array to acquire angularly resolved scattering images. From these images, a machine learning model predicts the volume median diameter of the particles. To validate the proposed device, glass beads with diameters ranging from 13 to 125 µm were measured in suspension at several concentrations. We were able to correct for multiple scattering effects and predict the particle size with mean absolute percentage errors of 5.09% and 2.5% for the cases without and with concentration as an input parameter, respectively. When only spherical particles were analysed, the former error was significantly reduced (0.72%). Given that it is compact (on the order of ten cm) and built with low-cost consumer electronics, the newly designed particle size analyser has significant potential for use outside a standard laboratory, for example, in online and in-line industrial process monitoring.
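The abstract reports mean absolute percentage error (MAPE) between the predicted and true volume median diameters. A minimal sketch of that metric follows; the diameter values are illustrative placeholders, not data from the paper.

```python
# Sketch: mean absolute percentage error (MAPE), the error metric the
# abstract reports for predicted vs. true volume median diameters (D50).
# The diameter values below are hypothetical, chosen only to span the
# paper's stated 13-125 micrometre range.

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true)

true_d50 = [13.0, 50.0, 125.0]   # micrometres (hypothetical ground truth)
pred_d50 = [13.5, 48.0, 122.0]   # micrometres (hypothetical predictions)

print(round(mape(true_d50, pred_d50), 2))  # -> 3.42
```

Because each error term is normalised by the true diameter, MAPE weights a 0.5 µm miss on a 13 µm bead far more heavily than the same miss on a 125 µm bead, which suits a measurement spanning an order of magnitude in size.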

    Common Limitations of Image Processing Metrics: A Picture Story

    While the importance of automatic image analysis is continuously increasing, recent meta-research has revealed major flaws with respect to algorithm validation. Performance metrics are particularly key for meaningful, objective, and transparent performance assessment and validation of automatic algorithms, but relatively little attention has been given to the practical pitfalls of using specific metrics for a given image analysis task. These are typically related to (1) the disregard of inherent metric properties, such as the behaviour in the presence of class imbalance or small target structures, (2) the disregard of inherent data set properties, such as the non-independence of the test cases, and (3) the disregard of the actual biomedical domain interest that the metrics should reflect. This dynamically updated living document illustrates important limitations of performance metrics commonly applied in the field of image analysis. In this context, it focuses on biomedical image analysis problems that can be phrased as image-level classification, semantic segmentation, instance segmentation, or object detection tasks. The current version is based on a Delphi process on metrics conducted by an international consortium of image analysis experts from more than 60 institutions worldwide.

    Comment: This is a dynamic paper on the limitations of commonly used metrics. The current version discusses metrics for image-level classification, semantic segmentation, object detection and instance segmentation. For missing use cases, comments or questions, please contact [email protected] or [email protected]. Substantial contributions to this document will be acknowledged with a co-authorship.
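The first pitfall the abstract names, metric behaviour under class imbalance, can be demonstrated in a few lines. A minimal sketch with synthetic labels (not from the paper) shows how plain accuracy rewards a degenerate classifier that a class-sensitive metric exposes.

```python
# Sketch: why plain accuracy misleads under class imbalance, one of the
# metric pitfalls the document discusses. All labels are synthetic.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def sensitivity(y_true, y_pred):
    """Recall on the positive class (true positives / all positives)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / sum(t == 1 for t in y_true)

# 95 negatives, 5 positives; a classifier that always predicts "negative".
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy(y_true, y_pred))     # -> 0.95, looks excellent
print(sensitivity(y_true, y_pred))  # -> 0.0, misses every positive case
```

With a 95:5 class ratio, the always-negative classifier scores 95% accuracy while detecting none of the positive cases, which is exactly the kind of disregard of inherent metric properties the document warns about.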