7 research outputs found

    Towards An Automatic Segmentation for Assessment of Cardiac Left Ventricle Function

    Get PDF
    Research on detecting, recognising and interpreting cardiac MRI began in the 1980s. Manual tracing efforts hamper the adoption of cardiac MRI as a routine investigation. Manual tracing also depends on image quality, and there is no one-size-fits-all MRI setting for an optimum image result. In this paper, we present a proposed approach to automatically detect the left ventricle (LV) wall, in an effort to assist the automatic assessment of cardiac function. Using a standard benchmark dataset, our experiments show that the proposed method effectively improves the automatic detection of the epicardium and endocardium.

    Towards the semantic representation of biological images: from pixels to regions

    No full text
    Biomedical images and models contain vast amounts of information. Regrettably, much of this information is only accessible by domain experts. This paper describes a biological use case in which this situation occurs. Motivation is given for describing images, from this use case, semantically. Furthermore, links are provided to the medical domain, demonstrating the transferability of this work. Subsequently, it is shown that a semantic representation in which every pixel is featured is needlessly expensive. This motivates the discussion of more abstract renditions, which are dealt with next. As part of this, the paper discusses the suitability of existing technologies. In particular, Region Connection Calculus and one implementation of the W3C Geospatial Vocabulary are considered. It transpires that the abstract representations provide a basic description that enables the user to perform a subset of the desired queries. However, a more complex depiction is required for this use case.
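The region-level representation the abstract argues for can be sketched as a tiny topological classifier over image regions. The sketch below is a simplification, not the paper's method: it approximates regions as axis-aligned bounding boxes and collapses the eight RCC8 relations into four illustrative labels (the function and label names are hypothetical).

```python
def relation(a, b):
    # Minimal RCC-style relation classifier over axis-aligned boxes
    # given as (x1, y1, x2, y2). A simplification of RCC8: identical
    # boxes fall under "inside", and tangential/non-tangential parts
    # are not distinguished.
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    if ax2 <= bx1 or bx2 <= ax1 or ay2 <= by1 or by2 <= ay1:
        return "disconnected"   # RCC8: DC
    if ax1 >= bx1 and ay1 >= by1 and ax2 <= bx2 and ay2 <= by2:
        return "inside"         # RCC8: TPP/NTPP (a is part of b)
    if bx1 >= ax1 and by1 >= ay1 and bx2 <= ax2 and by2 <= ay2:
        return "contains"       # RCC8: TPPi/NTPPi
    return "overlaps"           # RCC8: PO

# A query such as "which regions lie inside the cell region?" then
# reduces to filtering annotated regions by this relation.
nucleus, cell = (10, 10, 20, 20), (0, 0, 50, 50)
print(relation(nucleus, cell))  # inside
```

This is the kind of coarse, queryable abstraction the paper contrasts with a per-pixel representation: far cheaper to store, at the cost of only supporting a subset of the desired queries.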

    Detection of Monocrystalline Silicon Wafer Defects Using Deep Transfer Learning

    No full text
    Defect detection is an important step in the industrial production of monocrystalline silicon. Through the study of deep learning, this work proposes a framework for classifying monocrystalline silicon wafer defects using deep transfer learning (DTL). An existing pre-trained deep learning model was used as the starting point for building a new model. We studied the use of DTL and the adaptation of MobileNetV2, pre-trained on ImageNet, for extracting monocrystalline silicon wafer defect features. This sped up the training process and improved the performance of the DTL-MobileNetV2 model in detecting and classifying six types of monocrystalline silicon wafer defects (crack, double contrast, hole, microcrack, saw-mark and stain). The training of the DTL-MobileNetV2 model was optimized by relying on the dense block layer and the global average pooling (GAP) method, which accelerated the convergence rate and improved the generalization of the classification network. The monocrystalline silicon wafer defect classification technique relying on the DTL-MobileNetV2 model achieved an accuracy rate of 98.99% when evaluated against the testing set. This shows that DTL is an effective way of detecting different types of defects in monocrystalline silicon wafers, thus being suitable for minimizing misclassification and maximizing overall production capacity.
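The transfer-learning pattern the abstract describes (a frozen pre-trained backbone, global average pooling, and a new classification head for the six defect classes) can be sketched as follows. This is a minimal numpy mock, not the authors' implementation: `frozen_backbone` is a stand-in for MobileNetV2's feature extractor, and the 7x7x1280 feature-map shape is only MobileNetV2's typical output for 224x224 inputs.

```python
import numpy as np

NUM_CLASSES = 6  # crack, double contrast, hole, microcrack, saw-mark, stain

def frozen_backbone(images):
    # Stand-in for the pre-trained MobileNetV2 feature extractor with
    # frozen weights; a real network would produce feature maps of
    # shape (batch, 7, 7, 1280) for 224x224 RGB inputs.
    rng = np.random.default_rng(0)
    return rng.standard_normal((images.shape[0], 7, 7, 1280))

def global_average_pooling(feature_maps):
    # GAP collapses each channel's spatial map to its mean, turning
    # (batch, H, W, C) into (batch, C) with no extra parameters.
    return feature_maps.mean(axis=(1, 2))

def classification_head(features, weights, bias):
    # New trainable layer: softmax over the six defect classes.
    logits = features @ weights + bias
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

batch = np.zeros((4, 224, 224, 3))           # four wafer images (placeholder)
feats = global_average_pooling(frozen_backbone(batch))
w = np.zeros((1280, NUM_CLASSES))
b = np.zeros(NUM_CLASSES)
probs = classification_head(feats, w, b)
print(feats.shape, probs.shape)              # (4, 1280) (4, 6)
```

During fine-tuning only `w` and `b` (and any unfrozen top layers) would be updated, which is what makes the approach fast to train relative to training the full network from scratch.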

    Detection of Monocrystalline Silicon Wafer Defects Using Deep Transfer Learning, Journal of Telecommunications and Information Technology, 2022, nr 1

    Get PDF

    Diabetic Retinopathy Detection from Fundus Images of the Eye Using Hybrid Deep Learning Features

    No full text
    Diabetic Retinopathy (DR) is a medical condition present in patients suffering from long-term diabetes. If a diagnosis is not carried out at an early stage, it can lead to vision impairment. High blood sugar in diabetic patients is the main source of DR. This affects the blood vessels within the retina. Manual detection of DR is a difficult task, since it can affect the retina through structural changes such as Microaneurysms (MAs), Exudates (EXs), Hemorrhages (HMs), and extra blood vessel growth. In this work, a hybrid technique for the detection and classification of Diabetic Retinopathy in fundus images of the eye is proposed. Transfer learning (TL) is used on pre-trained Convolutional Neural Network (CNN) models to extract features that are combined to generate a hybrid feature vector. This feature vector is passed on to various classifiers for binary and multiclass classification of fundus images. System performance is measured using various metrics, and the results are compared with recent approaches for DR detection. The proposed method provides a significant performance improvement in DR detection for fundus images, achieving the highest accuracies of 97.8% for binary classification and 89.29% for multiclass classification.
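The hybrid-feature step the abstract describes (several pre-trained CNNs used as fixed feature extractors, with their per-image feature vectors concatenated before classification) can be sketched as below. This is a generic mock under stated assumptions, not the paper's pipeline: `cnn_features` stands in for any pre-trained model, and the 2048/1280 feature dimensions are hypothetical examples.

```python
import numpy as np

def cnn_features(images, dim, seed):
    # Stand-in for a pre-trained CNN used as a fixed feature extractor;
    # dim is that model's feature-vector size (hypothetical here).
    rng = np.random.default_rng(seed)
    return rng.standard_normal((len(images), dim))

def hybrid_features(images):
    # Concatenate the per-model feature vectors into one hybrid
    # descriptor, which is then fed to a conventional classifier
    # (SVM, random forest, etc.) for binary or multiclass DR grading.
    f_a = cnn_features(images, 2048, seed=1)   # hypothetical model A
    f_b = cnn_features(images, 1280, seed=2)   # hypothetical model B
    return np.concatenate([f_a, f_b], axis=1)

images = [object()] * 8                        # eight fundus images (placeholder)
X = hybrid_features(images)
print(X.shape)                                 # (8, 3328)
```

The design intuition is that different backbones capture complementary lesion cues (e.g. small microaneurysms versus large exudate patches), so the concatenated vector gives the downstream classifier more to work with than any single model's features.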

    First deep images catalogue of extended IPHAS PNe

    Get PDF
    We present the first instalment of a deep imaging catalogue containing 58 True, Likely, and Possible extended PNe detected with the Isaac Newton Telescope Photometric H α Survey (IPHAS). The three narrow-band filters in the emission lines of H α, [N ii] λ6584 Å, and [O iii] λ5007 Å used for this purpose allowed us to improve our description of the morphology and dimensions of the nebulae. In some cases even the nature of the source has been reassessed. We were then able to unveil new macro- and micro-structures, which will without a doubt contribute to a more accurate analysis of these PNe. It has also been possible to perform a primary classification of the targets based on their ionization level. A Deep Learning classification tool has also been tested. We expect that all the PNe from the IPHAS catalogue of new extended planetary nebulae will ultimately be part of this deep H α, [N ii], and [O iii] imaging catalogue. © 2021 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society.

    LS acknowledges support from PAPIIT grant IN-101819 (Mexico). MAG acknowledges support from grant AYA PGC2018-102184-B-I00 co-funded with FEDER funds. GR-L acknowledges support from CONACyT (grant 263373) and PRODEP (Mexico). JAT acknowledges funding by Dirección General de Asuntos del Personal Académico of the Universidad Nacional Autónoma de México (DGAPA, UNAM) project IA100720 and the Marcos Moshinsky Foundation (Mexico). AAZ acknowledges funding from STFC under grant number ST/T000414/1 and Newton grant ST/R006768/1, and from a Hung Hing Ying visiting professorship at the University of Hong Kong. DNFAI acknowledges funding under 'Deep Learning for Classification of Astronomical Archives' from the Newton-Ungku Omar Fund: F08/STFC/1792/2018. With funding from the Spanish government through the Severo Ochoa Centre of Excellence accreditation SEV-2017-0709. Peer reviewed.