16 research outputs found

    Segmentation methods for MRI human spine images using thresholding approaches

    Computer-Aided Diagnosis (CAD) in MRI image processing can assist experts in detecting abnormalities in human spine images efficiently. The manual process of detecting abnormalities is tedious; hence, the use of CAD in this field helps increase diagnostic efficiency. Segmentation is an important and critical process in CAD that can affect the accuracy of the overall diagnosis of the MRI spine image. Among the various segmentation methods commonly used in CAD is thresholding. Thresholding approaches separate the region of interest by identifying threshold values that divide the image into the desired grayscale levels based on pixel intensity. This study investigates the optimal approach for segmenting the lumbar vertebrae in MRI images. The steps involved include pre-processing (normalization), segmentation using local and global thresholding, neural network classification, and performance measurement. Twenty images are used to evaluate and compare the segmentation methods, and the effectiveness of each method is measured using a performance measurement technique. This preliminary study shows that local thresholding outperforms the global thresholding approach, with accuracies of 91.4% and 87.7%, respectively.
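The difference between the two thresholding families mentioned above can be sketched as follows. This is a minimal illustration, not the study's implementation: the block size and the toy image are assumptions, and the local method uses a simple block-mean threshold.

```python
# Global vs. local (block-wise) thresholding on a grayscale image,
# represented here as a 2D list of intensities in 0-255.

def global_threshold(image, t):
    """Binarize the whole image with a single threshold t."""
    return [[1 if px >= t else 0 for px in row] for row in image]

def local_threshold(image, block=2):
    """Binarize each block x block region using that region's mean,
    so the threshold adapts to local intensity variations."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for bi in range(0, h, block):
        for bj in range(0, w, block):
            pixels = [image[i][j]
                      for i in range(bi, min(bi + block, h))
                      for j in range(bj, min(bj + block, w))]
            t = sum(pixels) / len(pixels)  # local mean threshold
            for i in range(bi, min(bi + block, h)):
                for j in range(bj, min(bj + block, w)):
                    out[i][j] = 1 if image[i][j] >= t else 0
    return out

# A dark region (top) and a bright region (bottom): one global
# threshold treats them identically, while local thresholding still
# finds structure inside each region.
img = [[10, 30, 20, 40],
       [15, 35, 25, 45],
       [200, 220, 210, 230],
       [205, 225, 215, 235]]
```

Under uneven illumination, which is common in MRI, the local variant can recover structure that a single global threshold misses, which is consistent with the accuracy gap the study reports.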

    Integration of 3D printing in computer-aided design and engineering course

    Engineering students at the undergraduate level typically learn design concepts through lectures and practical sessions using computer-aided software. However, the existing computer-aided design and engineering (CAD/CAE) course did not expose students to applying and relating the latest advanced technologies to solving global issues, for instance those listed in the United Nations Sustainable Development Goals (UN SDG). Therefore, an improved CAD/CAE course taken by students of the Electrical and Electronic Engineering Programme at Universiti Kebangsaan Malaysia integrates 3D printing and requires them to conduct their projects based on UN SDG themes. A total of 22 projects was produced, involving both mechanical and electrical design, with some of the physical models being 3D printed. Thus, students were able to strengthen their understanding of design concepts through the integration of 3D printing while simultaneously becoming aware of current global issues.

    Future stem cell analysis: progress and challenges towards state-of-the art approaches in automated cells analysis

    Background and Aims: Microscopic images have been used in cell analysis for cell type identification and classification, cell counting, and cell size measurement. Most previous approaches are tedious, require detailed understanding, and are time-consuming. Scientists and researchers are therefore seeking modern, automatic cell analysis approaches in line with current in-demand technology. Objectives: This article provides a brief overview of general cell and specific stem cell analysis approaches, from the history of cell discovery up to state-of-the-art approaches. Methodology: The literature was surveyed from specific manuscript databases using three review steps: manuscript identification, screening, and inclusion. This review methodology is based on the PRISMA guidelines and searches for originality and novelty in studies concerning cell analysis. Results: By analysing generic cell and specific stem cell analysis approaches, current technology offers tremendous potential in assisting medical experts to perform cell analysis in a way that is less laborious, more cost-effective, and less error-prone. Conclusion: This review uncovers potential research gaps concerning generic cell and specific stem cell analysis, and could thus serve as a reference for developing automated cell analysis approaches using current technologies such as artificial intelligence and deep learning.

    GA-based Optimisation of a LiDAR Feedback Autonomous Mobile Robot Navigation System

    Autonomous mobile robots require an efficient navigation system in order to travel from one location to another quickly and safely without hitting static or dynamic obstacles. A light-detection-and-ranging (LiDAR) based autonomous robot navigation system is a multi-component system with various parameters to be configured. With such a structure, and sometimes with conflicting parameters, determining the best configuration for the system is a non-trivial task. This work presents an optimisation method using a genetic algorithm (GA) to configure such a navigation system with automatically tuned parameters. The proposed method can optimise the parameters of several components of the navigation system concurrently. The chromosome representation and fitness function of the GA for this specific robotic problem are discussed. Experimental results from simulation and real hardware show that the optimised navigation system outperforms a manually tuned navigation system of an indoor mobile robot in terms of navigation time.
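The GA tuning loop described above can be sketched as follows. The parameter names and the fitness function are illustrative assumptions, not the paper's: fitness here is a mock "navigation time" that the GA minimises.

```python
import random

PARAM_BOUNDS = {                       # hypothetical tunable parameters
    "max_vel": (0.1, 1.0),             # m/s
    "inflation_radius": (0.1, 1.0),    # m
    "sim_time": (0.5, 5.0),            # s, local-planner look-ahead
}

def navigation_time(params):
    """Mock fitness: pretend the best settings are known, and score a
    candidate by its squared distance from them (lower is better)."""
    ideal = {"max_vel": 0.8, "inflation_radius": 0.3, "sim_time": 2.0}
    return sum((params[k] - ideal[k]) ** 2 for k in params)

def random_chromosome():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_BOUNDS.items()}

def crossover(a, b):
    """Uniform crossover: each gene comes from either parent."""
    return {k: random.choice((a[k], b[k])) for k in a}

def mutate(c, rate=0.2):
    """Re-sample each gene within its bounds with some probability."""
    for k, (lo, hi) in PARAM_BOUNDS.items():
        if random.random() < rate:
            c[k] = random.uniform(lo, hi)
    return c

def optimise(pop_size=20, generations=30):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=navigation_time)      # lower time = fitter
        elite = pop[: pop_size // 2]       # truncation selection + elitism
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=navigation_time)
```

In a real system the fitness evaluation would run the robot (in simulation or on hardware) with the candidate configuration and measure the actual navigation time, which is what makes each GA generation expensive.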

    Pterygium classification using deep patch region-based anterior segment photographed images

    Early pterygium screening is crucial to avoid blurred vision caused by encroachment onto the cornea and pupil. However, medical assessment and conventional screening can be laborious and time-consuming to implement, which motivates an advanced yet efficient automated pterygium screening method to assist the current diagnostic process. Patch region-based anterior segment photographed images (ASPIs) focus the features on a particular region of pterygium growth. This work addresses the data limitation in deep neural network (DNN) processing, which normally requires large-scale data. It presents an automated pterygium classification of patch region-based ASPIs using our previously re-established network, "VggNet16-wbn": VggNet16 with a batch normalisation layer added after each convolutional layer. During the image pre-processing step, pterygium and non-pterygium tissue are extracted from the ASPI, followed by the generation of single and three-by-three image patch regions based on the 85×85 dataset size. Data preparation with 10-fold cross-validation was conducted to ensure the data generalise well, minimising the probability of underfitting and overfitting. The proposed experimental work successfully classified pterygium tissue with more than 99% accuracy, sensitivity, specificity, and precision using appropriate hyperparameter values. This work could serve as a baseline framework for pterygium classification with limited data.
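The three-by-three patch generation step can be sketched as follows. Splitting the region of interest into an even 3×3 grid is an assumption about the mechanics; the toy image size is illustrative, not the study's 85×85 ASPI data.

```python
# Split a grayscale image (2D list) into a 3x3 grid of patches; this
# multiplies the number of training samples, which is one common way
# to work around limited data for deep networks.

def patches_3x3(image):
    """Return the 9 patches of a 3x3 grid, row-major order.
    Any remainder rows/columns go to the last patch in each direction."""
    h, w = len(image), len(image[0])
    ph, pw = h // 3, w // 3
    grid = []
    for r in range(3):
        for c in range(3):
            r0, c0 = r * ph, c * pw
            r1 = h if r == 2 else r0 + ph
            c1 = w if c == 2 else c0 + pw
            grid.append([row[c0:c1] for row in image[r0:r1]])
    return grid

img = [[r * 9 + c for c in range(9)] for r in range(9)]  # 9x9 toy image
patches = patches_3x3(img)
```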

    Strengthening programming skills among engineering students through experiential learning based robotics project

    This study examined the educational effects of strengthening programming skills among undergraduate engineering students via the integration of a robotics project and an experiential learning approach. The robotics project was conducted to close the gap arising from students' difficulty in relating the theoretical concepts of programming to real-world problems, and an experiential learning approach using the Kolb model was proposed to investigate the problem. In this project, students were split into groups and asked to develop code for controlling the navigation of a wheeled mobile robot. They were responsible for managing their group's activities, conducting laboratory tests, producing technical reports, and preparing a video presentation. Statistical analysis performed on the students' summative assessments of a programming course revealed a remarkable improvement in their problem-solving skills and ability to provide programming solutions to real-world problems.

    Deep learning for an automated image-based stem cell classification

    Hematopoiesis is a process in which hematopoietic stem cells produce other mature blood cells in the bone marrow through cell proliferation and differentiation. The hematopoietic cells are cultured on a petri dish to form different colony-forming units (CFUs), and the idea is to identify the type of CFU produced by the stem cell. Several software packages have been developed to classify CFUs automatically; however, automated identification or classification of CFU types remains the main challenge, and most current software shares common drawbacks such as expensive operating costs and complex machinery. The purpose of this study is to investigate several selected convolutional neural network (CNN) pre-trained models to overcome these constraints for automated CFU classification. Prior to classification, the images are acquired from mouse stem cells and categorized into three types: CFU-erythroid (E), CFU-granulocyte/macrophage (GM), and CFU-PreB. These images are then pre-processed before being fed into the CNN pre-trained models, which adopt a deep learning approach to extract informative features from the CFU images. Classification performance shows that the models integrated with the pre-processing module can classify the CFUs with high accuracy and shorter computational time, achieving 96.33% in 61 minutes and 37 seconds. Hence, this finding could be used as a baseline reference for further research.
    Keywords: Automated stem cell classification; Colony-forming unit (CFU); Deep learning; Convolutional neural network (CNN)
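The kind of pre-processing applied before feeding images to a pre-trained CNN can be sketched as follows: resizing to the network's expected input size and scaling intensities to [0, 1]. The target size and the toy image are illustrative assumptions, not the study's pipeline.

```python
# Resize + normalize a grayscale image (2D list of 0-255 intensities)
# before handing it to a fixed-input-size pre-trained model.

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize of a 2D list of pixel intensities."""
    in_h, in_w = len(image), len(image[0])
    return [[image[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)]
            for i in range(out_h)]

def normalize(image):
    """Scale 0-255 intensities to the [0, 1] range."""
    return [[px / 255.0 for px in row] for row in image]

img = [[0, 128], [255, 64]]          # 2x2 toy grayscale image
prepared = normalize(resize_nearest(img, 4, 4))
```

Pre-trained models expect a fixed input shape and a consistent value range, which is why a pre-processing module like this sits in front of the classifier.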

    Ocular dimensions by three-dimensional magnetic resonance imaging in emmetropic versus myopic school children

    Background: Magnetic resonance imaging (MRI) has been used to investigate eye shapes; however, reports involving children are scarce. This study aimed to determine ocular dimensions, and their correlations with refractive error, using three-dimensional MRI in emmetropic versus myopic children. Methods: Healthy school children aged < 10 years were invited to take part in this cross-sectional study. Refraction and best-corrected distance visual acuity (BCDVA) were determined using cycloplegic refraction and a logarithm of the minimum angle of resolution (logMAR) chart, respectively. All children underwent MRI using a 3-Tesla whole-body scanner. Quantitative eyeball measurements included the longitudinal axial length (LAL), horizontal width (HW), and vertical height (VH) along the cardinal axes. Correlation analysis was used to determine the association between the level of refractive error and the eyeball dimensions. Results: A total of 70 eyes from 70 children (35 male, 35 female) with a mean (standard deviation [SD]) age of 8.38 (0.49) years were included and analyzed. Mean (SD) refraction (spherical equivalent, SEQ) and BCDVA were -2.55 (1.45) D and -0.01 (0.06) logMAR, respectively. Ocular dimensions were greater in myopes than in emmetropes (all P < 0.05), with no significant differences according to sex. Mean (SD) ocular dimensions were LAL 24.07 (0.91) mm, HW 23.41 (0.82) mm, and VH 23.70 (0.88) mm for myopes, and LAL 22.69 (0.55) mm, HW 22.65 (0.63) mm, and VH 22.94 (0.69) mm for emmetropes. Significant correlations were noted between SEQ and ocular dimensions, with a greater change in LAL (0.46 mm/D, P < 0.001) than in VH (0.27 mm/D, P < 0.001) and HW (0.22 mm/D, P = 0.001). Conclusions: Myopic eyeballs are larger than those with emmetropia. The eyeball elongates as myopia increases, with the greatest change in LAL, the least in HW, and an intermediate change in VH. These changes manifest in both sexes at a young age and low level of myopia. 
    These data may serve as a reference for monitoring the development of refractive error in young Malaysian children of Chinese origin.

    An investigation of automatic feature extraction for clustered microcalcifications on digital mammograms

    Mammography is a common imaging modality used for breast screening. The limitations of radiologists reading mammogram images manually have motivated interest in the use of computerised systems to aid the process. Computer-aided diagnosis (CAD) systems have been widely used to assist radiologists in making decisions, either for detection (CADe) or for diagnosis (CADx) of anomalies in mammograms. This thesis aims to improve the sensitivity of the CADx system by proposing novel feature extraction techniques. Previous works have shown that multiple-resolution images provide useful information for classification. The wavelet transform is one of the techniques commonly used to produce multiple-resolution images, and it is used here to extract features from the resulting sub-images for classification of microcalcification clusters in mammograms. However, the fixed directionality produced by the transform limits the opportunity to extract further useful features that may carry information associated with the malignancy of the clusters. This has driven the thesis to experiment with multiple-orientation and multiple-resolution images as sources of features for microcalcification classification. Extensive and original experiments are conducted to determine whether multiple-orientation and multiple-resolution analysis of microcalcification cluster features is useful for classification. Results show that the proposed method achieves an accuracy of 78.3% and outperforms the conventional wavelet transform, which achieves 64.9%. A feature selection step using principal component analysis (PCA) is employed to reduce the number of features as well as the complexity of the system. The overall result shows that the system achieves an accuracy of 85.5% when 2 features from steerable pyramid filtering are used as input, as opposed to 69.9% with 2 features from the conventional wavelet transform.
    In addition, the effectiveness of the diagnosis system also depends on the classifier. Deep belief networks (DBNs) have been demonstrated to extract high-level input representations, and the greedy learning in deep networks provides a highly non-linear mapping between input and output. The ability of a DBN to analyse complex patterns is exploited in this thesis for classification of microcalcification clusters into benign or malignant sets. An extensive experiment is conducted on using a DBN to extract features for microcalcification classification. Using a DBN solely as a feature extractor and classifier of raw-pixel microcalcification images shows no significant improvement; therefore, a novel technique using filtered images is proposed, so that the DBN extracts features from the filtered images. The analysis shows an improvement in accuracy from 47.9% to 60.8% when the technique is applied. These new findings may contribute to the identification of microcalcification clusters in mammograms.
    Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 201
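The PCA feature-reduction step used in the thesis can be sketched for the two-feature case, where the leading eigenvector of the 2×2 covariance matrix has a closed form. The toy feature vectors below are illustrative, not features from the thesis.

```python
import math

def pca_1d(points):
    """Project 2-D feature vectors onto their first principal
    component, found in closed form from the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # covariance matrix entries
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # leading eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    # corresponding eigenvector (handle the axis-aligned case)
    if abs(sxy) < 1e-12:
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    else:
        vx, vy = lam - syy, sxy
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    # centred projection onto the principal direction
    return [(p[0] - mx) * vx + (p[1] - my) * vy for p in points]

# Strongly correlated features collapse onto one informative axis.
feats = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.0)]
reduced = pca_1d(feats)
```

For higher-dimensional feature sets, the same idea applies with an eigendecomposition of the full covariance matrix; keeping only the leading components is what reduces both the feature count and the classifier's complexity.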

    A rapid and non-destructive technique in determining the ripeness of oil palm fresh fruit bunch (FFB)

    The oil palm industry is one of the main industries in Malaysia and contributes to the country's gross domestic product (GDP). In the oil palm industrial sector, methods of planting, detection, and assessment are very important for producing high-quality palm oil. Currently, the ripeness of the oil palm fresh fruit bunch (FFB) is estimated using eyesight (most common), computer vision, hyperspectral imaging, light detection and ranging (LiDAR), near-infrared (NIR) spectroscopy, and magnetic resonance imaging. The objective of this research is to introduce a rapid and non-destructive technique for determining and assessing the ripeness of the oil palm FFB using a LiDAR scanning system. The LiDAR scanning system is used to scan oil palm fruits at three levels of ripeness: under-ripe, ripe, and over-ripe. The reflectance intensity that bounces off the fruits is gathered and analysed to determine the level of ripeness. Even though the intensity value is purely relative, it is proportional to the reflectance or absorption rate measured by the LiDAR sensor. A rapid method to determine the ripeness of palm fruits using a LiDAR sensor is proposed by calculating the reflectance percentage from 0% to 100% using the concept of linearity.
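The linearity idea above can be sketched as a rescaling of the relative intensity onto a 0-100% reflectance scale between calibration bounds. The calibration values and the ripeness cut-offs below are illustrative assumptions, not measurements from the study.

```python
# Map a raw, relative LiDAR intensity reading to a 0-100% reflectance
# percentage via linear interpolation between calibrated endpoints.

I_MIN, I_MAX = 120.0, 840.0   # hypothetical readings at 0% / 100%

def reflectance_percent(intensity):
    """Linearly rescale a raw intensity to 0-100% reflectance,
    clamped to the calibrated range."""
    pct = 100.0 * (intensity - I_MIN) / (I_MAX - I_MIN)
    return max(0.0, min(100.0, pct))

def ripeness_class(pct, cuts=(33.0, 66.0)):
    """Bin the reflectance percentage into three ripeness levels."""
    if pct < cuts[0]:
        return "under-ripe"
    if pct < cuts[1]:
        return "ripe"
    return "over-ripe"
```

Because the raw intensity is only relative, the calibration endpoints would in practice be established per sensor and scanning distance before the percentage scale is meaningful.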