Shape-based Insect Classification: a Hybrid Region-based and Contour-based Approach
The American Burying Beetle (ABB) (Nicrophorus americanus) is a critically endangered insect whose distribution is limited to several states at the periphery of its historical range in the eastern and central United States. The objective of this study is to develop a digital image classification algorithm for an autonomous monitoring system, attached to existing ABB traps, that will detect, image, classify, and report insects to species as they enter the trap. A training set of 92 individual specimens representing 11 insect species with similar shapes, drawn from the Oklahoma State University Entomology Museum, was used in this study. Starting from a color digital image, an unsupervised preprocessing algorithm extracts each insect shape, converts it to a binary image, and aligns it for classification using pattern recognition techniques. Two shape representation methods are implemented for classification: an area component method (region-based) and a Fourier descriptor method (contour-based). Analysis of initial classification results revealed that the pose variability of insect legs and antennae introduced excessive uncertainty into the feature space. To address this, a novel shape decomposition algorithm based on curvature theory is proposed to automatically remove legs and antennae from the insect shape prior to classification. This shape decomposition increased overall classification accuracy from 64% to 76% for the area component method and from 57% to 67% for the Fourier descriptor method. To further improve accuracy, a hybrid approach using decision fusion has also been implemented after initial classification by each method. This resulted in 100% classification accuracy for the ABB and 90% overall accuracy for the 11 species (92 images in total) investigated.
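As an illustration of the contour-based branch, Fourier descriptors of a closed boundary can be sketched as follows. This is a minimal, generic example in plain Python, not the thesis's implementation; the contour points and coefficient count are invented:

```python
import cmath
import math

def fourier_descriptors(contour, n_coeffs=4):
    """Translation-, scale-, and rotation-invariant Fourier descriptors
    of a closed 2D contour given as an ordered list of (x, y) points."""
    # Represent each boundary point as a complex number x + iy.
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    # Discrete Fourier transform of the complex boundary sequence.
    coeffs = [
        sum(z[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)) / n
        for k in range(n)
    ]
    # Drop coeffs[0] (translation), divide by |coeffs[1]| (scale),
    # and keep magnitudes only (rotation / start-point invariance).
    scale = abs(coeffs[1])
    return [abs(coeffs[k]) / scale for k in range(2, min(2 + n_coeffs, n))]

# A square and a scaled, translated copy yield the same descriptors.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
big = [(10 + 3 * x, 5 + 3 * y) for x, y in square]
d1 = fourier_descriptors(square)
d2 = fourier_descriptors(big)
```

Because the zeroth coefficient absorbs translation and the first coefficient's magnitude absorbs scale, the remaining normalized magnitudes characterize the outline itself, which is what makes them usable as classification features.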
Active shape models with focus on overlapping problems applied to plant detection and soil pore analysis
[no abstract]
Center of gravity guided signature of planar shapes
Measuring the similarity between two planar shapes is a complex problem. One approach is to compute the signature of a planar shape: a unique feature that differentiates it from other planar shapes, so that comparing the signatures of two shapes helps determine their degree of similarity. Researchers have proposed effective algorithms for computing the signatures of planar shapes; O'Rourke introduced the concept of the signature of simple polygons for measuring similarity between two-dimensional shapes. We propose a generalized notion of signature that takes the center of gravity of the polygon into account. The standard signature is determined by considering the half-planes through the edges of the polygon; in the generalized model, we propose to measure the signature by considering half-planes through the center of gravity of the polygon, parallel to its boundary edges.
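The center of gravity that anchors the generalized signature can be computed with the standard shoelace-based centroid formula; a minimal sketch, not the authors' code:

```python
def polygon_centroid(pts):
    """Center of gravity of a simple polygon via the shoelace formula.
    pts: list of (x, y) vertices in order (either orientation)."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0   # signed area contribution of edge i
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

# Unit square: center of gravity at (0.5, 0.5).
c = polygon_centroid([(0, 0), (1, 0), (1, 1), (0, 1)])
```

Note that the area-weighted centroid differs from the plain vertex average for non-uniform vertex spacing, which matters if the signature is meant to be invariant to how densely the boundary is sampled.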
Three Dimensional Nonlinear Statistical Modeling Framework for Morphological Analysis
This dissertation describes a novel three-dimensional (3D) morphometric analysis framework for building statistical shape models and identifying shape differences between populations. This research generalizes the use of anatomical atlases to more complex anatomy, such as irregular, flat bones and bones with deformity and irregular growth. The foundations of this framework are: 1) anatomical atlases, which allow the creation of homologous anatomical models across populations; 2) statistical representation of output models in a compact form that captures both local and global shape variation across populations; 3) shape analysis using automated 3D landmarking and surface matching. The proposed framework has various applications in the clinical, forensic, and physical anthropology fields. Extensive research has been published in peer-reviewed image processing, forensic anthropology, physical anthropology, biomedical engineering, and clinical orthopedics conferences and journals.
The discussion of existing methods for morphometric analysis, including manual and semi-automatic methods, addresses the need for automation of morphometric analysis and statistical atlases. Explanations of these existing methods for constructing statistical shape models, including the benefits and limitations of each, provide evidence of the need for such a novel algorithm. A novel approach was taken to achieve accurate point correspondence in the case of irregular and deformed anatomy. This was achieved using a scale-space approach to detect prominent scale-invariant features. These features were then matched and registered using a novel multi-scale method utilizing both coordinate data and shape descriptors, followed by an overall surface deformation using a new constrained free-form deformation.
Applications of the output statistical atlases are discussed, including forensic applications such as skull sexing, as well as physical anthropology applications such as asymmetry in clavicles. Clinical applications in pelvis reconstruction, the study of lumbar kinematics, and the study of bone and soft-tissue thickness are also discussed.
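The compact statistical representation described above is commonly realized as a point-distribution model (mean shape plus principal modes of variation). The sketch below is a generic PCA-based illustration assuming already-aligned landmark sets, not the dissertation's framework; the toy shapes are invented:

```python
import numpy as np

def build_shape_model(shapes):
    """Point-distribution model from aligned landmark sets
    (n_shapes x n_landmarks x dims): mean shape, modes, variances."""
    X = np.asarray(shapes, dtype=float).reshape(len(shapes), -1)
    mean = X.mean(axis=0)
    # SVD of the centered data matrix gives the principal modes directly.
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = S ** 2 / max(len(shapes) - 1, 1)
    return mean, Vt, variances

def reconstruct(mean, modes, coeffs):
    """Synthesize a shape as mean + sum_i coeffs[i] * modes[i]."""
    return mean + np.asarray(coeffs) @ modes[: len(coeffs)]

# Three toy 2-point "shapes" varying only in one landmark coordinate.
shapes = [[[0, 0], [1, 0]], [[0, 0], [1, 1]], [[0, 0], [1, 2]]]
mean, modes, variances = build_shape_model(shapes)
new_shape = reconstruct(mean, modes, [1.0])  # synthesis along the first mode
```

In practice the landmark sets would first be brought into correspondence and aligned (e.g. by Procrustes analysis), which is exactly the step the dissertation's automated landmarking and surface matching address.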
Fast embedding for image classification & retrieval and its application to the hostel industry
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Content-based image classification and retrieval are the automatic processes of taking an unseen input image and extracting the features that represent it. For the classification task, this mathematically measured input is categorized according to criteria established on the server, and the output is returned as a result. For the retrieval task, on the other hand, the extracted features of an unseen query image are sent to the server to search for the images most visually similar to the given image, and these images are retrieved as the result. Although image features can be represented by classical descriptors, artificial-intelligence-based features, Convolutional Neural Network (CNN) features to be precise, have become powerful tools in the field. Nonetheless, high-dimensional CNN features have been a challenge, in particular for applications on mobile or Internet of Things devices. Therefore, in this thesis, several fast embeddings are explored and proposed to overcome the constraints of low memory, bandwidth, and power. Furthermore, the first hostel image database is created, comprising three datasets: a hostel image dataset containing 13,908 interior and exterior images of hostels across the world, and the Hostels-900 and Hostels-2K datasets containing 972 and 2,380 images, respectively, of 20 London hostel buildings. The results demonstrate that the proposed fast embeddings, such as the application of the GHM-Rand operator, the GHM-Fix operator, and binary feature vectors, are able to outperform or give competitive results to state-of-the-art methods with far fewer computational resources. Additionally, the findings from a ten-year literature review of CBIR studies in the tourism industry depict the relevant research activities of the past decade, which are beneficial not only to the hostel industry and tourism sector but also to the computer science and engineering research communities for potential real-life applications of existing and developing technologies in the field.
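One common way to obtain compact binary feature vectors of the kind discussed above is sign-based random-projection hashing, which trades a high-dimensional float vector for a short bit string compared via Hamming distance. The sketch below is a generic illustration under that assumption, not the thesis's GHM operators; the dimensions and feature vectors are invented:

```python
import random

def make_projection(dim, n_bits, seed=0):
    """Random Gaussian projection matrix for sign-based binary hashing."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def binarize(feature, projection):
    """Compress a real-valued feature vector to n_bits bits: each bit is
    the sign of one random projection (a locality-sensitive hash)."""
    bits = 0
    for row in projection:
        dot = sum(w * x for w, x in zip(row, feature))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

def hamming(a, b):
    """Distance between two binary codes: number of differing bits."""
    return bin(a ^ b).count("1")

proj = make_projection(dim=4, n_bits=16)
code_q = binarize([0.9, 0.1, -0.3, 0.5], proj)      # query feature
code_diff = binarize([-0.9, -0.1, 0.3, -0.5], proj)  # opposite feature
```

The memory saving is the point: a 16-bit code fits in one machine word, and Hamming distance is a single XOR plus popcount, which is why such embeddings suit mobile and IoT constraints.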
Learning cell representations in temporal and 3D contexts
Cell morphology, and how it changes under different circumstances, is one of the primary ways by which we can understand biology. Computational tools for characterization and analysis therefore play a critical role in advancing studies involving cell morphology.
In this thesis, I explored the use of representation learning and self-supervised methods to analyze nuclear texture in fluorescence imaging across different contexts and scales. To analyze the cell cycle using 2D temporal imaging data, as well as DNA damage in 3D imaging data, I employed a simple model based on the VAE-GAN architecture. Through the VAE-GAN model, I constructed manifolds in which the latent representations of the data can be grouped and clustered based on textural similarities without the need for exhaustive training annotations. I used these representations, as well as manually engineered features, to perform various analyses both at the single cell and tissue levels.
The application on the cell cycle data revealed that common tasks such as cell cycle staging and cell cycle time estimation can be done even with minimal fluorescence information and user annotation. On the other hand, the texture classes derived to characterize DNA damage in 3D histology images unveiled differences between control and treated tissue regions. Lastly, by aggregating cell-level information to characterize local cell neighborhoods, interactions between DNA-damaged cells and immune cells can be quantified and some tissue microstructures can be identified.
The results presented in this thesis demonstrated the utility of the representations learned through my approach in supporting biological inquiries involving temporal and 3D spatial data. The quantitative measurements computed using the presented methods have the potential to aid not only similar experiments on the cell cycle and DNA damage but also exploratory studies in 3D histology.
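Grouping latent representations by similarity, as described above, is typically a simple clustering step once the manifold is built. A minimal k-means sketch follows; it is illustrative only, the 2-D "latent" vectors are invented, and a deterministic initialization from the first k points replaces the usual random seeding:

```python
import math

def kmeans(points, k, iters=20):
    """Minimal k-means over latent vectors: assign each point to its
    nearest centroid, then move each centroid to its group's mean."""
    # Deterministic init from the first k points (illustrative choice).
    centroids = [tuple(p) for p in points[:k]]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            groups[j].append(p)
        centroids = [
            tuple(sum(v) / len(g) for v in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids, groups

# Two well-separated clusters of made-up 2-D "latent" vectors.
pts = [(0.1, 0.0), (5.0, 5.1), (0.0, 0.2), (0.2, 0.1), (5.2, 4.9), (4.9, 5.0)]
centroids, groups = kmeans(pts, k=2)
```

Clustering in the learned latent space is what lets texture classes emerge without exhaustive training annotations, since only the resulting groups, not individual cells, need labels.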
Development of computer-based algorithms for unsupervised assessment of radiotherapy contouring
INTRODUCTION: Despite advances in radiotherapy treatment delivery, target volume delineation remains one of the greatest sources of error in the radiotherapy delivery process, which can lead to poor tumour control probability and impact clinical outcome. Contouring assessments are performed to ensure high-quality target volume definition in clinical trials, but this can be subjective and labour-intensive.
This project addresses the hypothesis that computational segmentation techniques, with a given prior, can be used to develop an image-based tumour delineation process for contour assessments. This thesis focuses on the exploration of segmentation techniques to develop an automated method for generating reference delineations in the setting of advanced lung cancer. The novelty of this project is the use of the initial clinician outline as a prior for image segmentation.
METHODS: Automated segmentation processes were developed for stage II and III non-small cell lung cancer using the IDEAL-CRT clinical trial dataset. Marker-controlled watershed segmentation, two active contour approaches (edge- and region-based) and graph-cut applied on superpixels were explored. k-nearest neighbour (k-NN) classification of tumour versus normal tissue based on texture features was also investigated.
RESULTS: 63 cases were used for development and training. Segmentation and classification performance were evaluated on an independent test set of 16 cases. Edge-based active contour segmentation achieved the highest Dice similarity coefficient, 0.80 ± 0.06, followed by graph-cut at 0.76 ± 0.06, watershed at 0.72 ± 0.08 and region-based active contour at 0.71 ± 0.07, with mean computational times of 192 ± 102 s, 834 ± 438 s, 21 ± 5 s and 45 ± 18 s per case respectively. Errors in accuracy for irregularly shaped lesions and segmentation leakages at the mediastinum were observed.
In distinguishing tumour from non-tumour regions, misclassification errors of 14.5% and 15.5% were achieved using 16- and 8-pixel regions of interest (ROIs) respectively. Higher misclassification errors of 24.7% and 26.9% for 16- and 8-pixel ROIs were obtained in the analysis of the tumour boundary.
CONCLUSIONS: Conventional image-based segmentation techniques with the application of priors are useful for automatic segmentation of tumours, although further development is required to improve their performance. Texture classification can be useful in distinguishing tumour from non-tumour tissue, but the segmentation task at the tumour boundary is more difficult. Future work with deep-learning segmentation approaches needs to be explored. Funded by the National Radiotherapy Trials Quality Assurance (RTTQA) group.
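The Dice similarity coefficient used to score the segmentations above has a short definition worth making concrete: twice the overlap of the two masks divided by their total size. A minimal sketch over binary masks, illustrative rather than the project's evaluation code, with invented masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    (equal-length iterables of 0/1 pixels): 2|A∩B| / (|A| + |B|)."""
    a = list(mask_a)
    b = list(mask_b)
    inter = sum(x and y for x, y in zip(a, b))  # overlapping foreground
    total = sum(a) + sum(b)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / total if total else 1.0

# Toy automatic vs. manual contour masks: overlap 3, sizes 4 and 4.
auto = [1, 1, 1, 0, 0, 1]
manual = [1, 1, 0, 0, 1, 1]
score = dice(auto, manual)  # 2 * 3 / (4 + 4) = 0.75
```

A Dice score of 1.0 means identical contours and 0.0 means no overlap, so the reported 0.80 ± 0.06 for the edge-based active contour indicates substantial but imperfect agreement with the reference delineation.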
Automatic Segmentation and Classification of Red and White Blood Cells in Thin Blood Smear Slides
In this work we develop a system for the automatic detection and classification of cytological images, which plays an increasingly important role in medical diagnosis. A primary aim of this work is the accurate segmentation of cytological images of blood smears and subsequent feature extraction, along with studying related classification problems such as the identification and counting of peripheral blood smear particles and the classification of white blood cells into five types. Our proposed approach benefits from powerful image processing techniques to perform a complete blood count (CBC) without human intervention. The general framework of this blood smear analysis research is as follows. First, a digital blood smear image is de-noised using an optimized Bayesian non-local means filter, to provide a dependable cell counting system that may be used under different image capture conditions. Then an edge-preservation technique with a Kuwahara filter is used to recover degraded and blurred white blood cell boundaries in blood smear images while reducing the residual negative effect of noise. After denoising and edge enhancement, the next step is binarization, using a combination of Otsu's and Niblack's methods to separate the cells from the stained background. Cell separation and counting are achieved by granulometry, advanced active contours without edges, and morphological operators with the watershed algorithm. This is followed by the recognition of the different types of white blood cells (WBCs), and also red blood cell (RBC) segmentation. The next step uses three main types of features, shape, intensity, and texture-invariant features, in combination with a variety of classifiers. The following features are used in this work: intensity histogram features, invariant moments, the relative area, co-occurrence and run-length matrices, dual-tree complex wavelet transform features, and Haralick and Tamura features.
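The Otsu half of the binarization step can be sketched in a few lines; a minimal pure-Python version, illustrative only, which omits the Niblack local-threshold component the pipeline combines it with (the toy image intensities are invented):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the grey level that maximizes the
    between-class variance of the background/foreground split."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]            # background pixel count at threshold t
        if w0 == 0:
            continue
        w1 = total - w0          # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0          # background mean intensity
        mu1 = (total_sum - sum0) / w1  # foreground mean intensity
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity clusters: threshold lands between them.
img = [10, 12, 11, 13] * 5 + [200, 210, 205, 198] * 5
t = otsu_threshold(img)
```

Otsu's global threshold works well when cells and stained background form distinct intensity modes; Niblack-style local thresholding is the usual complement when illumination or staining varies across the slide.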
Next, different statistical approaches involving correlation, distribution, and redundancy are used to measure the dependency between sets of features and to select feature variables for white blood cell classification. A global sensitivity analysis with random sampling-high dimensional model representation (RS-HDMR), which can deal with independent and dependent input feature variables, is used to assess the dominant discriminatory power and the reliability of each feature, leading to efficient feature selection. These feature selection results are compared in experiments with the branch and bound method and with sequential forward selection (SFS), respectively. This work examines support vector machines (SVM) and Convolutional Neural Networks (LeNet-5) in connection with white blood cell classification. Finally, the white blood cell classification system is validated in experiments conducted on cytological images of normal, poor-quality blood smears. These experimental results are also assessed against ground truth obtained manually from medical experts.