Optimizing automated characterization of liver fibrosis histological images by investigating color spaces at different resolutions
Texture analysis (TA) of histological images has recently received attention as an automated method of characterizing liver fibrosis. The colored staining methods used to identify different tissue components reveal various patterns that contribute in different ways to the digital texture of the image. A histological digital image can be represented in various color spaces, and the pixel-value approximations carried out when converting between color spaces can affect image texture and subsequently influence the performance of TA. Conventional TA is carried out on grey-scale images, which are a luminance approximation to the original RGB (Red, Green, and Blue) space. Grey scale is currently considered sufficient for characterization of fibrosis, but this may not hold for sophisticated assessment of fibrosis or when resolution conditions vary. This paper investigates the accuracy of TA on three color spaces, conventional grey scale, RGB, and Hue-Saturation-Intensity (HSI), at different resolutions. The results demonstrate that RGB is the most accurate color space for texture classification of liver images, most notably at low resolution. Furthermore, the green channel, which is dominated by collagen fiber deposition, appears to provide most of the features for characterizing fibrosis images. The HSI space showed a high percentage error for the majority of texture methods at all resolutions, suggesting that it is insufficient for fibrosis characterization. The grey-scale space produced good results at high resolution; however, errors increased as resolution decreased.
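The luminance approximation mentioned above is conventionally the ITU-R BT.601 weighted sum of the three channels. A minimal sketch (NumPy assumed; the study's exact conversion pipeline is not specified in the abstract):

```python
import numpy as np

def rgb_to_grey(rgb):
    """Luminance approximation of an RGB image (ITU-R BT.601 weights).

    rgb: array of shape (H, W, 3), channels in R, G, B order.
    Returns an (H, W) grey-scale array; the three channels collapse
    into a single intensity, which is all that grey-scale TA 'sees'.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

def green_channel(rgb):
    """Isolate the green channel, which the study found most
    informative for collagen-dominated fibrosis texture."""
    return rgb[..., 1]
```

Working on `rgb_to_grey(img)` versus `green_channel(img)` is exactly the kind of representation choice the paper evaluates: the weighted sum mixes collagen-stain contrast from the green channel with the other two channels, which can dilute the texture signal at low resolution.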
A multimodal deep learning framework using local feature representations for face recognition
The most recent face recognition systems mainly depend on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as a deep belief network (DBN). However, the former usually suffers from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework that merges the advantages of local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach that combines the Curvelet transform with the fractal dimension is proposed, termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework, termed the multimodal deep face recognition (MDFR) framework, is proposed to add feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated through extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.
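The handcrafted-descriptor family the abstract contrasts with deep features can be illustrated with basic LBP. This is a minimal 3x3 sketch for intuition only; the paper's own pipeline uses Curvelet and fractal features, and practical LBP variants add uniformity and rotation invariance:

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 3x3 local binary pattern codes for a grey-scale image.

    Each interior pixel is compared with its 8 neighbours; a neighbour
    >= the centre contributes one bit. Returns (H-2, W-2) codes in
    [0, 255], a purely local texture encoding of the kind that
    captures facial micro-structure.
    """
    c = img[1:-1, 1:-1]  # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # neighbour plane shifted by (dy, dx), same shape as the centre
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.uint8) << bit
    return codes
```

Histograms of such codes over image patches are the typical "local feature representation"; the MDFR idea described above is to feed representations of this kind, rather than raw pixel intensities, into the DBN.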
Radiomics-based differentiation of lung disease models generated by polluted air based on X-ray computed tomography data
BACKGROUND: Lung diseases (resulting from air pollution) require a widely accessible method for risk estimation and early diagnosis to ensure proper and responsive treatment. Radiomics-based fractal dimension analysis of X-ray computed tomography attenuation patterns in chest voxels of mice exposed to different air-polluting agents was performed to model early stages of disease and establish differential diagnosis. METHODS: To model different types of air pollution, BALBc/ByJ mouse groups were exposed to cigarette smoke combined with ozone or to sulphur dioxide gas, and a control group was established. Two weeks after exposure, the frequency distributions of image voxel attenuation data were evaluated. Specific cut-off ranges were defined to group voxels by attenuation. Cut-off ranges were binarized and their spatial pattern was associated with the calculated fractal dimension, then abstracted as the fractal dimension versus cut-off range mathematical function. Nonparametric Kruskal-Wallis (KW) and Mann-Whitney post hoc (MWph) tests were used. RESULTS: Each plot of fractal dimension versus cut-off range was found to contain two distinctive Gaussian curves. The ratios of the Gaussian curve parameters differ significantly and are statistically distinguishable among the three exposure groups. CONCLUSIONS: A new radiomics evaluation method was established based on analysis of the fractal dimension of chest X-ray computed tomography data segments. The specific attenuation patterns calculated using our method may help diagnose and monitor certain lung diseases, such as chronic obstructive pulmonary disease (COPD), asthma, tuberculosis, or lung carcinomas.
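The binarize-then-measure step described above can be sketched with a standard box-counting estimator. This is a 2-D illustration under assumed dyadic box sizes; the study worked on 3-D CT voxel data and its exact estimator may differ:

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting fractal dimension of a 2-D binary mask.

    Counts occupied boxes at dyadic box sizes and fits the slope of
    log(count) versus log(1/size): a filled region gives ~2, a thin
    line gives ~1.
    """
    n = min(mask.shape)
    sizes, counts = [], []
    size = n // 2
    while size >= 1:
        # crop to a multiple of the box size, then reduce block-wise
        h = (mask.shape[0] // size) * size
        w = (mask.shape[1] // size) * size
        blocks = mask[:h, :w].reshape(h // size, size, w // size, size)
        occupied = blocks.any(axis=(1, 3)).sum()
        sizes.append(size)
        counts.append(max(occupied, 1))  # avoid log(0) for empty masks
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def fd_versus_cutoff(image, cutoffs):
    """Fractal dimension as a function of attenuation cut-off range:
    binarize each range, then measure, mirroring the pipeline above."""
    return [box_counting_dimension((image >= lo) & (image < hi))
            for lo, hi in cutoffs]
```

Sweeping `cutoffs` over the attenuation histogram yields the fractal dimension versus cut-off range function whose two-Gaussian shape the study analyzes.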
Image resolution and exposure time of digital radiographs affect fractal dimension of periapical bone
Quantifying tumour heterogeneity in 18F-FDG PET/CT imaging by texture analysis
(18)F-Fluorodeoxyglucose positron emission tomography/computed tomography ((18)F-FDG PET/CT) is now routinely used in oncological imaging for diagnosis and staging and increasingly to determine early response to treatment, often employing semiquantitative measures of lesion activity such as the standardized uptake value (SUV). However, the ability to predict the behaviour of a tumour in terms of future therapy response or prognosis using SUVs from a baseline scan prior to treatment is limited. It is recognized that medical images contain more useful information than may be perceived with the naked eye, leading to the field of "radiomics" whereby additional features can be extracted by computational postprocessing techniques. In recent years, evidence has slowly accumulated showing that parameters obtained by texture analysis of radiological images, reflecting the underlying spatial variation and heterogeneity of voxel intensities within a tumour, may yield additional predictive and prognostic information. It is hoped that measurement of these textural features may allow better tissue characterization as well as better stratification of treatment in clinical trials, or individualization of future cancer treatment in the clinic, than is possible with current imaging biomarkers. In this review we focus on the literature describing the emerging methods of texture analysis in (18)F-FDG PET/CT, as well as other imaging modalities, and how the measurement of spatial variation of voxel grey-scale intensity within an image may provide additional predictive and prognostic information, and postulate the underlying biological mechanisms.
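The SUV mentioned above normalizes measured activity concentration by injected dose per unit body weight. A minimal sketch of the standard body-weight variant (decay-corrected inputs assumed; lean-body-mass and body-surface-area variants substitute the denominator):

```python
def suv_body_weight(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight standardized uptake value (SUVbw).

    activity_kbq_per_ml : decay-corrected tissue activity concentration (kBq/mL)
    injected_dose_mbq   : decay-corrected injected dose (MBq)
    body_weight_kg      : patient weight (kg)

    SUVbw = concentration / (dose / weight). With 1 MBq = 1000 kBq and
    1 kg of unit-density tissue ~ 1000 mL, the unit factors cancel and
    the result is dimensionless.
    """
    dose_kbq = injected_dose_mbq * 1000.0           # MBq -> kBq
    weight_ml_equivalent = body_weight_kg * 1000.0  # kg -> mL at density 1
    return activity_kbq_per_ml / (dose_kbq / weight_ml_equivalent)
```

An SUV of 1.0 corresponds to the tracer being distributed uniformly through the body; a single such number per lesion is exactly the summary statistic that texture analysis aims to enrich with spatial-heterogeneity features.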