Medical imaging analysis with artificial neural networks
Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey of recent neural network developments in computer-aided diagnosis, in medical image segmentation and edge detection for visual content analysis, and in medical image registration for pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail through inspiring examples illustrating: (i) how a known neural network with a fixed structure and training procedure can be applied to a medical imaging problem; (ii) how medical images can be analysed, processed, and characterised by neural networks; and (iii) how neural networks can be extended to solve problems relevant to medical imaging. The concluding section highlights comparisons among many neural network applications to provide a global view of computational intelligence with neural networks in medical imaging.
Passively mode-locked laser using an entirely centred erbium-doped fiber
This paper describes the setup and experimental results for an entirely centred erbium-doped fiber laser with passively mode-locked output. The gain medium of the ring laser cavity configuration comprises a 3 m length of two-core optical fiber, in which an undoped outer core region of 9.38 μm diameter surrounds a 4.00 μm diameter central core region doped with erbium ions at 400 ppm concentration. The generated stable soliton mode-locked output has a central wavelength of 1533 nm, and the pulses yield an average output power of 0.33 mW with a pulse energy of 31.8 pJ. The pulse duration is 0.7 ps, and the measured output repetition rate of 10.37 MHz corresponds to a 96.4 ns pulse spacing in the pulse train.
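As a quick consistency check (a back-of-envelope sketch, not part of the paper), the reported pulse spacing and pulse energy follow from the repetition rate and average power via standard pulse-train relations:

```python
# Back-of-envelope check of the reported mode-locked laser figures.
# Input values come from the abstract; the relations are standard
# pulse-train identities, not details from the paper itself.

rep_rate_hz = 10.37e6      # measured repetition rate, 10.37 MHz
avg_power_w = 0.33e-3      # average output power, 0.33 mW

# Pulse spacing is the inverse of the repetition rate.
spacing_ns = 1.0 / rep_rate_hz * 1e9
# Pulse energy is average power divided by repetition rate.
energy_pj = avg_power_w / rep_rate_hz * 1e12

print(f"pulse spacing: {spacing_ns:.1f} ns")   # ~96.4 ns
print(f"pulse energy:  {energy_pj:.1f} pJ")    # ~31.8 pJ
```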
Exploration of PET and MRI radiomic features for decoding breast cancer phenotypes and prognosis.
Radiomics is an emerging technology for imaging biomarker discovery and disease-specific personalized treatment management. This paper aims to determine the benefit of using multi-modality radiomics data from PET and MR images in characterizing breast cancer phenotype and prognosis. Eighty-four features were extracted from PET and MR images of 113 breast cancer patients. Unsupervised clustering based on PET and MRI radiomic features created three subgroups. These derived subgroups were statistically significantly associated with tumor grade (p = 2.0 × 10⁻⁶), tumor overall stage (p = 0.037), breast cancer subtypes (p = 0.0085), and disease recurrence status (p = 0.0053). The PET-derived first-order statistics and gray level co-occurrence matrix (GLCM) textural features were discriminative of breast cancer tumor grade, which was confirmed by the results of L2-regularized logistic regression (with repeated nested cross-validation) with an estimated area under the receiver operating characteristic curve (AUC) of 0.76 (95% confidence interval (CI) = [0.62, 0.83]). The results of ElasticNet logistic regression indicated that PET and MR radiomics distinguished recurrence-free survival, with a mean AUC of 0.75 (95% CI = [0.62, 0.88]) and 0.68 (95% CI = [0.58, 0.81]) for 1 and 2 years, respectively. The MRI-derived GLCM inverse difference moment normalized (IDMN) and the PET-derived GLCM cluster prominence were among the key features in the predictive models for recurrence-free survival. In conclusion, radiomic features from PET and MR images could be helpful in deciphering breast cancer phenotypes and may have potential as imaging biomarkers for prediction of breast cancer recurrence-free survival.
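The key texture features here are GLCM-based, including inverse difference moment normalized (IDMN). As a minimal illustrative sketch (the study's actual gray-level quantization, offsets, and software are not specified in the abstract and are assumptions here), a symmetric GLCM and the IDMN feature can be computed with plain numpy:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized gray level co-occurrence matrix for one offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            m[i, j] += 1
            m[j, i] += 1          # symmetrize by counting both directions
    return m / m.sum()

def idmn(p):
    """GLCM inverse difference moment normalized: sum p(i,j)/(1+(i-j)^2/N^2)."""
    n = p.shape[0]
    i, j = np.indices(p.shape)
    return np.sum(p / (1.0 + ((i - j) ** 2) / n ** 2))

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(32, 32))   # toy image quantized to 8 gray levels
p = glcm(img, levels=8)
print(f"IDMN = {idmn(p):.3f}")
```

IDMN equals 1.0 for a perfectly homogeneous image (all co-occurrence mass on the diagonal) and decreases as gray-level transitions become more frequent, which is why it captures texture heterogeneity.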
Pain Level Detection From Facial Image Captured by Smartphone
Accurate, regular reporting of cancer patients' symptoms is of great concern to medical service providers for clinical decision making, such as adjusting medication. Since patients are limited in their ability to provide self-reported symptoms, we investigated how a mobile phone application can play a vital role in helping them. We used facial images captured by a smartphone to detect pain level accurately. To keep the system low-cost and user-friendly for cancer patients, the pain detection process relies on existing algorithms and infrastructure. To the best of our knowledge, this pain management solution is the first mobile-based study of its kind. The proposed algorithm classifies faces represented as weighted combinations of Eigenfaces, using angular distance and support vector machines (SVMs) for the classification system. In this study, longitudinal data were collected over six months in Bangladesh, and cross-sectional pain images were collected from three countries: Bangladesh, Nepal, and the United States. We found that a personalized model performs better for automatic pain assessment, and that the training set should contain varying levels of pain in each group: low, medium, and high.
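The face representation described above, a weighted combination of Eigenfaces compared by angular distance, can be sketched as follows. The toy data, dimensions, and helper names are illustrative assumptions, and the paper's SVM classification stage is omitted:

```python
import numpy as np

def eigenfaces(X, k):
    """Top-k eigenfaces of an (n_faces, n_pixels) data matrix via SVD."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]            # each row of Vt[:k] is one eigenface

def project(face, mean, V):
    """Weights representing a face as a combination of eigenfaces."""
    return V @ (face - mean)

def angular_distance(a, b):
    """Angle (radians) between two eigenface weight vectors."""
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 64))      # 20 toy "faces" of 64 pixels each
mean, V = eigenfaces(X, k=5)
w0, w1 = project(X[0], mean, V), project(X[1], mean, V)
print(f"angular distance: {angular_distance(w0, w1):.3f} rad")
```

A nearest-neighbor rule on this angular distance, or an SVM trained on the weight vectors as in the study, then assigns the pain-level label.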
Image Based Biomarkers from Magnetic Resonance Modalities: Blending Multiple Modalities, Dimensions and Scales.
The successful analysis and processing of medical imaging data is multidisciplinary work that requires applying and combining knowledge from diverse fields, such as medical engineering, medicine, computer science, and pattern classification. Imaging biomarkers are biological features detectable by imaging modalities, and their use offers the prospect of more efficient clinical studies and improvements in both diagnosis and therapy assessment. The use of Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) in diagnosis and therapy has been extensively validated; nevertheless, the issue of appropriate or optimal processing of the data to extract relevant biomarkers that highlight differences between heterogeneous tissues remains open. Together with DCE-MRI, the data extracted from Diffusion MRI (DWI-MR and DTI-MR) represent a promising and complementary tool. This project initially proposes the exploration of diverse techniques and methodologies for tissue characterization, following an analysis and classification of voxel-level time-intensity curves from DCE-MRI data, mainly through dissimilarity-based representations and models. We will explore metrics and representations to correlate the multidimensional data acquired through diverse imaging modalities, a line of work that starts with an appropriate elastic registration methodology between DCE-MRI and DWI-MR on the breast and its corresponding validation.
It has been shown that combining multi-modal MRI images improves the discrimination of diseased tissue. However, the fusion of dissimilar imaging data for classification and segmentation purposes is not a trivial task: there are inherent differences in information domains, dimensionality, and scales. This work also proposes a multi-view consensus clustering methodology for integrating multi-modal MR images into a unified segmentation of tumoral lesions for heterogeneity assessment. Using a variety of metrics and distance functions, this multi-view imaging approach computes multiple vectorial dissimilarity spaces, one for each MRI modality, and uses the concepts behind cluster ensembles to combine a set of base unsupervised segmentations into a unified partition of the voxel-based data. The methodology is specifically designed for combining DCE-MRI and DTI-MR, for which a manifold learning step is implemented to account for the geometric constraints of the high-dimensional diffusion information.
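The cluster-ensemble combination step described above can be illustrated with evidence accumulation over a co-association matrix, one standard cluster-ensemble technique; the abstract does not specify the exact consensus function, so this minimal sketch is an assumption:

```python
import numpy as np

def coassociation(partitions):
    """Fraction of base partitions in which each pair of voxels co-clusters."""
    n = len(partitions[0])
    C = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(partitions)

def consensus(partitions, threshold=0.5):
    """Greedy consensus: group voxels that co-cluster in > threshold of partitions."""
    C = coassociation(partitions)
    n = C.shape[0]
    labels = -np.ones(n, dtype=int)
    cur = 0
    for i in range(n):
        if labels[i] == -1:
            members = np.where((C[i] > threshold) & (labels == -1))[0]
            labels[members] = cur
            cur += 1
    return labels

# Three base segmentations of 6 voxels (e.g. one per MRI modality):
base = [[0, 0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1, 1],
        [0, 0, 0, 0, 1, 1]]
print(consensus(base))   # → [0 0 0 1 1 1]
```

Because base labels only enter through the "same cluster or not" relation, this combination is indifferent to label permutations across modalities, which is what makes cluster ensembles suitable for fusing segmentations from heterogeneous dissimilarity spaces.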