Replication and Refinement of an Algorithm for Automated Drusen Segmentation on Optical Coherence Tomography
Here, we investigate the extent to which re-implementing a previously published algorithm for OCT-based drusen quantification permits replicating the reported accuracy on an independent dataset, and we refine that algorithm to increase its accuracy. Following a systematic literature search, an algorithm was selected based on its reported excellent results, and several steps were added to improve its accuracy. The replicated and refined algorithms were evaluated on an independent dataset with the same metrics as in the original publication. The accuracy of the refined algorithm (overlap ratio 36–52%) was significantly greater than that of the replicated one (overlap ratio 25–39%). In particular, the refinement improved the separation of the retinal pigment epithelium and the ellipsoid zone. However, accuracy was still lower than previously reported on different data (overlap ratio 67–76%). This is the first replication study of an algorithm for OCT image analysis. Its results indicate that current standards for algorithm validation do not provide a reliable estimate of algorithm performance on images that differ with respect to patient selection and image quality. To contribute to improved reproducibility in this field, we publish both our replication and the refinement, as well as an exemplary dataset.
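The overlap ratios above compare predicted and reference drusen masks. A minimal sketch, assuming the ratio is defined as intersection over union (the original publication's exact definition may differ):

```python
import numpy as np

def overlap_ratio(pred, truth):
    """Spatial overlap between two binary segmentation masks,
    computed here as intersection over union (Jaccard index)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Two toy 1-D "masks": 2 voxels overlap out of 4 marked in total.
a = np.array([1, 1, 1, 0, 0])
b = np.array([0, 1, 1, 1, 0])
print(overlap_ratio(a, b))  # → 0.5
```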
Upper airways segmentation using principal curvatures
This dissertation proposes a new approach to segment the upper airways. This proposal allows
the extraction of curvilinear structures based on their principal curvatures, in both 2D and 3D images. Among the main novelties is a new stopping criterion for the propagation of the contrast-enhancement algorithm (a multiscale top-hat morphological operator). The same stopping criterion is also used to terminate the anisotropic diffusion algorithms. In addition, a new criterion, based on those proposed by Steger, Deng et al., and Armande et al., is proposed to select the principal curvatures that make up the curvilinear structures. Furthermore, a new non-maximum suppression algorithm is proposed that reduces discontinuities at the borders of the curvilinear structures. To extract the edges of the curvilinear structures, a linking algorithm is used that includes a new distance criterion to reduce the appearance of gaps in the final structure. Finally, based on the obtained results, a morphological algorithm is used to close the remaining gaps and a region-growing algorithm is applied to obtain the final segmentation of the upper airways.
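The curvilinear-structure selection step relies on the principal curvatures, i.e. the eigenvalues of the image Hessian at a given scale. A minimal 2D sketch (the dissertation's actual selection criteria, stopping rules, and 3D handling are not reproduced here; `sigma` is an assumed smoothing scale):

```python
import numpy as np
from scipy import ndimage

def principal_curvatures_2d(image, sigma=1.0):
    """Eigenvalues of the scale-space Hessian at every pixel.

    The Hessian is built from Gaussian-derivative filters; the two
    eigenvalues are the principal curvatures of the intensity surface.
    """
    Ixx = ndimage.gaussian_filter(image, sigma, order=(2, 0))
    Iyy = ndimage.gaussian_filter(image, sigma, order=(0, 2))
    Ixy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 Hessian.
    trace = Ixx + Iyy
    disc = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    k1 = trace / 2.0 + disc
    k2 = trace / 2.0 - disc
    return k1, k2
```

For a bright line on a dark background, the eigenvalue of largest magnitude is strongly negative across the line and near zero along it, which is the property Steger-style selection criteria exploit.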
Facilitating sensor interoperability and incorporating quality in fingerprint matching systems
This thesis addresses the issues of sensor interoperability and quality in the context of fingerprints and makes a three-fold contribution. The first contribution is a method to facilitate fingerprint sensor interoperability, which involves the comparison of fingerprint images originating from multiple sensors. The proposed technique models the relationship between images acquired by two different sensors using a Thin Plate Spline (TPS) function. Such a calibration model is observed to enhance inter-sensor matching performance on the MSU dataset containing images from optical and capacitive sensors. Experiments indicate that the proposed calibration scheme improves the inter-sensor Genuine Accept Rate (GAR) by 35% to 40% at a False Accept Rate (FAR) of 0.01%. The second contribution is a technique to incorporate local image quality information in the fingerprint matching process. Experiments on the FVC 2002 and 2004 databases suggest the potential of this scheme to improve the matching performance of a generic fingerprint recognition system. The final contribution of this thesis is a method for classifying fingerprint images into three categories: good, dry, and smudged. Such a categorization would assist in invoking different image processing or matching schemes based on the nature of the input fingerprint image. A classification rate of 97.45% is obtained on a subset of the FVC 2004 DB1 database.
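The TPS calibration idea can be sketched with SciPy's `RBFInterpolator`, whose `thin_plate_spline` kernel fits a thin-plate spline through corresponding control points. The point sets and simulated distortion below are made up for illustration; the thesis' actual model and training data differ:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical corresponding control points (e.g. matched minutiae)
# from the same finger imaged by two different sensors.
src = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5]], float)
dst = src * 1.1 + np.array([0.05, -0.02])   # simulated sensor distortion

# Thin-plate-spline calibration: maps sensor-A coordinates into
# sensor-B space (exact interpolation at the control points).
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

query = np.array([[0.25, 0.75]])
print(tps(query))  # ≈ [[0.325 0.805]]
```

Because the simulated distortion here is affine and the thin-plate-spline kernel includes a degree-1 polynomial term, the fitted mapping reproduces it exactly; real inter-sensor distortions are non-rigid, which is what the spline's bending term absorbs.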
Biomimetic Design for Efficient Robotic Performance in Dynamic Aquatic Environments - Survey
This manuscript reviews the published literature on edge detection. It first provides theoretical background, then surveys a wide range of edge-detection methods in different categories. The review also studies the relationships between the categories and evaluates them with respect to application, performance, and implementation. Structurally, edge-detection methods combine image smoothing and image differentiation with a post-processing stage for edge labelling. The smoothing stage involves filters that reduce noise, regularize the numerical computation, and provide a parametric representation of the image that works as a mathematical microscope, analysing it at different scales and increasing the accuracy and reliability of edge detection. The differentiation stage provides the intensity-transition information needed to represent the position, strength, and orientation of edges. Edge labelling calls for post-processing to suppress false edges, link dispersed ones, and produce a uniform contour of objects.
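The smoothing/differentiation/labelling decomposition described above can be sketched as follows; a single global threshold stands in for the full labelling stage, and `sigma` and `thresh` are arbitrary choices:

```python
import numpy as np
from scipy import ndimage

def detect_edges(image, sigma=1.4, thresh=0.1):
    """Minimal smoothing + differentiation + labelling pipeline.

    1. Gaussian smoothing regularizes differentiation at scale `sigma`.
    2. First derivatives give gradient magnitude (edge strength)
       and orientation.
    3. A relative threshold stands in for the post-processing a real
       detector (e.g. Canny) refines with non-maximum suppression
       and hysteresis.
    """
    gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))
    gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    edges = magnitude > thresh * magnitude.max()
    return edges, orientation
```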
Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate the study of vascular development in the neonatal period, a set of image
analysis algorithms has been developed to automatically extract and model cerebral
vessel trees. The whole process consists of cerebral vessel tracking from
automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical Time-of-
Flight (TOF) MR angiographic datasets.
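Tracking from automatically placed seed points can be illustrated with a deliberately simple intensity-based region growing; the dissertation's actual vessel-tracking algorithm is more sophisticated, so treat the threshold and 6-connectivity below as assumptions:

```python
import numpy as np
from collections import deque

def grow_from_seed(volume, seed, thresh):
    """Breadth-first region growing from one seed voxel, collecting
    6-connected voxels whose intensity exceeds `thresh` — a crude
    stand-in for tracking a bright vessel in a TOF volume."""
    visited = np.zeros(volume.shape, bool)
    queue = deque([seed])
    visited[seed] = True
    region = []
    while queue:
        z, y, x = queue.popleft()
        region.append((z, y, x))
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, volume.shape)) \
                    and not visited[n] and volume[n] > thresh:
                visited[n] = True
                queue.append(n)
    return region
```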
To facilitate the study of the neonatal cortex, a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes pixels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is investigated by performing a detailed
landmark study.
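The EM core of such a segmentation can be sketched as a plain two-class Gaussian-mixture fit on voxel intensities; the dissertation's method additionally corrects mislabelled partial-volume voxels, which this sketch omits:

```python
import numpy as np

def em_two_class(x, iters=50):
    """Two-class Gaussian-mixture EM on a 1-D array of intensities.

    Returns a hard label per voxel and the two class means.
    Initialization from the intensity extremes is an arbitrary choice.
    """
    mu = np.array([x.min(), x.max()], float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each class per voxel.
        lik = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
              / np.sqrt(2 * np.pi * var)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances, and mixing weights.
        n = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        pi = n / len(x)
    return resp.argmax(axis=1), mu
```

In the reversed-contrast neonatal setting, it is exactly the voxels mixing grey and white matter that such a plain mixture mislabels, motivating the explicit partial-volume correction.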
To facilitate study of cortical development, a cortical surface registration algorithm
for aligning the cortical surface is developed. The method first inflates extracted
cortical surfaces and then performs a non-rigid surface registration using free-form
deformations (FFDs) to remove residual misalignment. Validation experiments using
data labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.
Anisotropy Across Fields and Scales
This open access book focuses on processing, modeling, and visualization of anisotropy information, which are often addressed by employing sophisticated mathematical constructs such as tensors and other higher-order descriptors. It also discusses adaptations of such constructs to problems encountered in seemingly dissimilar areas of medical imaging, physical sciences, and engineering. Featuring original research contributions as well as insightful reviews for scientists interested in handling anisotropy information, it covers topics such as pertinent geometric and algebraic properties of tensors and tensor fields, challenges faced in processing and visualizing different types of data, statistical techniques for data processing, and specific applications like mapping white-matter fiber tracts in the brain. The book helps readers grasp the current challenges in the field and provides information on the techniques devised to address them. Further, it facilitates the transfer of knowledge between different disciplines in order to advance the research frontiers in these areas. This multidisciplinary book presents, in part, the outcomes of the seventh in a series of Dagstuhl seminars devoted to visualization and processing of tensor fields and higher-order descriptors, which was held in Dagstuhl, Germany, on October 28–November 2, 2018
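As a concrete instance of a tensor-derived anisotropy descriptor, fractional anisotropy (FA) summarizes how far a diffusion tensor's eigenvalues deviate from isotropy, and is widely used in white-matter fiber mapping:

```python
import numpy as np

def fractional_anisotropy(tensor):
    """FA of a symmetric 3x3 diffusion tensor:
    sqrt(3/2) * ||lambda - mean|| / ||lambda||, in [0, 1]."""
    ev = np.linalg.eigvalsh(tensor)
    md = ev.mean()                      # mean diffusivity
    num = np.sqrt(((ev - md) ** 2).sum())
    den = np.sqrt((ev ** 2).sum())
    return np.sqrt(1.5) * num / den if den else 0.0

# An isotropic tensor has FA 0; a strongly elongated one approaches 1.
print(fractional_anisotropy(np.eye(3)))                   # → 0.0
print(fractional_anisotropy(np.diag([1.0, 0.01, 0.01])))  # close to 1
```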
Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics
This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, to propose regions of interest where to find objects, and recursive Bayesian filtering, to integrate observations over time. The proposal is evaluated on six virtual indoor environments, accounting for the detection of nine object classes over a total of ∼7k frames. Results show that our proposal improves the recall and the F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) when compared to a two-stage video object detection method used as baseline, at the cost of small time overheads (120 ms) and precision loss (0.92).
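The homography-based propagation step can be sketched as warping a detection's bounding box into the next frame. The homography below is a toy translation; in the paper it is derived from the known camera motion, and the warped box only proposes a region of interest for re-detection:

```python
import numpy as np

def propagate_box(box, H):
    """Warp an axis-aligned box (x1, y1, x2, y2) through a 3x3 planar
    homography H and re-fit an axis-aligned box around the result."""
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1, 1], [x2, y1, 1],
                        [x2, y2, 1], [x1, y2, 1]], float)
    warped = corners @ H.T
    warped = warped[:, :2] / warped[:, 2:3]   # back from homogeneous coords
    lo, hi = warped.min(axis=0), warped.max(axis=0)
    return float(lo[0]), float(lo[1]), float(hi[0]), float(hi[1])

# A pure 2-pixel translation expressed as a homography.
H = np.array([[1, 0, 2], [0, 1, 2], [0, 0, 1]], float)
print(propagate_box((10, 10, 20, 20), H))  # → (12.0, 12.0, 22.0, 22.0)
```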