Segmentation of pelvic structures from preoperative images for surgical planning and guidance
Prostate cancer is one of the most frequently diagnosed malignancies globally and the second leading cause of cancer-related mortality in males in the developed world. In recent decades, many techniques have been proposed for prostate cancer diagnosis and treatment. With the development of imaging technologies such as CT and MRI, image-guided procedures have become increasingly important as a means to improve clinical outcomes. Analysis of the preoperative images and construction of 3D models prior to treatment would help doctors to better localize and visualize the structures of interest, plan the procedure, diagnose disease and guide the surgery or therapy. This requires efficient and robust medical image analysis and segmentation technologies to be developed.
The thesis mainly focuses on the development of segmentation techniques in pelvic MRI for image-guided robotic-assisted laparoscopic radical prostatectomy and external-beam radiation therapy. A fully automated multi-atlas framework is proposed for bony pelvis segmentation in MRI, guided by an MRI AE-SDM. With the guidance of the AE-SDM, a multi-atlas segmentation algorithm is used to delineate the bony pelvis in a new MRI scan when no CT is available. The proposed technique outperforms state-of-the-art algorithms for MRI bony pelvis segmentation. With the SDM of the pelvis and its segmented surface, an accurate 3D pelvimetry system is designed and implemented to measure a comprehensive set of pelvic geometric parameters, in order to examine the relationship between these parameters and the difficulty of robotic-assisted laparoscopic radical prostatectomy. This system can be used in both manual and automated modes through a user-friendly interface.
A fully automated and robust multi-atlas based segmentation method has also been developed to delineate the prostate in diagnostic MR scans, which exhibit large variation in both the intensity and the shape of the prostate. Two image analysis techniques are proposed: patch-based label fusion with local appearance-specific atlases, and multi-atlas propagation via a manifold graph over a database of both labeled and unlabeled images for when only limited labeled atlases are available. The proposed techniques achieve more robust and accurate segmentation results than other multi-atlas based methods.
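The core idea of patch-based label fusion can be illustrated with a minimal 2D sketch. This is illustrative only: the function name, the single similarity bandwidth `h`, and the fixed patch size are placeholders, and the thesis's local appearance-specific atlas selection is omitted.

```python
import numpy as np

def patch_label_fusion(target, atlases, labels, patch=3, h=0.5):
    """Non-local patch-based label fusion (illustrative sketch).

    target  : 2D target image (H, W)
    atlases : list of registered 2D atlas images, each (H, W)
    labels  : list of corresponding binary label maps, each (H, W)

    Each target pixel receives a label vote from every atlas, weighted
    by how similar the local patches are; the votes are then thresholded.
    """
    r = patch // 2
    H, W = target.shape
    fused = np.zeros((H, W))
    pad = lambda img: np.pad(img, r, mode="edge")
    tp = pad(target)
    aps = [pad(a) for a in atlases]
    lps = [pad(l) for l in labels]
    for y in range(H):
        for x in range(W):
            tpatch = tp[y:y + patch, x:x + patch]
            num = den = 0.0
            for ap, lp in zip(aps, lps):
                apatch = ap[y:y + patch, x:x + patch]
                # Gaussian weight on the patch intensity difference.
                w = np.exp(-np.sum((tpatch - apatch) ** 2) / (h ** 2))
                num += w * lp[y + r, x + r]
                den += w
            fused[y, x] = num / den
    return fused >= 0.5
```

In practice such fusion runs over 3D volumes with search windows around each voxel; the nested loops here are kept deliberately simple.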
The seminal vesicles are also a structure of interest for therapy planning, particularly for external-beam radiation therapy. As existing methods fail at the onerous task of segmenting the seminal vesicles, a multi-atlas learning framework via random decision forests with graph cuts refinement is further proposed to solve this difficult problem. Motivated by the performance of this technique, I further extend the multi-atlas learning to segment the prostate fully automatically using multispectral (T1- and T2-weighted) MR images via hybrid random forest (RF) classifiers and a multi-image graph cuts technique. The proposed method compares favorably to the previously proposed multi-atlas based prostate segmentation.
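A rough sketch of the voxel-classification idea follows, assuming a stock scikit-learn forest stands in for the thesis's hybrid classifiers; the atlas machinery and the graph-cut refinement are omitted, and all names and parameters here are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_voxel_forest(feature_maps, label_map, n_trees=50):
    """Train a forest that labels each voxel from its multispectral
    intensities (e.g. T1- and T2-weighted values at that voxel)."""
    X = np.stack([f.ravel() for f in feature_maps], axis=1)  # voxels x channels
    y = label_map.ravel()
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    return clf.fit(X, y)

def predict_label_map(clf, feature_maps):
    """Apply the trained forest voxel-wise and reshape to image space."""
    X = np.stack([f.ravel() for f in feature_maps], axis=1)
    return clf.predict(X).reshape(feature_maps[0].shape)
```

Real pipelines would add spatial and context features per voxel rather than raw intensities alone, and regularize the per-voxel predictions (e.g. with graph cuts, as the thesis does).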
The work in this thesis covers different techniques for pelvic image segmentation in MRI. These techniques have been continually developed and refined, and their application to different specific problems shows ever more promising results.
QUIS-CAMPI: Biometric Recognition in Surveillance Scenarios
Concerns about the security of individuals have justified the increasing number of surveillance
cameras deployed in both private and public spaces. However, contrary to popular belief,
these devices are in most cases used solely for recording, instead of feeding intelligent analysis
processes capable of extracting information about the observed individuals. Thus, even though
video surveillance has already proved to be essential for solving multiple crimes, obtaining relevant
details about the subjects that took part in a crime depends on the manual inspection
of recordings. As such, the current goal of the research community is the development of
automated surveillance systems capable of monitoring and identifying subjects in surveillance
scenarios. Accordingly, the main goal of this thesis is to improve the performance of biometric
recognition algorithms in data acquired from surveillance scenarios. In particular, we aim at
designing a visual surveillance system capable of acquiring biometric data at a distance (e.g.,
face, iris or gait) without requiring human intervention in the process, as well as devising biometric
recognition methods robust to the degradation factors resulting from the unconstrained
acquisition process.
Regarding the first goal, the analysis of the data acquired by typical surveillance systems
shows that large acquisition distances significantly decrease the resolution of biometric samples,
and thus their discriminability is not sufficient for recognition purposes. In the literature,
diverse works point out Pan Tilt Zoom (PTZ) cameras as the most practical way for acquiring
high-resolution imagery at a distance, particularly when using a master-slave configuration. In
the master-slave configuration, the video acquired by a typical surveillance camera is analyzed
for obtaining regions of interest (e.g., car, person) and these regions are subsequently imaged
at high-resolution by the PTZ camera. Several methods have already shown that this configuration
can be used for acquiring biometric data at a distance. Nevertheless, these methods
failed at providing effective solutions to the typical challenges of this strategy, restraining its
use in surveillance scenarios. Accordingly, this thesis proposes two methods to support the development
of a biometric data acquisition system based on the cooperation of a PTZ camera
with a typical surveillance camera. The first proposal is a camera calibration method capable
of accurately mapping the coordinates of the master camera to the pan/tilt angles of the PTZ
camera. The second proposal is a camera scheduling method for determining - in real-time -
the sequence of acquisitions that maximizes the number of different targets obtained, while
minimizing the cumulative transition time. In order to achieve the first goal of this thesis,
both methods were combined with state-of-the-art approaches of the human monitoring field
to develop a fully automated surveillance system, designated the QUIS-CAMPI system, capable of
acquiring biometric data at a distance without human cooperation.
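The master-to-PTZ mapping at the heart of the first proposal can be sketched as a simple polynomial regression from master-camera pixel coordinates to pan/tilt angles, fitted on calibration correspondences. This is a simplification, not the thesis's calibration method; the function name and the `degree` parameter are assumptions.

```python
import numpy as np

def fit_pixel_to_pantilt(pixels, angles, degree=2):
    """Fit a (u, v) -> (pan, tilt) mapping from correspondences.

    pixels : (N, 2) master-camera pixel coordinates
    angles : (N, 2) PTZ pan/tilt angles observed for those pixels
    Returns a predictor mapping new (M, 2) pixel arrays to (M, 2) angles.
    """
    # Monomials u^i * v^j with total degree <= `degree`.
    exps = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]

    def design(p):
        u, v = p[:, 0], p[:, 1]
        return np.stack([u ** i * v ** j for i, j in exps], axis=1)

    # Least-squares fit of the polynomial coefficients, per angle axis.
    coeffs, *_ = np.linalg.lstsq(design(pixels), angles, rcond=None)
    return lambda p: design(p) @ coeffs
```

A low-degree polynomial suffices when the two cameras are close together; larger baselines require modeling the target's depth, which is one of the challenges such calibration methods must address.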
The QUIS-CAMPI system is the basis for pursuing the second goal of this thesis. The analysis
of the performance of state-of-the-art biometric recognition approaches shows that these
approaches attain almost ideal recognition rates on unconstrained data (e.g., recognition rates
above 99% on the LFW dataset). However, this performance is incongruous with the recognition
rates observed in surveillance scenarios. Taking into
account the drawbacks of current biometric datasets, this thesis introduces a novel dataset comprising
biometric samples (face images and gait videos) acquired by the QUIS-CAMPI system at a
distance ranging from 5 to 40 meters and without human intervention in the acquisition process.
This set makes it possible to objectively assess the performance of state-of-the-art biometric recognition
methods in data that truly encompass the covariates of surveillance scenarios. As such, this set
was exploited for promoting the first international challenge on biometric recognition in the wild. This thesis describes the evaluation protocols adopted, along with the results obtained
by the nine methods specially designed for this competition. In addition, the data acquired by
the QUIS-CAMPI system were crucial for accomplishing the second goal of this thesis, i.e., the
development of methods robust to the covariates of surveillance scenarios. The first proposal
is a method for detecting corrupted features in biometric signatures through a redundancy
analysis algorithm. The second proposal is a caricature-based face recognition approach
capable of enhancing the recognition performance by automatically generating a caricature
from a 2D photo. The experimental evaluation of these methods shows that both approaches
contribute to improving the recognition performance in unconstrained data.
Two and three dimensional segmentation of multimodal imagery
The role of segmentation in the realms of image understanding/analysis, computer vision, pattern recognition, remote sensing and medical imaging in recent years has been significantly augmented due to accelerated scientific advances made in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this effect, by using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels that contain higher gradient densities are included by the dynamic generation of segments as the algorithm progresses to generate an initial region map. Subsequently, texture modeling is performed and the obtained gradient, texture and intensity information along with the aforementioned initial partition map are used to perform a multivariate refinement procedure, to fuse groups with similar characteristics yielding the final output segmentation. Experimental results obtained in comparison to published/state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery demonstrate the advantages of the proposed method. Furthermore, for the purpose of achieving improved computational efficiency, we propose an extension of the aforementioned methodology in a multi-resolution framework, demonstrated on color images.
Finally, this research also encompasses a 3-D extension of the aforementioned algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
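The first stage described above — grouping edge-free pixels into an initial partition while leaving high-gradient pixels for later assignment — can be sketched for a single-channel image as follows. The framework actually uses vector gradients across spectral bands; the function name and the threshold value are assumptions.

```python
import numpy as np
from scipy import ndimage

def initial_region_map(image, grad_thresh=0.1):
    """Build an initial region map by labeling connected groups of
    low-gradient pixels; high-gradient (edge) pixels keep label 0 and
    are resolved by later stages of the pipeline.

    Returns (regions, n): a labeled map and the number of regions found.
    """
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)            # gradient magnitude per pixel
    flat = grad < grad_thresh          # "edge-free" pixels
    regions, n = ndimage.label(flat)   # connected-component labeling
    return regions, n
```

On multi-band data the scalar gradient would be replaced by a vector gradient (e.g. the largest singular value of the per-pixel Jacobian across bands), but the grouping step is the same.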
A robust framework for medical image segmentation through adaptable class-specific representation
Medical image segmentation is an increasingly important component in virtual pathology, diagnostic imaging and computer-assisted surgery. Better hardware for image acquisition and a variety of advanced visualisation methods have paved the way for the development of computer based tools for medical image analysis and interpretation. The routine use of medical imaging scans of multiple modalities has been growing over the last decades and data sets such as the Visible Human Project have introduced a new modality in the form of colour cryo section data. These developments have given rise to an increasing need for better automatic and semiautomatic segmentation methods. The work presented in this thesis concerns the development of a new framework for robust semi-automatic segmentation of medical imaging data of multiple modalities. Following the specification of a set of conceptual and technical requirements, the framework known as ACSR (Adaptable Class-Specific Representation) is developed in the first case for 2D colour cryo section
segmentation. This is achieved through the development of a novel algorithm for adaptable class-specific sampling of point neighbourhoods, known as the PGA (Path Growing Algorithm), combined with Learning Vector Quantization. The framework is extended to accommodate 3D volume segmentation of cryo section data and subsequently segmentation of single and multi-channel greyscale MRI data. For the latter the issues of inhomogeneity and noise are specifically addressed. Evaluation is based on comparison with previously published results on standard simulated and real data sets, using visual presentation, ground truth comparison and human observer experiments. ACSR provides the user with a simple and intuitive visual initialisation process followed by a fully automatic segmentation. Results on both cryo section and MRI data compare favourably to existing methods, demonstrating robustness both to common artefacts and multiple user initialisations. Further developments into specific clinical applications are discussed in the future work section.
Autonomous flight in unstructured and unknown indoor environments
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 119-126).
This thesis presents the design, implementation, and validation of a system that enables a micro air vehicle (MAV) to autonomously explore and map unstructured and unknown indoor environments. Such a vehicle would be of considerable use in many real-world applications such as search and rescue, civil engineering inspection, and a host of military tasks where it is dangerous or difficult to send people. While mapping and exploration capabilities are common for ground vehicles today, air vehicles seeking to achieve these capabilities face unique challenges. While there has been recent progress toward sensing, control, and navigation suites for GPS-denied flight, there have been few demonstrations of stable, goal-directed flight in real environments. The main focus of this research is the development of real-time state estimation techniques that allow our quadrotor helicopter to fly autonomously in indoor, GPS-denied environments. Accomplishing this feat required the development of a large integrated system that brought together many components into a cohesive package. As such, the primary contribution is the development of the complete working system. I show experimental results that illustrate the MAV's ability to navigate accurately in unknown environments, and demonstrate that our algorithms enable the MAV to operate autonomously in a variety of indoor environments. By Abraham Galton Bachrach. S.M.
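As a hedged illustration of the state-estimation flavour involved, the sketch below runs a one-dimensional Kalman filter over noisy position measurements. It is vastly simplified relative to the onboard estimator the thesis describes; the static motion model and the noise values `q` and `r` are assumptions for the example.

```python
def kalman_1d(zs, q=1e-3, r=0.1):
    """Minimal 1D Kalman filter over a sequence of noisy measurements.

    zs : iterable of scalar measurements
    q  : process-noise variance (how much the state may drift per step)
    r  : measurement-noise variance
    Returns the filtered state estimate after each measurement.
    """
    x, p = 0.0, 1.0          # initial state estimate and variance
    out = []
    for z in zs:
        p += q               # predict: variance grows (static model)
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update toward the measurement
        p *= (1 - k)         # variance shrinks after the update
        out.append(x)
    return out
```

A real MAV estimator fuses IMU, laser/vision odometry, and altitude sensors through a multi-dimensional filter with a dynamic motion model; the predict/update structure is the same.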
An efficient graph-based approach for interactive mobile image and video segmentation.
Over the past few years, the processing of visual information on mobile devices has become more affordable due to advances in mobile technologies. Efficient and accurate segmentation of objects from an image or video enables many interesting multimedia applications. In this study, we address interactive image and video segmentation on mobile devices. We first propose a novel interaction methodology that yields better satisfaction in subjective user evaluation. Due to the small screens of mobile devices, error tolerance is also a crucial factor; hence, we also propose a novel user-stroke correction mechanism that handles most interaction errors. Moreover, in order to satisfy the computational efficiency requirements of mobile devices, we propose a novel spatially and temporally dynamic graph-cut method. The conducted experiments suggest that the proposed efficiency improvements yield a significant decrease in computation time. As an extension to video sequences, a video segmentation system is proposed that starts from an interaction on key-frames. As a novel approach, we redefine the video segmentation problem as the propagation, along the temporal domain, of Markov Random Field (MRF) energy obtained via an interactive image segmentation tool on some key-frames. MRF propagation is performed using a recently introduced bilateral filtering without any global texture or color model. A novel technique is also developed to dynamically solve graph cuts for varying, non-lattice graphs. In addition to efficiency, segmentation quality is tested both quantitatively and qualitatively; indeed, for many challenging examples, quite significant time efficiency is observed without loss of segmentation quality.
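The interactive graph-cut formulation underlying this line of work can be sketched on a toy scale: user strokes become hard terminal links, and neighbouring pixels are joined by edges whose capacity penalizes cutting between similar intensities. This is a generic illustration, not the thesis's dynamic graph-cut; `networkx` stands in for a fast max-flow solver, and `lam`/`sigma` are assumed parameters.

```python
import numpy as np
import networkx as nx

def interactive_graph_cut(image, fg_seeds, bg_seeds, lam=1.0, sigma=0.1):
    """Toy interactive binary segmentation via s-t min-cut.

    image    : 2D grayscale array
    fg_seeds : set of (y, x) pixels marked foreground by user strokes
    bg_seeds : set of (y, x) pixels marked background
    Returns a boolean foreground mask.
    """
    H, W = image.shape
    G = nx.DiGraph()
    S, T = "S", "T"
    INF = 1e9
    for y in range(H):
        for x in range(W):
            p = (y, x)
            # Terminal links: hard constraints from the user strokes.
            if p in fg_seeds:
                G.add_edge(S, p, capacity=INF)
            elif p in bg_seeds:
                G.add_edge(p, T, capacity=INF)
            # Pairwise links: cutting between similar pixels is expensive.
            for q in ((y + 1, x), (y, x + 1)):
                if q[0] < H and q[1] < W:
                    w = lam * np.exp(-(image[p] - image[q]) ** 2
                                     / (2 * sigma ** 2))
                    G.add_edge(p, q, capacity=w)
                    G.add_edge(q, p, capacity=w)
    _, (src_side, _) = nx.minimum_cut(G, S, T)
    mask = np.zeros((H, W), bool)
    for node in src_side:
        if node != S:
            mask[node] = True
    return mask
```

The dynamic variant proposed in the thesis avoids rebuilding and resolving this graph from scratch when strokes or frames change, which is where the reported efficiency gains come from.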
Statistical Medial Model for Cardiac Segmentation and Morphometry
In biomedical image analysis, shape information can be utilized for many purposes. For example, irregular shape features can help identify diseases; shape features can help match different instances of anatomical structures for statistical comparison; and prior knowledge of the mean and possible variation of an anatomical structure's shape can help segment a new example of this structure in noisy, low-contrast images. A good shape representation helps to improve the performance of the above techniques. The overall goal of the proposed research is to develop and evaluate methods for representing shapes of anatomical structures. The medial model is a shape representation method that models a 3D object by explicitly defining its skeleton (medial axis) and deriving the object's boundary via inverse-skeletonization. This model represents shape compactly, and naturally expresses descriptive global shape features like thickening, bending, and elongation. However, its application in biomedical image analysis has been limited, and it has not yet been applied to the heart, which has a complex shape. In this thesis, I focus on developing efficient methods to construct the medial model, and apply it to solve biomedical image analysis problems. I propose a new 3D medial model which can be efficiently applied to complex shapes. The proposed medial model closely approximates the medial geometry along medial edge curves and medial branching curves by soft-penalty optimization and local correction. I further develop a scheme to perform model-based segmentation using a statistical medial model which incorporates prior shape and appearance information. The proposed medial models are applied to a series of image analysis tasks. The 2D medial model is applied to the corpus callosum, which results in an improved alignment of the patterns of commissural connectivity compared to a volumetric registration method.
The 3D medial model is used to describe the myocardium of the left and right ventricles, which provides detailed thickness maps characterizing different disease states. The model-based myocardium segmentation scheme is tested in a heterogeneous adult MRI dataset. Our segmentation experiments demonstrate that the statistical medial model can accurately segment the ventricular myocardium and provide useful parameters to characterize heart function.
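The "inverse-skeletonization" step has a simple discrete 2D analogue: the object is reconstructed as the union of disks centred on medial points, each carrying its stored radius. The continuous 3D construction in the thesis is considerably more sophisticated; this sketch and its names are purely illustrative.

```python
import numpy as np

def inverse_skeletonize(medial_points, radii, shape):
    """Reconstruct a binary 2D object from medial-axis samples.

    medial_points : iterable of (y, x) centres on the medial axis
    radii         : matching iterable of disk radii
    shape         : (H, W) of the output mask
    The object is the union of the disks - the discrete analogue of
    deriving a boundary from a skeleton plus a radius function.
    """
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    obj = np.zeros(shape, bool)
    for (cy, cx), r in zip(medial_points, radii):
        obj |= (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return obj
```

Thickness maps like those mentioned above fall out of this representation directly: the radius function along the skeleton is (half) the local thickness.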
Incorporating Boltzmann Machine Priors for Semantic Labeling in Images and Videos
Semantic labeling is the task of assigning category labels to regions in an image. For example, a scene may consist of regions corresponding to categories such as sky, water, and ground, or parts of a face such as eyes, nose, and mouth. Semantic labeling is an important mid-level vision task for grouping and organizing image regions into coherent parts. Labeling these regions allows us to better understand the scene itself as well as properties of the objects in the scene, such as their parts, location, and interaction within the scene. Typical approaches for this task include the conditional random field (CRF), which is well-suited to modeling local interactions among adjacent image regions. However, the CRF is limited in dealing with complex, global (long-range) interactions between regions in an image, and between frames in a video. This thesis presents approaches to modeling long-range interactions within images and videos, for use in semantic labeling.
In order to model these long-range interactions, we incorporate priors based on the restricted Boltzmann machine (RBM). The RBM is a generative model which has demonstrated the ability to learn the shape of an object, and the conditional RBM (CRBM) is a temporal extension which can learn the motion of an object. Although the CRF is a good baseline labeler, we show how the RBM and CRBM can be added to the architecture to model both the global object shape within an image and the temporal dependencies of the object from previous frames in a video. We demonstrate the labeling performance of our models for the parts of complex face images from the Labeled Faces in the Wild database (for images) and the YouTube Faces Database (for videos). Our hybrid models produce results that are both quantitatively and qualitatively better than the baseline CRF alone for both images and videos.
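The RBM building block can be sketched minimally as a binary RBM trained with one step of contrastive divergence (CD-1). This is a generic textbook sketch, not the thesis's shape prior; layer sizes, the learning rate, and the seeded generator are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Minimal binary RBM with a CD-1 training step (illustrative)."""

    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible bias
        self.c = np.zeros(n_hid)   # hidden bias
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1(self, v0):
        """One contrastive-divergence step on a batch of binary rows.

        Returns the reconstructed visible probabilities, so callers can
        monitor how well the model captures the training shapes.
        """
        h0 = self._sigmoid(v0 @ self.W + self.c)                    # up
        h_sample = (h0 > rng.random(h0.shape)).astype(float)        # sample
        v1 = self._sigmoid(h_sample @ self.W.T + self.b)            # down
        h1 = self._sigmoid(v1 @ self.W + self.c)                    # up again
        # Positive-phase minus negative-phase gradient estimates.
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)
        return v1
```

Used as a prior, the learned RBM contributes a global shape-consistency term to the labeling energy that the local CRF potentials cannot express on their own.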
Depth data improves non-melanoma skin lesion segmentation and diagnosis
Examining surface shape appearance by touching and observing a lesion from different
points of view is a part of the clinical process for skin lesion diagnosis. Motivated
by this, we hypothesise that surface shape embodies important information that serves
to represent lesion identity and status. A new sensor, Dense Stereo Imaging System
(DSIS) allows us to capture 1:1 aligned 3D surface data and 2D colour images simultaneously.
This thesis investigates whether the extra surface shape appearance information,
represented by features derived from the captured 3D data benefits skin lesion
analysis, particularly on the tasks of segmentation and classification. In order to validate
the contribution of 3D data to lesion identification, we compare the segmentations
resulting from various combinations of images cues (e.g., colour, depth and texture)
embedded in a region-based level set segmentation method. The experiments indicate
that depth is complementary to colour. Adding the 3D information reduces the error
rate from 7.8% to 6.6%. For the purpose of evaluating the segmentation results, we
propose a novel ground truth estimation approach that incorporates a prior pattern analysis
of a set of manual segmentations. The experiments on both synthetic and real data
show that this method performs favourably compared to the state of the art approach
STAPLE [1] on ground truth estimation. Finally, we explore the usefulness of 3D information
to non-melanoma lesion diagnosis by tests on both human and computer
based classifications of five lesion types. The results provide evidence for the benefit
of the additional 3D information, i.e., adding the 3D-based features gives a significantly
improved classification rate of 80.7% compared to only using colour features
(75.3%). The three main contributions of the thesis are improved methods for lesion
segmentation, non-melanoma lesion classification and lesion boundary ground-truth
estimation.
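As context for the ground-truth estimation contribution, the simplest fusion baseline — a per-pixel majority vote over the manual segmentations — fits in a few lines; STAPLE and the prior-pattern method above can be seen as weighted refinements of this idea. The function name is an assumption for the example.

```python
import numpy as np

def majority_vote_gt(segmentations):
    """Baseline ground-truth estimate from several manual segmentations.

    segmentations : list of binary masks of identical shape
    Returns the per-pixel majority-vote mask (ties count as foreground).
    """
    stack = np.stack([s.astype(int) for s in segmentations])
    return stack.mean(axis=0) >= 0.5
```

Weighted schemes improve on this by estimating each rater's sensitivity and specificity (STAPLE) or, as in the thesis, by analysing recurring patterns across the manual segmentations before fusing them.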