387 research outputs found
Surface fluid registration of conformal representation: Application to detect disease burden and genetic influence on hippocampus
abstract: In this paper, we develop a new automated surface registration system based on surface conformal parameterization by holomorphic 1-forms, inverse consistent surface fluid registration, and multivariate tensor-based morphometry (mTBM). First, we conformally map a surface onto a planar rectangle space with holomorphic 1-forms. Second, we compute the surface conformal representation by combining its local conformal factor and mean curvature, and linearly scale the dynamic range of the conformal representation to form the feature image of the surface. Third, we align the feature image with a chosen template image via the fluid image registration algorithm, which has been extended into curvilinear coordinates to adjust for the distortion introduced by surface parameterization. An inverse consistent image registration algorithm is also incorporated in the system to jointly estimate the forward and inverse transformations between the study and template images. This alignment induces a corresponding deformation on the surface. We tested the system on the Alzheimer's Disease Neuroimaging Initiative (ADNI) baseline dataset to study the effects of AD on the hippocampus. In our system, by modeling a hippocampus as a 3D parametric surface, we nonlinearly registered each surface with a selected template surface. We then used mTBM to analyze the morphometric differences between diagnostic groups. Experimental results show that the new system performs better than two publicly available subcortical surface registration tools: FIRST and SPHARM. We also analyzed the genetic influence of the Apolipoprotein E ε4 allele (ApoE4), which is considered the most prevalent risk factor for AD. Our work successfully detected statistically significant differences between ApoE4 carriers and non-carriers in both mild cognitive impairment (MCI) patients and healthy control subjects.
The results show evidence that the ApoE genotype may be associated with accelerated brain atrophy, so our work provides a new MRI analysis tool that may help presymptomatic AD research. NOTICE: this is the author's version of a work that was accepted for publication in NeuroImage. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in NeuroImage, 78, 111-134 [2013] http://dx.doi.org/10.1016/j.neuroimage.2013.04.01
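The feature-image construction in the second step (combining the local conformal factor and mean curvature, then linearly scaling the dynamic range) can be sketched as follows; the additive combination and the [0, 255] output range are assumptions for illustration, not necessarily the authors' exact choices:

```python
import numpy as np

def feature_image(conformal_factor, mean_curvature, out_range=(0.0, 255.0)):
    """Combine the per-point conformal factor and mean curvature sampled on
    the parameter grid into one feature image, then linearly rescale its
    dynamic range (a sketch of the 'conformal representation' above)."""
    feat = conformal_factor + mean_curvature          # assumed simple combination
    lo, hi = feat.min(), feat.max()
    if hi == lo:                                      # flat input: map to lower bound
        return np.full_like(feat, out_range[0])
    scaled = (feat - lo) / (hi - lo)                  # normalize to [0, 1]
    return out_range[0] + scaled * (out_range[1] - out_range[0])

# toy 2x2 parameter grid of conformal factors and mean curvatures
cf = np.array([[1.0, 1.2], [0.9, 1.1]])
H = np.array([[0.1, -0.2], [0.0, 0.3]])
img = feature_image(cf, H)
```

The resulting image can then be fed to any intensity-based registration algorithm, which is what makes the surface problem tractable as a 2D image problem.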
Unsupervised classification of 3D images and extension to segmentation exploiting color and depth information
Access to 3D images at a reasonable frame rate is now widespread, thanks to recent advances in low-cost depth sensors as well as efficient methods to compute 3D from 2D images. As a consequence, there is strong demand to enhance the capability of existing computer vision applications by incorporating 3D information. Indeed, numerous studies have demonstrated that the accuracy of different tasks increases when 3D information is included as an additional feature. However, for the task of indoor scene analysis and segmentation, several important issues remain, such as: (a) how can the 3D information itself be exploited? and (b) what is the best way to fuse color and 3D in an unsupervised manner? In this thesis, we address these issues and propose novel unsupervised methods for 3D image clustering and joint color and depth image segmentation. To this aim, we consider image normals as the prominent feature from the 3D image and cluster them with methods based on finite statistical mixture models. We adopt the Bregman Soft Clustering method to ensure computationally efficient clustering. Moreover, we exploit several probability distributions from directional statistics, such as the von Mises-Fisher distribution and the Watson distribution. By combining these, we propose novel model-based clustering methods. We empirically validate these methods using synthetic data and then demonstrate their application to 3D/depth image analysis. Afterward, we extend these methods to segment synchronized 3D and color images, also called RGB-D images. To this aim, we first propose a statistical image generation model for RGB-D images. Then, we propose a novel RGB-D segmentation method using joint color-spatial-axial clustering and a statistical planar region merging method. Results show that the proposed method is comparable with state-of-the-art methods and requires less computation time.
Moreover, it opens interesting perspectives for fusing color and geometry in an unsupervised manner. We believe that the methods proposed in this thesis are equally applicable and extendable to clustering other types of data, such as speech, gene expression data, etc. Moreover, they can be used for complex tasks such as joint image-speech data analysis.
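As an illustration of the model-based clustering of surface normals described above, here is a minimal von Mises-Fisher mixture with soft assignments; the fixed concentration parameter and the simple spread-out initialization are assumptions for brevity (the thesis's Bregman Soft Clustering formulation is more general and also estimates the concentration):

```python
import numpy as np

def vmf_soft_cluster(normals, k=2, kappa=10.0, iters=20):
    """Soft-cluster unit surface normals with a von Mises-Fisher mixture
    with fixed concentration kappa: a minimal sketch of model-based
    clustering on the unit sphere."""
    X = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    mu = X[np.linspace(0, len(X) - 1, k).astype(int)]  # spread-out init
    for _ in range(iters):
        logp = kappa * X @ mu.T              # log-responsibility up to a constant
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)    # E-step: soft assignments
        m = r.T @ X                          # M-step: weighted mean directions,
        mu = m / np.linalg.norm(m, axis=1, keepdims=True)  # renormalized to the sphere
    return r.argmax(axis=1), mu

# toy data: five normals pointing along +z (a floor), five along +x (a wall)
normals = np.vstack([np.tile([0.0, 0.0, 1.0], (5, 1)),
                     np.tile([1.0, 0.0, 0.0], (5, 1))])
labels, mu = vmf_soft_cluster(normals, k=2)
```

On real depth images the normals are noisy, which is exactly why a soft, distribution-based assignment is preferable to hard thresholding of normal directions.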
Ultrasound imaging system combined with multi-modality image analysis algorithms to monitor changes in anatomical structures
This dissertation concerns the development and validation of an ultrasound imaging system and novel image analysis algorithms applicable to multiple imaging modalities. The ultrasound imaging system includes a framework for 3D volume reconstruction of freehand ultrasound, a mechanism to register the 3D volumes across time and subjects as well as with other imaging modalities, and a playback mechanism to view image slices concurrently from different acquisitions that correspond to the same anatomical region. The novel image analysis algorithms include a noise reduction method that clusters pixels into homogeneous patches using a directed graph of edges between neighboring pixels; a segmentation method that creates a hierarchical graph structure using statistical analysis and a voting system to determine the similarity between homogeneous patches given their neighborhood; and, finally, a hybrid atlas-based registration method that makes use of intensity corrections induced at anatomical landmarks to regulate deformable registration. The combination of the ultrasound imaging system and the image analysis algorithms will provide the ability to monitor nerve regeneration in patients undergoing regenerative, repair or transplant strategies in a sequential, non-invasive manner, including visualization of registered real-time and pre-acquired data, thus enabling preventive and therapeutic strategies for nerve regeneration in Composite Tissue Allotransplantation (CTA). The registration algorithm is also applied to MR images of the brain to obtain reliable and efficient segmentation of the hippocampus, which is a prominent structure in the study of diseases of the elderly such as vascular dementia, Alzheimer's, and late-life depression. Experimental results on 2D and 3D images, including simulated and real images, with illustrations visualizing the intermediate outcomes and the final results, are presented.
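The graph-based noise reduction idea (clustering pixels into homogeneous patches, then smoothing within each patch) can be sketched with a simple union-find over neighboring pixels; the fixed intensity threshold and 4-neighborhood used here are simplifying assumptions, not the dissertation's actual statistical criteria:

```python
import numpy as np

def patch_denoise(img, tol=10.0):
    """Unite 4-neighbours whose intensities differ by at most `tol` into
    patches, then replace each pixel with its patch mean: a simplified
    sketch of graph-based homogeneous-patch noise reduction."""
    h, w = img.shape
    parent = np.arange(h * w)            # union-find forest over pixels

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w and abs(img[y, x] - img[y, x + 1]) <= tol:
                union(i, i + 1)          # merge with right neighbour
            if y + 1 < h and abs(img[y, x] - img[y + 1, x]) <= tol:
                union(i, i + w)          # merge with bottom neighbour

    roots = np.array([find(i) for i in range(h * w)])
    out = np.empty(h * w)
    for r in np.unique(roots):
        mask = roots == r
        out[mask] = img.reshape(-1)[mask].mean()  # patch-wise smoothing
    return out.reshape(h, w)

# toy image: three similar pixels and one outlier
noisy = np.array([[10.0, 12.0], [11.0, 100.0]])
clean = patch_denoise(noisy, tol=5.0)
```

Because averaging is confined to each patch, edges between dissimilar regions (here, the 100-valued pixel) are preserved, which is the point of patch-based rather than global smoothing.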
Biologically motivated keypoint detection for RGB-D data
With the emerging interest in active vision, computer vision researchers have become increasingly concerned with the mechanisms of attention. Several computational models of visual attention, inspired by the human visual system, have therefore been developed, aiming at the detection of regions of interest in images.
This thesis focuses on selective visual attention, which provides a mechanism for the brain to focus computational resources on one object at a time, guided by low-level image properties (bottom-up attention). The task of recognizing objects in different locations is achieved by focusing on different locations, one at a time. Given the computational requirements of the proposed models, research in this area has been mainly of theoretical interest. More recently, psychologists, neurobiologists and engineers have developed collaborations, and this has resulted in considerable benefits. The first objective of this doctoral work is to bring together concepts and ideas from these different research areas, providing a study of the biological research on the human visual system and a discussion of the interdisciplinary knowledge in this area, as well as the state of the art on (bottom-up) computational models of visual attention. Visual attention is commonly referred to by engineers as saliency: when people fix their gaze on a particular region of an image, it is because that region is salient. In this research work, saliency methods are presented according to their classification (biologically plausible, computational or hybrid) and in chronological order.
A few salient structures can be used for applications such as object registration, retrieval or data simplification, and it is possible to consider these few salient structures as keypoints when aiming at object recognition. Generally, object recognition algorithms use a large number of descriptors extracted at a dense set of points, which comes with a very high computational cost, preventing real-time processing. To avoid this computational complexity, features have to be extracted from a small set of points, usually called keypoints. The use of keypoint-based detectors reduces both the processing time and the redundancy in the data. Local descriptors extracted from images have been extensively reported in the computer vision literature. Since there is a large set of keypoint detectors, a comparative evaluation between them is needed. We therefore provide a description of 2D and 3D keypoint detectors and 3D descriptors, and an evaluation of existing 3D keypoint detectors on a publicly available point cloud library with real 3D objects. The invariance of the 3D keypoint detectors was evaluated with respect to rotations, scale changes and translations. This evaluation reports the robustness of a particular detector to changes of point of view, using the absolute and relative repeatability rates as criteria. In our experiments, the method that achieved the best repeatability rate was ISS3D.
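The repeatability criterion used in this evaluation can be sketched as follows, assuming the ground-truth rotation R and translation t between the two views are known; the exact tolerance and normalization in the thesis may differ:

```python
import numpy as np

def repeatability(ref_kp, scene_kp, R, t, eps=0.01):
    """A reference keypoint is 'repeated' if, after applying the known
    rotation R and translation t, some scene keypoint lies within eps of
    it. Returns the absolute count and the relative rate (a sketch of the
    repeatability criterion described above)."""
    mapped = ref_kp @ R.T + t                       # ground-truth transform
    d = np.linalg.norm(mapped[:, None, :] - scene_kp[None, :, :], axis=2)
    repeated = (d.min(axis=1) <= eps).sum()         # nearest scene keypoint per ref
    return repeated, repeated / len(ref_kp)

# toy example: the scene is the reference shifted by 1 along x
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
ref = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
scene = np.array([[1.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
abs_rep, rel_rep = repeatability(ref, scene, R, t)
```

The relative rate makes detectors with different keypoint counts comparable, which is why both figures are typically reported side by side.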
The analysis of the human visual system and of biologically inspired saliency map detectors led to the idea of extending a keypoint detector based on the color information in the retina. This proposal produced a 2D keypoint detector inspired by the behavior of the early visual system. Our method is a color extension of the BIMP keypoint detector, in which we include both the color and intensity channels of an image: color information is included in a biologically plausible way, and multi-scale image features are combined into a single keypoint map. This detector is compared against state-of-the-art detectors and found to be particularly well suited for tasks such as category and object recognition. The recognition process is performed by comparing the 3D descriptors extracted at the locations indicated by the keypoints, after mapping the 2D keypoint locations to 3D space. The evaluation allowed us to obtain the best keypoint detector/descriptor pair on an RGB-D object dataset: using our keypoint detector with the SHOTCOLOR descriptor we obtain good category and object recognition rates, and with the PFHRGB descriptor we obtain the best results.
A 3D recognition system involves the choice of a keypoint detector and a descriptor. A new method for the detection of 3D keypoints on point clouds is presented, and a benchmark is performed between each pair of 3D keypoint detector and 3D descriptor to evaluate their performance on object and category recognition. These evaluations are done on a public database of real 3D objects. Our keypoint detector is inspired by the behavior and neural architecture of the primate visual system: the 3D keypoints are extracted based on a bottom-up 3D saliency map, that is, a map that encodes the saliency of objects in the visual environment. The saliency map is determined by computing conspicuity maps (a combination across different modalities) of the orientation, intensity and color information, in a bottom-up and purely stimulus-driven manner. These three conspicuity maps are fused into a 3D saliency map and, finally, the focus of attention (or "keypoint location") is sequentially directed to the most salient points in this map. Inhibiting this location automatically allows the system to attend to the next most salient location. The main conclusions are: with a similar average number of keypoints, our 3D keypoint detector outperforms the other eight 3D keypoint detectors evaluated, achieving the best result in 32 of the evaluated metrics in the category and object recognition experiments, while the second-best detector obtained the best result in only 8 of these metrics. The only drawback is the computational time, since BIK-BUS is slower than the other detectors. Given that the differences in recognition performance, size and time requirements are large, the selection of the keypoint detector and descriptor has to be matched to the desired task, and we give some directions to facilitate this choice.
After proposing the 3D keypoint detector, the research focused on a robust detection and
tracking method for 3D objects that uses keypoint information in a particle filter. This method consists of three distinct steps: segmentation, tracking initialization and tracking. The segmentation is performed to remove all the background information, reducing the number of points for further processing. In the initialization, we use a biologically inspired keypoint detector; the information about the object that we want to follow is given by the extracted keypoints. The particle filter tracks the keypoints, so that we can predict where the keypoints will be in the next frame. In a recognition system, one of the problems is the computational cost of keypoint detectors, and we intend to address this problem here. The experiments with the PFBIK-Tracking method are done indoors, in an office/home environment where personal robots are expected to operate. We quantitatively evaluate the stability of the method using a "tracking error", computed from the keypoint and particle centroids. Comparing our system with the tracking method available in the Point Cloud Library, we achieve better results, with a much smaller number of points and less computational time. Our method is faster and more robust to occlusion
when compared to the OpenniTracker.
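The winner-take-all with inhibition-of-return selection described above can be sketched on a 2D grid as follows (BIK-BUS operates on 3D point clouds; the equal-weight fusion of the conspicuity maps and the square inhibition window are simplifying assumptions):

```python
import numpy as np

def select_keypoints(intensity, color, orientation, n_points=3, radius=1):
    """Fuse three conspicuity maps into a saliency map, then repeatedly pick
    the most salient location and inhibit its neighbourhood so attention
    moves to the next most salient location."""
    saliency = (intensity + color + orientation) / 3.0  # assumed equal weights
    s = saliency.copy()
    points = []
    for _ in range(n_points):
        y, x = np.unravel_index(int(np.argmax(s)), s.shape)  # winner-take-all
        points.append((int(y), int(x)))
        y0, y1 = max(0, y - radius), y + radius + 1          # inhibition of return
        x0, x1 = max(0, x - radius), x + radius + 1
        s[y0:y1, x0:x1] = -np.inf
    return points

# toy conspicuity maps with two salient spots
intensity = np.zeros((5, 5))
intensity[0, 0] = 1.0
intensity[4, 4] = 0.9
color = np.zeros((5, 5))
orientation = np.zeros((5, 5))
points = select_keypoints(intensity, color, orientation, n_points=2)
```

The inhibition step is what turns a single saliency map into an ordered sequence of keypoint locations.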
Enhanced independent vector analysis for audio separation in a room environment
Independent vector analysis (IVA) is studied as a frequency-domain blind source separation method, which can theoretically avoid the permutation problem by retaining the dependency between different frequency bins of the same source vector while removing the dependency between different source vectors. This thesis focuses on improving the performance of independent vector analysis when it is used to solve the audio separation problem in a room environment.
A specific stability problem of IVA, i.e. the block permutation problem, is identified and analyzed. A robust IVA method is then proposed to solve this problem by exploiting the phase continuity of the unmixing matrix. Moreover, an auxiliary-function-based IVA algorithm with an overlapped chain-type source prior is proposed to mitigate this problem as well.
Then an informed IVA scheme is proposed which incorporates the geometric information of the sources from video, providing an intelligent initialization for optimal convergence. The proposed informed IVA algorithm also achieves faster convergence in terms of iteration count and better separation performance. A pitch-based evaluation method is defined to judge the separation performance objectively when the information describing the mixing matrix and sources is missing.
In order to improve the separation performance of IVA, an appropriate multivariate source prior is needed to better preserve the dependency structure within the source vectors. A particular multivariate generalized Gaussian distribution is adopted as the source prior. The nonlinear score function derived from this proposed source prior contains fourth-order relationships between different frequency bins, which provides a more informative and stronger dependency structure compared with the original IVA algorithm and thereby improves the separation performance.
Copula theory is a central tool to model nonlinear dependency structure. The t copula is proposed to describe the dependency structure within frequency-domain speech signals due to its tail dependency property, which means that if one variable has an extreme value, other variables are expected to have extreme values. A multivariate Student's t distribution, constructed by using a t copula with univariate Student's t marginal distributions, is proposed as the source prior. The IVA algorithm with the proposed source prior is then derived.
The proposed algorithms are tested with real speech signals in different reverberant room environments, using both modelled room impulse responses and real room recordings. State-of-the-art criteria are used to evaluate the separation performance, and the experimental results confirm the advantages of the proposed algorithms.
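For reference, the original IVA score function with a spherically dependent source prior, which couples all frequency bins of a source, can be sketched as one natural-gradient update (a minimal sketch under simplified assumptions; the thesis's auxiliary-function and copula-based variants replace the source prior and the optimization):

```python
import numpy as np

def iva_step(W, X, lr=0.1):
    """One natural-gradient IVA update with the original spherical source
    prior. W: (F, K, K) per-frequency unmixing matrices, X: (F, K, T)
    mixture STFT frames; the score y_f / ||y|| couples all F bins of a
    source, which is what avoids the permutation problem."""
    F, K, T = X.shape
    Y = np.einsum('fkl,flt->fkt', W, X)                    # separated signals
    norm = np.sqrt((np.abs(Y) ** 2).sum(axis=0)) + 1e-12   # per source, per frame
    phi = Y / norm                                         # multivariate score function
    for f in range(F):
        G = np.eye(K) - (phi[f] @ Y[f].conj().T) / T       # natural-gradient direction
        W[f] = W[f] + lr * G @ W[f]
    return W, Y

# toy run: 3 frequency bins, 2 sources, 4 frames, identity initialization
rng = np.random.default_rng(0)
F, K, T = 3, 2, 4
W = np.stack([np.eye(K) for _ in range(F)])
X = rng.standard_normal((F, K, T))
W, Y = iva_step(W, X)
```

Replacing `phi` with a score derived from a stronger multivariate prior (generalized Gaussian, t copula) is exactly where the thesis's contributions plug into this loop.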
The effects of macrophage-stimulating protein and gamma synuclein on the development of brainstem motor systems
Disorders of motility are among the most common and debilitating neurological ailments. In most cases, treatment of these conditions is at best palliative. This is mainly due to the apparent inability of central neurons to regenerate after a given noxious event. In the past, embryos have proven to be valuable objects for studying the mechanisms of growth and death in nerve cells, since both are physiological events in the developing nervous system. The discovery and study of an ever-growing list of molecules that are involved in these events has significantly furthered our understanding of the conditions that have to be met for individual nerve cell populations to develop into functional structures. It also has the potential to contribute significantly to the establishment of more targeted and efficient therapeutic strategies.
Here, the effects of macrophage-stimulating protein (MSP) and γ-synuclein on two systems in the developing brainstem involved in controlling movement have been studied: a) the cranial motoneurons and b) the dopaminergic neurons of the substantia nigra and the ventral tegmental area. MSP exerts a variety of biological actions on many cell types, but has no known functions in the brain. To investigate whether MSP is also capable of acting as a neurotrophic factor, hypoglossal motoneurons were purified from the embryonic chicken hindbrain, because these neurons are known to express the MSP receptor tyrosine kinase RON. The study shows that MSP promotes the in vitro survival of these neurons during the period of naturally occurring neuronal cell death and enhances the growth of neurites from these neurons. Furthermore, MSP mRNA was detected in the developing tongue, which is the target tissue for hypoglossal neurons. These studies demonstrate that MSP is a neurotrophic factor for a distinct population of developing motoneurons.
γ-synuclein is a recently discovered member of the synuclein family. Another member of this family, α-synuclein, has been implicated in the pathogenesis of Parkinson's disease. However, little is known about the function of γ-synuclein, and it has not yet been directly implicated in the genesis of neurodegenerative conditions. Here, brainstems of transgenic mice lacking γ-synuclein have been analysed by means of immunohistochemical and histological techniques. The data obtained show that γ-synuclein is expressed in the murine substantia nigra and in most cranial motor nuclei, and that the localization of the protein undergoes a shift during development from a cytosomal to an axonal and synaptic localization. Mice lacking γ-synuclein have a deficit of neurons in these structures. In the context of recent studies which have revealed in vivo and in vitro interactions between the synucleins, these data suggest that a fine balance between α- and γ-synuclein is critical to prevent the demise of certain neurons during the period of naturally occurring neuronal cell death. It also indicates that γ-synuclein may play a role in the pathogenesis of Parkinson's disease.
Learning discriminative models with incomplete data
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.Includes bibliographical references (p. 115-121).Many practical problems in pattern recognition require making inferences using multiple modalities, e.g. sensor data from video, audio, physiological changes etc. Often in real-world scenarios there can be incompleteness in the training data. There can be missing channels due to sensor failures in multi-sensory data and many data points in the training set might be unlabeled. Further, instead of having exact labels we might have easy to obtain coarse labels that correlate with the task. Also, there can be labeling errors, for example human annotation can lead to incorrect labels in the training data. The discriminative paradigm of classification aims to model the classification boundary directly by conditioning on the data points; however, discriminative models cannot easily handle incompleteness since the distribution of the observations is never explicitly modeled. We present a unified Bayesian framework that extends the discriminative paradigm to handle four different kinds of incompleteness. First, a solution based on a mixture of Gaussian processes is proposed for achieving sensor fusion under the problematic conditions of missing channels. Second, the framework addresses incompleteness resulting from partially labeled data using input dependent regularization.(cont.) Third, we introduce the located hidden random field (LHRF) that learns finer level labels when only some easy to obtain coarse information is available. Finally the proposed framework can handle incorrect labels, the fourth case of incompleteness. One of the advantages of the framework is that we can use different models for different kinds of label errors, providing a way to encode prior knowledge about the process. 
The proposed extensions are built on top of Gaussian process classification and result in a modular framework where each component is capable of handling a different kind of incompleteness. These modules can be combined in many different ways, resulting in many different algorithms within one unified framework. We demonstrate the effectiveness of the framework on a variety of problems such as multi-sensor affect recognition, image classification, and object detection and segmentation. by Ashish Kapoor. Ph.D.
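The "one GP expert per channel" idea behind the missing-sensor case can be sketched in a few lines. This is a simplified illustration, not the thesis's exact mixture-of-Gaussian-processes model: each modality gets its own GP classifier, and prediction averages posteriors over only the channels that are present. The modality names and synthetic data are assumptions for the example.

```python
# Hedged sketch of sensor fusion with per-modality Gaussian process
# classifiers: one GP expert per channel, with prediction averaging
# over whichever channels survived sensor failure. Not the thesis's
# exact model; data and modality names are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Two synthetic "modalities" (say, audio and video features) for a
# binary task; the label depends on both channels.
n = 120
audio = rng.normal(size=(n, 2))
video = rng.normal(size=(n, 2))
y = ((audio[:, 0] + video[:, 0]) > 0).astype(int)

# One GP expert trained per modality.
experts = {
    "audio": GaussianProcessClassifier(kernel=RBF()).fit(audio, y),
    "video": GaussianProcessClassifier(kernel=RBF()).fit(video, y),
}

def predict_proba(channels):
    """Average expert posteriors over the available channels.

    `channels` maps modality name -> feature array; a failed sensor
    is simply absent from the dict."""
    probs = [experts[name].predict_proba(x)[:, 1]
             for name, x in channels.items()]
    return np.mean(probs, axis=0)

x_audio = rng.normal(size=(5, 2))
x_video = rng.normal(size=(5, 2))
p_full = predict_proba({"audio": x_audio, "video": x_video})
p_missing = predict_proba({"audio": x_audio})  # video channel dropped
```

Dropping a channel here degrades gracefully rather than failing, which is the practical point of modeling each sensor with its own expert.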
Object Recognition
Vision-based object recognition tasks are familiar in our everyday activities, such as driving a car in the correct lane, and we perform them effortlessly in real time. In recent decades, with the advancement of computer technology, researchers and application developers have been trying to mimic the human capability of visual recognition. Such a capability would allow machines to free humans from boring or dangerous jobs.
Semiautomated 3D liver segmentation using computed tomography and magnetic resonance imaging
The liver is a vital abdominal organ known for its remarkable regenerative capacity and fundamental role in organism viability. Assessment of liver volume is an important tool which physicians use as a biomarker of disease severity. Liver volumetry is clinically indicated prior to major hepatectomy, portal vein embolization and transplantation.
The most popular method to determine liver volume from computed tomography (CT) and magnetic resonance imaging (MRI) examinations involves contouring the liver on consecutive imaging slices, a process called "segmentation". Segmentation can be performed either manually or in an automated fashion.
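Once the liver has been contoured on every slice, the per-slice masks stack into a 3D binary volume and the volume estimate is simply voxel count times voxel size. A minimal sketch, with illustrative (assumed) array dimensions and voxel spacing rather than values from the studies:

```python
# Hedged sketch: from slice-wise segmentation masks to a volume
# estimate. Shape and voxel spacing below are illustrative values,
# not parameters from the abstract's studies.
import numpy as np

# Stack of 40 segmented slices, 256x256 each (True = liver voxel).
mask = np.zeros((40, 256, 256), dtype=bool)
mask[10:30, 80:180, 80:180] = True           # toy "liver" region

spacing_mm = (5.0, 0.8, 0.8)                 # slice thickness, row, col
voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]

# 200,000 voxels x 3.2 mm^3 / 1000 = 640 mL
volume_ml = mask.sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3
```

The same arithmetic applies whether the masks come from manual contouring or an automated pipeline; only the mask's provenance changes.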
We present the design concept and validation strategy for an innovative semiautomated liver segmentation method developed at our institution. Our method represents a model-based approach using variational shape interpolation and Laplacian mesh optimization techniques. It is independent of training data, requires limited user interactions and is robust to a variety of pathological cases. Further, it was designed for compatibility with both CT and MRI examinations.
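The building block behind Laplacian mesh optimization is the umbrella (Laplacian) operator, which moves each vertex toward the centroid of its neighbours. The sketch below shows one explicit smoothing step on a toy tetrahedron; real pipelines, including presumably the one described here, add anchor constraints so the surface does not shrink away from the data.

```python
# Minimal sketch of one Laplacian smoothing step on a triangle mesh.
# Toy tetrahedron, not actual liver data; the abstract's method uses
# Laplacian mesh *optimization* with constraints, of which plain
# smoothing is only the simplest ingredient.
import numpy as np

verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

def laplacian_smooth(verts, faces, lam=0.5):
    """Move each vertex a fraction `lam` toward its neighbour centroid."""
    acc = np.zeros_like(verts)
    cnt = np.zeros(len(verts))
    for a, b, c in faces:                 # accumulate neighbour sums
        for u, v in ((a, b), (b, c), (c, a)):
            acc[u] += verts[v]; acc[v] += verts[u]
            cnt[u] += 1;        cnt[v] += 1
    centroid = acc / cnt[:, None]         # mean of neighbour positions
    return verts + lam * (centroid - verts)

smoothed = laplacian_smooth(verts, faces)
# Vertex 0's neighbours are (1,2,3), centroid (1/3,1/3,1/3), so it
# moves halfway there: (1/6, 1/6, 1/6).
```

Edges shared by two faces are accumulated twice, but since both the sum and the count double, the neighbour centroid is unaffected.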
We evaluated the repeatability, agreement and efficiency of our semiautomated method in two retrospective cross-sectional studies. The results of our validation studies suggest that semiautomated liver segmentation can provide strong agreement and repeatability when compared to manual segmentation. Further, segmentation automation significantly shortens interaction time, thus making it suitable for daily clinical practice.
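Agreement between a semiautomated segmentation and a manual reference is commonly reported with overlap metrics; the abstract does not specify which metrics its studies used, so the Dice similarity coefficient is shown here as a representative choice:

```python
# Hedged sketch: Dice similarity coefficient for comparing a
# semiautomated mask against a manual reference. A representative
# agreement metric, not necessarily the one used in these studies.
import numpy as np

def dice(a, b):
    """Dice = 2|A∩B| / (|A|+|B|) for boolean masks a, b."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((8, 8), dtype=bool); manual[2:6, 2:6] = True  # 16 px
auto   = np.zeros((8, 8), dtype=bool); auto[3:7, 2:6] = True    # 16 px
# Overlap is 12 px -> Dice = 2*12 / (16+16) = 0.75
print(dice(manual, auto))  # → 0.75
```

A Dice of 1.0 means perfect overlap; values reported for liver segmentation against manual references are typically interpreted on this 0-to-1 scale.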
Future studies may incorporate liver volumetry to determine volume-averaged
biomarkers of liver disease, such as such as fat, iron or fibrosis measurements per
unit volume. Segmental volumetry could also be assessed based on
subsegmentation of vascular anatomy