Segmentation-based mesh design for motion estimation
In most standard video codecs, motion estimation between two frames is generally performed with the block matching algorithm (BMA). BMA represents the evolution of image content by decomposing each frame into 2-D blocks undergoing rigid translational motion. This prediction technique usually leads to severe block-artefact distortions when the motion is large. Moreover, the systematic decomposition into regular blocks takes no account of the image content, and useless parameters associated with the blocks must still be transmitted, which increases the transmission bit rate. To remedy these shortcomings of BMA, we consider the two main objectives of video coding: obtaining good quality on the one hand, and transmitting at very low bit rate on the other. To combine these two almost contradictory requirements, a motion compensation technique is needed whose transformation has good subjective characteristics and requires only the motion information to be transmitted. This thesis proposes a motion compensation technique that designs 2-D triangular meshes from a segmentation of the image. The mesh is constructed from nodes distributed irregularly along the contours of the image, so the resulting decomposition is based on the image content. Moreover, since the same node-selection method is applied at the encoder and the decoder, the only information required is the nodes' motion vectors, and a very low transmission bit rate can thus be achieved. Compared with BMA, our approach improves both subjective and objective quality with far less motion information. Chapter 1 introduces the project.
Chapter 2 analyses some compression techniques used in standard codecs, in particular the popular BMA and its shortcomings. Chapter 3 discusses in detail the proposed algorithm, called segmentation-based active mesh design. Chapter 4 then describes motion estimation and compensation. Finally, Chapter 5 presents the simulation results and the conclusion.
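The full-search block matching that this abstract contrasts with the mesh-based approach can be sketched as follows. This is a generic textbook BMA, not the thesis's implementation; function and parameter names (block size, search range) are illustrative assumptions.

```python
import numpy as np

def block_match(ref, cur, top, left, bsize=8, search=4):
    """Full-search block matching (BMA) sketch.

    Finds the motion vector for the block of `cur` anchored at
    (top, left) by minimising the sum of absolute differences (SAD)
    over a +/- `search` pixel window in the reference frame `ref`.
    Illustrative only; real codecs use faster search strategies.
    """
    block = cur[top:top + bsize, left:left + bsize].astype(np.int32)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # skip candidate blocks that fall outside the frame
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

Because every candidate displacement is evaluated, the cost grows quadratically with the search range, which is one reason the per-block motion vectors dominate the bit budget at low rates.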
Foetal echocardiographic segmentation
Congenital heart disease affects just under one percent of all live births [1].
Those defects that manifest themselves as changes to the cardiac chamber volumes
are the motivation for the research presented in this thesis.
Blood volume measurements in vivo require delineation of the cardiac chambers and
manual tracing of foetal cardiac chambers is very time consuming and operator
dependent. This thesis presents a multi region based level set snake deformable
model applied in both 2D and 3D which can automatically adapt to some extent
towards ultrasound noise such as attenuation, speckle and partial occlusion artefacts.
The algorithm presented is named Mumford Shah Sarti Collision Detection (MSSCD).
The level set methods presented in this thesis have an optional shape prior term for
constraining the segmentation by a template registered to the image in the presence
of shadowing and heavy noise.
When applied to real data in the absence of the template the MSSCD algorithm is
initialised from seed primitives placed at the centre of each cardiac chamber. The
voxel statistics inside each chamber are determined before evolution. The MSSCD stops
at open boundaries between two chambers as the two approaching level set fronts
meet. This has significance when determining volumes for all cardiac compartments
since cardiac indices assume that each chamber is treated in isolation. Comparison
of the segmentation results from the implemented snakes, including a previous level
set method from the foetal cardiac literature, shows that in both 2D and 3D, on both
real and synthetic data, the MSSCD formulation is better suited to these types of data.
All the algorithms tested in this thesis are within 2 mm of the manually traced
segmentations of the foetal cardiac datasets. This corresponds to less than 10% of
the length of a foetal heart. In addition to comparison with manual tracings all the
amorphous deformable model segmentations in this thesis are validated using a
physical phantom. The volume estimated for the phantom by the MSSCD
segmentation is within 13% of the physically determined volume.
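The collision-detection behaviour described above, where fronts growing from per-chamber seeds stop when they meet at open boundaries, can be illustrated with a much-simplified discrete analogue. This toy region growing is an assumption-laden sketch of the idea only, not the level-set formulation used in the thesis.

```python
from collections import deque

def grow_with_collision(image, seeds, tol=10):
    """Toy discrete analogue of the MSSCD front-collision rule.

    Each seed grows a labelled region over 4-connected pixels whose
    intensity is within `tol` of its seed value; growth into a pixel
    already claimed by another seed is blocked, so approaching fronts
    stop where they meet, even across an open boundary.
    """
    h, w = len(image), len(image[0])
    label = [[0] * w for _ in range(h)]
    seed_vals = [image[r][c] for r, c in seeds]
    queues = []
    for k, (r, c) in enumerate(seeds, start=1):
        label[r][c] = k
        queues.append(deque([(r, c)]))
    active = True
    while active:
        active = False
        for k, q in enumerate(queues, start=1):
            for _ in range(len(q)):  # advance each front by one step per round
                r, c = q.popleft()
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= nr < h and 0 <= nc < w and label[nr][nc] == 0
                            and abs(image[nr][nc] - seed_vals[k - 1]) <= tol):
                        label[nr][nc] = k
                        q.append((nr, nc))
                        active = True
    return label
```

Because the regions are advanced in lockstep, the meeting point falls roughly midway between seeds, which is the property that lets each chamber be measured in isolation.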
Advances in the Contour Method for Residual Stress Measurement
The aim of this PhD thesis is to extend the capability of the contour method for residual stress measurement in metallic components by resolving two of its main limitations. The contour method involves sectioning a body into two equal parts that have mirror-symmetric geometry, stiffness and residual stress field. The deformations of the cut surfaces introduced by sectioning are then measured. These raw measured data are processed using filtering and smoothing techniques. The last step involves back calculating the residual stress distribution acting out of the cut plane that has been relaxed. This is equal to the original residual stress present at the cut plane before the body was cut. A major limitation of the contour technique is that it is strictly only applicable to flat cuts along a symmetry plane of a body.
Another fundamental assumption of the contour method is that residual stresses redistribute elastically during the sectioning cut. However, this assumption can be violated when the residual stress magnitude is close to the material's yield strength, leading to plasticity-induced cutting errors in the contour method results. Thus another major limitation of the technique is the risk of cutting-induced plasticity, which can introduce significant stress measurement errors.
This PhD thesis contributes to knowledge in the field by first presenting a novel contour data analysis approach for the more general case of sectioning at an arbitrary plane, where the cut parts do not possess mirror symmetry. This greatly extends the types of structure, and the volume of material within structures, where residual stresses can be measured using the contour method. The second contribution to knowledge is the invention of an incremental contour measurement method involving multiple cuts. This new approach can be applied to sequentially reduce residual stresses in the structure of interest, thereby lowering or eliminating the risk of inducing plasticity during cutting and reducing the consequent measurement errors. Both new approaches proposed in this PhD thesis are successfully demonstrated through numerical simulation using the finite element method, and experimentally on benchmark steel specimens against neutron diffraction measurements.
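The smoothing step in the contour-method pipeline, fitting a smooth surface to the noisy measured cut deformations before back-calculating stress, can be sketched in one dimension. A polynomial fit is used here as an illustrative stand-in for the spline or Fourier smoothing typically applied; the function name and degree are assumptions.

```python
import numpy as np

def smooth_contour(positions, displacements, degree=3):
    """Polynomial smoothing of raw contour-cut displacement data.

    Fits a low-order polynomial to the measured surface displacements
    so that measurement noise is not propagated into the finite-element
    back-calculation of residual stress. Illustrative sketch only.
    """
    coeffs = np.polyfit(positions, displacements, degree)
    return np.polyval(coeffs, positions)
```

The smoothed profile, applied with reversed sign as a displacement boundary condition on an FE model of the cut part, yields the stress that was relaxed at the cut plane.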
Metrological characterization of 3D imaging systems: progress report on standards developments
A significant issue for companies or organizations integrating non-contact three-dimensional (3D) imaging systems into their production pipeline is deciding in which technology to invest. Quality non-contact 3D imaging systems typically involve a significant investment when considering the cost of equipment, training, software, and maintenance contracts over the functional lifetime of a given system or systems, notwithstanding the requirements of the global nature of manufacturing activities. Numerous methods have been published to “help” users navigate the many products and specification claims about “quality”. Moreover, the “best” system for one application may not be ideally suited for another. The lack of publicly available characterization methods from trusted sources for certain areas of 3D imaging makes it difficult for a typical user to select a system based on the information written on a specification sheet alone. An internationally recognized standard is a vehicle that allows better communication between users and manufacturers. It is in this context that we present a progress report on standards developments to date in the diverse, but finite, world of non-contact 3D imaging systems from the nanometre to the 100 m range.
Automated retinal layer segmentation and pre-apoptotic monitoring for three-dimensional optical coherence tomography
The aim of this PhD thesis was to develop a segmentation algorithm adapted and optimized for retinal OCT data that provides objective 3D layer thicknesses, which might be used to improve diagnosis and monitoring of retinal pathologies. Additionally, a 3D stack registration method was produced by modifying an existing algorithm. A related project was to develop pre-apoptotic retinal monitoring based on changes in texture parameters of the OCT scans, in order to enable treatment before the changes become irreversible; apoptosis refers to the programmed cell death that can occur in retinal tissue and lead to blindness. These issues can be critical for the examination of tissues within the central nervous system. A novel statistical model for segmentation has been created and successfully applied to a large data set. The results obtained open a broad range of future research possibilities into advanced pathologies. A separate model has been created for segmentation of the choroid, which lies deep in the retina, as its appearance is very different from that of the top retinal layers. Choroid thickness and structure are an important index of various pathologies (diabetes, etc.). As part of the pre-apoptotic monitoring project it was shown that an increase in the proportion of apoptotic cells in vitro can be accurately quantified. Moreover, the data obtained indicate a similar increase in neuronal scatter in retinal explants following axotomy (removal of retinas from the eye), suggesting that UHR-OCT can be a novel non-invasive technique for the in vivo assessment of neuronal health. Additionally, an independent project within the computer science department, in collaboration with the school of psychology, has been successfully carried out, improving the analysis of facial dynamics and behaviour transfer between individuals.
Also, important improvements have been made to a general signal processing algorithm, dynamic time warping (DTW), allowing potential application across a broad range of signal processing problems.
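For reference, the baseline DTW algorithm that the abstract says was improved can be sketched as the standard dynamic programming recurrence; this is the textbook formulation, not the thesis's modified version.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping distance between two 1-D sequences.

    D[i, j] holds the minimal cumulative cost of aligning x[:i] with
    y[:j]; each cell extends the cheapest of the three neighbouring
    alignments (insertion, deletion, match).
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Unlike Euclidean distance, DTW tolerates local time shifts: a sequence and a locally stretched copy of it have distance zero.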
Automating the Reconstruction of Neuron Morphological Models: the Rivulet Algorithm Suite
The automatic reconstruction of single neuron cells is essential to enable large-scale data-driven investigations in computational neuroscience. The problem remains an open challenge due to various imaging artefacts caused by the fundamental limits of light microscopic imaging. Few previous methods were able to generate satisfactory neuron reconstruction models automatically without human intervention, and the manual tracing of neuron models is labour-heavy and time-consuming, making the collection of large-scale neuron morphology databases one of the major bottlenecks in morphological neuroscience. This thesis presents a suite of algorithms developed to target the challenge of automatically reconstructing neuron morphological models with minimal human intervention. We first propose the Rivulet algorithm, which iteratively backtracks the neuron fibres from the terminal points back to the soma centre. By refining many details of the Rivulet algorithm, we later propose the Rivulet2 algorithm, which not only eliminates a few hyper-parameters but also improves robustness against noisy images. A soma surface reconstruction method is also proposed to make the neuron models biologically plausible around the soma body. The tracing algorithms, including Rivulet and Rivulet2, normally need one or more hyper-parameters for segmenting the neuron body out of the noisy background. To make this pipeline fully automatic, we propose using a 2.5D neural network to train a model that enhances the curvilinear structures of the neuron fibres. The trained neural networks can quickly highlight the fibres of interest and suppress the noise points in the background for the neuron tracing algorithms. We evaluated the proposed methods on the data released by both the DIADEM and the BigNeuron challenges. The experimental results show that our proposed tracing algorithms achieve state-of-the-art results.
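The backtracking idea at the heart of Rivulet, walking from a terminus downhill to the soma, can be illustrated with a greedy discrete descent on a distance-to-soma map. The real algorithms trace sub-voxel paths on a fast-marching time map; this grid walk is only a sketch of the principle, and all names here are illustrative.

```python
import numpy as np

def backtrack(dist_to_soma, terminus, soma, max_steps=10000):
    """Greedy discrete backtracking sketch in the spirit of Rivulet.

    Starting from a terminus pixel, repeatedly step to the 8-neighbour
    with the smallest distance-to-soma value until the soma is reached
    or no neighbour improves on the current value (local minimum).
    """
    path = [terminus]
    r, c = terminus
    for _ in range(max_steps):
        if (r, c) == soma:
            return path
        best, best_val = None, dist_to_soma[r, c]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if ((dr, dc) != (0, 0)
                        and 0 <= nr < dist_to_soma.shape[0]
                        and 0 <= nc < dist_to_soma.shape[1]
                        and dist_to_soma[nr, nc] < best_val):
                    best, best_val = (nr, nc), dist_to_soma[nr, nc]
        if best is None:  # stuck in a local minimum; stop tracing
            return path
        r, c = best
        path.append((r, c))
    return path
```

Running this from every terminus and erasing traced fibres as they are claimed yields a tree of centreline paths rooted at the soma, which is the overall shape of the iterative scheme the abstract describes.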