2,201 research outputs found

    Characterization of carotid artery plaques using noninvasive vascular ultrasound elastography

    Atherosclerosis is a complex vascular disease that affects artery walls (by thickening) and lumens (by plaque formation). The rupture of a carotid artery plaque may also induce ischemic stroke and complications. Despite the use of several medical imaging modalities to evaluate the stability of a plaque, they present limitations such as irradiation, invasiveness, low clinical availability and high cost. Ultrasound is a safe imaging method with real-time capability for the assessment of biological tissues. It is clinically used for early screening and diagnosis of carotid artery plaques. However, current vascular ultrasound technologies only identify the morphology of a plaque in terms of echo brightness, or the impact of the vessel narrowing on flow properties, which may not be sufficient for optimum diagnosis. Noninvasive vascular elastography (NIVE) has shown potential for determining the stability of a plaque. Specifically, NIVE can determine the strain field of the moving vessel wall of a carotid artery caused by the natural cardiac pulsation.
Due to Young’s modulus differences among vessel tissue types, different components of a plaque can be detected because they present different strains, thereby potentially helping to characterize plaque stability. Currently, sub-optimum performance and computational efficiency limit the clinical acceptance of NIVE as a fast and efficient method for the early diagnosis of vulnerable plaques. Therefore, there is a need to further develop NIVE as a non-invasive, fast and computationally inexpensive imaging tool to better characterize plaque vulnerability. The procedure to perform a NIVE analysis consists of image formation and image post-processing steps. This thesis aimed to systematically improve the accuracy of these two aspects of NIVE to facilitate predicting carotid plaque vulnerability. The first effort of this thesis was targeted at improving image formation (Chapter 5). Transverse oscillation beamforming was introduced into NIVE. The performance of transverse oscillation imaging coupled with two model-based strain estimators, the affine phase-based estimator (APBE) and the Lagrangian speckle model estimator (LSME), was evaluated. For all simulations and in vitro studies, the LSME without transverse oscillation imaging outperformed the APBE with transverse oscillation imaging. Nonetheless, comparable or better principal strain estimates could be obtained with the LSME using transverse oscillation imaging in the case of complex and heterogeneous tissue structures. During the acquisition of ultrasound signals for image formation, out-of-plane motions, which are perpendicular to the two-dimensional (2-D) scan plane, occur. The second objective of this thesis was to evaluate the influence of out-of-plane motions on the performance of 2-D NIVE (Chapter 6). For this purpose, we designed an in vitro experimental setup to simulate out-of-plane motions of 1 mm, 2 mm and 3 mm. The in vitro results showed more strain estimation artifacts for the LSME with increasing magnitudes of out-of-plane motion. Even so, robust strain estimations were obtained with 2.0 mm of out-of-plane motion (correlation coefficients higher than 0.85). For a clinical dataset of 18 participants with carotid artery stenosis, we proposed to use two sets of scans of the same carotid plaque, one cross-sectional and the other longitudinal, to deduce the out-of-plane motions (estimated to range from 0.25 mm to 1.04 mm). Clinical results showed that strain estimations remained reproducible for all motion magnitudes, since inter-frame correlation coefficients were higher than 0.70 and normalized cross-correlations between radiofrequency images were above 0.93, indicating that confident motion estimations can be obtained when analyzing clinical datasets of carotid plaques with the LSME. Finally, regarding image post-processing, NIVE algorithms must estimate vessel wall strains from reconstructed images with the objective of identifying soft and hard tissues. The last objective of this thesis was therefore to develop a strain estimation method with pixel-wise resolution and high computational efficiency to improve NIVE (Chapter 7). We proposed a sparse model strain estimator (SMSE) for which the dense strain field is parameterized with Discrete Cosine Transform descriptions, thereby deriving affine strain components (axial and lateral strains and shears) without mathematical derivative operations.
Compared with the LSME, the SMSE reduced estimation errors in simulation, in vitro and in vivo tests. Moreover, the sparse implementation of the SMSE reduced the processing time by a factor of 4 to 25 compared with the LSME for the simulation, in vitro and in vivo results, suggesting a possible real-time implementation of NIVE.
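    A minimal sketch of the idea behind such a DCT parameterization (illustrative only, not the SMSE implementation; grid size, number of retained modes and coefficients are assumptions): a displacement field written in a truncated 2-D cosine basis yields its axial strain analytically from the same coefficients, so no numerical differentiation of a noisy displacement estimate is needed.

```python
import numpy as np

M, N, K, L = 32, 32, 4, 4                  # grid size and number of retained cosine modes
yi = (np.arange(M) + 0.5) / M              # normalized axial coordinate
xj = (np.arange(N) + 0.5) / N              # normalized lateral coordinate

# Cosine basis functions phi[k, l] and their analytical axial derivatives dphi[k, l]
phi = np.zeros((K, L, M, N))
dphi = np.zeros((K, L, M, N))
for k in range(K):
    for l in range(L):
        cy, cx = np.cos(np.pi * k * yi), np.cos(np.pi * l * xj)
        phi[k, l] = np.outer(cy, cx)
        dphi[k, l] = np.outer(-np.pi * k * np.sin(np.pi * k * yi), cx)

# Hypothetical smooth axial displacement field described by a few coefficients
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(K, L)) / (1.0 + np.arange(K)[:, None] + np.arange(L))

u_axial = np.tensordot(coeffs, phi, axes=2)      # dense displacement field
eps_axial = np.tensordot(coeffs, dphi, axes=2)   # axial strain from the SAME coefficients

# Sanity check against finite differences of the reconstructed displacement
fd = np.gradient(u_axial, 1.0 / M, axis=0)
print(np.max(np.abs(eps_axial - fd)))            # agrees up to finite-difference discretization error
```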

    Novel block-based motion estimation and segmentation for video coding

    EThOS - Electronic Theses Online Service, United Kingdom

    Registration of ultrasound and computed tomography for guidance of laparoscopic liver surgery

    Laparoscopic Ultrasound (LUS) imaging is a standard tool used for image-guidance during laparoscopic liver resection, as it provides real-time information on the internal structure of the liver. However, LUS probes are difficult to handle and their resulting images hard to interpret. Additionally, some anatomical targets such as tumours are not always visible, making the LUS guidance less effective. To solve this problem, registration between the LUS images and a pre-operative Computed Tomography (CT) scan using information from blood vessels has been previously proposed. By merging these two modalities, the relative position between the LUS images and the anatomy of CT is obtained and both can be used to guide the surgeon. The problem of LUS to CT registration is especially challenging because, besides being a multi-modal registration, the field of view of LUS is significantly smaller than that of CT. Therefore, this problem becomes poorly constrained and typically an accurate initialisation is needed. Also, the liver is highly deformed during laparoscopy, complicating the problem further. So far, the methods presented in the literature are not clinically feasible as they depend on manually set correspondences between both images. In this thesis, a solution for this registration problem that may be more transferable to the clinic is proposed. Firstly, traditional registration approaches comprised of manual initialisation and optimisation of a cost function are studied. Secondly, it is demonstrated that a globally optimal registration without a manual initialisation is possible. Finally, a new globally optimal solution that does not require commonly used tracking technologies is proposed and validated. The resulting approach provides clinical value as it does not require manual interaction in the operating room or tracking devices. Furthermore, the proposed method could potentially be applied to other image-guidance problems that require registration between ultrasound and a pre-operative scan.
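    As background to the vessel-based alignment step, the sketch below shows the classic closed-form least-squares rigid registration (Kabsch/Procrustes) between two 3-D point sets, assuming point correspondences are already known; the thesis specifically targets the harder setting without manual correspondences and with liver deformation, which this toy does not address. All point data and names here are synthetic.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points,
    assuming row i of src corresponds to row i of dst (Kabsch/Procrustes)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation/translation between matched vessel points
rng = np.random.default_rng(1)
ct_pts = rng.uniform(0, 100, size=(50, 3))           # e.g. CT vessel centreline samples (synthetic)
angle = np.deg2rad(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 12.0])
lus_pts = ct_pts @ R_true.T + t_true                 # same points expressed in the "LUS" frame
R, t = rigid_align(ct_pts, lus_pts)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```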

    Improving RANSAC for Efficient and Precise Model Fitting with Statistical Analysis

    RANSAC (random sample consensus) has been widely used as a benchmark algorithm for model fitting in the presence of outliers for more than thirty years. It is robust for outlier removal and rough model fitting, but neither reliable nor efficient enough for many applications where precision and time are critical. Many other algorithms have been proposed to improve RANSAC. However, not much effort has been made to systematically tackle its limitations in model fitting repeatability, quality indication, iteration termination, and multi-model fitting. A new paradigm, named SASAC (statistical analysis for sample consensus), is introduced in this paper to overcome the above limitations of RANSAC. Unlike RANSAC, which does not consider sampling noise (present in most sampling cases), SASAC defines a term named the ? rate. It is used both as an indicator of the quality of model fitting and as a criterion for terminating iterative model searching. Iterative least squares is integrated into SASAC for optimal model estimation, and a strategy is proposed to handle multi-model situations. Experimental results for linear and quadratic function model fitting demonstrate that SASAC can significantly improve the quality and reliability of model fitting and largely reduce the number of iterations for model searching. Using the ? rate as an indicator of the quality of model fitting can effectively avoid wrongly estimated models. In addition, SASAC works very well on a multi-model dataset and can provide reliable estimates of all the models. SASAC can be combined with RANSAC and its variants to dramatically improve their performance.
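    For reference, a minimal vanilla RANSAC line-fitting routine with a final least-squares refit on the consensus set is sketched below (thresholds, iteration count and data are illustrative); SASAC's statistical quality indicator and termination criterion discussed above are not reproduced here.

```python
import numpy as np

def ransac_line(x, y, n_iter=200, inlier_tol=0.05, seed=0):
    """Fit y = a*x + b robustly: sample minimal 2-point models, keep the largest
    consensus set, then refit that set by least squares."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)   # least-squares refinement
    return a, b, best_inliers

# Toy data: a line y = 2x + 0.5 with 30% gross outliers
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.01, 100)
y[rng.choice(100, 30, replace=False)] += rng.uniform(-2.0, 2.0, 30)
print(ransac_line(x, y)[:2])   # close to (2.0, 0.5)
```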

    Video modeling via implicit motion representations

    Video modeling refers to the development of analytical representations for explaining the intensity distribution in video signals. Based on the analytical representation, we can develop algorithms for accomplishing particular video-related tasks. Therefore, video modeling provides us with a foundation to bridge video data and related tasks. Although many video models have been proposed in the past decades, the rise of new applications calls for more efficient and accurate video modeling approaches. Most existing video modeling approaches are based on explicit motion representations, where motion information is explicitly expressed by correspondence-based representations (i.e., motion velocity or displacement). Although conceptually simple, the limitations of those representations and the suboptimality of motion estimation techniques can degrade such video modeling approaches, especially for handling complex motion or non-ideal observed video data. In this thesis, we propose to investigate video modeling without explicit motion representation. Motion information is implicitly embedded into the spatio-temporal dependency among pixels or patches instead of being explicitly described by motion vectors. Firstly, we propose a parametric model based on spatio-temporal adaptive localized learning (STALL). We formulate video modeling as a linear regression problem, in which motion information is embedded within the regression coefficients. The coefficients are adaptively learned within a local space-time window based on the LMMSE criterion. Incorporating a spatio-temporal resampling and a Bayesian fusion scheme, we can enhance the modeling capability of STALL on more general videos. Under the framework of STALL, we can develop video processing algorithms for a variety of applications by adjusting model parameters (i.e., the size and topology of the model support and training window). We apply STALL to three video processing problems. The simulation results show that motion information can be efficiently exploited by our implicit motion representation, and that the resampling and fusion help to enhance the modeling capability of STALL. Secondly, we propose a nonparametric video modeling approach that does not depend on explicit motion estimation. Assuming the video sequence is composed of many overlapping space-time patches, we propose to embed motion-related information into the relationships among video patches and develop a generic sparsity-based prior for typical video sequences. First, we extend block matching to more general kNN-based patch clustering, which provides an implicit and distributed representation for motion information. We propose to enforce the sparsity constraint on a higher-dimensional data array signal, which is generated by packing the patches in the similar-patch set. Then we solve the inference problem by iteratively updating the kNN array and the desired signal. Finally, we present a Bayesian fusion approach to fuse multiple-hypothesis inferences. Simulation results in video error concealment, denoising, and deartifacting are reported to demonstrate its modeling capability. Finally, we summarize the two proposed video modeling approaches. We also point out the perspectives of implicit motion representations in applications ranging from low-level to high-level problems.
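    The sketch below gives the flavour of such localized spatio-temporal regression (loosely inspired by the STALL description above, not the author's implementation; window sizes and test data are assumptions): each pixel of the current frame is modelled as a linear combination of a small patch of the previous frame, with coefficients fitted by least squares inside a local training window, so motion is captured implicitly by the coefficients rather than by an explicit motion vector.

```python
import numpy as np

def stall_like_predict(prev, curr, supp=1, train=4):
    """Predict each pixel of curr from a (2*supp+1)^2 patch of prev, with the
    regression coefficients fitted over a (2*train+1)^2 local training window."""
    H, W = curr.shape
    pad = supp + train
    prev_p = np.pad(prev, pad, mode="edge")
    curr_p = np.pad(curr, pad, mode="edge")
    pred = np.zeros_like(curr)
    for r in range(H):
        for c in range(W):
            feats, targets = [], []
            for dr in range(-train, train + 1):
                for dc in range(-train, train + 1):
                    rr, cc = r + dr + pad, c + dc + pad
                    feats.append(prev_p[rr - supp:rr + supp + 1, cc - supp:cc + supp + 1].ravel())
                    targets.append(curr_p[rr, cc])
            coef, *_ = np.linalg.lstsq(np.array(feats), np.array(targets), rcond=None)
            centre = prev_p[r + pad - supp:r + pad + supp + 1, c + pad - supp:c + pad + supp + 1]
            pred[r, c] = centre.ravel() @ coef        # implicit motion: no motion vector estimated
    return pred

# Toy check: a one-pixel horizontal shift is captured by the local coefficients
rng = np.random.default_rng(2)
f0 = rng.normal(size=(24, 24))
f1 = np.roll(f0, 1, axis=1)
err = stall_like_predict(f0, f1)[:, 6:-6] - f1[:, 6:-6]
print(np.mean(err ** 2))    # essentially zero away from the frame borders
```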

    Motion compensation for image compression: pel-recursive motion estimation algorithm

    In motion pictures there is a certain amount of redundancy between consecutive frames. These redundancies can be exploited by using interframe prediction techniques. To further enhance the efficiency of interframe prediction, various motion estimation and compensation techniques can be used. There are two distinct techniques for motion estimation: block matching and pel-recursive. Block matching has been widely used as it produces a better signal-to-noise ratio, or a lower bit rate for transmission, than the pel-recursive method. In this thesis, various pel-recursive motion estimation techniques, such as the steepest-descent gradient algorithm, have been considered and simulated. [Continues.]
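    A minimal pel-recursive sketch in the Netravali-Robbins spirit is shown below; note that it uses a gradient-normalized (adaptive-gain) update rather than a fixed steepest-descent step so that the toy converges quickly, and it is not one of the algorithms simulated in the thesis. The test pattern, gains and iteration count are assumptions.

```python
import numpy as np

def bilinear(img, r, c):
    """Bilinearly sample img at the fractional position (r, c), clamped to the border."""
    H, W = img.shape
    r, c = float(np.clip(r, 0, H - 1.001)), float(np.clip(c, 0, W - 1.001))
    r0, c0 = int(r), int(c)
    fr, fc = r - r0, c - c0
    return ((1 - fr) * (1 - fc) * img[r0, c0] + (1 - fr) * fc * img[r0, c0 + 1] +
            fr * (1 - fc) * img[r0 + 1, c0] + fr * fc * img[r0 + 1, c0 + 1])

def pel_recursive(prev, curr, n_iter=10, reg=1e-3):
    """Per-pel recursive update that descends on the squared displaced frame
    difference (DFD), with a gradient-normalized step for stability."""
    gy, gx = np.gradient(prev)
    H, W = curr.shape
    d = np.zeros((H, W, 2))                               # per-pel displacement (dy, dx)
    for _ in range(n_iter):
        for r in range(H):
            for c in range(W):
                dy, dx = d[r, c]
                pr, pc = r - dy, c - dx                   # predicted source position in prev
                dfd = curr[r, c] - bilinear(prev, pr, pc)
                jy, jx = bilinear(gy, pr, pc), bilinear(gx, pr, pc)
                gain = 1.0 / (jy * jy + jx * jx + reg)
                d[r, c, 0] -= gain * dfd * jy             # reduce DFD^2 along the gradient
                d[r, c, 1] -= gain * dfd * jx
    return d

# Toy check: a vertically varying pattern shifted down by one pixel (true dy = 1, dx = 0)
yy = np.mgrid[0:32, 0:32][0].astype(float)
f0 = np.cos(0.25 * yy)
f1 = np.roll(f0, 1, axis=0)
print(pel_recursive(f0, f1)[4:-4, :, 0].mean())           # close to 1 away from the borders
```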

    Performance-driven control of nano-motion systems

    The performance of high-precision mechatronic systems is subject to ever-increasing demands regarding speed and accuracy. To meet these demands, new actuator drivers, sensor signal processing and control algorithms have to be derived. The state-of-the-art scientific developments in these research directions can significantly improve the performance of high-precision systems. However, translation of the scientific developments to usable technology is often non-trivial. To improve the performance of high-precision systems and to bridge the gap between science and technology, a performance-driven control approach has been developed. First, the main performance limiting factor (PLF) is identified. Then, a model-based compensation method is developed for the identified PLF. Experimental validation shows the performance improvement and reveals the next PLF, to which the same procedure is applied. The compensation method can relate to the actuator driver, the sensor system or the control algorithm. In this thesis, the focus is on nano-motion systems that are driven by piezo actuators and/or use encoder sensors. Nano-motion systems are defined as the class of systems that require velocities ranging from nanometers per second to millimeters per second with a (sub)nanometer resolution. The main PLFs of such systems are the actuator driver, hysteresis, stick-slip effects, repetitive disturbances, coupling between degrees-of-freedom (DOFs), geometric nonlinearities and quantization errors. The developed approach is applied to three illustrative experimental cases that exhibit the above-mentioned PLFs. The cases include a nano-motion stage driven by a walking piezo actuator, a metrological AFM and an encoder system. The contributions of this thesis relate to modeling, actuation driver development, control synthesis and encoder sensor signal processing. In particular, dynamic models are derived for the bimorph piezo legs of the walking piezo actuator and for the nano-motion stage with the walking piezo actuator, covering the switching actuation principle, stick-slip effects and contact dynamics. Subsequently, a model-based optimization is performed to obtain optimal drive waveforms for a constant stage velocity. Both the walking piezo actuator and the AFM case exhibit repetitive disturbances with a non-constant period-time, for which dedicated repetitive control methods are developed. Furthermore, control algorithms have been developed to cope with the coupling between, and hysteresis in, the different axes of the AFM. Finally, sensor signal processing algorithms have been developed to cope with the quantization effects and encoder imperfections in optical incremental encoders. The application of the performance-driven control approach to the different cases shows that the identified PLFs can be successfully modeled and compensated for. The experiments show that the performance-driven control approach can largely improve the performance of nano-motion systems with piezo actuators and/or encoder sensors.
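    As a point of reference for the repetitive-control part, the sketch below simulates a basic plug-in repetitive controller on a toy first-order plant with a periodic disturbance of known, constant period; the thesis addresses the harder case of a non-constant period-time, which this sketch does not handle. Plant model, gains and disturbance are made-up assumptions.

```python
import numpy as np

N = 50                        # samples per disturbance period (assumed known and constant here)
n_periods = 20
a, b = 0.95, 0.05             # toy first-order plant: y[k+1] = a*y[k] + b*(u[k] + d[k])
kp, kr = 2.0, 0.8             # proportional feedback gain and repetitive learning gain

y = 0.0
u_rep = np.zeros(n_periods * N + N)      # period-delayed repetitive control memory
err = np.zeros(n_periods * N)
for k in range(n_periods * N):
    d = 0.2 * np.sin(2 * np.pi * k / N)  # periodic disturbance to be rejected
    e = 0.0 - y                          # regulate the output to zero
    u_rep[k + N] = u_rep[k] + kr * e     # learn the periodic correction, one period ahead
    y = a * y + b * (kp * e + u_rep[k] + d)
    err[k] = e

per_period = np.abs(err).reshape(n_periods, N).mean(axis=1)
print(per_period[1], per_period[-1])     # the per-period error shrinks as the repetitive term learns
```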

    Object Search Strategy in Tracking Algorithms

    The demand for real-time video surveillance systems is increasing rapidly. The purpose of these systems includes surveillance as well as monitoring and controlling events. Today there are several real-time computer vision applications based on image understanding which emulate human vision and intelligence. These machines include object tracking as their primary task. Object tracking refers to estimating the trajectory of an object of interest in a video. A tracking system works on the principle of video processing algorithms. Video processing involves a huge amount of data, and this must be taken into account when implementing the algorithms on any hardware. The problem becomes challenging due to unexpected motion of the object, scene appearance change, object appearance change, and non-rigid object structures. Besides this, full and partial occlusions and motion of the camera also pose challenges. Current tracking algorithms treat this problem as a classification task and use online learning algorithms to update the object model. Here, we explore the data redundancy in the sampling techniques and develop a highly structured kernel. This kernel acquires a circulant structure which is extremely easy to manipulate. We take this further by using the mean-shift density algorithm and Lucas-Kanade optical flow, which gives a substantial improvement in the results.
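    The circulant structure mentioned above is easiest to see in the linear case: a data matrix built from all cyclic shifts of a base sample is diagonalized by the DFT, so ridge regression over every shifted sample reduces to element-wise operations in the Fourier domain. The sketch below verifies this identity numerically on a 1-D toy signal (sizes and regularization are illustrative; this is the generic correlation-filter trick, not the specific tracker developed in the thesis).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.normal(size=n)                 # base sample (think: a 1-D image patch)
y = rng.normal(size=n)                 # one regression target per cyclic shift
lam = 1e-2                             # ridge regularization

# Explicit solution: row k of C is x cyclically shifted by k samples
C = np.stack([np.roll(x, k) for k in range(n)])
w_direct = np.linalg.solve(C.T @ C + lam * np.eye(n), C.T @ y)

# Fourier-domain solution: O(n log n) element-wise arithmetic instead of an O(n^3) solve
X = np.fft.fft(x)
w_fft = np.real(np.fft.ifft(X * np.fft.fft(y) / (np.abs(X) ** 2 + lam)))

print(np.allclose(w_direct, w_fft))    # True: both give the same filter
```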

    A Framework for SAR-Optical Stereogrammetry over Urban Areas

    Currently, numerous remote sensing satellites provide a huge volume of diverse earth observation data. As these data show different features regarding resolution, accuracy, coverage, and spectral imaging ability, fusion techniques are required to integrate the different properties of each sensor and produce useful information. For example, synthetic aperture radar (SAR) data can be fused with optical imagery to produce 3D information using stereogrammetric methods. The main focus of this study is to investigate the possibility of applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical image pairs. For this purpose, the applicability of semi-global matching is investigated in this unconventional multi-sensor setting. To support the image matching by reducing the search space and accelerating the identification of correct, reliable matches, the possibility of establishing an epipolarity constraint for VHR SAR-optical image pairs is investigated as well. In addition, it is shown that the absolute geolocation accuracy of VHR optical imagery with respect to VHR SAR imagery such as provided by TerraSAR-X can be improved by a multi-sensor block adjustment formulation based on rational polynomial coefficients. Finally, the feasibility of generating point clouds with a median accuracy of about 2 m is demonstrated, confirming the potential of 3D reconstruction from SAR-optical image pairs over urban areas. Comment: This is the pre-acceptance version; to read the final version, please go to the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect.
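    For context on semi-global matching, the sketch below implements the standard SGM cost-aggregation recursion along a single left-to-right path over one scanline of a synthetic cost volume; a full SGM pipeline sums several such paths over the image and, for the SAR-optical case studied above, would use a matching cost suited to the multi-sensor setting, which is not modelled here. Penalties and the toy cost volume are assumptions.

```python
import numpy as np

def aggregate_path(cost, p1=0.5, p2=2.0):
    """One left-to-right SGM aggregation path. cost has shape (W, D): a matching
    cost per pixel and disparity. p1 penalizes one-step disparity changes,
    p2 penalizes larger jumps."""
    W, D = cost.shape
    Lr = np.zeros_like(cost)
    Lr[0] = cost[0]
    for x in range(1, W):
        prev = Lr[x - 1]
        best_prev = prev.min()
        same = prev
        up = np.r_[prev[1:], np.inf] + p1       # disparity decreases by one
        down = np.r_[np.inf, prev[:-1]] + p1    # disparity increases by one
        jump = best_prev + p2                   # any larger disparity jump
        Lr[x] = cost[x] + np.minimum(np.minimum(same, up), np.minimum(down, jump)) - best_prev
    return Lr

# Toy example: a noisy cost volume whose true disparity ramps from 2 to 6
rng = np.random.default_rng(0)
W, D = 40, 10
true_d = np.clip(np.round(np.linspace(2, 6, W)).astype(int), 0, D - 1)
cost = rng.uniform(0.4, 1.0, size=(W, D))
cost[np.arange(W), true_d] = 0.1               # cheap along the true disparity profile
disp = aggregate_path(cost).argmin(axis=1)     # smoothed winner-take-all disparities
print(np.abs(disp - true_d).mean())            # small average error
```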