54 research outputs found

    Facilitating Colorectal Cancer Diagnosis with Computed Tomographic Colonography

    Computed tomographic colonography (CTC) is a diagnostic technique involving helical volume acquisition of the cleansed, distended colorectum to detect colorectal cancer or potentially premalignant polyps. This thesis summarises the evidence base, identifies areas in need of further research, quantifies sources of bias and presents novel techniques to facilitate colorectal cancer diagnosis using CTC. The CTC literature is reviewed to justify the rationale for current implementation and to identify fruitful areas for research. This confirms that excellent diagnostic performance can be attained provided CTC is interpreted by trained, experienced observers employing state-of-the-art implementation. The technique is superior to barium enema and, consequently, has been embraced by radiologists, clinicians and health policy-makers. Factors influencing the generalisability of CTC research are investigated, first with a survey of European educational workshop participants, which revealed limited CTC experience and training, followed by a systematic review exploring bias in studies of diagnostic test accuracy, which established that studies focussing on these aspects were lacking. Experiments to address these sources of bias are presented, using novel methodology: conjoint analysis is used to ascertain patients' and clinicians' attitudes to false-positive screening diagnoses, showing that both groups overwhelmingly value sensitivity over specificity. The results inform a weighted statistical analysis for CAD, which is applied to the results of two previous studies, showing that the incremental benefit is significantly higher for novices than for experienced readers. We have employed eye-tracking technology to establish the visual search patterns of observers reading CTC, demonstrated feasibility and developed metrics for analysis. We also describe the development and validation of computer software to register prone and supine endoluminal surface locations, demonstrating accurate matching of corresponding points when applied to a phantom and to a generalisable, publicly available CTC database. Finally, areas in need of future development are suggested.
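    The abstract does not give the exact form of the weighted statistical analysis, so the following is only a minimal sketch assuming a preference-weighted accuracy in which detections count more than correct rejections, reflecting the reported preference for sensitivity over specificity. The function name, weight value and counts are illustrative, not the thesis's actual statistic.

```python
# Hedged sketch: a preference-weighted accuracy that could be used to re-weight
# CAD reader results when sensitivity is valued over specificity.
# The weight w and the example counts are illustrative assumptions.

def weighted_accuracy(tp, fp, tn, fn, w=4.0):
    """Accuracy in which each true positive (or missed lesion) counts w times
    more than a true negative (or false alarm); w > 1 favours sensitivity."""
    numerator = w * tp + tn
    denominator = w * (tp + fn) + (tn + fp)
    return numerator / denominator

# Example: CAD-assisted versus unassisted reading (invented counts).
print(weighted_accuracy(tp=45, fp=30, tn=170, fn=5))   # CAD-assisted
print(weighted_accuracy(tp=38, fp=12, tn=188, fn=12))  # unassisted
```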

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes, because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real-time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and, similarly, gaps occur where deformation stretches the elements further than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problems with this technique are that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution. This thesis investigates the deformation and rendering of massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real-time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
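    As a minimal sketch of the hierarchical idea described above, the code below stores coarse values per tree node and refines only the branches that intersect a region of interest. The class and function names are illustrative assumptions, not the thesis's actual (GPU-oriented) data structure.

```python
# Hedged sketch: progressive-resolution volume storage in a tree, sampled by ROI.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class OctreeNode:
    origin: tuple          # (x, y, z) lower corner of the node's cubic region
    size: float            # edge length of the region this node covers
    value: float = 0.0     # coarse (averaged) density stored at this level
    children: Optional[List["OctreeNode"]] = None  # 8 children, or None if leaf

def collect_visible(node, roi_min, roi_max, max_depth, depth=0, out=None):
    """Gather nodes intersecting the region of interest, refining only where the
    branch overlaps the ROI and the depth budget allows it."""
    if out is None:
        out = []
    # Axis-aligned overlap test between the node cube and the ROI box.
    for axis in range(3):
        if node.origin[axis] + node.size < roi_min[axis] or node.origin[axis] > roi_max[axis]:
            return out  # no overlap: skip this whole branch
    if node.children is None or depth == max_depth:
        out.append(node)  # coarse sample at the refinement limit or at a leaf
    else:
        for child in node.children:
            collect_visible(child, roi_min, roi_max, max_depth, depth + 1, out)
    return out

# Usage with a single-node tree as a placeholder scene.
root = OctreeNode(origin=(0.0, 0.0, 0.0), size=1.0, value=0.5)
print(collect_visible(root, roi_min=(0.2, 0.2, 0.2), roi_max=(0.4, 0.4, 0.4), max_depth=3))
```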

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, which has found improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, driven by emerging display technologies including augmented reality (AR), a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and the patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. The work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems; a novel AR-centric surgical workflow supporting various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers and calculating a spatial transformation between the surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting. Pre-clinical experimental validation of these contributions on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results, showing successful deployment of AR to CAOS systems and promising indications that AR can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations and future research opportunities.
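    To picture the calibration step, the sketch below shows the standard paired-point (Kabsch/SVD) solution for a rigid transform between an optical-tracker frame and a holographic (headset) frame, given the same fiducials observed in both frames. This is a generic method offered under stated assumptions; the thesis's infrared-based protocol may differ in its details.

```python
# Hedged sketch: paired-point rigid registration between two coordinate frames.
import numpy as np

def rigid_transform(points_tracker, points_headset):
    """Return R (3x3) and t (3,) such that R @ p_tracker + t approximates p_headset."""
    P = np.asarray(points_tracker, dtype=float)
    Q = np.asarray(points_headset, dtype=float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Example: three fiducials seen in both frames (pure translation, invented values).
pts = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 100.0, 0.0]])
R, t = rigid_transform(pts, pts + np.array([5.0, -2.0, 10.0]))
print(np.round(R, 3), np.round(t, 3))
```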

    Unsupervised deep learning of human brain diffusion magnetic resonance imaging tractography data

    Diffusion magnetic resonance imaging is a non-invasive technique providing insights into the organizational microstructure of biological tissues. The computational methods that exploit the orientational preference of diffusion in restricted structures to reveal the brain's white matter axonal pathways are called tractography. In recent years, a variety of tractography methods have been successfully used to uncover the brain's white matter architecture. Yet these reconstruction techniques suffer from a number of shortcomings derived from fundamental ambiguities inherent to the orientation information. This has dramatic consequences, since current tractography-based white matter connectivity maps are dominated by false positive connections. Thus, the large proportion of invalid pathways recovered remains one of the main challenges to be solved by tractography to obtain a reliable anatomical description of the white matter. Innovative methodological approaches are required to help solve these questions. Recent advances in computational power and data availability have made it possible to successfully apply modern machine learning approaches to a variety of problems, including computer vision and image analysis tasks. These methods model and learn the underlying patterns in the data, and allow accurate predictions to be made on new data. Similarly, they can yield compact representations of the intrinsic features of the data of interest. Modern data-driven approaches, grouped under the family of deep learning methods, are being adopted to solve medical imaging data analysis tasks, including tractography. In this context, the proposed methods are less dependent on the constraints imposed by current tractography approaches. Hence, deep learning-inspired methods are suited to the required paradigm shift, may open new modeling possibilities, and may thus improve the state of the art in tractography. In this thesis, a new paradigm based on representation learning techniques is proposed to generate and to analyze tractography data. By harnessing autoencoder architectures, this work explores their ability to find an optimal code to represent the features of white matter fiber pathways. The contributions exploit such representations for a variety of tractography-related tasks, including efficient (i) filtering and (ii) clustering of results generated by other methods, and (iii) white matter pathway reconstruction itself using a generative method. The methods arising from this thesis have been named (i) FINTA (Filtering in Tractography using Autoencoders), (ii) CINTA (Clustering in Tractography using Autoencoders), and (iii) GESTA (Generative Sampling in Bundle Tractography using Autoencoders), respectively. The proposed methods' performance is assessed against current state-of-the-art methods on synthetic diffusion data and on in vivo healthy adult human brain data. Results show that (i) the introduced filtering method offers superior sensitivity and specificity compared with other state-of-the-art methods; (ii) the clustering method groups streamlines into anatomically coherent bundles with a high degree of consistency; and (iii) the generative streamline sampling technique successfully improves white matter coverage in hard-to-track bundles. In summary, this thesis unlocks the potential of deep autoencoder-based models for white matter data analysis, and paves the way towards delivering more reliable tractography data.
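    To illustrate the representation-learning idea shared by FINTA, CINTA and GESTA, the sketch below is a minimal autoencoder over streamlines resampled to a fixed number of points, with latent-space distance to a reference set shown as one possible filtering signal. The architecture, dimensions and decision rule are illustrative assumptions, not the published designs.

```python
# Hedged sketch: a streamline autoencoder and a latent-distance filtering signal.
import torch
import torch.nn as nn

class StreamlineAE(nn.Module):
    def __init__(self, n_points=128, latent_dim=32):
        super().__init__()
        d = n_points * 3  # each streamline resampled to n_points 3D coordinates
        self.encoder = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, d))

    def forward(self, x):                   # x: (batch, n_points, 3)
        z = self.encoder(x.flatten(1))      # latent code per streamline
        recon = self.decoder(z).view_as(x)  # reconstructed streamline
        return recon, z

# One possible decision rule: flag candidates whose latent codes lie far from
# those of reference (plausible) streamlines. Data below are random placeholders.
model = StreamlineAE()
candidates = torch.randn(8, 128, 3)
reference = torch.randn(100, 128, 3)
with torch.no_grad():
    _, z_cand = model(candidates)
    _, z_ref = model(reference)
    nearest = torch.cdist(z_cand, z_ref).min(dim=1).values  # distance to closest reference
print(nearest)
```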

    Enhanced computer assisted detection of polyps in CT colonography

    This thesis presents a novel technique for automatically detecting colorectal polyps in computed tomography colonography (CTC). The objective of the documented computer assisted diagnosis (CAD) technique is to deal with the issue of false positive detections without adversely affecting polyp detection sensitivity. The thesis begins with an overview of CTC and a review of the associated research areas, with particular attention given to CAD-CTC. This review identifies excessive false positive detections as a common problem associated with current CAD-CTC techniques. Addressing this problem constitutes the major contribution of this thesis. The documented CAD-CTC technique is trained with, and evaluated using, a series of clinical CTC data sets. These data sets contain polyps with a range of different sizes and morphologies. The results presented in this thesis indicate the validity of the developed CAD-CTC technique and demonstrate its effectiveness in accurately detecting colorectal polyps while significantly reducing the number of false positive detections.
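    As a rough illustration of the trade-off the abstract refers to, the sketch below computes per-polyp sensitivity against the mean number of false positive detections per data set. The function name and counts are invented for illustration and are not results from the thesis.

```python
# Hedged sketch: the two quantities a CAD-CTC evaluation typically balances.
def cad_summary(detections_per_scan, polyps_per_scan, detected_polyps_per_scan):
    """detections_per_scan: CAD candidate counts for each data set;
    polyps_per_scan / detected_polyps_per_scan: ground-truth and correctly hit polyp counts."""
    total_polyps = sum(polyps_per_scan)
    total_hits = sum(detected_polyps_per_scan)
    false_positives = [d - h for d, h in zip(detections_per_scan, detected_polyps_per_scan)]
    sensitivity = total_hits / total_polyps
    fp_per_scan = sum(false_positives) / len(detections_per_scan)
    return sensitivity, fp_per_scan

# Illustrative counts for three data sets.
print(cad_summary([12, 9, 15], [2, 1, 3], [2, 1, 2]))
```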

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big-data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Image guided robotic assistance for the diagnosis and treatment of tumor

    The aim of this thesis is to demonstrate the feasibility and potential of introducing robotics and image guidance into the overall oncologic workflow, from the diagnosis to the treatment phase. The popularity of robotics in the operating room has grown in recent years. Currently the most popular system is the da Vinci telemanipulator (Intuitive Surgical), which is based on master-slave control for minimally invasive surgery and is used in several surgical fields such as urology, general surgery, gynecology and cardiothoracic surgery. An accurate study of this system, from a technological point of view, has been conducted, addressing its drawbacks and advantages. The da Vinci System creates an immersive operating environment for the surgeon by providing both high quality stereo visualization and a human-machine interface that directly connects the surgeon's hands to the motion of the surgical tool tips inside the patient's body. It has undoubted advantages for the surgeon's work and for the patient's health, at least for some interventions, while its very high cost leaves many doubts about its cost-benefit ratio. In the robotic surgery field many researchers are working on the optimization and miniaturization of robot mechanics, while others are trying to develop smart functionalities so that robotic systems, "knowing" the patient's anatomy from radiological images, can assist the surgeon in an active way. Regarding the second point, image guided systems can be useful to plan and control the motion of medical robots and to provide the surgeon with pre-operative and intra-operative images, with augmented reality visualization, to enhance his/her perceptual capacities and, as a consequence, to improve the quality of treatments. To demonstrate this thesis, several prototypes have been designed, implemented and tested. The development of image guided medical devices comprising augmented reality, virtual navigation and robotic surgical features requires several problems to be addressed. The first is the choice of the robotic platform and of the image source to employ. An industrial anthropomorphic arm has been used as the testing platform; the idea of integrating industrial robot components into the clinical workflow is supported by the da Vinci technical analysis. The algorithms and methods developed, in particular for robot calibration, are based on established theory and on easy integration into the clinical scenario, and can be adapted to any anthropomorphic arm. In this way the work can be integrated with lightweight robots, for industrial or clinical use, able to work in close contact with humans, which will become widespread in the near future. Regarding the medical image source, it was decided to work with ultrasound imaging. Two-dimensional ultrasound imaging is widely used in clinical practice because it is not dangerous for the patient, is inexpensive and compact, and is a highly flexible modality that allows users to study many anatomic structures. It is routinely used for diagnosis and as guidance in percutaneous treatments. However, the use of 2D ultrasound imaging has some disadvantages that demand great skill from the user: the clinician must mentally integrate many images to reconstruct a complete idea of the anatomy in 3D. Furthermore, freehand control of the probe makes it difficult to identify anatomic positions and orientations and to reposition the probe to reach a particular location.
To overcome these problems, an image guided system has been developed that fuses real-time 2D US images with routine 3D CT or MRI images previously acquired from the patient, to enhance clinician orientation and probe guidance. The implemented algorithms for robot calibration and US image guidance have been used to realize two applications responding to specific clinical needs: the first to speed up the execution of routine and frequently performed procedures such as percutaneous biopsy or ablation, the second to improve a completely non-invasive new type of treatment for solid tumors, HIFU (High Intensity Focused Ultrasound). An ultrasound guided robotic system has been developed to assist the clinician in executing complicated biopsies, or percutaneous ablations, particularly in deep abdominal organs. This integrated system provides the clinician with two types of assistance: a mixed reality visualization allows accurate and easy planning of the needle trajectory and verification that the target has been reached, while the robot arm, equipped with a six-degree-of-freedom force sensor, allows precise positioning of the needle holder and lets the clinician adjust the planned trajectory, by means of cooperative control, to compensate for needle deflection and target motion. The second application is an augmented reality navigation system for HIFU treatment. HIFU is a completely non-invasive method for the treatment of solid tumors, hemostasis and other vascular conditions in human tissues. The technology for HIFU treatments is still evolving, and the systems available on the market have some limitations and drawbacks. A disadvantage apparent from our experience with the machinery available in our hospital (JC200 therapeutic system Haifu (HIFU) by Tech Co., Ltd, Chongqing), which is similar to other analogous machines, is the long time required to perform the procedure, owing to the difficulty of finding the target using the remote motion of an ultrasound probe under the patient. This problem has been addressed by developing an augmented reality navigation system that enhances US guidance during HIFU treatments and allows easy target localization. The system was implemented using an additional freehand ultrasound probe coupled with a localizer and fused CT imaging, offering a simple and economical solution for easy HIFU target localization. This thesis demonstrates the utility and usability of robots for the diagnosis and treatment of tumors; in particular, the combination of automatic positioning and cooperative control allows the surgeon and the robot to work in synergy. Furthermore, the work demonstrates the feasibility and potential of a mixed reality navigation system to facilitate target localization and consequently to reduce session times, increase the number of possible diagnoses/treatments and decrease the risk of errors. The proposed solutions for integrating robotics and image guidance into the overall oncologic workflow take into account currently available technologies, traditional clinical procedures and cost minimization.
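    To make the 2D US / 3D CT fusion step concrete, the sketch below chains homogeneous transforms to map an ultrasound pixel into CT coordinates via a probe calibration, a tracking pose and an image-to-patient registration. All matrices, names and the pixel values are illustrative placeholders under stated assumptions, not values or interfaces from the thesis.

```python
# Hedged sketch: mapping a tracked 2D ultrasound pixel into CT space
# via the chain image -> probe -> tracker/world -> CT.
import numpy as np

def us_pixel_to_ct(pixel_uv, pixel_spacing_mm, T_probe_image, T_world_probe, T_ct_world):
    """Map a 2D ultrasound pixel (u, v) into CT coordinates using 4x4 homogeneous transforms."""
    u, v = pixel_uv
    # Pixel indices scaled to millimetres in the image plane (z = 0), homogeneous form.
    p_image = np.array([u * pixel_spacing_mm[0], v * pixel_spacing_mm[1], 0.0, 1.0])
    p_ct = T_ct_world @ T_world_probe @ T_probe_image @ p_image
    return p_ct[:3]

# Usage with identity transforms as placeholders for calibration, tracking and registration.
I = np.eye(4)
print(us_pixel_to_ct((128, 64), (0.2, 0.2), I, I, I))
```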

    Optimising mobile laser scanning for underground mines

    Despite several technological advancements, underground mines still rely largely on visual inspections or discretely placed direct-contact measurement sensors for routine monitoring. Such approaches are manual and often yield inconclusive, unreliable and unscalable results, besides exposing mine personnel to field hazards. Mobile laser scanning (MLS) promises an automated approach that can generate comprehensive information by accurately capturing large-scale 3D data. Currently, the application of MLS has remained relatively limited in mining due to challenges in the post-registration of scans and the unavailability of suitable processing algorithms to provide a fully automated mapping solution. Additionally, constraints such as the absence of a spatial positioning network and the scarcity of distinguishable features in underground mining spaces pose challenges for mobile mapping. This thesis aims to address these challenges in mine inspections by optimising different aspects of MLS: (1) collection of large-scale registered point cloud scans of underground environments, (2) geological mapping of structural discontinuities, and (3) inspection of structural support features. Firstly, a spatial positioning network was designed using novel three-dimensional unique identifier (3DUID) tags and a 3D registration workflow (3DReG) to accurately obtain georeferenced and co-registered point cloud scans, enabling multi-temporal mapping. Secondly, two fully automated methods were developed for mapping structural discontinuities from point cloud scans: clustering on local point descriptors (CLPD) and amplitude and phase decomposition (APD). These methods were tested on both surface and underground rock masses for discontinuity characterisation and kinematic analysis of failure types. The developed algorithms significantly outperformed existing approaches, including the conventional method of compass and tape measurements. Finally, different machine learning approaches were used to automate the recognition of structural support features, i.e. roof bolts, from point clouds in a computationally efficient manner. Mapping roof bolts from a scanned point cloud provided insight into their installation pattern, underpinning the applicability of laser scanning for rapid inspection of roof supports. Overall, the outcomes of this study reduce human involvement in field assessments of underground mines using MLS, demonstrating its potential for routine multi-temporal monitoring.
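    As a simplified illustration of clustering on local point descriptors for discontinuity mapping, the sketch below estimates a per-point normal with a local PCA and groups the normals into candidate joint sets. It stands in for the general idea only, under stated assumptions; the thesis's CLPD and APD algorithms are more involved.

```python
# Hedged sketch: normal estimation by local PCA, then clustering into joint sets.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

def estimate_normals(points, k=20):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs] - points[nbrs].mean(axis=0)
        # The right singular vector with the smallest singular value of the
        # centred neighbourhood approximates the local surface normal.
        _, _, vt = np.linalg.svd(nbr_pts, full_matrices=False)
        n = vt[-1]
        normals[i] = n if n[2] >= 0 else -n  # orient to a consistent hemisphere
    return normals

def cluster_joint_sets(points, n_sets=3, k=20):
    """Label each point by the orientation cluster (candidate joint set) of its normal."""
    normals = estimate_normals(points, k=k)
    return KMeans(n_clusters=n_sets, n_init=10).fit_predict(normals)

# Usage on random placeholder points; a real run would use a registered MLS scan.
labels = cluster_joint_sets(np.random.rand(500, 3), n_sets=3)
print(labels[:20])
```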
    • 
