
    LV volume quantification via spatiotemporal analysis of real-time 3-D echocardiography

    Abstract—This paper presents a method of four-dimensional (4-D) (3-D + Time) space–frequency analysis for directional denoising and enhancement of real-time three-dimensional (RT3D) ultrasound and quantitative measures in diagnostic cardiac ultrasound. Expansion of echocardiographic volumes is performed with complex exponential wavelet-like basis functions called brushlets. These functions offer good localization in time and frequency and decompose a signal into distinct patterns of oriented harmonics, which are invariant to intensity and contrast range. Deformable-model segmentation is carried out on denoised data after thresholding of transform coefficients. This process attenuates speckle noise while preserving cardiac structure location. The superiority of 4-D over 3-D analysis for decorrelating additive white noise and multiplicative speckle noise on a 4-D phantom volume expanding in time is demonstrated. Quantitative validation, computed for contours and volumes, is performed on in vitro balloon phantoms. Clinical applications of this spatiotemporal analysis tool are reported for six patient cases, providing measures of left ventricular volumes and ejection fraction. Index Terms—Echocardiography, LV volume, spatiotemporal analysis, speckle denoising.
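Brushlet expansions are not available in standard libraries, but the core of the denoising step the abstract describes — expand the data in a transform basis, threshold small coefficients, invert — can be sketched with a plain 1-D FFT standing in for the 4-D brushlet expansion. The function name, keep-fraction, and test signal below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def denoise_threshold(signal, keep_fraction=0.05):
    """Hard-threshold small transform coefficients, keeping the strongest.

    Stand-in for brushlet-coefficient thresholding: the transform here is
    a plain FFT rather than a 4-D brushlet expansion.
    """
    coeffs = np.fft.fft(signal)
    mags = np.abs(coeffs)
    # Zero everything below the (1 - keep_fraction) magnitude quantile.
    cutoff = np.quantile(mags, 1.0 - keep_fraction)
    coeffs[mags < cutoff] = 0.0
    return np.real(np.fft.ifft(coeffs))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)            # structure to preserve
noisy = clean + 0.5 * rng.standard_normal(t.size)
denoised = denoise_threshold(noisy)

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

Because the signal concentrates its energy in a few strong coefficients while the noise spreads across all of them, hard thresholding removes most of the noise energy while leaving the structure intact.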

    Automated analysis of 3D echocardiography

    In this thesis we aim to automate the analysis of 3D echocardiography, mainly targeting the functional analysis of the left ventricle. Manual analysis of these data is cumbersome, time-consuming, and associated with inter-observer and inter-institutional variability. Methods are presented for the reconstruction of 3D echocardiographic images from fast-rotating ultrasound transducers, and for the analysis of 3D echocardiography in general, using tracking, detection, and model-based segmentation techniques to ultimately segment the left ventricle fully automatically for functional analysis. We show that reliable quantification of left ventricular volume and mitral valve displacement can be achieved using the presented techniques. Funding: SenterNovem (IOP Beeldverwerking, grant IBVC02003), Dutch Technology Foundation STW (grant 06666).
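Once a left-ventricle segmentation exists, the functional measures the thesis targets follow from simple volumetry: chamber volume is the voxel count scaled by voxel size, and ejection fraction compares end-diastolic and end-systolic volumes. A minimal sketch, in which the function names, voxel spacing, and spherical toy masks are all assumptions rather than the thesis's pipeline:

```python
import numpy as np

def lv_volume_ml(mask, voxel_size_mm=(0.5, 0.5, 0.5)):
    """Volume of a binary LV segmentation mask in millilitres (1 ml = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(voxel_size_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0

def ejection_fraction(edv_ml, esv_ml):
    """EF = (EDV - ESV) / EDV, as a percentage."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Toy spherical masks standing in for end-diastolic / end-systolic segmentations.
grid = np.indices((80, 80, 80)).astype(float) - 40.0
r = np.sqrt((grid ** 2).sum(axis=0))
ed_mask = r < 30.0   # larger chamber at end-diastole
es_mask = r < 24.0   # smaller chamber at end-systole

edv = lv_volume_ml(ed_mask)
esv = lv_volume_ml(es_mask)
ef = ejection_fraction(edv, esv)
```

For the toy spheres above, EF comes out near 49%, matching the analytic ratio 1 - (24/30)^3 up to voxel discretization.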

    Augmented Image-Guidance for Transcatheter Aortic Valve Implantation

    The introduction of transcatheter aortic valve implantation (TAVI), an innovative stent-based technique for delivery of a bioprosthetic valve, has resulted in a paradigm shift in treatment options for elderly patients with aortic stenosis. While there have been major advancements in valve design and access routes, TAVI still relies largely on single-plane fluoroscopy for intraoperative navigation and guidance, which provides only gross imaging of anatomical structures. Inadequate imaging leading to suboptimal valve positioning contributes to many of the early complications experienced by TAVI patients, including valve embolism, coronary ostia obstruction, paravalvular leak, heart block, and secondary nephrotoxicity from contrast use. A potential method of providing improved image-guidance for TAVI is to combine the information derived from intra-operative fluoroscopy and TEE with pre-operative CT data. This would allow the 3D anatomy of the aortic root to be visualized along with real-time information about valve and prosthesis motion. The combined information can be visualized as a 'merged' image where the different imaging modalities are overlaid upon each other, or as an 'augmented' image, where the location of key target features identified on one image are displayed on a different imaging modality. This research develops image registration techniques to bring fluoroscopy, TEE, and CT models into a common coordinate frame with an image processing workflow that is compatible with the TAVI procedure. The techniques are designed to be fast enough to allow for real-time image fusion and visualization during the procedure, with an intra-procedural set-up requiring only a few minutes. TEE to fluoroscopy registration was achieved using a single-perspective TEE probe pose estimation technique. The alignment of CT and TEE images was achieved using custom-designed algorithms to extract aortic root contours from XPlane TEE images, and matching the shape of these contours to a CT-derived surface model. Registration accuracy was assessed on porcine and human images by identifying targets (such as guidewires or coronary ostia) on the different imaging modalities and measuring the correspondence of these targets after registration. The merged images demonstrated good visual alignment of aortic root structures, and quantitative assessment measured an accuracy of less than 1.5 mm error for TEE-fluoroscopy registration and less than 6 mm error for CT-TEE registration. These results suggest that the image processing techniques presented have potential for development into a clinical tool to guide TAVI. Such a tool could potentially reduce TAVI complications, reducing morbidity and mortality and allowing for a safer procedure.
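The thesis's contour-to-surface matching and pose estimation are custom algorithms, but the underlying operation — estimating a rigid transform between two coordinate frames and then measuring target registration error on held-out landmarks — can be sketched with the standard Kabsch/Procrustes SVD solution, assuming paired landmark points. All names and the synthetic "TEE"/"CT" point sets are illustrative assumptions:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                 # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def target_registration_error(R, t, src_targets, dst_targets):
    """Mean distance between mapped targets and their true positions (mm)."""
    mapped = src_targets @ R.T + t
    return float(np.mean(np.linalg.norm(mapped - dst_targets, axis=1)))

rng = np.random.default_rng(1)
pts = rng.uniform(-20.0, 20.0, size=(12, 3))   # landmarks in "TEE" space, mm
angle = np.deg2rad(25.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 10.0])
pts_ct = pts @ R_true.T + t_true               # same landmarks in "CT" space

R, t = rigid_register(pts, pts_ct)
tre = target_registration_error(R, t, pts, pts_ct)
```

With noise-free correspondences the recovered transform is exact; in practice, landmark localization error on each modality is what drives the 1.5 mm and 6 mm figures reported above.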

    Three-dimensional ultrasound scanning

    The past two decades have witnessed developments of new imaging techniques that provide three-dimensional images about the interior of the human body in a manner never before available. Ultrasound (US) imaging is an important cost-effective technique used routinely in the management of a number of diseases. However, two-dimensional viewing of three-dimensional anatomy, using conventional two-dimensional US, limits our ability to quantify and visualize the anatomy and guide therapy, because multiple two-dimensional images must be integrated mentally. This practice is inefficient, and may lead to variability and incorrect diagnoses. Investigators and companies have addressed these limitations by developing three-dimensional US techniques. Thus, in this paper, we review the various techniques that are in current use in three-dimensional US imaging systems, with a particular emphasis placed on the geometric accuracy of the generation of three-dimensional images. The principles involved in three-dimensional US imaging are then illustrated with a diagnostic and an interventional application: (i) three-dimensional carotid US imaging for quantification and monitoring of carotid atherosclerosis and (ii) three-dimensional US-guided prostate biopsy.
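Geometric accuracy in tracked (freehand) 3-D US reconstruction comes down to a chain of homogeneous transforms: each 2-D image pixel is scaled into millimetres, mapped into the probe frame by a calibration transform, and then into the tracker (world) frame by the probe pose. A minimal sketch of that chain; the matrix names, spacing, and identity calibration are assumptions for illustration:

```python
import numpy as np

def pixel_to_world(u, v, pixel_mm, T_tracker_probe, T_probe_image):
    """Map a 2-D ultrasound pixel (u, v) into tracker (world) coordinates.

    image frame -> probe frame (calibration) -> tracker frame (pose).
    """
    p_image = np.array([u * pixel_mm, v * pixel_mm, 0.0, 1.0])  # homogeneous, mm
    return (T_tracker_probe @ T_probe_image @ p_image)[:3]

# Identity calibration and a probe translated 10 mm along x, for illustration.
T_probe_image = np.eye(4)
T_tracker_probe = np.eye(4)
T_tracker_probe[:3, 3] = [10.0, 0.0, 0.0]

p = pixel_to_world(100, 50, pixel_mm=0.2,
                   T_tracker_probe=T_tracker_probe,
                   T_probe_image=T_probe_image)
```

Errors in any link of this chain (pixel spacing, probe calibration, or tracking) propagate directly into the reconstructed volume, which is why the review emphasizes geometric accuracy.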

    Towards the development of safe, collaborative robotic freehand ultrasound

    The use of robotics in medicine is of growing importance for modern health services, as robotic systems have the capacity to improve upon human tasks, thereby enhancing the treatment ability of a healthcare provider. In the medical sector, ultrasound imaging is inexpensive compared to MRI and free of the high radiation exposure associated with CT. Over the past two decades, considerable effort has been invested into freehand ultrasound robotics research and development. However, this research has focused on the feasibility of the application, not the robotic fundamentals, such as motion control, calibration, and contextual awareness. Instead, much of the work is concentrated on custom-designed robots, ultrasound image generation and visual servoing, or teleoperation. Research on these topics often suffers from important limitations that impede its use in an adaptable, scalable, and real-world manner. In particular, while custom robots may be designed for a specific application, commercial collaborative robots are a more robust and economical solution. Likewise, various robotic ultrasound studies have shown the feasibility of using basic force control, but rarely explore controller tuning in the context of patient safety and deformable skin in an unstructured environment. Moreover, many studies evaluate novel visual servoing approaches, but do not consider the practicality of relying on external measurement devices for motion control. These studies neglect the importance of robot accuracy and calibration, which allow a system to safely navigate its environment while reducing the imaging errors associated with positioning. Hence, while the feasibility of robotic ultrasound has been the focal point in previous studies, there is a lack of attention to what occurs between system design and image output.
    This thesis addresses limitations of the current literature through three distinct contributions. Given the force-controlled nature of an ultrasound robot, the first contribution presents a closed-loop calibration approach using impedance control and low-cost equipment. Accuracy is a fundamental requirement for high-quality ultrasound image generation and targeting, especially when following a specified path along a patient or synthesizing 2D slices into a 3D ultrasound image. However, even though most industrial robots are inherently precise, they are not necessarily accurate. While robot calibration itself has been extensively studied, many of the approaches rely on expensive and highly delicate equipment. Experimental testing, validated with a laser tracker, showed that this method is comparable in quality to traditional laser-tracker calibration: the absolute accuracy of a collaborative robot was improved to a maximum error of 0.990 mm, a 58.4% improvement over the nominal model. The second contribution explores collisions and contact events, as they are a natural by-product of applications involving physical human-robot interaction (pHRI) in unstructured environments. Robot-assisted medical ultrasound is an example of a task where simply stopping the robot upon contact detection may not be an appropriate reaction strategy. Thus, the robot should have an awareness of body contact location to properly plan force-controlled trajectories along the human body using the imaging probe. This is especially true for remote ultrasound systems, where safety and manipulability are important elements to consider when operating a remote medical system through a communication network. A framework is proposed for robot contact classification using the built-in sensor data of a collaborative robot. Unlike previous studies, this classification does not discern between intended and unintended contact scenarios, but rather classifies what was involved in the contact event. The classifier can discern different ISO/TS 15066:2016-specific body areas along a human-model leg with 89.37% accuracy. Altogether, this contact distinction framework allows for more complex reaction strategies and tailored robot behaviour during pHRI. Lastly, given that the success of an ultrasound task depends on the capability of the robot system to handle pHRI, pure motion control is insufficient. Force control techniques are necessary to achieve effective and adaptable behaviour of a robotic system in the unstructured ultrasound environment while also ensuring safe pHRI. While force control does not require explicit knowledge of the environment, the control parameters must be tuned to achieve an acceptable dynamic behaviour. The third contribution proposes a simple and effective online tuning framework for force-based robotic freehand ultrasound motion control. Within the context of medical ultrasound, different human body locations have different stiffnesses and require unique tunings. Through real-world experiments with a collaborative robot, the framework tuned motion control for optimal and safe trajectories along a human leg phantom. The optimization process successfully reduced the mean absolute error (MAE) of the motion contact force to 0.537 N through the evolution of eight motion control parameters. Furthermore, contextual awareness through motion classification can offer a framework for pHRI optimization and safety through predictive motion behaviour, with a future goal of autonomous pHRI. As such, a classification pipeline, trained using the tuning-process motion data, was able to reliably classify the future force tracking quality of a motion session with an accuracy of 91.82%.
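Why gain tuning changes the contact-force MAE can be seen in a toy 1-D simulation: a PI controller commands probe velocity from force error while pressing into a linear spring standing in for tissue. Everything here — gains, stiffness, reference force, the controller structure itself — is an assumption for illustration, not the thesis's eight-parameter controller:

```python
import numpy as np

def simulate_force_control(kp, ki, k_env=2000.0, f_ref=5.0, dt=0.001, steps=2000):
    """Simulate a 1-D PI force controller pressing a probe into a spring.

    k_env is the environment stiffness (N/m); the controller maps force
    error to a probe velocity command. Returns the MAE of contact force.
    """
    x = 0.0            # probe penetration depth, m
    integral = 0.0
    forces = []
    for _ in range(steps):
        f = k_env * x                          # contact force from spring model
        err = f_ref - f
        integral += err * dt
        x += (kp * err + ki * integral) * dt   # velocity command, integrated
        forces.append(f)
    forces = np.array(forces)
    return float(np.mean(np.abs(forces - f_ref)))

mae_soft = simulate_force_control(kp=0.0005, ki=0.0)    # sluggish tuning
mae_tuned = simulate_force_control(kp=0.005, ki=0.01)   # faster tuning
```

The faster tuning reaches the reference force sooner and so accumulates far less absolute force error over the session, which is exactly the quantity the thesis's optimizer drives down; on a real, nonlinear body, gains this aggressive would also have to respect safety limits.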