
    Enhanced ultrasound for advanced diagnostics, ultrasound tomography for volume limb imaging and prosthetic fitting

    Ultrasound imaging methods hold the potential to deliver low-cost, high-resolution, operator-independent and nonionizing imaging systems; such systems couple appropriate algorithms with imaging devices and techniques. The increasing demands on general practitioners motivate us to develop more usable and productive diagnostic imaging equipment. Ultrasound, specifically freehand ultrasound, is a low-cost and safe medical imaging technique that does not expose the patient to ionizing radiation. Its safety and versatility make it well suited to the increasing demands on general practitioners, and to providing improved medical care in rural regions or the developing world. However, it typically suffers from sonographer variability; we discuss techniques to address this user variability. We also discuss our work combining cylindrical scanning systems with state-of-the-art inversion algorithms to deliver ultrasound systems for imaging and quantifying limbs in 3-D in vivo. Such systems have the potential to track the progression of limb health at low cost and without radiation exposure, as well as to improve prosthetic socket fitting. Current methods of prosthetic socket fabrication remain subjective and ineffective at creating an interface to the human body that is both comfortable and functional. Though there has been recent success using methods such as magnetic resonance imaging and biomechanical modeling, a low-cost, streamlined, and quantitative process for prosthetic socket design and fabrication has not been fully demonstrated. Medical ultrasonography may inform the design process of prosthetic sockets in a more objective manner. This keynote talk presents the results of progress in this area. Keywords: Clinical ultrasound, Force control, 3-D ultrasound, Tomography
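
    The abstract does not specify which inversion algorithms are used. As a rough illustration of the class of problem, the sketch below applies a relaxed Kaczmarz (ART) update to a linearized travel-time model A x = b, where each row of A holds the path lengths of one transmitter-receiver ray through the pixels of a cross-sectional slowness grid; the matrix construction, names, and parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

def kaczmarz_art(A, b, n_sweeps=50, relax=0.1):
    """Relaxed Kaczmarz/ART solver for a linearized travel-time
    tomography system A @ x = b (illustrative sketch only).

    A : (n_rays, n_pixels) ray path-length matrix
    b : (n_rays,) measured travel-time perturbations
    Returns the reconstructed slowness perturbation per pixel.
    """
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue  # ray misses the grid entirely
            residual = b[i] - A[i] @ x
            x = x + relax * (residual / row_norms[i]) * A[i]
    return x
```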

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own control, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care due to sub-millimeter robot control, real-time use of biosignals for interventional care, improved surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also offer new capabilities for interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Towards the development of safe, collaborative robotic freehand ultrasound

    The use of robotics in medicine is of growing importance for modern health services, as robotic systems have the capacity to improve upon human tasks, thereby enhancing the treatment ability of a healthcare provider. In the medical sector, ultrasound imaging is inexpensive compared to modalities such as MRI and, unlike CT, involves no ionizing radiation. Over the past two decades, considerable effort has been invested into freehand ultrasound robotics research and development. However, this research has focused on the feasibility of the application rather than on robotic fundamentals such as motion control, calibration, and contextual awareness. Instead, much of the work concentrates on custom-designed robots, ultrasound image generation and visual servoing, or teleoperation. Research on these topics often suffers from limitations that impede its use in an adaptable, scalable, and real-world manner. In particular, while custom robots may be designed for a specific application, commercial collaborative robots are a more robust and economical solution. Various robotic ultrasound studies have shown the feasibility of basic force control, but rarely explore controller tuning in the context of patient safety and deformable skin in an unstructured environment. Moreover, many studies evaluate novel visual servoing approaches without considering the practicality of relying on external measurement devices for motion control. These studies neglect the importance of robot accuracy and calibration, which allow a system to safely navigate its environment while reducing the imaging errors associated with positioning. Hence, while the feasibility of robotic ultrasound has been the focal point of previous studies, there is a lack of attention to what occurs between system design and image output. This thesis addresses limitations of the current literature through three distinct contributions.

    Given the force-controlled nature of an ultrasound robot, the first contribution presents a closed-loop calibration approach using impedance control and low-cost equipment. Accuracy is a fundamental requirement for high-quality ultrasound image generation and targeting, especially when following a specified path along a patient or synthesizing 2D slices into a 3D ultrasound image. However, even though most industrial robots are inherently precise, they are not necessarily accurate. While robot calibration itself has been extensively studied, many approaches rely on expensive and highly delicate equipment. Experimental testing, validated with a laser tracker, showed that the proposed method is comparable in quality to traditional calibration: the absolute accuracy of a collaborative robot was improved to a maximum error of 0.990 mm, a 58.4% improvement over the nominal model.

    The second contribution explores collisions and contact events, which are a natural by-product of applications involving physical human-robot interaction (pHRI) in unstructured environments. Robot-assisted medical ultrasound is an example of a task where simply stopping the robot upon contact detection may not be an appropriate reaction strategy. The robot should instead be aware of the contact location on the body in order to properly plan force-controlled trajectories along the human body with the imaging probe. This is especially true for remote ultrasound systems, where safety and manipulability are important considerations when operating a remote medical system through a communication network. A framework is proposed for robot contact classification using the built-in sensor data of a collaborative robot. Unlike previous studies, this classification does not discern between intended and unintended contact scenarios, but rather classifies what was involved in the contact event. The classifier can discern different ISO/TS 15066:2016-specific body areas along a human-model leg with 89.37% accuracy. Altogether, this contact distinction framework allows for more complex reaction strategies and tailored robot behaviour during pHRI.

    Lastly, given that the success of an ultrasound task depends on the capability of the robot system to handle pHRI, pure motion control is insufficient; force control techniques are necessary to achieve effective and adaptable behaviour of a robotic system in the unstructured ultrasound environment while also ensuring safe pHRI. Although force control does not require explicit knowledge of the environment, the control parameters must be tuned to achieve acceptable dynamic behaviour. The third contribution proposes a simple and effective online tuning framework for force-based robotic freehand ultrasound motion control. Within the context of medical ultrasound, different locations on the human body have different stiffnesses and require unique tunings. Through real-world experiments with a collaborative robot, the framework tuned motion control for optimal and safe trajectories along a human leg phantom. The optimization process successfully reduced the mean absolute error (MAE) of the motion contact force to 0.537 N through the evolution of eight motion control parameters. Furthermore, contextual awareness through motion classification can provide a framework for pHRI optimization and safety through predictive motion behaviour, with a future goal of autonomous pHRI. As such, a classification pipeline, trained using the motion data from the tuning process, was able to reliably classify the future force-tracking quality of a motion session with an accuracy of 91.82%.
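
    The thesis text above does not include the controller equations or the tuning algorithm itself; the sketch below is a toy 1-D stand-in showing the kind of cost such an online tuning loop could minimize: a PD-style admittance law regulates contact force against a hypothetical spring-like surface, and the mean absolute force error (MAE) is returned as the tuning objective. The gains, the surface model, and the naive grid search are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def simulate_force_tracking(kp, kd, f_des=5.0, k_env=800.0,
                            dt=0.002, duration=5.0):
    """Toy 1-D admittance-style force regulation against a spring-like
    surface (hypothetical model). Returns the mean absolute force
    error (MAE), the kind of cost an online tuning loop could minimize.
    """
    x = 0.0                 # probe position (m)
    x_surf = 0.002          # assumed surface location (m)
    prev_err = 0.0
    abs_errs = []
    for _ in range(int(duration / dt)):
        f_meas = k_env * max(0.0, x - x_surf)   # spring contact model
        err = f_des - f_meas
        # PD-style admittance: map force error to a velocity command
        v = kp * err + kd * (err - prev_err) / dt
        prev_err = err
        x += v * dt
        abs_errs.append(abs(err))
    return float(np.mean(abs_errs))

# A naive grid search standing in for the online tuning process:
best = min((simulate_force_tracking(kp, kd), kp, kd)
           for kp in (1e-4, 5e-4, 1e-3)
           for kd in (0.0, 1e-5, 5e-5))
print("best MAE (N), kp, kd:", best)
```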

    An Inertial-Optical Tracking System for Quantitative, Freehand, 3D Ultrasound

    Three-dimensional (3D) ultrasound has become an increasingly popular medical imaging tool over the last decade. It offers significant advantages over two-dimensional (2D) ultrasound, such as improved accuracy, the ability to display image planes that are physically impossible with 2D ultrasound, and reduced dependence on the skill of the sonographer. Among 3D medical imaging techniques, ultrasound is the only one portable enough to be used by first responders, on the battlefield, and in rural areas. There are three basic methods of acquiring 3D ultrasound images. In the first, a 2D array transducer captures a 3D volume directly using electronic beam steering; this method is mainly used for echocardiography. In the second, a linear array transducer is mechanically actuated, giving a slower and less expensive alternative to the 2D array. The third method uses a linear array transducer that is moved by hand and is known as freehand 3D ultrasound. With a 2D array or a mechanically actuated linear array transducer, the position and orientation of each image are known ahead of time; this is not the case for freehand scanning. To reconstruct a 3D volume from a series of 2D ultrasound images, assumptions must be made about the position and orientation of each image, or a mechanism for detecting them must be employed. The most widely used method for freehand 3D imaging relies on the assumption that the probe moves along a straight path with constant orientation and speed, which requires considerable skill on the part of the sonographer. Another technique uses features within the images themselves to estimate each image's relative location; however, such techniques are not well accepted for diagnostic use because they are not always reliable. The final method for acquiring position and orientation information is to use a six degree-of-freedom (6 DoF) tracking system. Commercially available 6 DoF tracking systems use magnetic fields, ultrasonic ranging, or optical tracking to measure the position and orientation of a target. Although accurate, all of these systems have fundamental limitations: they are relatively expensive, and they require sensors or transmitters in fixed locations to provide a fixed frame of reference.

    The goal of the work presented here is to create a probe tracking system for freehand 3D ultrasound that does not rely on any fixed frame of reference. This system tracks the ultrasound probe using only sensors integrated into the probe itself. The advantages of such a system are that it requires no setup before use, it is more portable because no extra equipment is required, it is immune from environmental interference, and it is less expensive than external tracking systems. An ideal tracking system for freehand 3D ultrasound would track all six degrees of freedom; however, current sensor technology limits this system to five. Linear transducer motion along the skin surface is tracked optically, and transducer orientation is tracked using MEMS gyroscopes. An optical tracking system was developed around an optical mouse sensor to provide linear position information by tracking the skin surface. Two versions were evaluated: one included an optical fiber bundle and the other did not. The purpose of the optical fiber is to allow the system to integrate more easily into existing probes by allowing the sensor and electronics to be mounted away from the scanning end of the probe. Each version was optimized to track features on the skin surface while providing adequate depth of field (DOF) to accommodate variation in the height of the skin surface. Orientation information is acquired using a 3-axis MEMS gyroscope. The sensor was thoroughly characterized to quantify performance in terms of accuracy and drift, and these data provided a basis for estimating the achievable 3D reconstruction accuracy of the complete system. Electrical and mechanical components were designed to attach the sensor to the ultrasound probe in such a way as to simulate its being embedded in the probe itself.

    An embedded system was developed to perform the processing necessary to translate the sensor data into probe position and orientation estimates in real time. The system utilizes a MicroBlaze soft-core microprocessor and a set of peripheral devices implemented in a Xilinx Spartan-3E field-programmable gate array. The Xilinx Microkernel real-time operating system performs essential system management tasks and provides a stable software platform for implementing the inertial tracking algorithm. Stradwin 3D ultrasound software was used to provide a user interface and perform the actual 3D volume reconstruction. Stradwin retrieves 2D ultrasound images from the Terason t3000 portable ultrasound system and communicates with the tracking system to gather position and orientation data. The 3D reconstruction is generated and displayed on the screen of the PC in real time. Stradwin also provides essential system features such as storage and retrieval of data, 3D data interaction, reslicing, manual 3D segmentation, and volume calculation for segmented regions.

    The 3D reconstruction performance of the system was evaluated by freehand scanning a cylindrical inclusion in a CIRS model 044 ultrasound phantom. Five different motion profiles were used, and each profile was repeated 10 times. This entire test regimen was performed twice, once with the optical tracking system using the optical fiber bundle and once without it, and 3D reconstructions were performed with and without the position and orientation data to provide a basis for comparison. Volume error and surface error were used as the performance metrics. For the version of the system without the optical fiber bundle, volume error ranged from 1.3% to 5.3% with tracking information versus 15.6% to 21.9% without, and surface error ranged from 0.319 mm RMS to 0.462 mm RMS with tracking information versus 0.678 mm RMS to 1.261 mm RMS without. For the version with the optical fiber bundle, volume error ranged from 3.7% to 7.6% with tracking information versus 8.7% to 13.7% without, and surface error ranged from 0.326 mm RMS to 0.774 mm RMS with tracking information versus 0.538 mm RMS to 1.657 mm RMS without.

    The prototype tracking system successfully demonstrated that accurate 3D ultrasound volumes can be generated from 2D freehand data using only sensors integrated into the ultrasound probe. One serious shortcoming is that the system tracks only five of the six degrees of freedom required for complete 3D reconstruction: the optical system provides information about linear movement, but because it tracks a surface, it cannot measure vertical displacement. Overcoming this limitation is the most obvious candidate for future research using this system. The overall tracking platform developed and integrated in this work, meaning the embedded tracking computer and the PC software, is ready to take advantage of vertical displacement data should a method be developed for sensing it.
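
    As a sketch of how such probe-mounted sensing can be fused into pose estimates, the snippet below integrates 3-axis gyro rates into an orientation and accumulates optical in-plane displacements into a position, deliberately leaving out-of-plane translation untracked to mirror the 5-DoF limitation described above. Sensor frames, units, and sampling are assumptions; this is not the thesis implementation.

```python
import numpy as np

def so3_exp(omega_dt):
    """Rodrigues' formula: rotation matrix for a small rotation vector (rad)."""
    theta = np.linalg.norm(omega_dt)
    if theta < 1e-12:
        return np.eye(3)
    k = omega_dt / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def integrate_pose(gyro_rates, optical_xy, dt):
    """Accumulate probe orientation from 3-axis gyro rates (rad/s, probe
    frame) and in-plane translation from optical-sensor displacements
    (m, probe frame). Out-of-plane translation is not sensed, so only
    five degrees of freedom are recovered. Returns per-sample
    (rotation matrix, position) pairs.
    """
    R = np.eye(3)
    p = np.zeros(3)
    poses = []
    for w, d in zip(gyro_rates, optical_xy):
        R = R @ so3_exp(np.asarray(w) * dt)            # orientation update
        p = p + R @ np.array([d[0], d[1], 0.0])        # in-plane motion only
        poses.append((R.copy(), p.copy()))
    return poses
```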

    3D Quasi-Static Ultrasound Elastography With Plane Wave In Vivo

    In biological tissue, an increase in elasticity is often a marker of abnormalities. Techniques such as quasi-static ultrasound elastography have been developed to assess the strain distribution in soft tissues in two dimensions using a quasi-static compression. However, because abnormalities can exhibit very heterogeneous shapes, a three-dimensional approach is necessary to accurately measure their volume and to remove operator dependency. Acquiring volumes at high rates is also critical to performing real-time imaging with a simple freehand compression. In this study, we developed for the first time a 3D quasi-static ultrasound elastography method with plane waves that estimates the axial strain distribution in vivo in entire volumes at a high volume rate. Acquisitions were performed with a 2.5 MHz, 256-element 2D matrix array probe. Plane waves were emitted at a rate of 100 volumes/s during continuous motorized and freehand compressions. 3D B-mode volumes and 3D cumulative axial strain volumes were successfully estimated in inclusion phantoms and in ex vivo canine liver before and after high-intensity focused ultrasound ablation. We also demonstrated the in vivo feasibility of the method using freehand compression on the calf muscle of a human volunteer and were able to retrieve a 3D axial strain volume at a high volume rate, depicting the difference in stiffness between the two muscles that compose the calf. 3D quasi-static ultrasound elastography with plane waves could become an important technique for imaging the elasticity of the human body in three dimensions using simple freehand scanning.
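
    The paper's 3-D plane-wave strain estimator is not reproduced here; as a minimal 1-D stand-in, the sketch below estimates axial displacement between pre- and post-compression RF lines by windowed cross-correlation and takes the axial gradient of that displacement as strain. Window length, hop, and search range are arbitrary illustrative values.

```python
import numpy as np

def axial_displacement(pre, post, win=64, hop=32, max_lag=8):
    """Windowed 1-D cross-correlation displacement estimate (in samples)
    along a single RF line; a simplified stand-in for a 3-D quasi-static
    elastography pipeline."""
    n_win = (len(pre) - win) // hop
    disp = np.zeros(n_win)
    for i in range(n_win):
        a = pre[i * hop:i * hop + win]
        a = a - a.mean()
        best_lag, best_cc = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            start = i * hop + lag
            if start < 0 or start + win > len(post):
                continue  # window (plus lag) falls outside the line
            b = post[start:start + win]
            cc = float(np.dot(a, b - b.mean()))
            if cc > best_cc:
                best_cc, best_lag = cc, lag
        disp[i] = best_lag
    return disp

def axial_strain(disp, hop=32):
    """Axial strain as the gradient of displacement with respect to depth
    (both measured in samples here)."""
    return np.gradient(disp, hop)
```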

    Typical m. triceps surae morphology and architecture measurement from 0 to 18 years: A narrative review.

    The aim of this review was to report on the imaging modalities used to assess morphological and architectural properties of the m. triceps surae in typically developing children, and on the available reliability analyses. Scopus and MEDLINE (PubMed) were searched systematically for all original articles published up to September 2020 that measured morphological and architectural properties of the m. triceps surae in typically developing children (18 years or under). Thirty eligible studies were included in this analysis, measuring fibre bundle length (FBL) (n = 11), pennation angle (PA) (n = 10), muscle volume (MV) (n = 16) and physiological cross-sectional area (PCSA) (n = 4). Three primary imaging modalities were used to assess these architectural parameters in vivo: two-dimensional ultrasound (2DUS; n = 12), three-dimensional ultrasound (3DUS; n = 9) and magnetic resonance imaging (MRI; n = 6). The mean age of participants ranged from 1.4 to 18 years. There was an apparent increase in m. gastrocnemius medialis MV and PCSA with age; however, no trend was evident for FBL or PA. Analysis of correlations between muscle variables and age was limited by a lack of longitudinal data and by methodological variations between studies that affect outcomes. Only five studies evaluated the reliability of their methods. Imaging methodologies such as MRI and US may provide valuable insight into the development of skeletal muscle from childhood to adulthood; however, variations in methodological approach can significantly influence outcomes. Researchers wishing to develop a model of typical muscle development should carry out longitudinal architectural assessment of all muscles comprising the m. triceps surae using a consistent approach that minimises confounding errors.
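
    The architectural parameters listed above are commonly combined into PCSA; one widely used convention estimates PCSA as muscle volume divided by fibre bundle length, optionally scaled by the cosine of the pennation angle to project fibre force onto the line of action. The exact convention and the example values below are illustrative assumptions, not data or definitions taken from the reviewed studies.

```python
import math

def pcsa_cm2(muscle_volume_cm3, fbl_cm, pennation_deg=None):
    """Estimate physiological cross-sectional area (cm^2) as
    muscle volume / fibre bundle length, optionally scaled by
    cos(pennation angle). Conventions differ between studies;
    this is one common form, shown for illustration only."""
    pcsa = muscle_volume_cm3 / fbl_cm
    if pennation_deg is not None:
        pcsa *= math.cos(math.radians(pennation_deg))
    return pcsa

# Made-up example values, not data from the review:
print(pcsa_cm2(muscle_volume_cm3=100.0, fbl_cm=5.0, pennation_deg=20.0))
# -> about 18.8 cm^2
```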

    Three-dimensional ultrasound image-guided robotic system for accurate microwave coagulation of malignant liver tumours

    Background: The further application of conventional ultrasound (US) image-guided microwave (MW) ablation of liver cancer is often limited by two-dimensional (2D) imaging, inaccurate needle placement and the resulting skill requirement. A three-dimensional (3D) image-guided, robot-assisted system provides an appealing alternative, enabling the physician to perform consistent, accurate therapy with improved treatment effectiveness.

    Methods: Our robotic system is constructed by integrating an imaging module, a needle-driven robot, a MW thermal field simulation module, and surgical navigation software in a practical and user-friendly manner. The robot executes precise needle placement based on the 3D model reconstructed from freehand-tracked 2D B-scans. A qualitative slice guidance method for fine registration is introduced to reduce the placement error caused by target motion. By incorporating the 3D MW specific absorption rate (SAR) model into the heat transfer equation, the MW thermal field simulation module determines the MW power level and the coagulation time for improved ablation therapy. Two types of wrist were developed for the robot: a 'remote centre of motion' (RCM) wrist and a non-RCM wrist, the latter being preferred in real applications.

    Results: Needle placement accuracies were < 3 mm for both wrists in the mechanical phantom experiment. The targeting accuracy of the robot with the RCM wrist improved to 1.6 ± 1.0 mm when real-time 2D US feedback was used in the artificial-tissue phantom experiment. Using the slice guidance method, the robot with the non-RCM wrist achieved an accuracy of 1.8 ± 0.9 mm in the ex vivo experiment, even when target motion was introduced. In the thermal field experiment, a 5.6% relative mean error was observed between the experimentally measured coagulation necrosis volume and the simulation result.

    Conclusion: The proposed robotic system holds promise to enhance the clinical performance of percutaneous MW ablation of malignant liver tumours. Copyright © 2010 John Wiley & Sons, Ltd. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/78054/1/313_ftp.pd
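
    The paper's thermal field module couples a device-specific 3-D SAR distribution to the heat transfer equation; the sketch below is a generic 1-D explicit finite-difference solution of the Pennes bioheat equation with a SAR source term, using textbook-style tissue constants. It illustrates the kind of coupling described, not the authors' simulation.

```python
import numpy as np

def bioheat_1d(sar, dx=1e-3, dt=0.05, t_end=60.0,
               rho=1050.0, c=3600.0, k=0.5,
               w_b=0.0005, rho_b=1050.0, c_b=3617.0, T_a=37.0):
    """Explicit finite-difference sketch of the Pennes bioheat equation
    in 1-D with a microwave SAR source (W/kg):
        rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + rho*SAR
    Tissue constants are generic, literature-style values (assumptions);
    the paper's simulation is 3-D and device-specific."""
    T = np.full(len(sar), T_a)
    for _ in range(int(t_end / dt)):
        lap = np.zeros_like(T)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        dTdt = (k * lap + w_b * rho_b * c_b * (T_a - T) + rho * sar) / (rho * c)
        T = T + dt * dTdt
        T[0], T[-1] = T_a, T_a   # fixed-temperature boundaries
    return T

# Gaussian SAR profile around a hypothetical antenna (illustrative values):
x = np.arange(100) * 1e-3
sar = 5e3 * np.exp(-((x - 0.05) / 0.005) ** 2)
print(bioheat_1d(sar).max())   # peak tissue temperature after 60 s
```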

    Visual Feedback System for Ultrasound Training

