6 research outputs found

    Marker-less respiratory motion modeling using the Microsoft Kinect for Windows


    The feasibility of using Microsoft Kinect v2 sensors during radiotherapy delivery

    Consumer-grade distance sensors, such as the Microsoft Kinect devices (v1 and v2), have been investigated for use as marker-free motion monitoring systems for radiotherapy. The radiotherapy delivery environment is challenging for such sensors because of the proximity to electromagnetic interference (EMI) from the pulse-forming network that fires the magnetron and electron gun of a linear accelerator (linac) during radiation delivery, as well as the requirement to operate them from the control area. This work investigated whether using Kinect v2 sensors as motion monitors is feasible during radiation delivery. Three sensors were used, each with a 12 m USB 3.0 active cable that replaced the supplied 3 m USB 3.0 cable. Distance output data from the Kinect v2 sensors were recorded under four conditions of linac operation: (i) powered up only, (ii) pulse-forming network operating with no radiation, (iii) pulse repetition frequency varied between 6 Hz and 400 Hz, and (iv) dose rate varied between 50 and 1450 monitor units (MU) per minute. A solid water block was used as the imaged object: static, moved in a set of steps from 0.6 m to 2.0 m from the sensor, and moving dynamically in two sinusoidal-like trajectories. Few additional image artifacts were observed, and there was no impact on the tracking of the motion patterns (root-mean-squared accuracy of 1.4 and 1.1 mm, respectively). The sensors' distance accuracy varied by 2.0 to 3.8 mm (1.2 to 1.4 mm after distance calibration) across the range measured; the precision was 1 mm. The EMI had minimal effect on the distance calibration data: a reported distance change of 0 mm or 1 mm (2 mm maximum change at one position). Kinect v2 sensors operated with 12 m USB 3.0 active cables appear robust to the radiotherapy treatment environment.
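    The root-mean-squared tracking accuracy quoted above can be computed as the RMS difference between the sensor's distance trace and a reference trajectory. A minimal sketch, using made-up sinusoidal data rather than the study's measurements:

    ```python
    import numpy as np

    def rms_error_mm(tracked, reference):
        """Root-mean-squared difference (mm) between a tracked distance
        trace and a reference trajectory of the same length."""
        tracked = np.asarray(tracked, dtype=float)
        reference = np.asarray(reference, dtype=float)
        return float(np.sqrt(np.mean((tracked - reference) ** 2)))

    # Illustrative sinusoidal-like trajectory (values are hypothetical,
    # not the study's data): 1 m mean distance, 10 mm amplitude, 0.25 Hz.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 300)
    reference = 1000.0 + 10.0 * np.sin(2.0 * np.pi * 0.25 * t)
    tracked = reference + rng.normal(0.0, 1.2, t.size)  # ~1 mm sensor noise

    rms = rms_error_mm(tracked, reference)
    ```

    With millimetre-level sensor noise, the RMS value lands near the 1.1 to 1.4 mm figures reported for the dynamic trajectories.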

    Respiratory level tracking with visual biofeedback for consistent breath-hold level with potential application in image-guided interventions

    Background: To present and evaluate a new respiratory level biofeedback system that aids the patient in returning to a consistent breath-hold level, with potential application in image-guided interventions. Methods: The study was approved by the local ethics committee and written informed consent was waived. Respiratory motion was recorded in eight healthy volunteers in the supine and prone positions, using a depth camera that measures the mean distance to the thorax, abdomen, and back. Volunteers were provided with real-time visual biofeedback on a screen, as a ball moving up and down with respiratory motion. For validation purposes, a conversion factor from mean distance (in mm) to relative lung volume (in mL) was determined using spirometry. Subsequently, without spirometry, volunteers were given breathing instructions and were asked to return to their initial breath-hold level at expiration ten times, in both positions, with and without visual biofeedback. For both positions, the median and interquartile range (IQR) of the absolute error in lung volume from the initial breath-hold were determined with and without biofeedback and compared using Wilcoxon signed-rank tests. Results: Without visual biofeedback, the median difference from the initial breath-hold was 124.6 mL (IQR 55.7-259.7 mL) for the supine position and 156.3 mL (IQR 90.9-334.7 mL) for the prone position. With biofeedback, the difference was significantly decreased to 32.7 mL (IQR 12.8-59.6 mL) (p < 0.001) and 22.3 mL (IQR 7.7-47.0 mL) (p < 0.001), respectively. Conclusions: The use of a depth camera to provide visual biofeedback increased the reproducibility of the breath-hold expiration level in healthy volunteers, with the potential to eliminate targeting errors caused by respiratory movement during lung image-guided procedures.
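    The analysis above converts depth changes to lung-volume errors via a spirometry-derived factor and summarises them as median and IQR. A minimal sketch with hypothetical numbers (the baseline depth and conversion factor below are illustrative, not from the study):

    ```python
    import numpy as np

    def breath_hold_errors_ml(depths_mm, baseline_mm, ml_per_mm):
        """Absolute lung-volume error (mL) of each repeated breath-hold,
        relative to the initial breath-hold depth, via a per-subject
        spirometry-derived conversion factor (mL per mm of mean depth)."""
        depths = np.asarray(depths_mm, dtype=float)
        return np.abs((depths - baseline_mm) * ml_per_mm)

    def median_iqr(values):
        """Median and interquartile range, as used to summarise the errors."""
        q1, med, q3 = np.percentile(values, [25, 50, 75])
        return float(med), (float(q1), float(q3))

    # Hypothetical repeated breath-holds: baseline 1000 mm, factor 50 mL/mm.
    errors = breath_hold_errors_ml([1002.0, 998.0, 1005.0], 1000.0, 50.0)
    med, (q1, q3) = median_iqr(errors)
    ```

    The paired comparison between the with- and without-biofeedback conditions would then be run on these per-volunteer summaries with a Wilcoxon signed-rank test.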

    3D Measurement of Large Deformations on a Tensile Structure during Wind Tunnel Tests Using Microsoft Kinect V2

    Wind tunnel tests often require deformation and displacement measurements to determine the behavior of structures and to evaluate their response to wind excitation. However, common measurement techniques make it possible to measure these quantities only at a few specific points. Moreover, these kinds of measurements, such as linear variable differential transformers (LVDTs) or fiber optics, usually influence the downstream and upstream air fluxes and the structure under test. In order to characterize the displacement of the structure not just at a few points but over the entire structure, this article presents the application of 3D cameras during a wind tunnel test. In order to validate this measurement technique in this application field, a wind tunnel test was executed. Three Kinect V2 depth sensors were used for a 3D displacement measurement of a test structure that did not present any optical marker or feature. The results highlighted that, by using a low-cost and user-friendly measurement system, it is possible to obtain 3D measurements in a volume of several cubic meters (4 m × 4 m × 4 m wind tunnel chamber), without significant disturbance of the wind flux and by means of a simple calibration of the sensors, executed directly inside the wind tunnel. The obtained results highlighted a displacement toward the interior of the structure for the side most exposed to the wind, while the sides parallel to the wind flux were more subject to vibration and showed an outward average displacement. These results are compliant with the expected behavior of the structure.
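    The core quantity in such a measurement is the per-pixel depth change between an unloaded reference frame and a frame under wind load, restricted to pixels belonging to the structure. A minimal sketch of that idea on toy depth maps (the masking and sign convention are assumptions, not the authors' processing pipeline):

    ```python
    import numpy as np

    def mean_displacement_mm(depth_ref, depth_loaded, valid_mask):
        """Mean depth change (mm) between an unloaded reference frame and
        a frame under wind load, over pixels on the structure; positive
        values indicate motion away from the sensor."""
        diff = depth_loaded.astype(float) - depth_ref.astype(float)
        return float(diff[valid_mask].mean())

    # Toy 3x3 depth maps (mm): only the centre pixel moves, by 5 mm.
    ref = np.full((3, 3), 2000.0)
    loaded = ref.copy()
    loaded[1, 1] += 5.0
    mask = np.zeros((3, 3), dtype=bool)
    mask[1, 1] = True

    d = mean_displacement_mm(ref, loaded, mask)
    ```

    Merging the three sensors' views into one displacement field would additionally require the in-tunnel calibration described in the abstract, which this sketch does not attempt.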

    Personalised Procedures for Thoracic Radiotherapy

    This thesis presents the investigation, development, and evaluation of two personalised procedures for thoracic cancer therapy in Shenzhen, China. Two projects were carried out: (1) respiratory motion management of a lung tumour, and (2) the application of a three-dimensional (3D) printing technique for post-mastectomy irradiation. For the first project, all subjects attended sessions of free breathing (FB) and personalised vocal coaching (VC) for respiratory regulation. Thoracic and abdominal breathing signals were extracted from the subjects' body surface and then modelled with kernel density estimation (KDE) for motion visualisation. The mutual information (MI) and correlation coefficient (CC) calculated from the KDEs indicate the variation in the relationship between the two signals. For the 1D signals, VC increased the variation of cycle time and of the end-of-exhale/inhale signal value in the patient group but decreased them in volunteers. Mixed results were obtained for KDE and MI. Compared with FB, VC improved movement consistency between the two signals in eight of eleven subjects by increasing MI. The fixed instruction method showed no improvement in day-to-day variation, while daily generated instructions enhanced respiratory regularity in three of five volunteers. VC addresses the variation of a single signal, while the relationship between the two signals, thoracic and abdominal, requires further interpretation. The second project aims to address both the enhancement of the skin dose and the avoidance of hotspots in critical organs, focusing on improving irradiative treatment for post-mastectomy patients. A 3D-printed bolus was presented as a solution to the air gap between the bolus and the skin. The results showed no evidence of significant skin dose enhancement with the printed bolus. Additionally, an air gap larger than 5 mm was evident in all patients. Until a solution for complete bolus adhesion is found, this customised bolus is not suitable for clinical use.
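    The thesis quantifies the coupling between the thoracic and abdominal signals via mutual information estimated from KDEs. A minimal sketch of the same idea, using a simpler histogram-based MI estimate as a stand-in for the KDE-based one, on synthetic coupled-versus-unrelated traces (all data below is illustrative):

    ```python
    import numpy as np

    def mutual_information_nats(x, y, bins=32):
        """Histogram-based MI estimate (nats) between two 1-D signals; a
        simpler stand-in for the KDE-based estimate used in the thesis."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()                              # joint distribution
        px = pxy.sum(axis=1, keepdims=True)           # marginal of x
        py = pxy.sum(axis=0, keepdims=True)           # marginal of y
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Synthetic thoracic/abdominal traces: coupled sinusoids versus an
    # unrelated noise trace.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 60.0, 2000)
    thoracic = np.sin(2.0 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)
    abdominal = 0.8 * np.sin(2.0 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)
    unrelated = rng.normal(size=t.size)

    mi_coupled = mutual_information_nats(thoracic, abdominal)
    mi_unrelated = mutual_information_nats(thoracic, unrelated)
    ```

    Higher MI indicates more consistent joint movement of the two signals, which is how the thesis reads the improvement under vocal coaching.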