
    Investigation of the Microsoft Kinect v2 Sensor as a Multi-Purpose Device for a Radiation Oncology Clinic

    For a radiation oncology clinic, numerous devices are available to assist in the radiotherapy workflow. Processes such as patient verification, motion management, and respiratory motion tracking can all be improved by devices currently on the market. These three processes directly impact patient safety and treatment efficacy and, as such, are important to track and quantify. Most available products address only one of these processes and may be out of reach for a typical radiation oncology clinic because of difficult implementation and integration with existing hardware. This manuscript investigates the use of the Microsoft Kinect v2 sensor to provide solutions for all three processes while maintaining a relatively simple, easy-to-use implementation. To assist with patient verification, the Kinect system was programmed to perform facial recognition and recall. The facial recognition algorithm was built on a facial mapping library distributed by Microsoft within the Software Development Kit (SDK). The system extracts 31 fiducial points representing facial landmarks, creates a 3D vector between each pair of the 31 points, and calculates the magnitude of each vector. This allows a face to be defined as a collection of 465 specific vector magnitudes. These 465 magnitudes are then used both to create a facial reference data set and in subsequent evaluations of real-time sensor data by the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data for the facial identification system. In total, 5299 trials were performed and threshold parameters were created for match determination.
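The 465-magnitude face signature described above is simply the set of pairwise distances between the 31 landmarks (31 choose 2 = 465). A minimal Python sketch of that arithmetic — the study's implementation used Microsoft's SDK from C#, so the function name and NumPy rendering here are illustrative only:

```python
import numpy as np
from itertools import combinations

def face_signature(points):
    """Compute the 465 pairwise 3D distances for 31 facial fiducial points.

    points: (31, 3) array of x, y, z landmark coordinates.
    Returns a (465,) vector of distances (31 choose 2 = 465), which can be
    compared against a stored reference signature in a matching step.
    """
    points = np.asarray(points, dtype=float)
    assert points.shape == (31, 3)
    pairs = np.array(list(combinations(range(31), 2)))  # 465 index pairs
    return np.linalg.norm(points[pairs[:, 0]] - points[pairs[:, 1]], axis=1)
```

Because the signature is built from distances only, it is invariant to head translation and rotation, which is what lets a live capture be compared against a reference recorded in a different pose.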
Optimization of these parameters in the matching algorithm by way of ROC curves indicated a system sensitivity of 96.5% and a specificity of 96.7%. These results indicate a fairly robust methodology for verifying a specific face in real time against a pre-collected reference data set. In its current implementation, the process of data collection for each face and the subsequent matching session averaged approximately 30 seconds, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants. Ambient light was found to play a crucial role in the accuracy and reproducibility of the facial recognition system. Testing at various light levels found that ambient light greater than 200 lux produced the most accurate results. As such, the acquisition process should be set up to ensure consistent ambient light conditions across both the reference recording session and subsequent real-time identification sessions. In developing a motion management process with the Kinect, two separate but complementary processes were created. First, to track large-scale anatomical movements, the automatic skeletal tracking capabilities of the Kinect were utilized. Twenty-five specific body joints (head, elbow, knee, etc.) make up the skeletal frame and are locked to relative positions on the body. Using code written in C#, these joints are tracked in 3D space and compared to an initial state of the patient, providing an indication of anatomical motion. Additionally, to track smaller, more subtle movements in a specific area of the body, a user-drawn ROI can be created. Here, the depth values of all pixels associated with the body in the ROI are compared to the initial state.
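The joint-based monitoring just described reduces to a radial-distance check of each tracked joint against its initial position, with per-joint thresholds (the abstract reports calculated thresholds of 3 mm and 7.5 mm flagging 5 mm and 10 mm of actual movement). A minimal sketch — the study's code was C#; these Python function names are hypothetical:

```python
import numpy as np

# Thresholds reported in the study for the 5 stable joints:
# a calculated radial distance of 3 mm flags ~5 mm of actual couch
# movement, and 7.5 mm flags ~10 mm.
THRESHOLDS_MM = {"5mm": 3.0, "10mm": 7.5}

def radial_displacement(initial, current):
    """3D vector magnitude between a joint's initial and current position (mm)."""
    return float(np.linalg.norm(np.asarray(current, float) - np.asarray(initial, float)))

def movement_flag(initial, current):
    """Return the largest movement level whose threshold is exceeded, else None."""
    d = radial_displacement(initial, current)
    if d >= THRESHOLDS_MM["10mm"]:
        return "10mm"
    if d >= THRESHOLDS_MM["5mm"]:
        return "5mm"
    return None
```

Using the vector magnitude rather than per-axis coordinates mirrors the study's finding that individual coordinates tracked couch motion only loosely, while the radial distance was a more reliable indicator.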
The system counts the number of live pixels whose depth difference from the initial state exceeds a specified threshold, and the physical area of each of those pixels is calculated from its depth. The percentage of area moved (PAM) relative to the ROI area then becomes an indication of gross movement within the ROI. In this study, 9 specific joints proved to be stable during data acquisition. When moved in orthogonal directions, each recorded coordinate showed a relatively linear trend of movement but not the expected 1:1 relationship to couch movement. Instead, the vector magnitude between the initial and current position proved a better indicator of movement. Five of the 9 joints (left/right elbow, left/right hip, and spine base) showed relatively consistent values for radial movements of 5 mm and 10 mm, achieving a 20%-25% coefficient of variation. For these 5 joints, threshold values for calculated radial distances of 3 mm and 7.5 mm were set to flag 5 mm and 10 mm of actual movement, respectively. When monitoring a drawn ROI, the depth sensor showed very little sensitivity to movement in the X (left/right) or Y (superior/inferior) direction but exceptional sensitivity in the Z (anterior/posterior) direction. As such, PAM values could only be correlated with motion in the Z direction. PAM values over 60% were shown to be indicative of movement in the Z direction equal to the set threshold, for movement as small as 3 mm. Lastly, the Kinect was utilized to create a marker-less respiratory motion tracking system. Code was written to access the Kinect's depth sensor and track the respiratory motion of a subject by recording the depth (distance) values at several user-selected points on the subject, each point representing one pixel in the depth image.
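The PAM metric above can be sketched directly from two depth frames and an ROI mask. A minimal Python rendering under stated assumptions: the pixel footprint is taken to scale with (depth / focal length)², computed here from the reference depth, and `focal_px = 365` is an assumed nominal Kinect v2 depth-camera focal length, not a value from the study:

```python
import numpy as np

def percent_area_moved(ref_depth, live_depth, roi_mask,
                       depth_thresh_mm=3.0, focal_px=365.0):
    """Percentage of ROI area whose depth changed beyond a threshold (PAM).

    ref_depth, live_depth: (H, W) depth images in mm.
    roi_mask: boolean (H, W) mask of the user-drawn ROI.
    A pixel's physical footprint is approximated as (depth / focal_px)**2,
    evaluated at the reference depth.
    """
    moved = roi_mask & (np.abs(live_depth - ref_depth) > depth_thresh_mm)
    pixel_area = (ref_depth / focal_px) ** 2        # mm^2 per pixel
    roi_area = pixel_area[roi_mask].sum()
    moved_area = pixel_area[moved].sum()
    return 100.0 * moved_area / roi_area
```

Consistent with the study's observation, this quantity responds only to Z (depth) changes: pure in-plane motion of a flat surface leaves most depth values unchanged and so produces a low PAM.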
As a patient breathes, a specific anatomical point on the chest/abdomen moves slightly within the depth image, across a number of pixels. By tracking how the depth value changes for a specific pixel, instead of how the anatomical point moves through the image, a respiratory trace can be obtained from the changing depth values of the selected pixel. Tracking of these values can then be implemented with a marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare the respiratory traces each obtained for two different subjects. Analysis of the depth information from the Kinect for phase-based and amplitude-based binning correlated well with the RPM and Anzai systems. IQR values were obtained that compared the times associated with specific amplitude and phase percentage values across the products. The IQR spans indicated the Kinect would measure a specific percentage value within 0.077 s for Subject 1 and 0.164 s for Subject 2 of the values obtained with RPM or Anzai. For 4D-CT scans, these times correspond to less than 1 mm of couch movement and would create an offset of one half of an acquired slice. These minimal deviations between the traces created by the Kinect and those from RPM or Anzai indicate that by tracking the depth values of user-selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized using the Kinect with results comparable to those of commercially available products.
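The pixel-tracking idea above amounts to sampling the same fixed image coordinates in every depth frame and treating each pixel's depth time series as a respiratory trace. A minimal sketch (function name illustrative; the study's acquisition code ran against the Kinect SDK):

```python
import numpy as np

def respiratory_trace(depth_frames, pixels):
    """Build respiratory traces from the depth values of selected pixels.

    depth_frames: iterable of (H, W) depth images in mm.
    pixels: list of (row, col) user-selected pixel coordinates.
    Returns an (n_frames, n_pixels) array; each column is one trace,
    i.e. the depth history of one fixed pixel over time.
    """
    rows, cols = zip(*pixels)
    return np.array([frame[list(rows), list(cols)] for frame in depth_frames])
```

Because the sampled coordinates never move, no marker or anatomical tracking is needed: chest rise and fall shows up directly as the depth value at each fixed pixel oscillating over time.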

    The feasibility of using Microsoft Kinect v2 sensors during radiotherapy delivery

    Consumer-grade distance sensors, such as the Microsoft Kinect devices (v1 and v2), have been investigated for use as marker-free motion monitoring systems for radiotherapy. The radiotherapy delivery environment is challenging for such sensors because of the proximity to electromagnetic interference (EMI) from the pulse forming network, which fires the magnetron and electron gun of a linear accelerator (linac) during radiation delivery, as well as the requirement to operate the sensors from the control area. This work investigated whether using Kinect v2 sensors as motion monitors is feasible during radiation delivery. Three sensors were used, each with a 12 m USB 3.0 active cable replacing the supplied 3 m USB 3.0 cable. Distance output data from the Kinect v2 sensors were recorded under four conditions of linac operation: (i) powered up only, (ii) pulse forming network operating with no radiation, (iii) pulse repetition frequency varied between 6 Hz and 400 Hz, and (iv) dose rate varied between 50 and 1450 monitor units (MU) per minute. A solid water block was used as the object and imaged when static, when moved in a set of steps from 0.6 m to 2.0 m from the sensor, and when moving dynamically in two sinusoidal-like trajectories. Few additional image artifacts were observed, and there was no impact on the tracking of the motion patterns (root mean squared accuracy of 1.4 and 1.1 mm, respectively). The sensors' distance accuracy varied by 2.0 to 3.8 mm (1.2 to 1.4 mm after distance calibration) across the range measured; the precision was 1 mm. There was minimal effect from the EMI on the distance calibration data: 0 mm or 1 mm reported distance change (2 mm maximum change at one position). Kinect v2 sensors operated with 12 m USB 3.0 active cables appear robust to the radiotherapy treatment environment.
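The root mean squared accuracy figures quoted for the two sinusoidal-like trajectories are a standard trajectory-comparison statistic. A short sketch of that computation (function name illustrative, not from the paper):

```python
import numpy as np

def rms_error_mm(measured, reference):
    """Root-mean-squared deviation between a tracked trajectory and a
    reference trajectory, sampled at the same time points (mm)."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((measured - reference) ** 2)))
```

Applied to the sensor's distance output versus the known motion-stage positions, values on the order of 1 mm (as reported here) indicate sub-slice-level tracking fidelity.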

    Optical Methods in Sensing and Imaging for Medical and Biological Applications

    The recent advances in optical sources and detectors have opened up new opportunities for sensing and imaging techniques that can be successfully used in biomedical and healthcare applications. This book, entitled ‘Optical Methods in Sensing and Imaging for Medical and Biological Applications’, focuses on various aspects of the research and development related to these areas. The book will be a valuable source of information for anyone interested in this subject, presenting the recent advances in optical methods and novel techniques, as well as their applications in the fields of biomedicine and healthcare.

    Smart Sensors for Healthcare and Medical Applications

    This book focuses on new sensing technologies, measurement techniques, and their applications in medicine and healthcare. Specifically, the book briefly describes the potential of smart sensors in these applications, collecting 24 articles selected and published in the Special Issue “Smart Sensors for Healthcare and Medical Applications”. We proposed this topic, aware of the pivotal role that smart sensors can play in improving healthcare services for both acute and chronic conditions, as well as in prevention for a healthy life and active aging. The articles selected for this book cover a variety of topics related to the design, validation, and application of smart sensors to healthcare.

    Contextual game design: from interface development to human activity recognition

    The dissertation focuses on the following points: the creation of different interfaces using the Kinect sensor for the rehabilitation of breast cancer patients, and human activity recognition (a problem that arose during the creation of those interfaces).

    Can 3D Camera Imaging Provide Improved Information to Assess and Manage Lymphoedema in Clinical Practice?

    Background Accurate diagnosis and measurement of limb volume in people with lymphoedema is important in order to provide the best information for treatment, management and self-management. Current assessment methods lack detail and accuracy. Three-dimensional camera imaging (3DCI) holds the potential to be cheap and accurate, and to provide additional information about limb shape not captured by current methods. However, there is a need to ensure that this assessment method is valid and reliable. Methodology This prospective, observational, longitudinal study utilised a diagnostic test study framework to determine the validity, reliability and accuracy of 3DCI compared to circumferential tape measurement (CTM) and perometry, and to explore whether shape is a feasible alternative measure of upper limb lymphoedema. Twenty women with breast cancer-related lymphoedema were recruited. Phase one assessed the criterion validity, intra-rater reliability and accuracy of 3DCI by measuring the limb volume of each participant with CTM, perometry and 3DCI four times over six months. Phase two investigated the use of limb shape as a method of lymphoedema assessment, using oedema maps and calculations of shape redundancy derived from the 3DCI images in phase one. These data sets were matched against limb volume to determine criterion validity, intra-rater reliability and accuracy. Results 3DCI had high intra-rater correlation (ICC = 0.87; p < 0.001). Concurrent validity ranged from 0.82 to 0.86 against perometry and CTM, with good sensitivity (91.7% to 100%) and moderate specificity (50% to 66.7%). Limb shape calculation (shape redundancy) had moderate intra-rater correlation (ICC = 0.71; p = 0.01) but correlated poorly with limb volume (r = 0.19 to 0.39). Coloured oedema maps were sensitive to change over time, with colours clearly identifying problem areas and fluctuations within the affected limb.
Conclusion Our study shows that 3DCI is a reliable, valid and accurate method of limb volume measurement and that it could provide supportive information in clinical assessment. In addition, limb shape provides insight into localised areas of swelling, which other methods of lymphoedema measurement do not. However, shape redundancy requires further refinement.
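The circumferential tape measurement (CTM) that 3DCI is compared against is conventionally converted to a limb volume with a truncated-cone (frustum) model. A sketch under that standard assumption — this formula is the common clinical convention, not code or a method taken from this study:

```python
import math

def limb_volume_ml(circumferences_cm, segment_height_cm=4.0):
    """Limb volume from serial circumference measurements (truncated-cone model).

    circumferences_cm: circumferences (cm) taken at fixed intervals along the limb.
    Each adjacent pair bounds a frustum with volume
        V = h * (C1**2 + C1*C2 + C2**2) / (12 * pi).
    Returns the total volume in millilitres (cm^3).
    """
    total = 0.0
    for c1, c2 in zip(circumferences_cm, circumferences_cm[1:]):
        total += segment_height_cm * (c1**2 + c1 * c2 + c2**2) / (12 * math.pi)
    return total
```

For equal circumferences the formula reduces to the cylinder volume h * C**2 / (4 * pi), which is a quick sanity check on any implementation.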

    Dual-camera infrared guidance for computed tomography biopsy procedures

    A CT-guided biopsy is a specialised surgical procedure in which a needle is used to withdraw a tissue or fluid specimen from a lesion of interest. The needle is guided while being viewed by a clinician on a computed tomography (CT) scan. CT-guided biopsies invariably expose patients and operators to high doses of radiation, and they are lengthy procedures in which the lack of spatial referencing while guiding the needle along the required entry path is one of the difficulties currently encountered. This research focuses on addressing two of the challenges clinicians currently face when performing CT-guided biopsy procedures. The first challenge is the lack of spatial referencing during a biopsy procedure, with the requirement for improved accuracy and a reduction in the number of repeated scans. To achieve this, an infrared navigation system was designed and implemented, and an existing approach was extended to help guide the clinician in advancing the biopsy needle. This extended algorithm computes a scaled estimate of the needle endpoint and assists with navigating the biopsy needle through a dedicated, custom-built graphical user interface. The second challenge was to design and implement a training environment where clinicians could practise different entry angles and scenarios. A prototype training module was designed and built to provide simulated biopsy procedures in order to help improve spatial referencing. Various experiments and scenarios were designed and tested to demonstrate the correctness of the algorithm and to provide realistic simulated scenarios in which operators could practise different entry angles and familiarise themselves with the equipment. A comprehensive survey was also undertaken to investigate the advantages and disadvantages of the system.