
    Utilising the Intel RealSense camera for measuring health outcomes in clinical research

    Applications utilising 3D camera technologies for the measurement of health outcomes in the health and wellness sector continue to expand. The Intel® RealSense™ is one of the leading 3D depth-sensing cameras currently available on the market and lends itself to many applications, including robotics, automation, and medical systems. One of the most prominent areas is the production of interactive solutions for rehabilitation, which includes gait analysis and facial tracking. Advancements in depth camera technology have resulted in a noticeable increase in the integration of these technologies into portable platforms, suggesting significant future potential for pervasive in-clinic and field-based health assessment solutions. This paper reviews the Intel RealSense technology's technical capabilities, discusses its application to clinical research, and includes examples where the Intel RealSense camera range has been used for the measurement of health outcomes. This review supports the use of the technology to develop robust, objective movement- and mobility-based endpoints to enable accurate tracking of the effects of treatment interventions in clinical trials.
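As a concrete illustration of working with depth-camera data of the kind reviewed above, the sketch below converts a raw RealSense-style 16-bit depth frame into metric distances. The frame contents, the dropout handling, and the exact depth-scale value are assumptions for illustration (0.001 m per unit is a common default on D400-series devices), not details taken from the paper.

```python
import numpy as np

# Hypothetical example: converting a raw RealSense-style depth frame
# (16-bit integer units) to metres. DEPTH_SCALE is the per-device
# scale factor (commonly 0.001 m/unit on D400-series cameras).
DEPTH_SCALE = 0.001

def depth_to_metres(raw_frame: np.ndarray) -> np.ndarray:
    """Convert a uint16 depth image to float distances in metres.
    Zero values mark invalid pixels and are mapped to NaN."""
    d = raw_frame.astype(np.float64) * DEPTH_SCALE
    d[raw_frame == 0] = np.nan
    return d

# Synthetic 4x4 frame: a subject ~1.2 m away with one invalid pixel.
frame = np.full((4, 4), 1200, dtype=np.uint16)
frame[0, 0] = 0  # sensor dropout
metres = depth_to_metres(frame)
print(np.nanmean(metres))  # ≈ 1.2
```

In a live pipeline the same conversion would be applied per frame before any downstream gait or facial-tracking analysis.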

    Creating a Worker-Individual Physical Ability Profile Using a Low-Cost Depth Camera

    Assembly workers suffer long-term damage from performing physically intensive tasks at workstations that are not ergonomically designed for the individual's needs. Current approaches to the ergonomic improvement of workstations assess only the workstations themselves, without taking the individual worker and their abilities into account. Physical limitations, such as age-related loss of range of motion, are therefore not addressed. Work-induced long-term damage results in employee absences, especially among workers close to retirement, and given demographic change this issue will become even more prevalent in the future. Current approaches, such as the functional capacity evaluation, allow movement analysis of individuals but are too time-consuming to be performed on all workers at a production site. This paper presents a method to assess the individual abilities of a worker using a low-cost depth camera with full-body tracking to determine the angles between body segments. A set of ergonomic exercises is used to demonstrate the abilities relevant to assembly and commissioning tasks. By capturing the motion sequence of these exercises, a physical ability profile can be created with little effort.
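The core measurement described above, angles between body segments derived from tracked joints, can be sketched as follows. The landmark names and coordinates are hypothetical stand-ins for whatever a depth-camera body tracker returns; this is not the paper's implementation.

```python
import numpy as np

# Illustrative sketch: the angle at a joint, computed from three
# tracked 3D landmarks (e.g. shoulder, elbow, wrist positions
# returned by a depth-camera full-body tracker).
def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    u, v = a - b, c - b
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# A right-angle elbow bend gives 90 degrees; a fully extended arm ~180.
shoulder, elbow, wrist = (0, 0, 0), (0.3, 0, 0), (0.3, -0.25, 0)
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```

Repeating this over the frames of each ergonomic exercise would yield per-joint range-of-motion values from which an ability profile could be assembled.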

    Comparison of depth cameras for three-dimensional Reconstruction in Medicine

    KinectFusion is a typical three-dimensional reconstruction technique which enables the generation of individual three-dimensional human models from consumer depth cameras for understanding body shapes. The aim of this study was to compare three-dimensional reconstruction results obtained using KinectFusion from data collected with two different types of depth camera (time-of-flight and stereoscopic cameras), and to compare these results with those of a commercial three-dimensional scanning system, to determine which type of depth camera gives improved reconstruction. Torso mannequins and machined aluminium cylinders were used as the test objects for this study. Two depth cameras, the Microsoft Kinect V2 and the Intel RealSense D435, were selected as representatives of time-of-flight and stereoscopic cameras, respectively, to capture scan data for the reconstruction of three-dimensional point clouds by KinectFusion techniques. The results showed that both time-of-flight and stereoscopic cameras, using the developed rotating camera rig, provided repeatable body scanning data with minimal operator-induced error. However, the time-of-flight camera generated more accurate three-dimensional point clouds than the stereoscopic sensor. This suggests that applications requiring the generation of accurate three-dimensional human models by KinectFusion techniques should consider using a time-of-flight camera, such as the Microsoft Kinect V2, as the image-capturing sensor.
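One plausible way to score reconstruction accuracy against machined cylinders of known dimensions, like the test objects above, is a radial-deviation RMSE: for each scanned point, the error is its distance from the cylinder axis minus the nominal radius. This metric and the synthetic scan below are assumptions for illustration, not necessarily what the study used.

```python
import numpy as np

# Hedged sketch: RMSE of a point cloud's radial deviation from a
# z-axis-aligned cylinder of known radius.
def cylinder_rmse(points, radius):
    """Root-mean-square radial error of points against a nominal cylinder."""
    pts = np.asarray(points, dtype=float)
    radial = np.hypot(pts[:, 0], pts[:, 1])  # distance from the z-axis
    return float(np.sqrt(np.mean((radial - radius) ** 2)))

# Synthetic scan: points on a 50 mm cylinder reconstructed 1 mm too wide.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = np.stack([51 * np.cos(theta), 51 * np.sin(theta), np.zeros(100)], axis=1)
print(cylinder_rmse(pts, radius=50.0))  # ≈ 1.0 (mm)
```

A real evaluation would first register the scan to the reference geometry; the registration step is omitted here for brevity.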

    Translational Research of Audiovisual Biofeedback: An investigation of respiratory-guidance in lung and liver cancer patient radiation therapy

    Through the act of breathing, thoracic and abdominal anatomy is in constant motion, and that motion is typically irregular. This irregular motion can exacerbate errors in radiation therapy; breathing guidance interventions operate to minimise these errors. However, most breathing guidance investigations have not directly quantified the impact of regular breathing on radiation therapy accuracy. The first aim of this thesis was to critically appraise the literature on breathing guidance interventions via systematic review. This review found that 21 of the 27 identified studies yielded significant improvements from the use of breathing guidance. None of the studies were randomised, and no studies quantified the impact on 4DCT image quality. The second aim of this thesis was to quantify the impact of audiovisual biofeedback breathing guidance on 4DCT. This study utilised data from an MRI study to program the motion of a digital phantom prior to simulating 4DCT imaging. Audiovisual biofeedback was demonstrated to significantly improve 4DCT image quality over free breathing. The third aim of this thesis was to assess the impact of audiovisual biofeedback on liver cancer patients' breathing over a course of stereotactic body radiation therapy (SBRT). The findings of this study demonstrated the effectiveness of audiovisual biofeedback in producing consistent interfraction respiratory motion over a course of SBRT. The fourth aim of this thesis was to design and implement a phase II clinical trial investigating the use and impact of audiovisual biofeedback in lung cancer radiation therapy. The findings of a retrospective analysis were utilised to design and determine the statistics of the most comprehensive breathing guidance study to date: a randomised, stratified, multi-site, phase II clinical trial.
The fifth aim of this thesis was to explore the next stages of audiovisual biofeedback in terms of translating evidence into broader clinical use through commercialisation. This aim was achieved by investigating the product-market fit of the audiovisual biofeedback technology. The culmination of these findings demonstrates the clinical benefit of the audiovisual biofeedback respiratory guidance system and the possibility of making breathing guidance systems more widely available to patients.
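Breathing regularity under guidance can be quantified in many ways; one simple illustrative metric (assumed here, not taken from the thesis) is the RMS deviation of a measured respiratory displacement trace from the guiding waveform shown to the patient.

```python
import math

# Illustrative metric only: RMS deviation of a measured respiratory
# trace from the audiovisual guide waveform. Traces are hypothetical.
def guidance_rmse(measured, guide):
    assert len(measured) == len(guide)
    return math.sqrt(sum((m - g) ** 2 for m, g in zip(measured, guide)) / len(measured))

t = [i * 0.1 for i in range(100)]                       # 10 s at 10 Hz
guide = [math.sin(2 * math.pi * 0.25 * x) for x in t]   # 4 s guided cycle
free = [math.sin(2 * math.pi * 0.31 * x) for x in t]    # drifting free breathing
guided = [g + 0.05 for g in guide]                      # guided, small offset
print(guidance_rmse(free, guide) > guidance_rmse(guided, guide))  # True
```

Lower values indicate breathing that follows the guide more closely, which is the kind of interfraction consistency the SBRT study above reports.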

    Non-contact video-based assessment of the respiratory function using a RGB-D camera

    A fully automatic, non-contact method for the assessment of respiratory function is proposed using RGB-D camera technology. The proposed algorithm relies on the depth channel of the camera to estimate the movement of the body's trunk during breathing. It runs in constant time, O(1), per frame, as the acquisition relies only on the mean depth value of the target regions, using the color channels solely to locate those regions automatically. This simplicity allows the extraction of real-time respiration values, as well as synchronous assessment of multiple body parts. Two experiments were performed: the first on 10 users with a single region and a fixed breathing frequency, and the second on 20 users with simultaneous acquisition in two regions. The breath rate was then computed and compared with a reference measurement. The results show a non-statistically-significant bias of 0.11 breaths/min and 96% limits of agreement of -2.21/+2.34 breaths/min for the breath-by-breath assessment. The overall real-time assessment shows an RMSE of 0.21 breaths/min. We have shown that this method is suitable for applications where respiration needs to be monitored in non-ambulatory and static environments. This research was funded by Ministerio de Ciencia e Innovación with grant number PID2020-116011.
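The O(1)-per-frame idea described above, reducing each depth frame to the mean depth of a chest region and reading the breath rate from the resulting 1-D signal, can be sketched roughly as follows. The frame rate, ROI, and frequency-domain rate estimator are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

# Sketch (assumed parameters): each frame contributes only the mean
# depth of a fixed chest ROI; breath rate is the dominant frequency
# of that one-dimensional signal.
FPS = 30.0

def roi_mean_depth(depth_frame, roi):
    """Mean depth inside roi = (row0, row1, col0, col1): one value per frame."""
    r0, r1, c0, c1 = roi
    return float(depth_frame[r0:r1, c0:c1].mean())

def breaths_per_min(signal, fps=FPS):
    """Dominant-frequency estimate of respiratory rate from a depth trace."""
    x = np.asarray(signal) - np.mean(signal)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin

# Simulate 20 s of frames: chest ROI oscillating at 15 breaths/min.
t = np.arange(0, 20, 1.0 / FPS)
trace = [roi_mean_depth(np.full((48, 64), 1.0) + 0.005 * np.sin(2 * np.pi * 0.25 * ti),
                        (10, 30, 20, 50)) for ti in t]
print(round(breaths_per_min(trace)))  # 15
```

Because only one mean per region is kept per frame, adding a second region (as in the two-region experiment) simply means tracking a second scalar trace.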

    3D Reconstruction using convolution smooth method

    A 3D image is an image with depth data. The use of depth information in 3D images still has many drawbacks, especially in the resulting images: raw data from a 3D camera does not look smooth, and there is too much noise. Noise in a 3D image takes the form of imprecise data, which results in a rough image. This research uses a convolution smoothing method to improve the 3D image: it smooths the noise in the 3D image, so the resulting image is better. This smoothing is often called a blurring effect. The method was tested on flat objects and objects with a circular contour. The test on the flat surface obtained a distance of 1.3177, the test on the object with a flat surface obtained a distance of 0.4937, and the test on the circular contour obtained a distance of 0.3986. This research found that the 3D image is improved after applying the convolution smoothing method.
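The convolution smoothing described above can be sketched as a normalized box-blur kernel slid over the depth image. This is a minimal generic implementation of that idea, not the paper's code; kernel size, padding mode, and the synthetic test surface are assumptions.

```python
import numpy as np

# Minimal sketch: smoothing a noisy depth image with a normalized
# k x k averaging (box-blur) convolution kernel.
def smooth(depth, k=3):
    """Convolve a 2-D depth image with a k x k averaging kernel."""
    kernel = np.ones((k, k)) / (k * k)
    h, w = depth.shape
    pad = k // 2
    padded = np.pad(depth, pad, mode='edge')  # replicate borders
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

# Depth noise on a flat 1 m surface shrinks after smoothing.
rng = np.random.default_rng(0)
noisy = 1.0 + rng.normal(0, 0.01, (32, 32))
print(smooth(noisy).std() < noisy.std())  # True
```

The explicit double loop keeps the convolution visible; in practice a vectorised routine such as `scipy.ndimage.uniform_filter` would do the same job much faster.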

    Evaluation of a Low-Cost Virtual Reality Surround-Screen Projection System

    Two of the most popular mediums for virtual reality are head-mounted displays (HMDs) and surround-screen projection systems, such as CAVE Automatic Virtual Environments. In recent years, HMDs have undergone a significant reduction in cost and have become widespread consumer products. In contrast, CAVEs are still expensive and remain accessible to a limited number of researchers. This study aims to evaluate both objective and subjective characteristics of a CAVE-like monoscopic low-cost virtual reality surround-screen projection system compared to advanced setups and HMDs. For objective results, we measured the head-position estimation accuracy and precision of a low-cost active infrared (IR) tracking system, used in the proposed low-cost CAVE, relative to an infrared marker-based tracking system used in a laboratory-grade CAVE system. For subjective characteristics, we investigated the sense of presence and cybersickness elicited in users during a visual search task outside personal space, beyond arm's reach, where the importance of stereo vision is diminished. Thirty participants rated their sense of presence and cybersickness after performing the VR search task with our CAVE-like system and a modern HMD. The tracking showed an accuracy error of 1.66 cm and 0.4 mm of precision jitter. The system was reported to elicit presence, but at a lower level than the HMD, while causing significantly lower cybersickness.
Our results were compared to a previous study performed with a laboratory-grade CAVE and support that a VR system implemented with low-cost devices can be a viable alternative to laboratory-grade CAVEs for visual search tasks outside the user's personal space. This work was supported by the Fundação para a Ciência e Tecnologia through the AHA project (CMUPERI/HCI/0046/2013), the INTERREG program through the MACBIOIDI project (MAC/1.1.b/098), LARSyS (UIDB/50009/2020), NOVA-LINCS (UID/CEC/04516/2019), Fundació la Marató de la TV3 (201701-10), and the European Union through the Operational Program of the European Regional Development Fund (ERDF) of the Valencian Community 2014-2020 (IDIFEDER/2018/029). Gonçalves, A.; Borrego, A.; Latorre, J.; Llorens Rodríguez, R.; Bermúdez, S. (2021). Evaluation of a Low-Cost Virtual Reality Surround-Screen Projection System. IEEE Transactions on Visualization and Computer Graphics, 1-12. https://doi.org/10.1109/TVCG.2021.3091485
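The two objective tracking metrics reported above, accuracy error and precision jitter, can be computed along these lines: accuracy as the mean distance of estimated head positions from ground truth, and jitter as the mean dispersion of repeated samples about their own centroid while the target is held still. The sample positions below are made up for illustration.

```python
import numpy as np

# Hypothetical illustration of the two tracking metrics: accuracy
# (mean distance from ground truth) and precision jitter (mean
# dispersion about the centroid of samples of a static target).
def accuracy_error(estimates, truth):
    est, gt = np.asarray(estimates, float), np.asarray(truth, float)
    return float(np.mean(np.linalg.norm(est - gt, axis=1)))

def jitter(static_samples):
    s = np.asarray(static_samples, float)
    return float(np.mean(np.linalg.norm(s - s.mean(axis=0), axis=1)))

truth = [[0, 0, 0], [1, 0, 0]]
est = [[0.01, 0, 0], [1.02, 0, 0]]           # 1 cm and 2 cm off (metres)
print(accuracy_error(est, truth))            # ≈ 0.015 m, i.e. 1.5 cm
still = [[0, 0, 0.0004], [0, 0, -0.0004]]    # ±0.4 mm about the mean
print(jitter(still))                         # ≈ 0.0004 m
```

Separating the two matters because a tracker can be precise (low jitter) yet systematically offset (poor accuracy), or vice versa.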