1,862 research outputs found

    Predicting respiratory motion for real-time tumour tracking in radiotherapy

    Purpose. Radiation therapy is a local treatment aimed at cells in and around a tumor. The goal of this study is to develop an algorithmic solution for predicting the position of a target in 3D in real time, aiming for a short, fixed calibration time for each patient at the beginning of the procedure. Accurate predictions of lung tumor motion are expected to improve the precision of radiation treatment by controlling the position of a couch or a beam in order to compensate for respiratory motion during radiation treatment. Methods. For developing the algorithmic solution, data mining techniques are used. A model form from the exponential smoothing family is assumed, and the model parameters are fitted by minimizing the absolute disposition error and the fluctuations of the prediction signal (jitter). The predictive performance is evaluated retrospectively on clinical datasets capturing different behavior (being quiet, talking, laughing), and validated in real time on a prototype system with respiratory motion imitation. Results. An algorithmic solution for respiratory motion prediction (called ExSmi) is designed. ExSmi achieves good accuracy of prediction (error 4-9 mm/s) with acceptable jitter values (5-7 mm/s), as tested on out-of-sample data. The datasets, the code for the algorithms and the experiments are openly available for research purposes on a dedicated website. Conclusions. The developed algorithmic solution performs well enough to be prototyped and deployed in applications of radiotherapy
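The exponential-smoothing approach described above can be sketched as follows. This is an illustrative one-step-ahead predictor using Holt's (double exponential smoothing) method, not the authors' ExSmi implementation; the smoothing parameters alpha and beta are placeholders that, per the abstract, would be fitted per patient during the short calibration window.

```python
# Illustrative sketch of a double-exponential-smoothing motion predictor.
# alpha/beta are placeholder values, not fitted parameters from the paper.

def predict_next(signal, alpha=0.5, beta=0.3):
    """One-step-ahead forecast of a 1-D motion trace via Holt's method."""
    level, trend = signal[0], 0.0
    for x in signal[1:]:
        prev_level = level
        # Update the smoothed level, then the smoothed trend
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend  # extrapolate one step ahead


trace = [0.0, 0.5, 1.1, 1.4, 2.0, 2.4]  # synthetic chest-position samples (mm)
print(round(predict_next(trace), 2))    # → 2.63
```

In a real system the forecast horizon would match the system latency, and alpha/beta would be chosen to trade off displacement error against jitter, as the abstract describes.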

    Detection of Talking in Respiratory Signals: A Feasibility Study Using Machine Learning and Wearable Textile-Based Sensors

    Social isolation and loneliness are major health concerns in young and older people. Traditional approaches to monitor the level of social interaction rely on self-reports. The goal of this study was to investigate if wearable textile-based sensors can be used to accurately detect if the user is talking as a future indicator of social interaction. In a laboratory study, fifteen healthy young participants were asked to talk while performing daily activities such as sitting, standing and walking. It is known that the breathing pattern differs significantly between normal and speech breathing (i.e., talking). We integrated resistive stretch sensors into wearable elastic bands, with a future integration into clothing in mind, to record the expansion and contraction of the chest and abdomen while breathing. We developed an algorithm incorporating machine learning and evaluated its performance in distinguishing between periods of talking and non-talking. In an intra-subject analysis, our algorithm detected talking with an average accuracy of 85%. The highest accuracy of 88% was achieved during sitting and the lowest accuracy of 80.6% during walking. Complete segments of talking were correctly identified with 96% accuracy. From the evaluated machine learning algorithms, the random forest classifier performed best on our dataset. We demonstrate that wearable textile-based sensors in combination with machine learning can be used to detect when the user is talking. In the future, this approach may be used as an indicator of social interaction to prevent social isolation and loneliness
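The pipeline described above, windowed breathing-signal features fed to a random forest (the classifier the study found to work best), can be sketched as follows. The synthetic signals and the particular feature choices are assumptions for illustration only, not the study's actual features.

```python
# Illustrative sketch: classify talking vs. quiet breathing from a
# chest-expansion signal with windowed features and a random forest.
# Synthetic data and feature choices are assumptions, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_window(talking, n=256):
    """Generate one 8-second window of a mock chest-expansion signal."""
    t = np.linspace(0, 8, n)
    if talking:  # speech breathing: faster, shallower, irregular
        sig = 0.5 * np.sin(2 * np.pi * 0.6 * t + rng.uniform(0, np.pi))
        sig += 0.3 * rng.standard_normal(n)
    else:        # quiet breathing: slow and regular
        sig = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.standard_normal(n)
    return sig

def features(sig):
    # Simple time-domain descriptors of breathing depth and regularity
    return [sig.std(), np.abs(np.diff(sig)).mean(), sig.max() - sig.min()]

X = [features(make_window(talking=k % 2 == 1)) for k in range(200)]
y = [k % 2 for k in range(200)]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:150], y[:150])
acc = clf.score(X[150:], y[150:])
print(f"held-out accuracy: {acc:.2f}")
```

On these cleanly separable synthetic windows the classifier scores near-perfectly; real sensor data, as the abstract reports, yields accuracies in the 80-88% range depending on activity.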

    Wearable Wireless Devices

    No abstract available

    Remote Human Vital Sign Monitoring Using Multiple-Input Multiple-Output Radar at Millimeter-Wave Frequencies

    Non-contact respiration rate (RR) and heart rate (HR) monitoring using millimeter-wave (mmWave) radars has gained considerable attention for medical, civilian, and military applications. These mmWave radars are small, light, and portable, and can be deployed in various places. To increase the accuracy of RR and HR detection, distributed multiple-input multiple-output (MIMO) radar can be used to acquire non-redundant information about vital sign signals from different perspectives, because each MIMO channel has a different field of view with respect to the subject under test (SUT). This dissertation investigates the use of a Frequency Modulated Continuous Wave (FMCW) radar operating at 77-81 GHz for this application. The vital sign signal is first reconstructed with the Arctangent Demodulation (AD) method, using the phase-change information collected by the radar due to chest wall displacement from respiration and heartbeat activity. Since the heartbeat signals can be corrupted and concealed by the third/fourth harmonics of the respiratory signals as well as random body motion (RBM) from the SUT, we have developed an automatic Heartbeat Template (HBT) extraction method based on constellation diagrams of the received signals. The extraction method automatically spots and extracts signal portions that carry a substantial amount of heartbeat signal and are not corrupted by the RBM. The extracted HBT is then used as an adapted wavelet for the Continuous Wavelet Transform (CWT) to reduce interference from respiratory harmonics and RBM, as well as to magnify the heartbeat signals. As the nature of RBM is unpredictable, the extracted HBT may not completely cancel the interference from RBM. Therefore, to provide better HR detection accuracy, we have also developed a spectral-based HR selection method that gathers the frequency spectra of heartbeat signals from different MIMO channels. Based on this gathered spectral information, we can determine an accurate HR even if the heartbeat signals are significantly concealed by the RBM. To further improve the detection accuracy of RR and HR, two deep learning (DL) frameworks are also investigated. First, a Convolutional Neural Network (CNN) is proposed to optimally select clean MIMO channels and eliminate MIMO channels with low SNR of heartbeat signals. After that, a Multi-layer Perceptron (MLP) neural network (NN) is utilized to reconstruct the heartbeat signals, which are then used to assess and select the final HR with high confidence
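The arctangent-demodulation step described above can be sketched as follows: the chest-wall displacement is recovered from the phase of an ideal I/Q radar return. The signal parameters (carrier frequency, breathing and heartbeat amplitudes) are illustrative values, not taken from the dissertation.

```python
# Minimal sketch of arctangent demodulation (AD): recover chest-wall
# displacement from ideal I/Q channels of an mmWave radar return.
# All signal parameters below are illustrative assumptions.
import numpy as np

fs = 100.0                       # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
wavelength = 3e8 / 79e9          # carrier wavelength, ~3.8 mm at 79 GHz

# Mock chest-wall displacement: 1 mm respiration (15 bpm)
# plus 0.1 mm heartbeat component (72 bpm)
x = 1e-3 * np.sin(2 * np.pi * 0.25 * t)
x += 0.1e-3 * np.sin(2 * np.pi * 1.2 * t)

phase = 4 * np.pi * x / wavelength           # two-way phase shift
i_ch, q_ch = np.cos(phase), np.sin(phase)    # ideal I/Q channels

# Arctangent demodulation plus phase unwrapping recovers the displacement
recovered = np.unwrap(np.arctan2(q_ch, i_ch)) * wavelength / (4 * np.pi)
print(np.allclose(recovered, x, atol=1e-6))  # → True
```

Because a 1 mm chest displacement spans several wavelengths of phase at 79 GHz, the `np.unwrap` step is essential; without it the arctangent output wraps at ±π and the displacement is lost.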

    Improved clinical outcome prediction in depression using neurodynamics in an emotional face-matching functional MRI task

    Introduction: Approximately one in six people will experience an episode of major depressive disorder (MDD) in their lifetime. Effective treatment is hindered by subjective clinical decision-making and a lack of objective prognostic biomarkers. Functional MRI (fMRI) could provide such an objective measure, but the majority of MDD studies have focused on static approaches, disregarding the rapidly changing nature of the brain. In this study, we aim to predict depression severity changes at 3 and 6 months using dynamic fMRI features. Methods: For our research, we acquired a longitudinal dataset of 32 MDD patients with fMRI scans acquired at baseline and clinical follow-ups 3 and 6 months later. Several measures were derived from an emotion face-matching fMRI dataset: activity in brain regions, static and dynamic functional connectivity between functional brain networks (FBNs), and two measures from a wavelet coherence analysis approach. All fMRI features were evaluated independently, with and without demographic and clinical parameters. Patients were divided into two classes based on changes in depression severity at both follow-ups. Results: The number of coherence clusters (nCC) between FBNs, reflecting the total number of interactions (either synchronous, anti-synchronous or causal), resulted in the highest predictive performance. The nCC-based classifier achieved 87.5% and 77.4% accuracy for the 3- and 6-month change in severity, respectively. Furthermore, regression analyses supported the potential of nCC for predicting depression severity on a continuous scale. The posterior default mode network (DMN), dorsal attention network (DAN) and two visual networks were the most important networks in the optimal nCC models. Reduced nCC was associated with a poorer depression course, suggesting deficits in sustained attention to and coping with emotion-related faces. An ensemble of classifiers with demographic, clinical and lead coherence features, a measure of dynamic causality, resulted in a 3-month clinical outcome prediction accuracy of 81.2%. Discussion: The dynamic wavelet features demonstrated high accuracy in predicting individual depression severity change. Features describing brain dynamics could enhance understanding of depression and support clinical decision-making. Further studies are required to evaluate their robustness and replicability in larger cohorts
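The nCC idea, counting distinct episodes of significant coherence between two networks, can be sketched as follows: threshold a time-frequency coherence map and count the connected supra-threshold regions. The coherence map and threshold here are synthetic placeholders; the study's actual wavelet coherence analysis is considerably more involved.

```python
# Hedged sketch of the "number of coherence clusters" (nCC) idea:
# threshold a (frequency x time) coherence map between two network
# time series and count connected supra-threshold regions.
# The map and threshold below are synthetic placeholders.
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(1)

# Mock coherence map with low background coherence in [0, 0.3]
coh = 0.3 * rng.random((30, 200))
coh[5:8, 40:90] = 0.95     # implant two coherent episodes
coh[15:18, 120:160] = 0.9

mask = coh > 0.8           # significance threshold (placeholder)
_, n_clusters = label(mask)  # count connected components
print(n_clusters)          # → 2
```

`scipy.ndimage.label` treats each contiguous supra-threshold region as one cluster, so the two implanted episodes yield nCC = 2 regardless of their size.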
