
    The D-bar Method for Diffuse Optical Tomography: A Computational Study

    The D-bar method at negative energy is numerically implemented. Using the method, we are able to numerically reconstruct potentials and investigate exceptional points at negative energy. Subsequently, applying the method to diffuse optical tomography, a new way of reconstructing the diffusion coefficient from the associated Complex Geometrical Optics solution is suggested and numerically validated.
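
    As a brief aside on why a D-bar (Schrödinger-type) method applies to DOT at all, the sketch below shows the standard reduction of the diffusion model to a Schrödinger equation. The notation (diffusion coefficient a(x), absorption c(x)) is mine and not necessarily the paper's; the negative-energy setting then corresponds to CGO solutions of the resulting equation.

```latex
% Standard DOT-to-Schrodinger reduction (sketch, hedged notation):
\[
  -\nabla \cdot \bigl( a(x)\,\nabla u(x) \bigr) + c(x)\,u(x) = 0
  \quad \text{in } \Omega ,
\]
% and the Liouville substitution $w = \sqrt{a}\,u$ gives a Schrodinger equation
\[
  \bigl( -\Delta + q(x) \bigr)\, w(x) = 0 ,
  \qquad
  q = \frac{\Delta \sqrt{a}}{\sqrt{a}} + \frac{c}{a} ,
\]
% so D-bar / CGO machinery for the potential $q$ applies; once $q$ and the CGO
% solution are reconstructed, the diffusion coefficient $a$ can be extracted.
```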

    Validity of the Polar V800 heart rate monitor to measure RR intervals at rest

    Purpose: To assess the validity of RR intervals and short-term heart rate variability (HRV) data obtained from the Polar V800 heart rate monitor, in comparison to an electrocardiograph (ECG). Method: Twenty participants completed an active orthostatic test using the V800 and ECG. An improved method for the identification and correction of RR intervals was employed prior to HRV analysis. Agreement of the data was assessed using intra-class correlation coefficients (ICC), Bland–Altman limits of agreement (LoA), and effect size (ES). Results: A small number of errors were detected between the ECG and Polar RR signals, with a combined error rate of 0.086%. The RR intervals from the ECG and V800 were significantly different, but with small ES for both supine corrected and standing corrected data; ICC was >0.999 for both supine and standing corrected intervals. When analysed with the same HRV software, no significant differences were observed in any HRV parameters, for either supine or standing; the data displayed small bias and tight LoA, strong ICC (>0.99) and small ES (≤0.029). Conclusions: The V800 improves over previous Polar models, with narrower LoA, stronger ICC and smaller ES for both the RR intervals and HRV parameters. The findings support the validity of the Polar V800 and its ability to produce RR interval recordings consistent with an ECG. In addition, HRV parameters derived from these recordings are also highly comparable.
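
    For concreteness, here is a minimal Python sketch of the agreement statistics named in the abstract (Bland–Altman bias and limits of agreement) together with RMSSD as one representative short-term HRV parameter. The RR-interval arrays are invented for illustration; this is not the authors' analysis pipeline.

```python
import numpy as np

def bland_altman_loa(a, b):
    """Bland-Altman bias and 95% limits of agreement between paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def rmssd(rr_ms):
    """RMSSD, a common short-term HRV parameter, from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, float)
    return np.sqrt(np.mean(np.diff(rr) ** 2))

# Hypothetical paired RR-interval series (ms) from an ECG and the V800.
rr_ecg  = np.array([812.0, 805.0, 790.0, 801.0, 820.0, 815.0])
rr_v800 = np.array([811.0, 806.0, 789.0, 802.0, 819.0, 816.0])

bias, loa = bland_altman_loa(rr_ecg, rr_v800)
print(f"bias = {bias:.2f} ms, LoA = [{loa[0]:.2f}, {loa[1]:.2f}] ms")
print(f"RMSSD: ECG = {rmssd(rr_ecg):.2f} ms, V800 = {rmssd(rr_v800):.2f} ms")
```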

    Time-Frequency Based Feature Selection for Discrimination of Non-stationary Biosignals

    This research proposes a generic methodology for dimensionality reduction upon time-frequency representations applied to the classification of different types of biosignals. The methodology directly deals with the highly redundant and irrelevant data contained in these representations, combining a first stage of irrelevant-data removal by variable selection with a second stage of redundancy reduction using methods based on linear transformations. The study addresses two techniques that provided a similar performance: the first is based on the selection of a set of the most relevant time-frequency points, whereas the second selects the most relevant frequency bands. The first methodology needs a lower number of components, leading to a smaller feature space, but the second better captures the time-varying dynamics of the signal and therefore provides a more stable performance. In order to evaluate the generalization capabilities of the proposed methodology, it has been applied to two types of biosignals with different kinds of non-stationary behaviors: electroencephalographic and phonocardiographic biosignals. Even though these two databases contain samples with different degrees of complexity and a wide variety of characterizing patterns, the results demonstrate a good accuracy for the detection of pathologies, over 98%. The results open the possibility of extrapolating the methodology to the study of other biosignals.
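
    To illustrate the two-stage idea (relevance-based selection of time-frequency variables followed by a linear transformation for redundancy reduction), here is a hedged Python sketch on synthetic data. The spectrogram settings, the Fisher-style relevance score and the PCA step are generic stand-ins, not the paper's actual criteria.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical biosignals: 40 two-second epochs at 256 Hz, two classes.
fs, n_epochs, n_samples = 256, 40, 512
X = rng.standard_normal((n_epochs, n_samples))
y = np.repeat([0, 1], n_epochs // 2)
X[y == 1] += 0.5 * np.sin(2 * np.pi * 10 * np.arange(n_samples) / fs)  # add a 10 Hz component

# Time-frequency representation: power per frequency band and time bin.
f, _, S = spectrogram(X, fs=fs, nperseg=64, noverlap=32)   # S: (epochs, freqs, times)
band_power = S.mean(axis=2)                                # average over time -> (epochs, freqs)

# Stage 1: variable selection - keep the frequency bands whose power best
# separates the classes (Fisher-like score as a stand-in for the paper's criterion).
mu0, mu1 = band_power[y == 0].mean(0), band_power[y == 1].mean(0)
v0, v1 = band_power[y == 0].var(0), band_power[y == 1].var(0)
score = (mu0 - mu1) ** 2 / (v0 + v1 + 1e-12)
keep = np.argsort(score)[::-1][:8]                         # eight most relevant bands

# Stage 2: redundancy reduction by a linear transformation (here PCA).
features = PCA(n_components=4).fit_transform(band_power[:, keep])
print("selected bands (Hz):", np.round(f[keep], 1), "| feature matrix:", features.shape)
```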

    Image reconstruction in quantitative photoacoustic tomography using adaptive optical Monte Carlo

    In quantitative photoacoustic tomography (QPAT), distributions of optical parameters inside the target are reconstructed from photoacoustic images. In this work, we utilize the Monte Carlo (MC) method for light transport in the image reconstruction of QPAT. Modeling light transport accurately with MC requires simulating a large number of photon packets, which can be computationally expensive. On the other hand, too small a number of photon packets results in a high level of stochastic noise, which can lead to significant errors in the reconstructed images. In this work, we use an adaptive approach, where the number of simulated photon packets is adjusted during an iterative image reconstruction. It is based on a norm test in which the expected relative error of the minimization direction is controlled. The adaptive approach automatically determines the number of simulated photon packets needed to provide sufficiently accurate light transport modeling without unnecessary computational burden. The presented approach is studied with two-dimensional simulations.
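
    The adaptive idea can be sketched with a toy Python example: the Monte Carlo error of the estimated minimization direction scales roughly as one over the square root of the number of photon packets, and the packet count is increased until a norm test on that error passes. The functions below are toy stand-ins (a quadratic objective, synthetic noise), not the authors' MC light-transport solver; in a real solver the error term would come from a sample-variance estimate over photon packets.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_gradient(x, n_packets):
    """Toy stand-in for a Monte Carlo estimate of the objective gradient.

    The 'true' gradient is 2*x (quadratic toy objective); the MC noise is
    zero-mean with standard deviation ~ 1/sqrt(n_packets), mimicking how the
    photon-packet count controls the stochastic error."""
    noise = rng.standard_normal(x.shape) / np.sqrt(n_packets)
    return 2.0 * x + noise, noise

def norm_test_passes(grad, err_est, theta=0.5):
    """Norm test: accept the direction if the (estimated) error is a small
    fraction of the gradient norm."""
    return np.linalg.norm(err_est) <= theta * np.linalg.norm(grad)

x = np.array([1.0, -2.0, 0.5])
n_packets = 1_000
for it in range(20):
    grad, err_est = mc_gradient(x, n_packets)
    # Adapt the photon-packet count: double it until the norm test is satisfied.
    while not norm_test_passes(grad, err_est):
        n_packets *= 2
        grad, err_est = mc_gradient(x, n_packets)
    x -= 0.2 * grad                       # simple descent step
    print(f"iter {it:2d}  packets {n_packets:>9d}  |x| = {np.linalg.norm(x):.4f}")
```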

    Adaptive stochastic Gauss-Newton method with optical Monte Carlo for quantitative photoacoustic tomography

    SIGNIFICANCE: The image reconstruction problem in quantitative photoacoustic tomography (QPAT) is an ill-posed inverse problem. The Monte Carlo method for light transport can be utilized in solving this image reconstruction problem. AIM: The aim was to develop an adaptive image reconstruction method in which the number of photon packets in the Monte Carlo simulation is varied to achieve sufficient accuracy with a reduced computational burden. APPROACH: The image reconstruction problem was formulated as a minimization problem. An adaptive stochastic Gauss-Newton (A-SGN) method combined with the Monte Carlo method for light transport was developed. In the algorithm, the number of photon packets used on each Gauss-Newton (GN) iteration was varied utilizing a so-called norm test. RESULTS: The approach was evaluated with numerical simulations. With the proposed approach, the number of photon packets needed for solving the inverse problem was significantly smaller than in a conventional approach where the number of photon packets was fixed for each GN iteration. CONCLUSIONS: The A-SGN method with a norm test can be utilized in QPAT to provide accurate and computationally efficient solutions.
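
    In equation form, one plausible reading of a single A-SGN iteration is sketched below. The notation is mine, the step is written without regularization, and the norm-test condition follows the generic adaptive sample-size literature; the paper's exact formulation may differ.

```latex
% One Gauss-Newton step for min_x (1/2) ||y - f(x)||^2, with the residual and
% Jacobian J_k evaluated by Monte Carlo using N_k photon packets (sketch):
\[
  x_{k+1} = x_k + \alpha_k \bigl( J_k^{\top} J_k \bigr)^{-1} J_k^{\top} \bigl( y - f(x_k) \bigr),
\]
% with N_k chosen adaptively: increase N_k until the stochastic error of the
% estimated direction \hat g_k is a small fraction of its norm (the norm test),
\[
  \mathbb{E} \bigl[ \| \hat g_k - g_k \|^{2} \bigr] \;\le\; \theta^{2} \, \| \hat g_k \|^{2} .
\]
```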

    Moderate and heavy metabolic stress interval training improve arterial stiffness and heart rate dynamics in humans

    Traditional continuous aerobic exercise training attenuates age-related increases in arterial stiffness; however, training studies have not determined whether metabolic stress impacts these favourable effects. Twenty untrained healthy participants (n = 11 heavy metabolic stress interval training, n = 9 moderate metabolic stress interval training) completed 6 weeks of moderate- or heavy-intensity interval training matched for total work and exercise duration. Carotid artery stiffness, blood pressure contour analysis, and linear and non-linear heart rate variability were assessed before and following training. Overall, carotid arterial stiffness was reduced (p < 0.05). This study demonstrates the effectiveness of interval training at improving arterial stiffness and autonomic function; however, the metabolic stress was not a mediator of this effect. In addition, these changes were also independent of improvements in aerobic capacity, which were only induced by training that involved a high metabolic stress.

    On Learned Operator Correction in Inverse Problems

    We discuss the possibility of learning a data-driven explicit model correction for inverse problems and whether such a model correction can be used within a variational framework to obtain regularized reconstructions. This paper discusses the conceptual difficulty of learning such a forward model correction and proceeds to present a possible solution as a forward-adjoint correction that explicitly corrects in both the data and solution spaces. We then derive conditions under which solutions to the variational problem with a learned correction converge to solutions obtained with the correct operator. The proposed approach is evaluated on an application to limited-view photoacoustic tomography and compared to the established framework of the Bayesian approximation error method.
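
    To make the setup concrete, here is a small PyTorch sketch of a variational reconstruction that uses an approximate operator plus a learned correction applied in data space. Everything in it (operator sizes, the linear `correction` module standing in for a trained network, the Tikhonov regularizer) is hypothetical; the paper's forward-adjoint correction and its training procedure are not reproduced here.

```python
import torch

torch.manual_seed(0)
n = 16                                              # hypothetical (flattened) image size

A_accurate = torch.randn(n, n)                      # toy "accurate" forward operator
A_approx = A_accurate + 0.05 * torch.randn(n, n)    # cheap, approximate operator

# Stand-in for a trained correction network F_theta acting in data space;
# its parameters are frozen here (training on paired simulations is omitted).
correction = torch.nn.Linear(n, n, bias=False)
for p in correction.parameters():
    p.requires_grad_(False)

x_true = torch.randn(n)
y = A_accurate @ x_true                             # data generated by the accurate model

# Variational reconstruction with the corrected operator:
#   min_x  0.5 * || A_approx x + F_theta(A_approx x) - y ||^2 + alpha * ||x||^2
x = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
alpha = 1e-3
for _ in range(200):
    opt.zero_grad()
    Ax = A_approx @ x
    loss = 0.5 * torch.sum((Ax + correction(Ax) - y) ** 2) + alpha * torch.sum(x ** 2)
    loss.backward()
    opt.step()

rel_err = torch.norm(x.detach() - x_true) / torch.norm(x_true)
print(f"relative reconstruction error: {rel_err.item():.3f}")
```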