
    Design and Evaluation of a Fiber-Optic Grip Force Sensor with Compliant 3D-Printable Structure for (f)MRI Applications

    Grip force sensors compatible with magnetic resonance imaging (MRI) are used in human motor control and decision-making research, providing objective and sensitive behavioral outcome measures. Commercial sensors are expensive, cover limited force ranges, rely on pneumatic force transmission that cannot detect fast force changes, or are electrically active, which increases the risk of electromagnetic interference. We present the design and evaluation of a low-cost, 3D-printed, inherently MRI-compatible grip force sensor based on a commercial intensity-based fiber-optic sensor. A compliant monobloc structure with flexible hinges transduces grip force into a linear displacement captured by the fiber-optic sensor. The structure can easily be adapted for different force ranges by changing the hinge thickness. A prototype designed for forces up to 800 N was manufactured and showed highly linear behavior (nonlinearity of 2.37%) and an accuracy of 1.57% over the range from 0 to 500 N. It can be printed and assembled within one day for less than $300. Accurate performance was confirmed both inside and outside a 3 T MRI scanner in a pilot study. Given its simple design, which allows sensing properties and ergonomics to be customized for different applications and requirements, the proposed grip force handle offers researchers a valuable scientific tool.
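    As a rough illustration of how such a calibration could be evaluated, the following sketch (Python, with made-up force/reading pairs) fits a straight line to calibration data and expresses the worst-case deviation from that line as a percentage of full scale; the paper's exact definitions of nonlinearity and accuracy are not reproduced here, so the data and the metric shown are assumptions.

        import numpy as np

        # Hypothetical calibration data: applied force (N) vs. raw fiber-optic reading (a.u.)
        force = np.array([0, 100, 200, 300, 400, 500], dtype=float)
        reading = np.array([0.02, 1.01, 2.05, 2.98, 4.03, 5.00])

        # Least-squares linear fit: reading ~ slope * force + offset
        slope, offset = np.polyfit(force, reading, 1)
        predicted = slope * force + offset

        # Full-scale nonlinearity: worst-case deviation from the fit, relative to the output span
        full_scale = reading.max() - reading.min()
        nonlinearity_pct = 100 * np.max(np.abs(reading - predicted)) / full_scale
        print(f"nonlinearity: {nonlinearity_pct:.2f}% of full scale")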

    Performance metrics for an application-driven selection and optimization of psychophysical sampling procedures

    When estimating psychometric functions with sampling procedures, psychophysical assessments should be precise and accurate while being as efficient as possible to reduce assessment duration. The estimation performance of sampling procedures is commonly evaluated in computer simulations for single psychometric functions and reported using metrics as a function of the number of trials. However, the estimation performance of a sampling procedure may vary for different psychometric functions, so the results of such evaluations may not generalize to a heterogeneous population of interest. In addition, the maximum number of trials is often imposed by time restrictions, especially in clinical applications, making trial-based metrics suboptimal. Hence, the benefit of these simulations for selecting and tuning an ideal sampling procedure for a specific application is limited. We suggest evaluating the estimation performance of sampling procedures in simulations covering the entire range of psychometric functions found in a population of interest, and propose a comprehensive set of performance metrics for a detailed analysis. To illustrate the information gained from these metrics in an application example, six sampling procedures were evaluated in a computer simulation based on prior knowledge of the population distribution and requirements from proprioceptive assessments. The metrics revealed limitations of the sampling procedures, such as inhomogeneous or systematically decreasing performance depending on the psychometric functions, which can inform the tuning process of a sampling procedure. More advanced metrics allowed directly comparing the overall performance of different sampling procedures and selecting the best-suited sampling procedure for the example application. The proposed analysis metrics can be used for any sampling procedure and for the estimation of any parameter of a psychometric function, independent of the shape of the psychometric function and of how such a parameter was estimated. This framework should help to accelerate the development process of psychophysical assessments.
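    A minimal sketch of this kind of simulation, assuming a cumulative-Gaussian psychometric function, a purely illustrative population distribution of thresholds and spreads, and a simple fixed-grid procedure with a crude threshold estimator (none of which correspond to the six procedures evaluated in the paper):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        def psychometric(x, threshold, spread):
            # Cumulative Gaussian: probability of responding "larger"
            return norm.cdf(x, loc=threshold, scale=spread)

        def simulate_subject(true_threshold, true_spread, trials_per_level=10):
            # Fixed stimulus grid (method of constant stimuli), assumed for brevity
            levels = np.linspace(-10, 10, 7)
            n_larger = rng.binomial(trials_per_level, psychometric(levels, true_threshold, true_spread))
            prop = n_larger / trials_per_level
            # Crude estimator: level at which a straight-line fit crosses 0.5
            a, b = np.polyfit(levels, prop, 1)
            return (0.5 - b) / a

        # Simulate the whole (assumed) population and collect per-subject estimation errors
        true_thresholds = rng.normal(0.0, 3.0, size=200)
        true_spreads = rng.uniform(1.0, 5.0, size=200)
        errors = [simulate_subject(t, s) - t for t, s in zip(true_thresholds, true_spreads)]
        print(f"mean absolute threshold error: {np.mean(np.abs(errors)):.2f}")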

    Enhancing simulations with intra-subject variability for improved psychophysical assessments

    Psychometric properties of perceptual assessments, like reliability, depend on stochastic properties of psychophysical sampling procedures resulting in method variability, as well as on inter- and intra-subject variability. Method variability is commonly minimized by optimizing sampling procedures through computer simulations. Inter-subject variability is inherent to the population of interest and cannot be influenced. Intra-subject variability introduced by confounds (e.g., inattention or lack of motivation) cannot simply be quantified from experimental data, as these data also include method variability. Therefore, this aspect is generally neglected when developing assessments. Yet, comparing method variability and intra-subject variability could give insight into whether effort should be invested in optimizing the sampling procedure, or in addressing potential confounds instead. We propose a new approach to estimate the intra-subject variability of psychometric functions by combining computer simulations and behavioral data, and to account for it when simulating experiments. The approach was illustrated in a real-world scenario of proprioceptive difference threshold assessments. The behavioral study revealed a test-retest reliability of r = 0.212. Computer simulations without considering intra-subject variability predicted a reliability of r = 0.768, whereas the new approach including an intra-subject variability model led to a realistic estimate of reliability (r = 0.207). Such a model also allows computing the theoretically maximally attainable reliability (r = 0.552) assuming an ideal sampling procedure. Comparing the reliability estimates when exclusively accounting for method variability versus intra-subject variability reveals that intra-subject variability should be reduced by addressing confounds, and that optimizing the sampling procedure alone may be insufficient to achieve high reliability. The new approach allows computing the intra-subject variability with only two measurements per subject, and predicting the reliability for a larger number of subjects and retests based on simulations, without requiring additional experiments. Such a predictive tool is especially valuable for target populations where time is scarce, e.g., for assessments in clinical settings.
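    A minimal sketch of the underlying idea, with purely illustrative variance figures: test-retest threshold estimates are simulated once with method (estimation) noise only and once with an additional intra-subject variability term, and the predicted reliability is the Pearson correlation between the two simulated sessions.

        import numpy as np

        rng = np.random.default_rng(1)
        n_subjects = 60

        # Assumed variance components (illustrative numbers, not taken from the study)
        true_thresholds = rng.normal(2.0, 0.8, size=n_subjects)   # inter-subject variability
        sd_method = 0.3   # variability introduced by the sampling procedure itself
        sd_intra = 0.6    # session-to-session (intra-subject) variability

        def simulate_session(include_intra):
            intra = rng.normal(0, sd_intra, n_subjects) if include_intra else 0.0
            return true_thresholds + intra + rng.normal(0, sd_method, n_subjects)

        for include_intra in (False, True):
            test, retest = simulate_session(include_intra), simulate_session(include_intra)
            r = np.corrcoef(test, retest)[0, 1]
            label = "with" if include_intra else "without"
            print(f"predicted test-retest reliability {label} intra-subject variability: r = {r:.2f}")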

    Reliable and Rapid Robotic Assessment of Wrist Proprioception Using a Gauge Position Matching Paradigm

    Quantitative assessments of position sense are essential for the investigation of proprioception, as well as for diagnosis, prognosis, and treatment planning for patients with somatosensory deficits. Despite the development and use of various paradigms and robotic tools, their clinimetric properties are often poorly evaluated and reported. A proper evaluation of the latter is essential to compare results between different studies and to identify the influence of possible confounds on outcome measures. The aim of the present study was to perform a comprehensive evaluation of a rapid robotic assessment of wrist proprioception using a passive gauge position matching task. Thirty-two healthy subjects undertook six test-retests of proprioception of the right wrist on two different days. On average, the constant error (CE) was 0.87°, the absolute error (AE) was 5.87°, the variable error (VE) was 4.59°, and the total variability (E) was 6.83° for the angles presented in the range from 10° to 30°. The intraclass correlation analysis indicated excellent reliability for CE (0.75), good reliability for AE (0.68) and E (0.68), and fair reliability for VE (0.54). Tripling the assessment length had negligible effects on the reliabilities. Additional analysis revealed significant trends of larger overestimation (constant errors), as well as larger absolute and variable errors, with increasing flexion angles. No proprioceptive learning occurred despite increased familiarity with the task, which was reflected in a significant 30% decrease in assessment duration. In conclusion, the proposed automated assessment can provide sensitive and reliable information on proprioceptive function of the wrist with an administration time of around 2.5 min, demonstrating the potential for its application in research or clinical settings. Moreover, this study highlights the importance of reporting the complete set of errors (CE, AE, VE, and E) in a matching experiment for the identification of trends and subsequent interpretation of results.
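    The four error measures follow standard definitions from the motor-control literature; a short sketch with hypothetical signed matching errors (matched angle minus presented angle, in degrees), assuming VE is computed as the population standard deviation so that E equals the root-mean-square error:

        import numpy as np

        # Hypothetical signed matching errors per trial (degrees)
        errors = np.array([1.5, -3.0, 6.0, 0.5, -2.0, 4.5, 2.0, -1.0])

        ce = errors.mean()               # constant error: mean signed error (bias)
        ae = np.abs(errors).mean()       # absolute error: mean error magnitude
        ve = errors.std(ddof=0)          # variable error: trial-to-trial spread around the bias
        e = np.sqrt(ce**2 + ve**2)       # total variability: bias and spread combined (RMSE)

        print(f"CE = {ce:.2f} deg, AE = {ae:.2f} deg, VE = {ve:.2f} deg, E = {e:.2f} deg")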

    Algorithm for improving psychophysical threshold estimates by detecting sustained inattention in experiments using PEST

    Psychophysical procedures are applied in various fields to assess sensory thresholds. During experiments, sampled psychometric functions are usually assumed to be stationary. However, perception can be altered, for example by loss of attention to the presentation of stimuli, leading to biased data and thus to poor threshold estimates. The few existing approaches attempting to identify non-stationarities either detect only whether there was a change in perception, or are not suitable for experiments with a relatively small number of trials (e.g., < 300). We present a method to detect inattention periods on a trial-by-trial basis with the aim of improving threshold estimates in psychophysical experiments using the adaptive sampling procedure Parameter Estimation by Sequential Testing (PEST). The performance of the algorithm was evaluated in computer simulations modeling inattention, and tested in a behavioral experiment on proprioceptive difference threshold assessment in 20 stroke patients, a population where attention deficits are likely to be present. Simulations showed that estimation errors could be reduced by up to 77% for inattentive subjects, even in sequences with fewer than 100 trials. In the behavioral data, inattention was detected in 14% of assessments, and applying the proposed algorithm reduced test-retest variability in 73% of these corrected assessment pairs. The novel algorithm complements existing approaches and, besides being applicable post hoc, could also be used online to prevent the collection of biased data. This could have important implications for assessment practice by shortening experiments and improving estimates, especially in clinical settings.
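    The detection algorithm itself is not reproduced here; the sketch below only illustrates, under assumed parameters, how sustained inattention can be modeled in such simulations: a simulated 2AFC observer answers according to its psychometric function except during a lapse window, where it merely guesses, which is the kind of bias the proposed algorithm is designed to detect.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)

        true_threshold, spread = 5.0, 2.0   # assumed psychometric function (degrees)
        n_trials = 80
        lapse_window = range(30, 50)        # assumed sustained inattention period

        # Stimulus levels; a fixed set is used here for brevity instead of an adaptive rule such as PEST
        levels = rng.choice([2.0, 4.0, 6.0, 8.0, 10.0], size=n_trials)

        responses = np.empty(n_trials, dtype=bool)
        for t, x in enumerate(levels):
            if t in lapse_window:
                p_correct = 0.5                                   # inattentive: guessing in 2AFC
            else:
                p_correct = 0.5 + 0.5 * norm.cdf(x, true_threshold, spread)
            responses[t] = rng.random() < p_correct

        # Accuracy at clearly suprathreshold levels, inside vs. outside the lapse window
        supra = levels >= 8.0
        in_lapse = np.isin(np.arange(n_trials), np.asarray(lapse_window))
        print("p(correct | suprathreshold, attentive):  ", responses[supra & ~in_lapse].mean().round(2))
        print("p(correct | suprathreshold, inattentive):", responses[supra & in_lapse].mean().round(2))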

    Reliability, validity, and clinical feasibility of a rapid and objective assessment of post-stroke deficits in hand proprioception

    Background: Proprioceptive function can be affected after neurological injuries such as stroke. Severe and persistent proprioceptive impairments may be associated with poor functional recovery after stroke. To better understand their role in the recovery process, and to improve diagnostics, prognostics, and the design of therapeutic interventions, it is essential to quantify proprioceptive deficits accurately and sensitively. However, current clinical assessments lack sensitivity due to ordinal scales and suffer from poor reliability and ceiling effects. Robotic technology offers new possibilities to address some of these limitations. Nevertheless, it is important to investigate the psychometric and clinimetric properties of technology-assisted assessments.
    Methods: We present an automated robot-assisted assessment of proprioception at the level of the metacarpophalangeal joint, and evaluate its reliability, validity, and clinical feasibility in a study with 23 participants with stroke and an age-matched group of 29 neurologically intact controls. The assessment uses a two-alternative forced choice paradigm and an adaptive sampling procedure to objectively identify the difference threshold of angular joint position.
    Results: The results revealed good reliability (ICC(2,1) = 0.73) for assessing proprioception of the impaired hand of participants with stroke. Assessments showed similar task execution characteristics (e.g., number of trials and duration per trial) between participants with stroke and controls, and a short administration time of approximately 12 min. A difference in proprioceptive function could be found between participants with a right hemisphere stroke and control subjects (p < 0.001). Furthermore, we observed larger proprioceptive deficits in participants with a right hemisphere stroke compared to a left hemisphere stroke (p = 0.028), despite the exclusion of participants with neglect. No meaningful correlation could be established with clinical scales for different modalities of somatosensation; we hypothesize that this is due to their low resolution and ceiling effects.
    Conclusions: This study demonstrated the assessment’s applicability in the impaired population and its promising integration into clinical routine. The proposed assessment has the potential to become a powerful tool for investigating proprioceptive deficits in longitudinal studies, as well as for informing and adjusting sensorimotor rehabilitation to the patient’s deficits.
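    A short sketch of how a two-way random-effects, single-measure ICC(2,1) (Shrout and Fleiss convention) can be computed from a subjects-by-sessions matrix of threshold estimates; the data below are made up, and the textbook formula shown is not necessarily the exact analysis pipeline used in the study.

        import numpy as np

        # Hypothetical test-retest data: rows = subjects, columns = sessions
        x = np.array([[2.1, 2.4],
                      [3.5, 3.1],
                      [1.2, 1.6],
                      [4.0, 3.6],
                      [2.8, 3.0],
                      [1.9, 2.5]])
        n, k = x.shape

        grand = x.mean()
        ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between-subjects mean square
        ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # between-sessions mean square
        ss_err = np.sum((x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand) ** 2)
        ms_err = ss_err / ((n - 1) * (k - 1))

        # ICC(2,1): two-way random effects, absolute agreement, single measurement
        icc21 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
        print(f"ICC(2,1) = {icc21:.2f}")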