99 research outputs found

    Validation of Consumer-Based Hip and Wrist Activity Monitors in Older Adults With Varied Ambulatory Abilities

    BACKGROUND: The accuracy of step detection by consumer-based wearable activity monitors in older adults with varied ambulatory abilities is not known. METHODS: We assessed the validity of two hip-worn (Fitbit One and Omron HJ-112) and two wrist-worn (Fitbit Flex and Jawbone UP) activity monitors in 99 older adults of varying ambulatory abilities, and included validity results from the ankle-worn StepWatch as a comparison device. Nonimpaired, impaired (Short Physical Performance Battery score < 9), cane-using, and walker-using older adults (aged 62 and older) ambulated at a self-selected pace for 100 m wearing all activity monitors simultaneously. The criterion measure was directly observed steps. Intraclass correlation coefficients (ICCs), mean percent error, mean absolute percent error, equivalency testing, and Bland-Altman plots were used to assess accuracy. RESULTS: Steps of nonimpaired adults were underestimated by 4.4% for StepWatch (ICC = 0.87), 2.6% for Fitbit One (ICC = 0.80), 4.5% for Omron HJ-112 (ICC = 0.72), 26.9% for Fitbit Flex (ICC = 0.15), and 2.9% for Jawbone UP (ICC = 0.55). Steps of impaired adults were underestimated by 3.5% for StepWatch (ICC = 0.91), 1.7% for Fitbit One (ICC = 0.96), 3.2% for Omron HJ-112 (ICC = 0.89), 16.3% for Fitbit Flex (ICC = 0.25), and 8.4% for Jawbone UP (ICC = 0.50). Steps of cane-users and walker-users were underestimated by StepWatch by 1.8% (ICC = 0.98) and 1.3% (ICC = 0.99), respectively, whereas all other monitors underestimated steps by >11.5% (ICCs < 0.05). CONCLUSIONS: StepWatch, Omron HJ-112, Fitbit One, and Jawbone UP appeared accurate at measuring steps in older adults with nonimpaired and impaired ambulation during a self-paced walking test. StepWatch also appeared accurate at measuring steps in cane-users.
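The accuracy metrics used above (mean percent error, mean absolute percent error, and Bland-Altman limits of agreement) can be sketched in a few lines. This is a minimal illustration with made-up step counts, not data or code from the study:

```python
# Illustrative sketch: accuracy metrics for a step counter validated
# against directly observed steps (all values here are invented).
from statistics import mean, stdev

def step_accuracy(observed, device):
    """Return mean percent error, mean absolute percent error,
    and Bland-Altman limits of agreement (bias +/- 1.96 SD)."""
    errors = [d - o for o, d in zip(observed, device)]
    pct = [100 * e / o for e, o in zip(errors, observed)]
    mpe = mean(pct)                      # signed bias in percent
    mape = mean(abs(p) for p in pct)     # magnitude of error in percent
    bias = mean(errors)                  # Bland-Altman mean difference
    loa = 1.96 * stdev(errors)           # half-width of limits of agreement
    return mpe, mape, (bias - loa, bias + loa)

obs = [100, 102, 98, 101]                # directly observed steps
dev = [97, 100, 95, 99]                  # a monitor that undercounts slightly
mpe, mape, (lo, hi) = step_accuracy(obs, dev)
```

A negative mean percent error corresponds to the underestimation reported for every monitor in the abstract.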

    Evaluating Digital Health Interventions: Key Questions and Approaches

    Digital health interventions have enormous potential as scalable tools to improve health and healthcare delivery by improving effectiveness, efficiency, accessibility, safety, and personalization. Achieving these improvements requires a cumulative knowledge base to inform the development and deployment of digital health interventions. However, evaluations of digital health interventions present special challenges. This paper aims to examine these challenges and outline an evaluation strategy in terms of the research questions needed to appraise such interventions. Because digital health interventions sit at the intersection of biomedical, behavioral, computing, and engineering research, methods drawn from all of these disciplines are required. Relevant research questions include defining the problem and the likely benefit of the digital health intervention, which in turn requires establishing the likely reach and uptake of the intervention; the causal model describing how the intervention will achieve its intended benefit; its key components and how they interact with one another; and estimating the overall benefit in terms of effectiveness, cost-effectiveness, and harms. Although randomized controlled trials (RCTs) are important for evaluating effectiveness and cost-effectiveness, they are best undertaken only when: (1) the intervention and its delivery package are stable; (2) these can be implemented with high fidelity; and (3) there is a reasonable likelihood that the overall benefits will be clinically meaningful (improved outcomes or equivalent outcomes at lower cost). Broadening the portfolio of research questions and evaluation methods will help develop the necessary knowledge base to inform decisions on policy, practice, and research.

    A decision framework for an adaptive behavioral intervention for physical activity using hybrid model predictive control: illustration with Just Walk

    Physical inactivity is a major contributor to morbidity and mortality worldwide. Many current physical activity behavioral interventions have shown limited success addressing the problem from a long-term perspective that includes maintenance. This paper proposes the design of a decision algorithm for a mobile and wireless health (mHealth) adaptive intervention that is based on control engineering concepts. The design process relies on a behavioral dynamical model based on Social Cognitive Theory (SCT), with a controller formulation based on hybrid model predictive control (HMPC) being used to implement the decision scheme. The discrete and logical features of HMPC coincide naturally with the categorical nature of the intervention components and the logical decisions that are particular to an intervention for physical activity. The intervention incorporates an online controller reconfiguration mode that applies changes in the penalty weights to accomplish the transition between the behavioral initiation and maintenance training stages. Controller performance is illustrated using an ARX model estimated from system identification data of a representative participant for Just Walk, a physical activity intervention designed on the basis of control systems principles. Support for this work was provided by the National Science Foundation (NSF) through grant IIS-449751 and the National Institutes of Health (NIH) through grant R01CA244777. Cevallos, D.; Martín, CA.; El Mistiri, M.; Rivera, DE.; Hekler, E. (2022). Un esquema de decisiones para intervenciones adaptativas comportamentales de actividad física basado en control predictivo por modelo híbrido: ilustración con Just Walk. Revista Iberoamericana de Automática e Informática industrial. 19(3):297-308. https://doi.org/10.4995/riai.2022.16798
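As a hedged illustration of the kind of ARX model mentioned above, a first-order single-input version can be simulated directly. The coefficients and the interpretation of the signals below are invented for demonstration and are not estimated from Just Walk data:

```python
# First-order ARX model: y(k) = a*y(k-1) + b*u(k-1), where u might be a
# daily intervention input (e.g., a step goal) and y the behavioral
# output (e.g., steps walked). Coefficients a, b are placeholders.
def simulate_arx(u, a=0.5, b=0.3, y0=0.0):
    y = [y0]
    for k in range(1, len(u) + 1):
        y.append(a * y[k - 1] + b * u[k - 1])
    return y

# A constant unit input settles at the steady-state gain b / (1 - a).
response = simulate_arx([1.0] * 50)
```

A model predictive controller would use such a model to forecast the effect of candidate input sequences and pick the one minimizing a weighted cost; the online reconfiguration described above corresponds to changing those penalty weights.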

    Determining who responds better to a computer vs. human-delivered physical activity intervention: Results from the community health advice by telephone (CHAT) trial

    Background Little research has explored who responds better to an automated vs. human advisor for health behaviors in general, and for physical activity (PA) promotion in particular. The purpose of this study was to explore baseline factors (i.e., demographics, motivation, interpersonal style, and external resources) that moderate intervention efficacy delivered by either a human or automated advisor. Methods Data were from the CHAT Trial, a 12-month randomized controlled trial to increase PA among underactive older adults (full trial N = 218) via a human advisor or automated interactive voice response advisor. Trial results indicated significant increases in PA in both interventions by 12 months that were maintained at 18 months. Regression was used to explore moderation of the two interventions. Results Results indicated amotivation (i.e., lack of intent in PA) moderated 12-month PA (d = 0.55, p < 0.01) and private self-consciousness (i.e., tendency to attune to one's own inner thoughts and emotions) moderated 18-month PA (d = 0.34, p < 0.05), but a variety of other factors (e.g., demographics) did not (p > 0.12). Conclusions Results provide preliminary evidence for generating hypotheses about pathways for supporting later clinical decision-making with regard to the use of either human- vs. computer-delivered interventions for PA promotion.
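The moderation analysis described above asks whether the intervention effect differs by a baseline factor. A minimal "difference of differences" sketch of that idea, using synthetic data and a median split (simplified relative to the regression models actually used in the trial), is:

```python
# Hedged illustration of moderation: does the treatment-vs-control
# difference change between high- and low-moderator participants?
# All data below are synthetic.
from statistics import mean, median

def moderation_contrast(outcome, group, moderator):
    """Group effect among high- vs low-moderator participants;
    their difference approximates a moderation (interaction) effect."""
    cut = median(moderator)

    def effect(idx):
        treat = [outcome[i] for i in idx if group[i] == 1]
        ctrl = [outcome[i] for i in idx if group[i] == 0]
        return mean(treat) - mean(ctrl)

    hi = [i for i, m in enumerate(moderator) if m >= cut]
    lo = [i for i, m in enumerate(moderator) if m < cut]
    return effect(hi) - effect(lo)

interaction = moderation_contrast(
    [10, 10, 10, 10, 10, 12, 10, 12],  # outcome (e.g., PA minutes)
    [0, 1, 0, 1, 0, 1, 0, 1],          # 0 = control, 1 = intervention
    [0, 0, 0, 0, 1, 1, 1, 1],          # baseline moderator score
)
```

Here the intervention helps only high-moderator participants, so the contrast is positive; a regression with a group-by-moderator interaction term formalizes the same comparison.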

    Human-computer collaboration for skin cancer recognition

    The rapid increase in telemedicine coupled with recent advances in diagnostic artificial intelligence (AI) creates the imperative to consider the opportunities and risks of inserting AI-based support into new paradigms of care. Here we build on recent achievements in the accuracy of image-based AI for skin cancer diagnosis to address the effects of varied representations of AI-based support across different levels of clinical expertise and multiple clinical workflows. We find that good-quality AI-based support of clinical decision-making improves diagnostic accuracy over that of either AI or physicians alone, and that the least experienced clinicians gain the most from AI-based support. We further find that AI-based multiclass probabilities outperformed content-based image retrieval (CBIR) representations of AI in the mobile technology environment, and that AI-based support had utility in simulations of second opinions and of telemedicine triage. In addition to demonstrating the potential benefits associated with good-quality AI in the hands of non-expert clinicians, we find that faulty AI can mislead the entire spectrum of clinicians, including experts. Lastly, we show that insights derived from AI class-activation maps can inform improvements in human diagnosis. Together, our approach and findings offer a framework for future studies across the spectrum of image-based diagnostics to improve human-computer collaboration in clinical practice.

    Validity and reliability of subjective methods to assess sedentary behaviour in adults: a systematic review and meta-analysis.

    BACKGROUND: Subjective measures of sedentary behaviour (SB) (i.e. questionnaires and diaries/logs) are widely implemented and can be useful for capturing the type and context of SBs. However, little is known about their comparative validity and reliability. The aim of this systematic review and meta-analysis was to: 1) identify subjective methods to assess overall, domain- and behaviour-specific SB, and 2) examine the validity and reliability of these methods. METHODS: The databases MEDLINE, EMBASE and SPORTDiscus were searched up to March 2020. Inclusion criteria were: 1) assessment of SB, 2) evaluation of subjective measurement tools, 3) performed in healthy adults, 4) manuscript written in English, and 5) paper was peer-reviewed. Data on validity and/or reliability measurements were extracted from the included studies, and a random-effects meta-analysis was performed to assess the pooled correlation coefficients of the validity. RESULTS: The systematic search resulted in 2423 hits. After excluding duplicates and screening on title and abstract, 82 studies were included, covering 75 self-reported measurement tools. There was wide variability in the measurement properties and in the quality of the studies. Criterion validity varied from poor to excellent (correlation coefficient [R] range −0.01 to 0.90), with logs/diaries (R = 0.63 [95% CI 0.48-0.78]) showing higher criterion validity than questionnaires (R = 0.35 [95% CI 0.32-0.39]). Furthermore, correlation coefficients of single- and multiple-item questionnaires were comparable (1-item R = 0.34; 2-to-9-items R = 0.35; ≥10-items R = 0.37). The reliability of SB measures was moderate to good, with the quality of these studies being mostly fair to good. CONCLUSION: Logs and diaries are recommended to validly and reliably assess self-reported SB. However, due to time and resource constraints, 1-item questionnaires may be preferred to subjectively assess SB in large-scale observations when showing similar validity and reliability compared to longer questionnaires. REGISTRATION NUMBER: CRD42018105994.
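The pooled correlation coefficients above come from a random-effects meta-analysis. One common recipe for pooling correlations, the Fisher z transform with DerSimonian-Laird weighting, can be sketched as follows; the study values in the usage example are made up:

```python
# Illustrative sketch of random-effects pooling of correlations via
# the Fisher z transform and the DerSimonian-Laird estimator of tau^2.
from math import atanh, tanh

def pooled_correlation(rs, ns):
    """Pool correlations rs from studies with sample sizes ns (> 3)."""
    zs = [atanh(r) for r in rs]            # Fisher z per study
    vs = [1 / (n - 3) for n in ns]         # within-study variance of z
    w = [1 / v for v in vs]                # fixed-effect weights
    sw = sum(w)
    zbar = sum(wi * zi for wi, zi in zip(w, zs)) / sw
    q = sum(wi * (zi - zbar) ** 2 for wi, zi in zip(w, zs))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)   # between-study variance
    wr = [1 / (v + tau2) for v in vs]          # random-effects weights
    z_re = sum(wi * zi for wi, zi in zip(wr, zs)) / sum(wr)
    return tanh(z_re)                          # back-transform to r

r_pooled = pooled_correlation([0.2, 0.5], [50, 50])
```

Pooling in z rather than r keeps the sampling distribution approximately normal, which is why the back-transform at the end is needed.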