3,952 research outputs found

    Heart Rate Variability Measured Early in Patients with Evolving Acute Coronary Syndrome and 1-year Outcomes of Rehospitalization and Mortality

    Objective: This study sought to examine the prognostic value of heart rate variability (HRV) measurement initiated immediately after emergency department presentation for patients with acute coronary syndrome (ACS). Background: Altered HRV has been associated with adverse outcomes in heart disease, but the value of HRV measured during the earliest phases of ACS related to risk of 1-year rehospitalization and death has not been established. Methods: Twenty-four-hour Holter recordings of 279 patients with ACS were initiated within 45 minutes of emergency department arrival; recordings with ≥18 hours of sinus rhythm were selected for HRV analysis (N = 193). Time domain, frequency domain, and nonlinear HRV were examined. Survival analysis was performed. Results: During the 1-year follow-up, 94 patients were event-free, 82 were readmitted, and 17 died. HRV was altered in relation to outcomes. Predictors of rehospitalization included increased normalized high frequency power, decreased normalized low frequency power, and a decreased low/high frequency ratio. Normalized high frequency power ≥42 ms² predicted rehospitalization while controlling for clinical variables (hazard ratio [HR] = 2.3; 95% confidence interval [CI] = 1.4–3.8; P = 0.001). Variables significantly associated with death included the natural logs of total power and ultra low frequency power. A model with ultra low frequency power ≤8 ms² (HR = 3.8; 95% CI = 1.5–10.1; P = 0.007) and troponin ≥0.3 ng/mL (HR = 4.0; 95% CI = 1.3–12.1; P = 0.016) revealed that each contributed independently in predicting mortality. Nonlinear HRV variables were significant predictors of both outcomes. Conclusion: HRV measured close to ACS onset may assist in risk stratification. HRV cut-points may provide additional, incremental prognostic information to established assessment guidelines, and may be worthy of additional study.
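The time-domain analysis mentioned in this abstract conventionally includes measures such as SDNN and RMSSD. The sketch below is illustrative only (it is not the study's actual pipeline) and assumes a clean list of normal-to-normal RR intervals in milliseconds:

```python
from math import sqrt
from statistics import stdev

def sdnn(rr_ms):
    """SDNN: standard deviation of normal-to-normal RR intervals (ms)."""
    return stdev(rr_ms)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

# Short synthetic RR series (ms) purely for demonstration
rr = [812, 790, 805, 830, 798, 815]
print(round(sdnn(rr), 1), round(rmssd(rr), 1))  # → 14.0 23.0
```

In practice such metrics would be computed over long, artifact-cleaned segments (the study required ≥18 hours of sinus rhythm) before being entered into survival models.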

    Prevalence and Prognostic Significance of Long QT Interval among Patients with Chest Pain: Selecting an Optimum QT Rate Correction Formula

    Background: Little is known about the prevalence and prognostic significance of a long QT interval among patients with chest pain during the acute phase of suspected cardiovascular injury. Objectives: Our aim was to investigate the prevalence and prognostic significance of a long QT interval among patients presenting to the emergency department (ED) with chest pain, using an optimum QT rate correction formula. Methods: We performed a secondary analysis of data obtained from the IMMEDIATE AIM trial (N = 145). Data included 24-hour 12-lead Holter electrocardiographic recordings that were stored for offline computer analysis. The QT interval was measured automatically and rate corrected using seven QTc formulas, including subject-specific correction. The formula with the closest-to-zero absolute mean QTc/RR correlation was considered the most accurate. Results: Linear and logarithmic subject-specific QT rate correction outperformed the other QTc formulas and resulted in the closest-to-zero absolute mean QTc/RR correlations (mean ± SD: 0.003 ± 0.002 and 0.017 ± 0.016, respectively). These two formulas produced adequate correction in 100% of study participants. Other formulas (Bazett's, Fridericia's, Framingham's, and study specific) resulted in inadequate correction in 47.6% to 95.2% of study participants. Using the optimum QTc formula, linear subject-specific correction, the prevalence of a long QTc interval was 14.5%. The QTc interval did not predict mortality or hospital admission at short- or long-term follow-up. Only the QT/RR slope predicted mortality at 7-year follow-up (odds ratio, 2.01; 95% CI, 1.02–3.96; p < 0.05). Conclusions: Adequate QT rate correction can only be performed using subject-specific correction. A long QT interval is not uncommon among patients presenting to the ED with chest pain.
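The formulas compared above are standard in the ECG literature; the abstract does not give the trial's exact implementation, so the following is an illustrative sketch. Bazett divides by √RR, Fridericia by the cube root of RR, Framingham adds a fixed linear term, and subject-specific linear correction fits each subject's own QT/RR slope and removes it:

```python
from math import sqrt

def bazett(qt_ms, rr_s):
    """Bazett: QTc = QT / sqrt(RR), QT in ms, RR in seconds."""
    return qt_ms / sqrt(rr_s)

def fridericia(qt_ms, rr_s):
    """Fridericia: QTc = QT / RR^(1/3)."""
    return qt_ms / rr_s ** (1 / 3)

def framingham(qt_ms, rr_s):
    """Framingham: QTc = QT + 154 * (1 - RR)."""
    return qt_ms + 154 * (1 - rr_s)

def subject_specific_linear(qt_list, rr_list):
    """Fit QT = a + b*RR for one subject, then correct: QTc = QT + b*(1 - RR)."""
    n = len(rr_list)
    mean_rr = sum(rr_list) / n
    mean_qt = sum(qt_list) / n
    b = sum((r - mean_rr) * (q - mean_qt) for r, q in zip(rr_list, qt_list)) \
        / sum((r - mean_rr) ** 2 for r in rr_list)
    return [q + b * (1 - r) for q, r in zip(qt_list, rr_list)]

# At RR = 1 s (60 bpm) every formula leaves QT unchanged:
print(bazett(400, 1.0), fridericia(400, 1.0), framingham(400, 1.0))
```

The "closest-to-zero QTc/RR correlation" criterion in the abstract checks that, after correction, the corrected QT no longer varies with RR; by construction, the subject-specific fit satisfies this for each individual, which is why it outperforms the population-level formulas.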

    Heart Rate Variability Measurement and Clinical Depression in Acute Coronary Syndrome Patients: Narrative Review of Recent Literature

    Aim: We aimed to explore links between heart rate variability (HRV) and clinical depression in patients with acute coronary syndrome (ACS), through a review of recent clinical research literature. Background: Patients with ACS are at risk for both cardiac autonomic dysfunction and clinical depression. Both conditions can negatively impact the ability to recover from an acute physiological insult, such as unstable angina or myocardial infarction, increasing the risk for adverse cardiovascular outcomes. HRV is recognized as a reflection of autonomic function. Methods: A narrative review was undertaken to evaluate state-of-the-art clinical research, using the PubMed database in January 2013. The search terms “heart rate variability” and “depression” were used in conjunction with “acute coronary syndrome”, “unstable angina”, or “myocardial infarction” to find clinical studies published within the past 10 years related to HRV and clinical depression in patients with an ACS episode. Studies were included if HRV measurement and depression screening were undertaken during an ACS hospitalization or within 2 months of hospital discharge. Results: Nine clinical studies met the inclusion criteria. The studies’ results indicate that there may be a relationship between abnormal HRV and clinical depression when assessed early after an ACS event, offering the possibility that these risk factors play a modest role in patient outcomes. Conclusion: While a definitive conclusion about the relevance of HRV and clinical depression measurement in ACS patients would be premature, the literature suggests that these measures may provide additional information in risk assessment. Potential avenues for further research are proposed.

    Theoretical Analysis of the "Double-q" Magnetic Structure of CeAl2

    A model involving competing short-range isotropic Heisenberg interactions is developed to explain the "double-q" magnetic structure of CeAl2. For suitably chosen interactions, terms in the Landau expansion quadratic in the order parameters explain the condensation of incommensurate order at wavevectors in the star of (1/2 − δ, 1/2 + δ, 1/2)(2π/a), where a is the cubic lattice constant. We show that the fourth-order terms in the Landau expansion lead to the formation of the so-called "double-q" magnetic structure, in which long-range order develops simultaneously at two symmetry-related wavevectors, in striking agreement with the magnetic structure determinations. Based on the value of the ordering temperature and of the Curie-Weiss Θ of the susceptibility, we estimate that the nearest-neighbor interaction K0 is ferromagnetic, with K0/k = −11 ± 1 K, and the next-nearest-neighbor interaction J is antiferromagnetic, with J/k = 6 ± 1 K. We also briefly comment on the analogous phenomenon seen in the similar system TmS. Comment: 22 pages, 6 figures
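The selection mechanism described can be sketched with a schematic Landau free energy (an illustrative form only; the coefficients r, u, v are not the paper's fitted values). Writing ψ1 and ψ2 for the order parameters at two symmetry-related wavevectors:

```latex
F = r(T)\left(|\psi_1|^2 + |\psi_2|^2\right)
  + u\left(|\psi_1|^4 + |\psi_2|^4\right)
  + v\,|\psi_1|^2|\psi_2|^2, \qquad r(T) \propto T - T_c .
```

For r < 0, comparing the single-q minimum (F = −r²/4u) with the equal-amplitude double-q minimum (F = −r²/(2u + v)) shows that the double-q state with |ψ1| = |ψ2| wins whenever v < 2u (with u > 0 and 2u + v > 0 for stability): a sufficiently weak or negative quartic cross-coupling makes simultaneous condensation at both wavevectors favorable.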

    Integrating monitor alarms with laboratory test results to enhance patient deterioration prediction

    Patient monitors in modern hospitals have become ubiquitous, but they generate an excessive number of false alarms, causing alarm fatigue. Our previous work showed that combinations of frequently co-occurring monitor alarms, called SuperAlarm patterns, were capable of predicting in-hospital code blue events at a lower alarm frequency. In the present study, we extend the conceptual domain of a SuperAlarm to incorporate laboratory test results along with monitor alarms so as to build an integrated data set from which to mine SuperAlarm patterns. We propose two approaches to integrate monitor alarms with laboratory test results and use a maximal frequent itemsets mining algorithm to find SuperAlarm patterns. Under an acceptable false positive rate FPRmax, optimal parameters for the algorithm, including the minimum support threshold and the length of the time window for finding combinations of monitor alarms and laboratory test results, are determined based on a 10-fold cross-validation set. SuperAlarm candidates are generated under these optimal parameters. The final SuperAlarm patterns are obtained by further removing candidates with a false positive rate > FPRmax. The performance of SuperAlarm patterns is assessed using an independent test data set. First, we calculate the sensitivity with respect to the prediction window and the sensitivity with respect to lead time. Second, we calculate the false SuperAlarm ratio (the ratio of the hourly number of SuperAlarm triggers for control patients to that of monitor alarms, or to that of regular monitor alarms plus laboratory test results if the SuperAlarm patterns contain laboratory test results) and the work-up to detection ratio, WDR (the ratio of the number of patients triggering any SuperAlarm pattern to the number of code blue patients triggering any SuperAlarm pattern).
The experimental results demonstrate that when FPRmax varies between 0.02 and 0.15, the SuperAlarm patterns composed of monitor alarms along with the last two laboratory test results are triggered at least once for 56.7–93.3% of code blue patients within a 1-h prediction window before code blue events, and for 43.3–90.0% of code blue patients at least 1 h ahead of code blue events. However, the hourly number of these SuperAlarm patterns occurring in control patients is only 2.0–14.8% of that of regular monitor alarms, with WDR varying between 2.1 and 6.5 in a 12-h window. For a given FPRmax threshold, the SuperAlarm set generated from the integrated data set has higher sensitivity and lower WDR than the SuperAlarm set generated from the regular monitor alarm data set. In addition, McNemar’s test shows that the performance of the SuperAlarm set from the integrated data set is significantly different from that of the SuperAlarm set from the regular monitor alarm data set. We therefore conclude that SuperAlarm patterns generated from the integrated data set are better at predicting code blue events.
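The core mining step named in this abstract, maximal frequent itemset mining, keeps only frequent itemsets that have no frequent proper superset. A brute-force sketch (the study would use a scalable MFI algorithm, not this enumeration; the alarm/lab names are hypothetical):

```python
from itertools import combinations

def maximal_frequent_itemsets(transactions, min_support):
    """Enumerate frequent itemsets level by level (Apriori-style),
    then keep only those with no frequent proper superset."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for k in range(1, len(items) + 1):
        level = []
        for cand in combinations(items, k):
            s = set(cand)
            support = sum(1 for t in transactions if s <= t) / n
            if support >= min_support:
                level.append(s)
        if not level:
            break  # anti-monotonicity: no larger frequent itemset exists
        frequent.extend(level)
    return [s for s in frequent if not any(s < t for t in frequent)]

# Toy observation windows mixing monitor alarms and a lab result
windows = [{"HR_high", "SpO2_low", "lactate_high"},
           {"HR_high", "SpO2_low"},
           {"HR_high", "lactate_high"},
           {"SpO2_low", "lactate_high"}]
print(maximal_frequent_itemsets(windows, min_support=0.5))
```

Here each two-item combination occurs in half the windows, but the three-item combination occurs only once, so the maximal frequent itemsets are the three pairs; in the SuperAlarm setting these surviving combinations become candidate patterns to be filtered by FPRmax.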

    Feasibility and Compliance with Daily Home ECG Monitoring of the QT Interval in Heart Transplant Recipients

    Background: Recent evidence suggests that acute allograft rejection after heart transplantation causes an increased QT interval on the electrocardiogram (ECG). The aims of this pilot study were to (1) determine whether heart transplant recipients could achieve compliance in transmitting a 30-second ECG every day for 1 month using a simple ECG device and their home telephone, (2) evaluate the ease of device use and acceptability by transplant recipients, and (3) evaluate the quality of transmitted ECG tracings for QT-interval measurement. Methods: A convenience sample of adult heart transplant recipients was recruited and trained to use the device (HeartOne, Aerotel Medical Systems, Holon, Israel). Lead II was used with electrodes that were easy to slip on and off (an expandable metal wristwatch-type electrode for the right wrist and a C-shaped band electrode for the left ankle). Patients used a toll-free number with automated voice prompts to guide their ECG transmission to the core laboratory for analysis. Results: Thirty-one subjects (72% male; mean age, 52 ± 17 years; 37% nonwhite) achieved ECG transmission compliance of 73.4% (daily) and 100% (weekly). When asked, “How difficult do you think it was to record and transmit your ECG by phone?” 90% of subjects replied “somewhat easy” or “extremely easy.” Of the 644 ECGs transmitted by subjects, 569 (89%) were of acceptable quality for QT-interval measurement. The mean QTc was 448 ± 44 ms (440 ± 41 ms for male subjects and 471 ± 45 ms for female subjects). Eleven subjects (35%) had an extremity tremor, and 19 subjects (55%) had ≥1+ left leg edema. Neither of these conditions interfered with ECG measurements. Conclusion: Transplant recipients are compliant with recording and transmitting daily and weekly ECGs.

    Formative Analysis of Aging in Place: Implications for the Design of Caregiver Robots

    Many have conceptualized caregiver robots as consumer products and studied elders’ perceived needs for and preferences about such products. For reviews, please see (Broadbent, Stafford, & MacDonald, 2009; Jones & Schmidlin, 2011). That approach, though, could create robots that cannot satisfy elders’ actual caregiving needs. Alternatively, one can conceptualize caregiver robots as workers in complex socio-technical systems. To do so, one would need a detailed account of the caregiving that takes place in elders’ homes. Unfortunately, as noted in a National Research Council (2011) report, such a detailed account of caregiving does not exist. Accordingly, we sought to develop such an account. There are many ways to analyze work (for a discussion of general approaches, see Vicente, 1999). They can be categorized into three general types: normative, descriptive, and formative approaches (Vicente, 1999). We adopted a formative approach because formative approaches are tailored to the analysis of complex socio-technical systems (Vicente, 1999). They capture work requirements without specifying how that work must be done or who must do it. For example, the constraint “must not lose track of time” captures a work requirement but allows the associated work to be accomplished in a number of different ways (e.g., by checking a clock or setting an alarm) and by a number of different entities (e.g., a family member or a caregiver robot). To conduct our analysis, researchers observed caregiving in elders’ homes and interviewed caregivers about their work activities. Researchers then organized their findings into an Abstraction Hierarchy (AH; Vicente, 1999), that is, a detailed account of the aging-in-place socio-technical system. Our primary aim was to create an AH that describes means-ends relations between the complex socio-technical caregiving system’s overall objectives, work tasks, and physical resources. 
Such a description provides a detailed account of the caregiving work domain and serves as the foundation for subsequent formative analyses of caregiving. To create the AH, research team members completed 4 steps: 1) analyzing existing caregiving documentation, 2) observing caregiving and interviewing caregivers, 3) drafting and/or refining the AH, and 4) validating the AH. Steps 2 and 3 were iterative. This process is consistent with Naikar, Hopcraft, and Moylan’s (2005) recommendations regarding formative analyses. The AH made clear that caregiving for those who age in place is a complex and nuanced activity. More specifically, our analysis confirmed existing research regarding categories of caregiving tasks and revealed aspects of caregiving that have not been detailed so far. The existing literature indicates that caregivers assist older adults with self-maintenance activities of daily life (ADLs), such as eating, toileting, and dressing (Lawton, 1990), instrumental activities of daily life (IADLs), such as cooking, cleaning, and shopping (Lawton, 1990), and enhanced activities of daily life (EADLs), such as participating in social activities and pursuing hobbies (Rogers, et al., 1998). Our analysis confirmed those findings, and our AH provides a more detailed account of those tasks than was previously available. Our analysis also revealed aspects of caregiving for those who are aging in place that have not been detailed thus far in the research literature. For example, our AH contains a purpose-related function called Counseling, which concerns ensuring that the elder does not experience psychological distress. To perform this function, the caregiver must understand the elder’s situation (e.g., a family conflict), use information about that situation (e.g., experience with relevant family members and/or past conflicts; the elder’s past choices), and offer the elder advice about how to proceed (e.g., which family member’s advice to follow). 
The main implication of our AH for the design of caregiver robots is that such robots cannot be designed to perform purpose-related functions in a one-size-fits-all way; rather, caregiver robots must exhibit context-conditioned variability (Vicente, 1999). Our AH has many other important implications for the design of caregiver robots, which unfortunately cannot be detailed here due to space constraints.

    Using the Stages of Change Model to Choose an Optimal Health Marketing Target

    Background: In the transtheoretical model of behavior change, “stages of change” are defined as Precontemplation (not even thinking about changing), Contemplation, Preparation, Action, and Maintenance (maintaining the behavior change). Marketing principles suggest that efforts should be targeted at persons most likely to “buy the product.” Objectives: To examine the effect of intervening at different stages in populations of smokers with various numbers of people in each “stage of change.” One type of intervention would increase by 10% the probability of a person moving to the next higher stage of change, such as from Precontemplation to Contemplation. The second type would decrease by 10% the probability of relapsing to the next lower stage, such as from Maintenance to Action, and also of changing from Never Smoker to Smoker. Nine hypothetical interventions were compared with the status quo to determine which type of intervention would provide the most improvement in population smoking. Methods: Three datasets were used to estimate the probability of moving among the stages of change for smoking. Those probabilities were used to create multi-state life tables, which yielded estimates of the expected number of years the population would spend in each stage of change starting at age 40. We estimated the effect of each hypothetical intervention and compared the intervention effects. Several initial conditions, time horizons, and criteria for success were examined. Results: A population of 40-year-olds in Precontemplation had a further life expectancy of 36 years, of which 26 would be spent in the Maintenance stage. In a population of former and current smokers, moving more persons from the Action to the Maintenance stage (a form of relapse prevention) decreased the number of years spent smoking more than any other intervention. In a population of 40-year-olds that included Never Smokers, primary smoking prevention was the most effective. 
The results varied somewhat by the choice of criterion, the length of follow-up, the initial stage distribution, the data, and the sensitivity analyses. Conclusions: In a population of 40-year-olds, smokers were likely to achieve Maintenance without an intervention. On a population basis, targeting quitters and never-smokers was more effective than targeting current smokers. This finding is supported by some principles of health marketing. Additional research should target younger ages as well as other health behaviors.
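The multi-state life table described above can be approximated as a Markov chain over the stages of change: iterate an annual transition matrix and accumulate the expected person-years in each stage. A minimal sketch with entirely hypothetical transition probabilities (the paper's estimates came from three datasets and also modeled mortality and Never Smokers, which are omitted here):

```python
# States: Precontemplation, Contemplation, Preparation, Action, Maintenance
STATES = ["PC", "C", "PR", "A", "M"]

# Hypothetical annual transition probabilities (each row sums to 1);
# not the paper's estimated values.
P = [
    [0.85, 0.15, 0.00, 0.00, 0.00],  # PC: mostly stays, some move to C
    [0.10, 0.70, 0.20, 0.00, 0.00],  # C: may relapse to PC or advance
    [0.00, 0.15, 0.60, 0.25, 0.00],  # PR
    [0.00, 0.00, 0.10, 0.55, 0.35],  # A: advance to M or relapse to PR
    [0.00, 0.00, 0.00, 0.10, 0.90],  # M: largely self-sustaining
]

def expected_years(start_state, years):
    """Expected person-years spent in each state over a fixed horizon
    (mortality ignored in this simplified sketch)."""
    dist = [1.0 if s == start_state else 0.0 for s in STATES]
    totals = [0.0] * len(STATES)
    for _ in range(years):
        totals = [t + d for t, d in zip(totals, dist)]
        dist = [sum(dist[i] * P[i][j] for i in range(len(STATES)))
                for j in range(len(STATES))]
    return dict(zip(STATES, totals))

years = expected_years("PC", 36)
print({s: round(y, 1) for s, y in years.items()})
```

An intervention of the kind the abstract describes would be modeled by perturbing one transition probability (e.g., raising a forward probability by 10% and rebalancing the row) and comparing the resulting expected years spent in smoking stages against this baseline.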