
    Cardiac resynchronization therapy during rest and exercise: comparison of two optimization methods

    Optimal exercise programming of cardiac resynchronization therapy (CRT) devices is unknown. We aimed to: (i) investigate variations in optimal atrioventricular (AV) and interventricular (VV) delays from rest to exercise, assessed by both echocardiography and an automated intracardiac electrogram (IEGM) method; and (ii) evaluate the acute haemodynamic impact of CRT optimization performed during exercise.

    Shortening of atrioventricular delay at increased atrial paced heart rates improves diastolic filling and functional class in patients with biventricular pacing

    Background: Use of a rate-adaptive atrioventricular (AV) delay remains controversial in patients with biventricular (Biv) pacing. We hypothesized that a shortened AV delay would provide optimal diastolic filling by allowing separation of early and late diastolic filling at increased heart rate (HR) in these patients. Methods: 34 patients (75 ± 11 yrs, 24 M, LVEF 34 ± 12%) with Biv and atrial pacing had the optimal AV delay determined at baseline HR by Doppler echocardiography. Atrial pacing rate was then increased in 10 bpm increments to a maximum of 90 bpm. At each atrial pacing HR, the optimal AV delay was determined by changing the AV delay until the best E and A wave separation was seen on mitral inflow pulsed wave (PW) Doppler (defined as increased atrial duration from baseline or the prior pacemaker setting, with minimal atrial truncation). Left ventricular (LV) systolic ejection time and velocity time integral (VTI) at fixed and optimal AV delay were also tested in 13 patients. The rate-adaptive AV delay was then programmed according to the optimal AV delay at the highest HR tested, and patients were followed for 1 month to assess change in NYHA class and Quality of Life Score as assessed by the Minnesota Living with Heart Failure Questionnaire. Results: 81 AV delays were evaluated at different atrial pacing rates. Optimal AV delay decreased as atrial paced HR increased (201 ms at 60 bpm, 187 ms at 70 bpm, 146 ms at 80 bpm and 123 ms at 90 bpm; ANOVA F-statistic = 15, p = 0.001). Diastolic filling time (p < 0.001 vs. fixed AV delay), mitral inflow VTI (p < 0.05) and systolic ejection time (p < 0.02) improved by 14%, 5% and 4%, respectively, at optimal versus fixed AV delay at the same HR. NYHA class improved from 2.6 ± 0.7 at baseline to 1.7 ± 0.8 (p < 0.01) 1 month post optimization. The physical component of the Quality of Life Score improved from 32 ± 17 at baseline to 25 ± 12 (p < 0.05) at follow-up. Conclusions: Increased heart rate by atrial pacing in patients with Biv pacing compromises diastolic filling time, which can be improved by AV delay shortening. Aggressive AV delay shortening was required at heart rates in the physiologic range to achieve optimal diastolic filling and was associated with an increase in LV ejection time during optimization. Functional class improved at 1 month post optimization using an aggressive AV delay shortening algorithm derived from echo guidance at the time of Biv pacemaker optimization.
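    The rate-adaptive programming described above can be illustrated with a minimal sketch that linearly interpolates between the group-mean optimal AV delays reported in the abstract. The function name and the interpolation scheme are illustrative assumptions, not the study's (or any device's) actual algorithm:

```python
# Group-mean optimal AV delays from the abstract: (heart rate in bpm, AV delay in ms).
OPTIMA = [(60, 201), (70, 187), (80, 146), (90, 123)]

def rate_adaptive_av_delay(hr):
    """Hypothetical rate-adaptive AV delay: linear interpolation between
    the study's group-mean optima, clamped outside the 60-90 bpm range."""
    if hr <= OPTIMA[0][0]:
        return OPTIMA[0][1]
    if hr >= OPTIMA[-1][0]:
        return OPTIMA[-1][1]
    for (h0, a0), (h1, a1) in zip(OPTIMA, OPTIMA[1:]):
        if h0 <= hr <= h1:
            # interpolate between the two bracketing optima
            return a0 + (a1 - a0) * (hr - h0) / (h1 - h0)

print(rate_adaptive_av_delay(75))   # halfway between the 70 and 80 bpm optima
```

    In the study itself each patient's optimum was determined individually by echo at each pacing rate; a per-patient table would replace the group means used here.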

    Cardiac resynchronization therapy: a comparison among left ventricular bipolar, quadripolar and active fixation leads

    We evaluated the performance of 3 different left ventricular (LV) leads for resynchronization therapy: bipolar (BL), quadripolar (QL) and active fixation leads (AFL). We enrolled 290 consecutive CRT-D candidates implanted with BL (n = 136), QL (n = 97) or AFL (n = 57). Over a minimum 10-month follow-up, we assessed: (a) a composite technical endpoint (TE) (phrenic nerve stimulation at 8 [email protected] ms, safety margin between myocardial and phrenic threshold < 2 V, LV lead dislodgement, and failure to achieve the target pacing site); (b) a composite clinical endpoint (CE) (death, hospitalization for heart failure, heart transplantation, lead extraction for infection); (c) reverse remodeling (RR) (reduction of end-systolic volume > 15%). Baseline characteristics of the 3 groups were similar. At follow-up the incidence of TE was 36.3%, 14.3% and 19.9% in BL, AFL and QL, respectively (p < 0.01). The incidence of RR was 56%, 64% and 68% in BL, AFL and QL, respectively (p = 0.02). There were no significant differences in CE (p = 0.380). In multivariable analysis, "non-BL leads" was the single predictor of an improved clinical outcome. QL and AFL are superior to conventional BL by enhancing pacing of the target site: AFL through prevention of lead dislodgement, QL through improved management of phrenic nerve stimulation.

    Neurophysiological and neuroradiological test for early poor outcome (Cerebral Performance Categories 3–5) prediction after cardiac arrest: Prospective multicentre prognostication data

    The data presented here are related to our research article entitled "Neurophysiology and neuroimaging accurately predict poor neurological outcome within 24 hours after cardiac arrest: a prospective multicentre prognostication study (ProNeCA)" [1]. We report a secondary analysis of the ability of somatosensory evoked potentials (SEPs), brain computed tomography (CT) and electroencephalography (EEG) to predict poor neurological outcome at 6 months in 346 patients who were comatose after cardiac arrest. Differently from the related research article, here we included cerebral performance category (CPC) 3 among poor outcomes, so that the outcomes are dichotomised as CPC 1–2 (absent to mild neurological disability: good outcome) vs. CPC 3–5 (severe neurological disability, persistent vegetative state, or death: poor outcome). The accuracy of the index tests was recalculated accordingly. A bilaterally absent or absent-pathological (AA/AP) N20 SEP wave, a grey matter/white matter (GM/WM) ratio < 1.21 on brain CT, and an isoelectric or burst-suppression EEG predicted poor outcome with 49.6%, 42.2% and 29.8% sensitivity, respectively, each with 100% specificity. The positive results of the three predictors did not overlap completely in the population of patients with poor outcome, so that combining them raised the overall sensitivity to 61.2%.
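    The gain in sensitivity from combining predictors whose positives only partly overlap can be shown numerically. The patient sets below are invented for illustration (chosen so the individual sensitivities roughly match the reported 49.6%, 42.2% and 29.8%); they are not the study's patient-level data:

```python
# Hypothetical sets of poor-outcome patients (IDs 0-99) testing positive on each index test.
n_poor = 100
sep_pos = set(range(0, 50))     # SEP-positive:  50% of poor-outcome patients
ct_pos = set(range(20, 62))     # CT-positive:   42%
eeg_pos = set(range(40, 70))    # EEG-positive:  30%

for name, pos in [("SEP", sep_pos), ("CT", ct_pos), ("EEG", eeg_pos)]:
    print(f"{name}: {len(pos) / n_poor:.0%} sensitivity")

# A patient is predicted-poor if ANY one test is positive; because the
# positives overlap only partly, the union exceeds any single set.
combined = sep_pos | ct_pos | eeg_pos
print(f"combined: {len(combined) / n_poor:.0%} sensitivity")
```

    Because each test individually has 100% specificity (no false positives), the "any test positive" rule preserves 100% specificity while sensitivity grows with the union of the positive sets.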

    When is an optimization not an optimization? Evaluation of clinical implications of information content (signal-to-noise ratio) in optimization of cardiac resynchronization therapy, and how to measure and maximize it

    Impact of variability in the measured parameter is rarely considered in designing clinical protocols for optimization of atrioventricular (AV) or interventricular (VV) delay of cardiac resynchronization therapy (CRT). In this article, we approach this question quantitatively using mathematical simulation in which the true optimum is known, and examine practical implications using some real measurements. We calculated the performance of any optimization process that selects the pacing setting which maximizes an underlying signal, such as flow or pressure, in the presence of overlying random variability (noise). If signal and noise are of equal size, for a 5-choice optimization (60, 100, 140, 180, 220 ms), replicate AV delay optima are rarely identical but rather scattered with a standard deviation of 45 ms. This scatter was overwhelmingly determined (ρ = −0.975, p < 0.001) by Information Content, Signal/(Signal + Noise), an expression of signal-to-noise ratio. Averaging multiple replicates improves information content. In real clinical data at resting heart rate, information content is often only 0.2–0.3; elevated pacing rates can raise information content above 0.5. Low information content (e.g. < 0.5) causes gross overestimation of the optimization-induced increment in VTI, a high false-positive appearance of change in optimum between visits, and very wide confidence intervals for the individual patient optimum. AV and VV optimization by selecting the setting showing maximum cardiac function can only be accurate if information content is high. Simple steps to reduce noise, such as averaging multiple replicates, or to increase signal, such as increasing heart rate, can improve the information content, and therefore viability, of any optimization process.
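    The simulation idea can be sketched as follows: repeatedly pick the AV delay that maximises a noisy measurement whose true response peaks at a known optimum, then examine the scatter of the replicate optima. The response curve and parameter values here are illustrative assumptions, not the article's exact model:

```python
import random

AV_DELAYS = [60, 100, 140, 180, 220]  # the 5-choice optimization grid (ms)

def pick_optimum(signal, noise_sd, true_opt=140, n_reps=1):
    """Return the AV delay with the highest (noisy) measured response.

    The 'true' response is an inverted parabola peaking at true_opt;
    each measurement adds Gaussian noise, and n_reps replicates are averaged.
    """
    def measure(av):
        true_response = signal * (1 - ((av - true_opt) / 160) ** 2)
        reps = [true_response + random.gauss(0, noise_sd) for _ in range(n_reps)]
        return sum(reps) / len(reps)

    return max(AV_DELAYS, key=measure)

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

random.seed(1)
# Signal and noise of equal size: replicate optima scatter widely.
single = [pick_optimum(signal=1.0, noise_sd=1.0) for _ in range(1000)]
# Averaging replicates shrinks the effective noise and tightens the scatter.
averaged = [pick_optimum(signal=1.0, noise_sd=1.0, n_reps=8) for _ in range(1000)]

print(f"scatter of single-measurement optima: SD {sd(single):.0f} ms")
print(f"scatter of 8-replicate optima:        SD {sd(averaged):.0f} ms")
```

    Averaging 8 replicates divides the effective noise standard deviation by √8 ≈ 2.8, which raises the information content Signal/(Signal + Noise) and visibly tightens the spread of "optima" around the true value.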

    Are patients with GBA-Parkinson disease good candidates for deep brain stimulation? A longitudinal multicentric study on a large Italian cohort

    Background: GBA variants increase the risk of developing Parkinson disease (PD) and influence its outcome. Deep brain stimulation (DBS) is a recognised therapeutic option for advanced PD. Data on long-term DBS outcome in GBA carriers are scarce. Objective: To elucidate the impact of GBA variants on long-term DBS outcome in a large Italian cohort. Methods: We retrospectively recruited a multicentric Italian DBS-PD cohort and assessed: (1) GBA prevalence; (2) pre-DBS clinical features; and (3) outcomes of motor, cognitive and other non-motor features up to 5 years post-DBS. Results: We included 365 patients with PD, of whom 73 (20%) carried GBA variants. Five-year follow-up data were available for 173 patients, including 32 mutated subjects. GBA-PD had an earlier onset and were younger at DBS than non-GBA-PD. They also had shorter disease duration and a higher occurrence of dyskinesias and orthostatic hypotension symptoms. Post-DBS, both groups showed marked motor improvement, a significant reduction of fluctuations, dyskinesias and impulsive-compulsive disorders (ICD), and a low occurrence of most complications. Only cognitive scores worsened significantly faster in GBA-PD after 3 years. Overt dementia was diagnosed in 11% of non-GBA-PD and 25% of GBA-PD at 5-year follow-up. Conclusions: Evaluation of the long-term impact of GBA variants in a large Italian DBS-PD cohort supported the role of DBS surgery as a valid therapeutic strategy in GBA-PD, with long-term benefit on motor performance and ICD. Despite the selective worsening of cognitive scores from 3 years post-DBS onwards, the majority of GBA-PD had not developed dementia at 5-year follow-up.

    Frequency and outcome of olfactory impairment and sinonasal involvement in hospitalized patients with COVID-19

    Background: Olfactory dysfunction has been shown to accompany COVID-19, but reported frequencies vary across study populations and the outcome of the olfactory impairment is not clearly defined. Objective: To determine the frequency of olfactory impairment and its outcome in hospitalized patients with a positive swab test for COVID-19. Methods: This is a prospective descriptive study of 100 hospitalized COVID-19 patients, randomly sampled, from February to March 2020. Demographics, comorbidities, and laboratory findings were analyzed according to olfactory loss or sinonasal symptoms. Olfactory impairment and sinonasal symptoms were evaluated by 9 Likert-scale questions administered to the patients. Results: Ninety-two patients completed the follow-up (mean 20.1 ± 7.42 days). Twenty-two (23.91%) patients complained of olfactory loss, and in 6 (6.52%) patients olfactory loss was the first symptom of the disease. The olfactory loss was reported to be completely resolved in all but one patient. Thirty-nine (42.39%) patients had notable sinonasal symptoms, while rhinorrhea was the first symptom in 3 (3.26%). Fifteen patients (16.3%) had a taste impairment. Patients with sinonasal symptoms were younger (p = 0.01). There was no significant relation between olfactory loss and sinonasal symptoms (p = 0.07). Conclusions: Sudden olfactory dysfunction and sinonasal symptoms have a considerable prevalence in patients with COVID-19. No significant association was noted between the sinonasal symptoms and the olfactory loss, which may suggest that mechanisms beyond upper respiratory tract involvement are responsible for the olfactory loss. © 2020, Fondazione Società Italiana di Neurologia

    Current treatment practice of Guillain-Barré syndrome

    Objective: To define the current treatment practice of Guillain-Barré syndrome (GBS). Methods: The study was based on prospective observational data from the first 1,300 patients included in the International GBS Outcome Study. We described the treatment practice of GBS in general, and for (1) severe forms (unable to walk independently), (2) no recovery after initial treatment, (3) treatment-related fluctuations, (4) mild forms (able to walk independently), and (5) variant forms including Miller Fisher syndrome, taking patient characteristics and hospital type into account. Results: We excluded 88 (7%) patients because of missing data, protocol violation, or alternative diagnosis. Patients from Bangladesh (n = 189, 15%) were described separately because 83% were not treated. IV immunoglobulin (IVIg), plasma exchange (PE), or other immunotherapy was provided in 941 (92%) of the remaining 1,023 patients, including patients with severe GBS (724/743, 97%), mild GBS (126/168, 75%), Miller Fisher syndrome (53/70, 76%), and other variants (33/40, 83%). Of 235 (32%) patients who did not improve after their initial treatment, 82 (35%) received a second immune-modulatory treatment. A treatment-related fluctuation was observed in 53 (5%) of 1,023 patients, of whom 36 (68%) were re-treated with IVIg or PE. Conclusions: In current practice, patients with mild and variant forms of GBS, or with treatment-related fluctuations and treatment failures, are frequently treated, even in the absence of trial data to support this choice. The variability in treatment practice can be explained in part by the lack of evidence and guidelines for effective treatment in these situations.

    Usefulness of NT-pro BNP monitoring to identify echocardiographic responders following cardiac resynchronization therapy

    Background: Cardiac resynchronization therapy (CRT) improves left ventricular (LV) volumes, mitral regurgitation (MR) severity and symptoms of patients with heart failure (HF). However, ≥ 30% of patients have no significant clinical or echocardiographic improvement following CRT. Reverse remodeling after CRT correlates with improved clinical outcomes. We hypothesized that NT-pro BNP monitoring can accurately identify responders following CRT. Methods: 42 consecutive patients (mean age 66 ± 12 years, male 68%) with HF undergoing CRT were prospectively enrolled. Responders at follow-up were defined by echocardiography (decrease in LV end-systolic volume ≥ 15%). Echocardiography and NT-pro BNP measurement were performed at baseline and repeated 3 to 6 months after CRT. Results: There was no significant difference between responders (n = 29, 69%) and non-responders (n = 13, 31%) in baseline NT-pro BNP level. Responders had a significantly greater decrease in NT-pro BNP levels during follow-up than non-responders (absolute: -1428 ± 1333 pg/ml vs. -61 ± 959 pg/ml, p = 0.002; relative: -45 ± 28% vs. 2 ± 28%, p < 0.0001). A decrease of ≥ 15% in NT-pro BNP 3–6 months after CRT identified echocardiographic responders with a sensitivity of 90% and a specificity of 77%. Conclusion: NT-pro BNP monitoring can accurately identify echocardiographic responders after CRT.
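    The responder criterion reduces to a simple relative-change threshold. The function names and example values below are illustrative, not taken from the study:

```python
def relative_change(baseline, follow_up):
    """Relative change in NT-pro BNP; negative values are decreases."""
    return (follow_up - baseline) / baseline

def predicted_responder(baseline, follow_up, threshold=-0.15):
    """Predict echocardiographic response if NT-pro BNP fell by >= 15%
    between baseline and the 3-6 month follow-up measurement."""
    return relative_change(baseline, follow_up) <= threshold

print(predicted_responder(3000, 1600))   # ~47% fall: predicted responder
print(predicted_responder(3000, 2900))   # ~3% fall: predicted non-responder
```

    Against the echocardiographic gold standard (≥ 15% fall in LV end-systolic volume), this rule achieved 90% sensitivity and 77% specificity in the study's 42 patients.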