43 research outputs found

    Effects of remote monitoring on clinical outcomes and use of healthcare resources in heart failure patients with biventricular defibrillators: results of the MORE-CARE multicentre randomized controlled trial

    Get PDF
    Aims: The aim of this study was to evaluate the clinical efficacy and safety of remote monitoring in patients with heart failure implanted with a biventricular defibrillator (CRT-D) with advanced diagnostics. Methods and results: The MORE-CARE trial is an international, prospective, multicentre, randomized controlled trial. Within 8 weeks of de novo implant of a CRT-D, patients were randomized to undergo remote checks alternating with in-office follow-ups (Remote arm) or in-office follow-ups alone (Standard arm). The primary endpoint was a composite of death and cardiovascular (CV) and device-related hospitalization. Use of healthcare resources was also evaluated. A total of 865 eligible patients (mean age 66 ± 10 years) were included in the final analysis (437 in the Remote arm and 428 in the Standard arm) and followed for a median of 24 (interquartile range = 15–26) months. No significant difference was found in the primary endpoint between the Remote and Standard arms [hazard ratio 1.02, 95% confidence interval (CI) 0.80–1.30, P = 0.89] or in the individual components of the primary endpoint (P > 0.05). For the composite endpoint of healthcare resource utilization (i.e. 2-year rates of CV hospitalizations, CV emergency department admissions, and CV in-office follow-ups), a significant 38% reduction was found in the Remote vs. Standard arm (incidence rate ratio 0.62, 95% CI 0.58–0.66, P < 0.001), mainly driven by a reduction of in-office visits. Conclusions: In heart failure patients implanted with a CRT-D, remote monitoring did not reduce mortality or risk of CV or device-related hospitalization. Use of healthcare resources was significantly reduced as a result of a marked reduction of in-office visits, without compromising patient safety. Trial registration: NCT00885677.
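The 38% figure reported above is simply the complement of the incidence rate ratio; a minimal sketch (the helper name is ours, not the trial's):

```python
def relative_reduction(irr: float) -> float:
    """Percent reduction implied by an incidence rate ratio below 1,
    e.g. the trial's IRR of 0.62 corresponds to a 38% reduction."""
    return round((1 - irr) * 100, 1)

print(relative_reduction(0.62))  # 38.0
```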

    Bioaccumulation of Trace Elements in the Muscle of the Blackmouth Catshark Galeus melastomus from Mediterranean Waters

    Get PDF
    Environmental pollution, particularly in the marine environment, has become a significant concern due to the increasing presence of pollutants and their adverse effects on ecosystems and human health. This study focuses on the bioaccumulation of trace elements in the muscle tissue of the blackmouth catshark (Galeus melastomus) from different areas in the Mediterranean Sea. Trace elements are of interest due to their persistence, toxicity, and potential for bioaccumulation. This research aims to assess the distribution and accumulation of trace elements in the muscle tissue of G. melastomus and investigate their potential impact on the deep-sea environment of the Mediterranean. The study areas include the Ligurian Sea, the northern and central Tyrrhenian Sea, the southern Tyrrhenian Sea, the Ionian Sea, the Pantelleria Waters, and the Gela Waters. Samples were collected following established protocols, and trace element analysis was conducted using inductively coupled plasma mass spectrometry. The study provides data on the concentrations of 17 trace elements, namely aluminum, arsenic, cadmium, cobalt, copper, manganese, molybdenum, nickel, zinc, selenium, strontium, lead, chromium, iron, barium, bismuth, and uranium. The findings contribute to a better understanding of trace element bioaccumulation patterns in elasmobranch species, specifically G. melastomus, and highlight the potential risks associated with chemical contamination in the Mediterranean Sea. This research emphasizes the importance of studying the impacts of pollutants on marine organisms, particularly those occupying key ecological roles, like sharks, to support effective conservation and management strategies.

    The PaO2/FiO2 ratio on admission is independently associated with prolonged hospitalization in COVID-19 patients.

    Get PDF
    Introduction: The early identification of factors that predict the length of hospital stay (HS) in patients affected by coronavirus disease (COVID-19) might assist therapeutic decisions and patient flow management. Methodology: We collected, at the time of admission, routine clinical, laboratory, and imaging parameters of hypoxia, lung damage, inflammation, and organ dysfunction in a consecutive series of 50 COVID-19 patients admitted to the Respiratory Disease and Infectious Disease Units of the University Hospital of Sassari (North Sardinia, Italy) and alive on discharge. Results: Prolonged HS (PHS, >21 days) patients had a significantly lower PaO2/FiO2 ratio and lymphocyte count, and a significantly higher chest CT severity score, C-reactive protein (CRP), and lactate dehydrogenase (LDH) when compared to non-PHS patients. In univariate logistic regression, chest CT severity score (OR = 1.1891, p = 0.007), intensity of care (OR = 2.1350, p = 0.022), PaO2/FiO2 ratio (OR = 0.9802, p = 0.007), CRP (OR = 1.0952, p = 0.042), and platelet to lymphocyte ratio (OR = 1.0039, p = 0.036) were significantly associated with PHS. However, in multivariate logistic regression, only the PaO2/FiO2 ratio remained significantly correlated with PHS (OR = 0.9164; 95% CI 0.8479-0.9904, p = 0.0275). In ROC curve analysis, using a threshold of 248, the PaO2/FiO2 ratio predicted PHS with sensitivity and specificity of 60% and 91%, respectively (AUC = 0.780, 95% CI 0.637-0.886, p = 0.002). Conclusions: The PaO2/FiO2 ratio on admission is independently associated with PHS in COVID-19 patients. Larger prospective studies are needed to confirm this finding.
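As an illustration of how the cut-off reported above would be applied at the bedside, the sketch below computes a P/F ratio and flags it against the ROC-derived threshold of 248 (function names are ours, not the study's):

```python
def pao2_fio2_ratio(pao2_mmhg: float, fio2_fraction: float) -> float:
    """Return the PaO2/FiO2 (P/F) ratio; FiO2 is a fraction (0.21 on room air)."""
    return pao2_mmhg / fio2_fraction

def flags_prolonged_stay(pf_ratio: float, threshold: float = 248.0) -> bool:
    """Flag risk of prolonged hospitalization (>21 days) when the P/F ratio
    falls below the study's cut-off of 248 (60% sensitivity, 91% specificity)."""
    return pf_ratio < threshold

# A patient on room air (FiO2 0.21) with a PaO2 of 50 mmHg:
pf = pao2_fio2_ratio(50, 0.21)   # ~238
print(flags_prolonged_stay(pf))  # True: below the 248 cut-off
```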

    Sensitivity and specificity of in vivo COVID-19 screening by detection dogs: Results of the C19-Screendog multicenter study

    Get PDF
    Trained dogs can recognize the volatile organic compounds contained in biological samples of patients with COVID-19 infection. We assessed the sensitivity and specificity of in vivo SARS-CoV-2 screening by trained dogs. We recruited five dog-handler dyads. In the operant conditioning phase, the dogs were taught to distinguish between positive and negative sweat samples collected from volunteers' underarms in polymeric tubes. The conditioning was validated by tests involving 16 positive and 48 negative samples held or worn in such a way that the samples were invisible to both dog and handler. In the screening phase, the dogs were led by their handlers to a drive-through facility for in vivo screening of volunteers who had just received a nasopharyngeal swab from nursing staff. Each volunteer who had already been swabbed was subsequently tested by two dogs, whose responses were recorded as positive, negative, or inconclusive. The dogs' behavior was constantly monitored for attentiveness and wellbeing. All the dogs passed the conditioning phase, their responses showing a sensitivity of 83-100% and a specificity of 94-100%. The in vivo screening phase involved 1251 subjects, of whom 205 had a COVID-19-positive swab; each subject was screened by two dogs. Screening sensitivity and specificity were 91.6-97.6% and 96.3-100%, respectively, when only one dog was involved, whereas combined screening by two dogs provided a higher sensitivity. Dog wellbeing was also analysed: monitoring of stress and fatigue suggested that the screening activity did not adversely impact the dogs' wellbeing. This work, by screening a large number of subjects, strengthens recent findings that trained dogs can discriminate between COVID-19-infected and healthy human subjects, and introduces two novel research aspects: i) assessment of signs of fatigue and stress in dogs during training and testing, and ii) combined screening by two dogs to improve detection sensitivity and specificity.
With some precautions to reduce the risk of infection and spillover, in vivo COVID-19 screening by a dog-handler dyad is suitable for quickly screening large numbers of people: it is rapid, non-invasive, and economical, since it does not involve actual sampling, laboratory resources, or waste management.
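The sensitivity and specificity ranges reported above follow from standard confusion-matrix arithmetic against the swab (RT-PCR) reference; a minimal sketch with hypothetical counts (not the study's data):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: share of infected subjects the dog signals."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: share of uninfected subjects the dog passes over."""
    return tn / (tn + fp)

# Hypothetical counts for one dog versus the RT-PCR reference:
tp, fn, tn, fp = 100, 5, 1000, 20
print(round(sensitivity(tp, fn), 3))  # 0.952
print(round(specificity(tn, fp), 3))  # 0.98
```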

    Serum Albumin Is Inversely Associated With Portal Vein Thrombosis in Cirrhosis

    Get PDF
    We analyzed whether serum albumin is independently associated with portal vein thrombosis (PVT) in liver cirrhosis (LC) and if a biologic plausibility exists. This study was divided into three parts. In part 1 (retrospective analysis), 753 consecutive patients with LC with ultrasound-detected PVT were retrospectively analyzed. In part 2, 112 patients with LC and 56 matched controls were entered in the cross-sectional study. In part 3, 5 patients with cirrhosis were entered in the in vivo study and 4 healthy subjects (HSs) were entered in the in vitro study to explore if albumin may affect platelet activation by modulating oxidative stress. In the 753 patients with LC, the prevalence of PVT was 16.7%; logistic analysis showed that only age (odds ratio [OR], 1.024; P = 0.012) and serum albumin (OR, -0.422; P = 0.0001) significantly predicted patients with PVT. Analyzing the 112 patients with LC and controls, soluble cluster of differentiation 40 ligand (sCD40L; P = 0.0238), soluble Nox2-derived peptide (sNox2-dp; P < 0.0001), and urinary excretion of isoprostanes (P = 0.0078) were higher in patients with LC. In LC, albumin was correlated with sCD40L (Spearman's rank correlation coefficient [r(s)], -0.33; P < 0.001), sNox2-dp (r(s), -0.57; P < 0.0001), and urinary excretion of isoprostanes (r(s), -0.48; P < 0.0001) levels. The in vivo study showed a progressive decrease in platelet aggregation, sNox2-dp, and urinary 8-iso prostaglandin F2 alpha-III formation 2 hours and 3 days after albumin infusion. Finally, platelet aggregation, sNox2-dp, and isoprostane formation significantly decreased in platelets from HSs incubated with scalar concentrations of albumin. Conclusion: Low serum albumin in LC is associated with PVT, suggesting that albumin could be a modulator of the hemostatic system through interference with mechanisms regulating platelet activation.

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Get PDF
    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.

    Red Cell Distribution Width as a Predictor of Survival in Patients with Hepatocellular Carcinoma

    No full text
    Background and Objectives. Hepatocellular carcinoma (HCC) and the intrahepatic biliary tract cancers are estimated to rank sixth for incidence among solid cancers worldwide, and third for mortality rates. A critical issue remains the need for accurate biomarkers for risk stratification and overall prognosis. The aim of this study was to investigate the ability of a biomarker of heterogeneity of the size of red blood cells, the red cell distribution width (RDW), to predict survival in patients with HCC. Materials and Methods. A consecutive series of patients with a histologic diagnosis of HCC were included in this study irrespective of their age, stage of the disease, and treatment administered, and followed up for a period of three years. Demographic, anthropometric [age, sex, body mass index (BMI)], and clinical data (Charlson Comorbidity Index, Child–Pugh score, etc.), along with laboratory tests, were retrieved from clinical records. Results. One hundred and four patients were included in this study. Among them, 54 (69%) were deceased at the end of the follow-up. Higher RDW values, but not other hematological and biochemical parameters, were significantly associated with mortality in both univariate and multivariate analysis. The optimal RDW cut-off value identified with the Youden test for survival was 14.7%, with 65% sensitivity and 74% specificity (AUC = 0.718, 95% CI 0.622–0.802), and survival was significantly lower in patients with RDW > 14.7%. Conclusions. The results of our study showed that RDW can perform better than other blood-based biomarkers in independently predicting prognosis in patients with HCC.
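The optimal cut-off above was chosen with the Youden test, which maximizes J = sensitivity + specificity − 1 across candidate thresholds; a sketch with illustrative sensitivity/specificity pairs (only the 14.7% pair matches the study's reported values, the others are invented for the example):

```python
def youden_j(sens: float, spec: float) -> float:
    """Youden's index J = sensitivity + specificity - 1."""
    return sens + spec - 1

# Candidate RDW cut-offs mapped to (sensitivity, specificity) pairs;
# the study's optimum was RDW = 14.7% (sens 0.65, spec 0.74).
candidates = {14.0: (0.80, 0.55), 14.7: (0.65, 0.74), 15.5: (0.45, 0.88)}
best = max(candidates, key=lambda c: youden_j(*candidates[c]))
print(best)  # 14.7
```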

    Two-port dry vitrectomy for rhegmatogenous retinal detachment: a pilot study

    No full text
    Objective: To evaluate the safety and efficacy of a new surgical technique for the management of primary rhegmatogenous retinal detachment (RRD), consisting of localized pars plana vitrectomy (PPV) near the retinal break(s), without an infusion line, associated with drainage of subretinal fluid and cryoretinopexy. Methods: Multicenter prospective study conducted at the University Hospital of Cagliari and IRCCS Fondazione Policlinico Universitario A. Gemelli, Roma. Twenty eyes affected by RRD with the causative retinal break(s) in the superior meridians were enrolled between February 2022 and June 2022. Patients with cataract ≥3, aphakia, significant posterior capsule opacification, giant retinal tears, retinal dialysis, history of trauma, and PVR ≥C2 were excluded. All eyes underwent a two-port 25-gauge PPV with localized removal of the vitreous surrounding the retinal break(s), followed by 20% SF6 injection and cryopexy. The surgical time was recorded for each procedure. Best-corrected visual acuity (BCVA) was measured at baseline and at postoperative 6 months. Results: Primary anatomic success at 6 months was achieved by 85% of patients. No complications occurred, except for three (15%) retinal re-detachments. The average surgical time was 8.61 ± 2.16 min. Overall, the difference between pre- and last postoperative mean BCVA was statistically significant (p = 0.02). Conclusions: Two-port dry PPV demonstrated safety and efficacy for the treatment of RRD, reaching an 85% anatomical success rate. Although further studies are necessary to confirm the efficacy and long-term benefit of this treatment, we believe that this surgical technique could be considered a valid and safe alternative for the management of primary RRD.

    Laboratory test alterations in patients with COVID-19 and non COVID-19 interstitial pneumonia: a preliminary report

    No full text
    INTRODUCTION: Coronavirus disease 19 (COVID-19) is the greatest pandemic in modern history. Laboratory test alterations have been described in COVID-19 patients, but differences with other pneumonias have been poorly investigated to date, especially in Caucasian populations. The aim of this study was to investigate differences and the prognostic potential of routine blood tests in a series of Italian patients with COVID-19 and non-COVID-19 interstitial pneumonia. METHODOLOGY: Clinical data and routine laboratory tests of a consecutive series of 30 COVID-19 patients and 30 age- and sex-matched patients with non-COVID-19 interstitial pneumonia were retrospectively collected. Differences in laboratory tests between patients with COVID-19 and non-COVID-19 pneumonias were investigated, as well as differences between COVID-19 survivors and non-survivors. RESULTS: COVID-19 patients had lower white blood cell, monocyte, and neutrophil counts, and higher platelet counts. In addition, COVID-19 patients showed higher mean platelet volume, lower C-reactive protein concentrations, and a higher De Ritis ratio. Combined blood cell indexes of systemic inflammation were significantly lower in COVID-19 patients. In further analysis of the COVID-19 group, the neutrophil count, neutrophil to lymphocyte ratio (NLR), derived NLR, systemic inflammation response index, and De Ritis ratio were significantly higher in non-survivors than in survivors, while the number of platelets was significantly lower in non-survivors. CONCLUSIONS: Our study showed several alterations in blood cell populations and indexes in patients with COVID-19 pneumonia in comparison with patients with non-COVID-19 pneumonia. Some of these indexes showed promising prognostic abilities. Further studies are necessary to confirm these results.
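Two of the indexes compared above are simple ratios of routine blood counts; a minimal sketch of the NLR and derived NLR formulas (variable names are ours, counts in 10^9 cells/L):

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio."""
    return neutrophils / lymphocytes

def derived_nlr(neutrophils: float, wbc: float) -> float:
    """Derived NLR: neutrophils over (total white cells minus neutrophils)."""
    return neutrophils / (wbc - neutrophils)

# A hypothetical count: 6.0 neutrophils, 1.0 lymphocytes, 8.0 total WBC:
print(nlr(6.0, 1.0))          # 6.0
print(derived_nlr(6.0, 8.0))  # 3.0
```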