
    Metronomic Chemotherapy with Vinorelbine Produces Clinical Benefit and Low Toxicity in Frail Elderly Patients Affected by Advanced Non-Small Cell Lung Cancer

    Lung cancer is the leading cause of cancer death worldwide. The treatment choice for advanced-stage lung cancer may depend on histotype, performance status (PS), age, and comorbidities. In the present study, we focused on the effect of metronomic vinorelbine treatment in elderly patients with advanced unresectable non-small cell lung cancer (NSCLC). Methods. From January 2016 to December 2016, 44 patients with non-small cell lung cancer referred to our oncology day hospital were progressively analyzed. The patients were treated with oral vinorelbine 30 mg or 40 mg three times per week, on alternate days (one day on, one day off). The patients were older than 60 years, had stage IIIB or IV disease, had an ECOG PS ≥ 1, and had at least one significant comorbidity (renal, hepatic, or cardiovascular disease). The schedule was based on ECOG PS and comorbidities. The primary endpoint was progression-free survival (PFS). As an exploratory analysis, PFS was used to compare patients by scheduled dose (30 or 40 mg × 3/week) and by age (under or over 75 years). As a secondary endpoint, we also evaluated toxicity according to Common Toxicity Criteria Version 2.0. Results. Vinorelbine showed a good safety profile at both oral doses and was effective in controlling cancer progression. The median overall survival (OS) was 12 months. The disease control rate (DCR) was 63%. The median PFS was 9 months. A significant difference in PFS was detected between patients aged under 75 and those over 75 (HR 0.72, p < 0.05). The difference between the two dose schedules was not significant. Conclusions. This study confirmed the safety profile of metronomic vinorelbine and its applicability for patients unfit for standard chemotherapies, and it adds the possibility of considering this type of schedule not only for very elderly patients.
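
    The median PFS and OS figures above are the kind of estimates produced by a Kaplan-Meier analysis. As a minimal illustrative sketch (not the study's actual data or code), the following Python snippet shows how such a median would typically be computed with the lifelines library, using hypothetical follow-up times for 44 patients:

        import numpy as np
        from lifelines import KaplanMeierFitter

        rng = np.random.default_rng(0)
        pfs_months = rng.exponential(scale=11.0, size=44)  # hypothetical times to progression
        progressed = rng.random(44) < 0.8                  # True where progression was observed

        kmf = KaplanMeierFitter()
        kmf.fit(durations=pfs_months, event_observed=progressed,
                label="metronomic vinorelbine")
        print(f"median PFS: {kmf.median_survival_time_:.1f} months")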

    Cost-effectiveness of cerebrospinal biomarkers for the diagnosis of Alzheimer’s disease

    Background: Accurate and timely diagnosis of Alzheimer’s disease (AD) is important for prompt initiation of treatment in patients with AD and to avoid inappropriate treatment of patients with false-positive diagnoses. Methods: Using a Markov model, we estimated the lifetime costs and quality-adjusted life-years (QALYs) of cerebrospinal fluid biomarker analysis in a cohort of patients referred to a neurologist or memory clinic with suspected AD who remained without a definitive diagnosis of AD or another condition after neuroimaging. Parameter values were estimated from previous health economic models and the medical literature. Extensive deterministic and probabilistic sensitivity analyses were performed to evaluate the robustness of the results. Results: At a 12.7% pretest probability of AD, biomarker analysis after normal neuroimaging findings has an incremental cost-effectiveness ratio (ICER) of $11,032 per QALY gained. Results were sensitive to the pretest prevalence of AD, and the ICER increased to over $50,000 per QALY when the prevalence of AD fell below 9%. Results were also sensitive to patient age (biomarkers are less cost-effective in older cohorts), treatment uptake and adherence, biomarker test characteristics, and the degree to which patients with suspected AD who do not have AD benefit from AD treatment when they are falsely diagnosed. Conclusions: The cost-effectiveness of biomarker analysis depends critically on the prevalence of AD in the tested population. In general practice, where the prevalence of AD after clinical assessment and normal neuroimaging findings may be low, biomarker analysis is unlikely to be cost-effective at a willingness-to-pay threshold of $50,000 per QALY gained. However, when at least 1 in 11 patients has AD after normal neuroimaging findings, biomarker analysis is likely cost-effective. Specifically, for patients referred to memory clinics with memory impairment who do not present neuroimaging evidence of medial temporal lobe atrophy, the pretest prevalence of AD may exceed 15%. Biomarker analysis is a potentially cost-saving diagnostic method and should be considered for adoption in high-prevalence centers.
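
    The ICER reported above is a simple ratio of incremental cost to incremental effectiveness, compared against a willingness-to-pay threshold. A minimal sketch of that arithmetic in Python, with hypothetical cost and QALY values (only the $50,000/QALY threshold comes from the abstract):

        def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
            """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
            return (cost_new - cost_old) / (qaly_new - qaly_old)

        WTP_THRESHOLD = 50_000  # willingness-to-pay, $ per QALY gained

        # Hypothetical lifetime costs and QALYs with and without biomarker analysis.
        ratio = icer(cost_new=31_000, cost_old=28_000, qaly_new=5.12, qaly_old=4.85)
        print(f"ICER: ${ratio:,.0f} per QALY gained")
        print("cost-effective" if ratio < WTP_THRESHOLD else "not cost-effective")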

    Differences between Atrial Fibrillation Detected before and after Stroke and TIA: A Systematic Review and Meta-Analysis

    Background: Preliminary evidence suggests that patients with atrial fibrillation (AF) detected after stroke (AFDAS) may have a lower prevalence of cardiovascular comorbidities and a lower risk of stroke recurrence than those with AF known before stroke (KAF). Objective: We performed a systematic search and meta-analysis to compare the characteristics of AFDAS and KAF. Methods: We searched PubMed, Scopus, and EMBASE for articles reporting differences between AFDAS and KAF published until June 30, 2021. We performed random- or fixed-effects meta-analyses to evaluate differences between AFDAS and KAF in demographic factors, vascular risk factors, prevalent vascular comorbidities, structural heart disease, stroke severity, insular cortex involvement, stroke recurrence, and death. Results: In 21 studies including 22,566 patients with ischemic stroke or transient ischemic attack, the prevalence of coronary artery disease, congestive heart failure, prior myocardial infarction, and a history of cerebrovascular events was significantly lower in AFDAS than in KAF. Left atrial size was smaller and left ventricular ejection fraction was higher in AFDAS than in KAF. The risk of recurrent stroke was 26% lower in AFDAS than in KAF. There were no differences in age, sex, stroke severity, or death rates between AFDAS and KAF. There were not enough studies to report differences in insular cortex involvement between AF types. Conclusions: We found significant differences in the prevalence of vascular comorbidities, structural heart disease, and stroke recurrence rates between AFDAS and KAF, suggesting that they constitute different clinical entities within the AF spectrum. The PROSPERO registration number is CRD42020202622.
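
    The pooled 26% reduction in recurrent stroke risk is the kind of estimate produced by inverse-variance pooling of study-level effects. A minimal sketch of fixed-effect pooling of log risk ratios in Python, with hypothetical per-study values (the review pooled the actual published data):

        import numpy as np

        log_rr = np.array([-0.35, -0.20, -0.41, -0.28])  # hypothetical per-study log risk ratios
        se = np.array([0.15, 0.12, 0.20, 0.10])          # hypothetical standard errors

        w = 1.0 / se**2                                  # inverse-variance weights
        pooled = np.sum(w * log_rr) / np.sum(w)          # weighted mean of log effects
        pooled_se = np.sqrt(1.0 / np.sum(w))

        rr = np.exp(pooled)
        lo, hi = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
        print(f"pooled RR: {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")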

    Atrial cardiopathy and cognitive impairment

    Cognitive impairment involves complex interactions between multiple pathways and mechanisms, one of which is cardiac disease. Atrial cardiopathy (AC) is a structural and functional disorder of the left atrium that may be a substrate for other cardiac disorders such as atrial fibrillation (AF) and heart failure (HF). The association between AF and HF and cognitive decline is clear; however, the relationship between AC and cognition requires further investigation. Studies have shown that several markers of AC, such as increased brain natriuretic peptide and left atrial enlargement, are associated with an increased risk of cognitive impairment. The pathophysiology of cognitive decline in patients with AC is not yet well understood. Advancing our understanding of the relationship between AC and cognition may point to important treatable targets and inform future therapeutic advances. This review presents our current understanding of the diagnosis of AC, as well as the clinical characteristics and potential pathways involved in the association between AC and cognitive impairment.

    Using Self-Organizing Maps for the Behavioral Analysis of Virtualized Network Functions

    Detecting anomalous behaviors in a network function virtualization infrastructure is of the utmost importance for network operators. In this paper, we propose a technique, based on Self-Organizing Maps, that addresses this problem by leveraging the massive amount of historical system data typically available in these infrastructures. Our method consists of a joint analysis of system-level metrics, provided by the virtualized infrastructure monitoring system and referring to the resource consumption patterns of the physical hosts and of the virtual machines (or containers) that run on top of them, and application-level metrics, provided by the monitoring subsystems of the individual virtualized network functions and related to the performance levels of the individual applications. The implementation of our approach has been validated on real data from a subset of the Vodafone network function virtualization infrastructure, where it is currently employed to support the decisions of data center operators. Experimental results show that our technique is capable of identifying specific points in space (i.e., components of the infrastructure) and time in the recent evolution of the monitored infrastructure that are worth investigating by human operators in order to keep the system running under expected conditions.
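
    As a rough sketch of the kind of pipeline the abstract describes, the following Python snippet trains a Self-Organizing Map on joint metric vectors and flags samples whose quantization error is unusually high. It assumes the minisom library; the feature layout (z-scored system-level and VNF-level metrics per host and time window) and the 99th-percentile threshold are illustrative assumptions, not the paper's exact method:

        import numpy as np
        from minisom import MiniSom

        rng = np.random.default_rng(42)
        # Hypothetical training set: rows = (host, time window) samples,
        # columns = z-scored metrics (CPU, memory, I/O, latency, error rate).
        train = rng.normal(size=(1000, 5))

        som = MiniSom(10, 10, input_len=5, sigma=1.5, learning_rate=0.5, random_seed=42)
        som.random_weights_init(train)
        som.train_random(train, num_iteration=5000)

        def quantization_errors(som, data):
            """Distance of each sample to its best-matching unit on the map."""
            weights = som.get_weights()
            return np.array([np.linalg.norm(x - weights[som.winner(x)]) for x in data])

        threshold = np.percentile(quantization_errors(som, train), 99)

        new = rng.normal(size=(50, 5))
        new[7] += 6.0  # inject an anomalous resource-consumption pattern
        flags = quantization_errors(som, new) > threshold
        print("anomalous sample indices:", np.flatnonzero(flags))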

    Basin-scale interaction between post-LGM faulting and morpho-sedimentary processes in the S. Eufemia Gulf (Southern Tyrrhenian Sea)

    The integrated interpretation of high-resolution multibeam bathymetry, seismic profiles, and backscatter data in the S. Eufemia Gulf (SEG; Calabro-Tyrrhenian continental margin, south-eastern Tyrrhenian Sea) documents the relationship between postglacial fault activity and morpho-sedimentary processes. Three systems of active normal faults that affect the seafloor or the shallow subsurface have been identified: 1) the S. Eufemia fault system, located on the continental shelf, with fault planes mainly oriented N26E-N40E; 2) the offshore fault system, lying on the continental slope off Capo Suvero, with fault planes mainly oriented N28E-N60E; 3) the Angitola Canyon fault system, located on the seafloor adjacent to the canyon, with fault planes oriented N60E-N85E. The faults produce a belt of linear escarpments with vertical displacements varying from a few decimeters to about 12 m. One of the most prominent active structures is fault F1, which has the greatest fault length (about 9.5 km). Two main segments of this fault are identified: a segment characterised by seafloor deformation with metric slip affecting Holocene deposits, and a segment characterised by folding of the seafloor. A combined tectonostratigraphic model of an extensional fault propagation fold is proposed here to explain these different styles of deformation. In addition to the seabed escarpments produced by fault deformation, a strong control of fault activity on recent sedimentary processes is clearly observed in the SEG. For example, canyons and channels frequently change their course in response to their interaction with the main tectonic structures. Moreover, the upper branch of the Angitola Canyon shows straight flanks determined by fault scarps. Tectonics also determined different sediment accumulation rates and types of sedimentation (e.g., the accumulation of hanging-wall turbidite deposits and the development of contourite deposits around the Maida Ridge). Furthermore, the distribution of landslides is often connected to the main fault scarps, and fluids are locally confined on the hanging-wall side of faults and can escape at the seabed, generating pockmarks aligned along their footwall.

    The intriguing association between patent foramen ovale and atrial fibrillation


    Behavioral analysis for virtualized network functions: A SOM-based approach

    In this paper, we tackle the problem of detecting anomalous behaviors in a virtualized infrastructure for network function virtualization, proposing to use Self-Organizing Maps to analyze the historical data available in a data center. We propose a joint analysis of system-level metrics, mostly related to the resource consumption patterns of the hosted virtual machines and available through the virtualized infrastructure monitoring system, and of the application-level metrics published by individual virtualized network functions through their own monitoring subsystems. Experimental results, obtained by processing real data from one of the NFV data centers of the Vodafone network operator, show that our technique is able to identify specific points in space and time in the recent evolution of the monitored infrastructure that are worth investigating by a human operator in order to keep the system running under expected conditions.
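
    Building on the SOM sketch shown earlier, the "points in space and time" output amounts to mapping high anomaly scores back to the component and time window they came from. A minimal illustration with hypothetical scores (the hosts, windows, and threshold are invented for the example):

        import pandas as pd

        scores = pd.DataFrame({
            "host":   ["vnf-fw-01", "vnf-fw-01", "vnf-lb-02", "vnf-lb-02", "host-07"],
            "window": ["09:00", "09:05", "09:00", "09:05", "09:05"],
            "qe":     [0.42, 3.10, 0.38, 0.35, 2.75],  # per-sample quantization errors
        })

        threshold = 2.0  # e.g., a high percentile of the training-error distribution
        alerts = scores[scores["qe"] > threshold]
        print(alerts)  # the components and time windows worth investigating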