28 research outputs found

    Breast cancer recurrence after reoperation for surgical bleeding.

    BACKGROUND: Bleeding activates platelets that can bind tumour cells, potentially promoting metastatic growth in patients with cancer. This study investigated whether reoperation for postoperative bleeding is associated with breast cancer recurrence.
    METHODS: Using the Danish Breast Cancer Group database and the Danish National Patient Register (DNPR), a cohort of women with incident stage I-III breast cancer who underwent breast-conserving surgery or mastectomy during 1996-2008 was identified. Information on reoperation for bleeding within 14 days of the primary surgery was retrieved from the DNPR. Follow-up began 14 days after primary surgery and continued until breast cancer recurrence, death, emigration, 10 years of follow-up, or 1 January 2013. Incidence rates of breast cancer recurrence were calculated and Cox regression models were used to quantify the association between reoperation and recurrence, adjusting for potential confounders. Crude and adjusted hazard ratios according to site of recurrence were calculated.
    RESULTS: Among 30 711 patients (205 926 person-years of follow-up), 767 patients had at least one reoperation within 14 days of primary surgery, and 4769 patients developed breast cancer recurrence. Median follow-up was 7·0 years. The incidence of recurrence was 24·0 (95 per cent c.i. 20·2 to 28·6) per 1000 person-years for reoperated patients and 23·1 (22·5 to 23·8) per 1000 person-years for non-reoperated patients. The overall adjusted hazard ratio was 1·06 (95 per cent c.i. 0·89 to 1·26). The estimates did not vary by site of breast cancer recurrence.
    CONCLUSION: In this large cohort study, there was no evidence of an association between reoperation for bleeding and breast cancer recurrence.
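The incidence rates quoted above follow directly from event counts and person-time. A minimal sketch using the abstract's reported overall totals (the function name is ours, not the study's):

```python
def incidence_rate_per_1000(events: int, person_years: float) -> float:
    """Crude incidence rate expressed per 1000 person-years."""
    return 1000 * events / person_years

# The abstract's overall totals: 4769 recurrences over 205 926 person-years.
overall = incidence_rate_per_1000(4769, 205_926)
print(round(overall, 1))
```

The crude overall rate computed this way sits between the reoperated and non-reoperated rates reported in the abstract, as expected for a pooled figure.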

    Index-based approach for estimating vulnerability of Arctic biota to oil spills

    Risk of an Arctic oil spill has become a global matter of concern. Climate change induced opening of shipping routes increases the Arctic maritime traffic which exposes the area to negative impacts of potential maritime accidents. Still, quantitative analyses of the likely environmental impacts of such accidents are scarce, and our understanding of the uncertainties related to both accidents and their consequences is poor. There is an obvious need for analysis tools that allow us to systematically analyze the impacts of oil spills on Arctic species, so the risks can be taken into account when new sea routes or previously unexploited oil reserves are utilized. In this paper, an index‐based approach is developed to study exposure potential (described via probability of becoming exposed to spilled oil) and sensitivity (described via oil‐induced mortality and recovery) of Arctic biota in the face of an oil spill. First, a conceptual model presenting the relevant variables that contribute to exposure potential and sensitivity of key Arctic marine functional groups was built. Second, based on an extensive literature review, a probabilistic estimate was assigned for each variable, and the variables were combined into an index representing the overall vulnerability of Arctic biota. The resulting index can be used to compare the relative risk between functional groups and accident scenarios. Results indicate that birds have the highest vulnerability to spilled oil, and seals and whales the lowest. Polar bears’ vulnerability varies greatly between seasons, while ice seals’ vulnerability remains the same in every accident scenario. Exposure potential of most groups depends strongly on type of oil, whereas their sensitivity contains less variation. Peer reviewed.
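The abstract does not give the combination rule for the index, so the following is only an illustrative sketch, assuming vulnerability is the product of an exposure probability and a sensitivity term built from mortality and recovery (all names and inputs are hypothetical):

```python
def sensitivity(mortality: float, recovery: float) -> float:
    """Illustrative sensitivity term on a 0-1 scale: high oil-induced
    mortality and slow recovery give values near 1. Assumed form only."""
    return mortality * (1 - recovery)

def vulnerability(exposure_prob: float, mortality: float, recovery: float) -> float:
    """Illustrative vulnerability index: exposure potential times
    sensitivity. The paper's actual index is probabilistic and richer."""
    return exposure_prob * sensitivity(mortality, recovery)

# Hypothetical inputs chosen so that a seabird group (high exposure and
# mortality) ranks above a whale group (low exposure), matching the
# ordering the abstract reports.
birds = vulnerability(exposure_prob=0.9, mortality=0.8, recovery=0.3)
whales = vulnerability(exposure_prob=0.2, mortality=0.3, recovery=0.6)
assert birds > whales
```

A product form like this keeps the index on a 0-1 scale and makes a group with no exposure have zero vulnerability, which matches the abstract's framing of exposure potential as a probability.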

    The Effect of Inappropriate Calibration: Three Case Studies in Molecular Ecology

    Time-scales estimated from sequence data play an important role in molecular ecology. They can be used to draw correlations between evolutionary and palaeoclimatic events, to measure the tempo of speciation, and to study the demographic history of an endangered species. In all of these studies, it is paramount to have accurate estimates of time-scales and substitution rates. Molecular ecological studies typically focus on intraspecific data that have evolved on genealogical scales, but often these studies inappropriately employ deep fossil calibrations or canonical substitution rates (e.g., 1% per million years for birds and mammals) for calibrating estimates of divergence times. These approaches can yield misleading estimates of molecular time-scales, with significant impacts on subsequent evolutionary and ecological inferences. We illustrate this calibration problem using three case studies: avian speciation in the late Pleistocene, the demographic history of bowhead whales, and the Pleistocene biogeography of brown bears. For each data set, we compare the date estimates that are obtained using internal and external calibration points. In all three cases, the conclusions are significantly altered by the application of revised, internally-calibrated substitution rates. Collectively, the results emphasise the importance of judicious selection of calibrations for analyses of recent evolutionary events.
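The practical effect of a calibration choice is simple arithmetic: under a molecular clock, divergence time is genetic distance divided by substitution rate, so a rate that is several-fold too slow inflates the age estimate by the same factor. A sketch with hypothetical numbers (the 2% divergence and the tenfold-faster internal rate are illustrative assumptions, not figures from the paper):

```python
def divergence_time_myr(pairwise_distance: float,
                        rate_per_lineage_per_myr: float) -> float:
    """Time since divergence in Myr: pairwise distance accumulates along
    both lineages, hence the factor of 2 in the denominator."""
    return pairwise_distance / (2 * rate_per_lineage_per_myr)

d = 0.02  # hypothetical 2% pairwise sequence divergence

canonical = divergence_time_myr(d, 0.01)  # canonical 1%/Myr rate from the text
internal = divergence_time_myr(d, 0.10)   # hypothetical tenfold-faster internal rate
```

With these inputs the canonical rate dates the split at 1 Myr, while the faster internally calibrated rate shrinks the same divergence to a tenth of that, which is the kind of revision the three case studies describe.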

    Exploring, exploiting and evolving diversity of aquatic ecosystem models: a community perspective


    Prevalence and characteristics of patients with low levels of low-density lipoprotein cholesterol in northern Denmark: a descriptive study

    Sigrun Alba Johannesdottir Schmidt,1 Uffe Heide-Jørgensen,1 Angelika D Manthripragada,2 Vera Ehrenstein1
    1Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark; 2Center for Observational Research, Amgen Inc., Thousand Oaks, CA, USA
    Background: With the emergence of new lipid-lowering therapies, more patients are expected to achieve substantial lowering of low-density lipoprotein cholesterol (LDL-C). However, there are limited data examining the clinical experience of patients with low (<1.3 mmol/L) or very low (<0.65 mmol/L) levels of LDL-C. To provide information on patients with low LDL-C, we identified and characterized persons with low LDL-C using data from Danish medical databases.
    Methods: Using a population-based clinical laboratory database, we identified adults with at least one LDL-C measurement in northern Denmark between 1998 and 2011 (population approximately 1.5 million persons). Based on the lowest measurement during the study period, we divided patients into groups with low (<1.3 mmol/L), moderate (1.3–3.3 mmol/L), or high (>3.3 mmol/L) LDL-C. We described their demographic characteristics, entire comorbidity history, and 90-day prescription history prior to the lowest LDL-C value measured. Finally, we further restricted the analysis to individuals with very low LDL-C (<0.65 mmol/L).
    Results: Among 765,503 persons with an LDL-C measurement, 23% had high LDL-C, 73% had moderate LDL-C, and 4.8% had low LDL-C. In the latter group, 9.6% (0.46% of total) had very low LDL-C. Compared with the moderate and high LDL-C categories, the low LDL-C group included more males and older persons with a higher prevalence of cardiovascular disease, diabetes, chronic pulmonary disease, ulcer disease, and obesity, as measured by hospital diagnoses or relevant prescription drugs for these diseases. Cancer and use of psychotropic drugs were also more prevalent. These patterns became even more pronounced when restricting to individuals with very low LDL-C.
    Conclusion: Using Danish medical databases, we identified a cohort of patients with low LDL-C and found that cohort members differed from patients with higher LDL-C levels. These differences may be explained by various factors, including prescribing patterns of lipid-lowering therapies.
    Keywords: cross-sectional study, hyperlipidemia, registries, statin
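The study's cut-points translate directly into a classification rule. A minimal sketch (handling of the exact boundary values 1.3 and 3.3 mmol/L is our assumption, since the abstract's intervals touch at the edges):

```python
def ldl_category(ldl_mmol_per_l: float) -> str:
    """Classify an LDL-C measurement using the study's cut-points.
    Boundary handling (exactly 1.3 -> 'moderate', exactly 3.3 ->
    'moderate') is assumed, not stated in the abstract."""
    if ldl_mmol_per_l < 0.65:
        return "very low"
    if ldl_mmol_per_l < 1.3:
        return "low"
    if ldl_mmol_per_l <= 3.3:
        return "moderate"
    return "high"

assert ldl_category(0.5) == "very low"
assert ldl_category(1.0) == "low"
assert ldl_category(2.0) == "moderate"
assert ldl_category(4.0) == "high"
```

Note that the study classified each person by their lowest measurement over the whole period, so in practice this rule would be applied to the minimum of each person's measurement series.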

    Sampling strategies for selecting general population comparison cohorts

    Uffe Heide-Jørgensen, Kasper Adelborg, Johnny Kahlert, Henrik Toft Sørensen, Lars Pedersen
    Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark
    Background: For a patient cohort, access to linkable population-based registries permits sampling of a comparison cohort from the general population, thereby contributing to the understanding of the disease in a population context. However, sampling without replacement in random order can lead to immortal time bias by conditioning on the future.
    Aim: We compared the following strategies for sampling comparison cohorts in matched cohort studies with respect to time to ischemic stroke and mortality: sampling without replacement in random order; sampling with replacement; and sampling without replacement in chronological order.
    Methods: We constructed index cohorts of individuals from the Danish general population with no particular trait, except being alive and without ischemic stroke on the index date. We also constructed index cohorts of persons aged >50 years from the general population. We then applied the sampling strategies to sample comparison cohorts (5:1 or 1:1) from the Danish general population and compared outcome risks between the index and comparison cohorts. Finally, we sampled comparison cohorts for a heart failure cohort using each strategy.
    Results: We observed increased outcome risks in comparison cohorts sampled 5:1 without replacement in random order compared to the index cohorts. However, these increases were minuscule unless index persons were aged >50 years. In this setting, sampling without replacement in chronological order failed to sample a sufficient number of comparators, and the mortality risks in these comparison cohorts were lower than in the index cohorts. Sampling 1:1 showed no systematic difference between comparison and index cohorts. When we sampled comparison cohorts for the heart failure patients, we observed a pattern similar to when index persons were aged >50 years.
    Conclusion: When index persons were aged >50 years, ie, had high outcome risks, sampling 5:1 without replacement introduced bias. Sampling with replacement or 1:1 did not introduce bias.
    Keywords: matched cohort study, survival analysis, population-based registry, observational study
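The three strategies compared above can be sketched as follows. This is a simplification: comparators are drawn from a flat pool of person IDs for a sequence of index dates, and matching criteria and risk-set eligibility (alive and event-free on the index date) are omitted; all names are ours:

```python
import random

def sample_with_replacement(pool, n, rng):
    # The same person may serve as comparator for several index persons.
    return [rng.choice(pool) for _ in range(n)]

def sample_without_replacement_random(pool, n, rng):
    # Index persons processed in random order; once used, a comparator
    # leaves the pool. This is the strategy the study found to condition
    # on the future and bias 5:1 sampling for high-risk index persons.
    return rng.sample(pool, n)

def sample_without_replacement_chronological(pool, index_dates, rng):
    # Index persons processed by calendar date; the pool shrinks over
    # time and may run out of eligible comparators for late index dates.
    remaining = list(pool)
    sampled = []
    for _ in sorted(index_dates):
        if not remaining:
            break  # pool exhausted: too few comparators sampled
        pick = rng.choice(remaining)
        remaining.remove(pick)
        sampled.append(pick)
    return sampled

rng = random.Random(0)
pool = list(range(100))
with_repl = sample_with_replacement(pool, 10, rng)
no_repl = sample_without_replacement_random(pool, 10, rng)
assert len(no_repl) == len(set(no_repl))  # no duplicates without replacement
```

The structural difference the study exploits is visible here: only the with-replacement strategy keeps every comparator available for every index date, so it can never exhaust the pool or depend on the order in which index persons are processed.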

    Comparison of risk of osteoporotic fracture in Denosumab vs Alendronate treatment within 3 years of initiation

    Importance Head-to-head randomized clinical trials showed greater efficacy of denosumab vs alendronate in improving bone mineral density. Although there is an association of changes in bone mineral density with reductions in fracture risk, the magnitude of the association is not well established.
    Objective To compare the risk of hip and any fracture in patients treated with denosumab and alendronate in routine practice settings.
    Design, Setting, and Participants This Danish nationwide, population-based, historical cohort study of a population with universal access to health care used prospectively collected, individually linked data from Danish health registries with complete follow-up. Cohorts consisted of 92 355 individuals 50 years or older who were new users of denosumab (n = 4624) or alendronate (n = 87 731) from May 2010 to December 2017 after at least 1 year without an antiosteoporosis medication dispensing.
    Exposures Initiation of denosumab or alendronate.
    Main Outcomes and Measures The primary outcome was hospitalization for hip fracture, and the secondary outcome was hospitalization for any fracture. Inverse probability of treatment weights and the intention-to-treat approach were used to calculate cumulative incidences and adjusted hazard ratios (aHRs) with 95% CIs.
    Results Of the 92 355 included patients, 75 046 (81.3%) were women, and the mean (SD) age was 71 (10) years. The denosumab cohort had a lower proportion of men than the alendronate cohort (12.7% [589] vs 19.0% [16 700]), while age distributions were similar in the 2 cohorts. Within 3 years of follow-up, initiation of denosumab or alendronate was associated with cumulative incidences of 3.7% and 3.1%, respectively, for hip fracture and 9.0% and 9.0%, respectively, for any fracture. Overall, the aHRs for denosumab vs alendronate were 1.08 (95% CI, 0.92-1.28) for hip fracture and 0.92 (95% CI, 0.83-1.02) for any fracture. The aHR of denosumab vs alendronate for hip fracture was 1.07 (95% CI, 0.85-1.34) among patients with a history of any fracture and 1.05 (95% CI, 0.83-1.32) among patients without a history of fracture. The aHR for any fracture for denosumab vs alendronate was 0.84 (95% CI, 0.71-0.98) among patients with a history of any fracture and 0.77 (95% CI, 0.64-0.93) among patients with no history of fracture.
    Conclusions and Relevance Treatment with denosumab and alendronate was associated with similar risks of hip or any fracture over a 3-year period, regardless of fracture history.
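The inverse probability of treatment weighting mentioned under Main Outcomes and Measures can be sketched generically: each patient is weighted by the inverse of the probability of the treatment they actually received, given covariates. In this sketch the propensity scores are hypothetical and taken as given, rather than estimated from the registry covariates as in the study:

```python
def iptw_weight(treated: bool, propensity: float) -> float:
    """Inverse probability of treatment weight for the average treatment
    effect: 1/p for treated patients, 1/(1 - p) for comparators."""
    return 1.0 / propensity if treated else 1.0 / (1.0 - propensity)

# Hypothetical patients: (received denosumab?, propensity score).
# Denosumab initiators were rare (~5% of the cohort), so a treated
# patient with a low propensity score gets a large weight.
patients = [(True, 0.05), (False, 0.05), (True, 0.5), (False, 0.8)]
weights = [iptw_weight(t, p) for t, p in patients]
```

Weighting this way creates a pseudo-population in which treatment is independent of the measured covariates, after which the cumulative incidences and hazard ratios can be computed on the weighted data.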
