Economic Sustainability and Multiple Risk Management Strategies: Examining Interlinked Decisions of Small American Farms
Economic viability of small farms and farming businesses depends on multiple factors. These farms have limited production and financial resources with which to maintain their operations. Adopting appropriate risk management strategies is therefore a pivotal decision for small farmers seeking to sustain farming. We surveyed Tennessee's small farms and used multivariate probit models to study the factors influencing the adoption of various risk management strategies. Our findings suggest that decisions to adopt risk management strategies are significantly interlinked. Along with the operator's age, education, income, and land holdings, we found that government incentives (payments), smartphone use, and farmers' continuation plans significantly influence the decision to adopt risk management strategies.
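For readers unfamiliar with the method, a multivariate probit can be written as a system of correlated latent-utility equations, one per strategy; a generic formulation is sketched below (notation is ours, not the paper's):

```latex
% Multivariate probit: J binary adoption decisions for farm i
y_{ij}^{*} = \mathbf{x}_i^{\top}\boldsymbol{\beta}_j + \varepsilon_{ij},
\qquad y_{ij} = \mathbf{1}\{\, y_{ij}^{*} > 0 \,\}, \qquad j = 1, \dots, J,
\qquad \boldsymbol{\varepsilon}_i \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})
```

Here Σ has unit variances on the diagonal; statistically significant off-diagonal correlations in Σ are what the abstract describes as interlinked adoption decisions.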
Steering a Historical Disease Forecasting Model Under a Pandemic: Case of Flu and COVID-19
Forecasting influenza in a timely manner aids health organizations and policymakers in adequate preparation and decision making. However, effective influenza forecasting remains a challenge despite increasing research interest. It is even more challenging amid the COVID-19 pandemic, when influenza-like illness (ILI) counts are affected by factors such as symptomatic similarities with COVID-19 and shifts in the healthcare-seeking patterns of the general population. Under the current pandemic, historical influenza models carry valuable expertise about the disease dynamics but face difficulties adapting. Therefore, we propose CALI-Net, a neural transfer learning architecture that allows us to 'steer' a historical disease forecasting model to new scenarios where flu and COVID-19 co-exist. Our framework enables this adaptation by automatically learning when it should emphasize learning from COVID-related signals and when it should learn from the historical model. Thus, we exploit representations learned from historical ILI data as well as the limited COVID-related signals. Our experiments demonstrate that our approach successfully adapts a historical forecasting model to the current pandemic. In addition, we show that success in our primary goal, adaptation, does not sacrifice overall performance compared with state-of-the-art influenza forecasting approaches.
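The steering idea (learning when to lean on COVID-related signals and when to fall back on the historical model) can be illustrated with a learned gate over two feature streams. A minimal PyTorch sketch of that general mechanism follows; the module, names, and dimensions are our own illustration, not the CALI-Net architecture itself:

```python
import torch
import torch.nn as nn

class GatedSteering(nn.Module):
    """Toy gate that mixes a historical model's representation with a
    COVID-signal representation; the gate weights are learned from both
    inputs. Illustrative only, not the paper's architecture."""

    def __init__(self, hist_dim: int, covid_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.hist_proj = nn.Linear(hist_dim, hidden_dim)
        self.covid_proj = nn.Linear(covid_dim, hidden_dim)
        # Gate outputs a per-feature weight in (0, 1).
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.Sigmoid()
        )
        self.head = nn.Linear(hidden_dim, 1)  # next-week ILI forecast

    def forward(self, hist_feat, covid_feat):
        h = torch.tanh(self.hist_proj(hist_feat))
        c = torch.tanh(self.covid_proj(covid_feat))
        g = self.gate(torch.cat([h, c], dim=-1))
        # Emphasize COVID signals where g is high, historical otherwise.
        mixed = g * c + (1.0 - g) * h
        return self.head(mixed)

# Usage: a batch of 4 regions, 16 historical features, 8 COVID signals.
model = GatedSteering(hist_dim=16, covid_dim=8)
pred = model(torch.randn(4, 16), torch.randn(4, 8))
print(pred.shape)  # torch.Size([4, 1])
```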
Global, regional, and national comparative risk assessment of 84 behavioural, environmental and occupational, and metabolic risks or clusters of risks for 195 countries and territories, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017
Background: The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2017 comparative risk assessment (CRA) is a comprehensive approach to risk factor quantification that offers a useful tool for synthesising evidence on risks and risk outcome associations. With each annual GBD study, we update the GBD CRA to incorporate improved methods, new risks and risk outcome pairs, and new data on risk exposure levels and risk outcome associations.
Methods: We used the CRA framework developed for previous iterations of GBD to estimate levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs), by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017. This study included 476 risk outcome pairs that met the GBD study criteria for convincing or probable evidence of causation. We extracted relative risk and exposure estimates from 46 749 randomised controlled trials, cohort studies, household surveys, census data, satellite data, and other sources. We used statistical models to pool data, adjust for bias, and incorporate covariates. Using the counterfactual scenario of theoretical minimum risk exposure level (TMREL), we estimated the portion of deaths and DALYs that could be attributed to a given risk. We explored the relationship between development and risk exposure by modelling the relationship between the Socio-demographic Index (SDI) and risk-weighted exposure prevalence and estimated expected levels of exposure and risk-attributable burden by SDI. Finally, we explored temporal changes in risk-attributable DALYs by decomposing those changes into six main component drivers of change as follows: (1) population growth; (2) changes in population age structures; (3) changes in exposure to environmental and occupational risks; (4) changes in exposure to behavioural risks; (5) changes in exposure to metabolic risks; and (6) changes due to all other factors, approximated as the risk-deleted death and DALY rates, where the risk-deleted rate is the rate that would be observed had we reduced the exposure levels to the TMREL for all risk factors included in GBD 2017.
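The attributable-burden step rests on the population attributable fraction (PAF), which compares observed exposure against the TMREL counterfactual. A minimal sketch for a categorical risk is shown below (function and variable names are ours; GBD's actual pipeline additionally handles continuous exposures, mediation, and uncertainty propagation):

```python
def paf_categorical(prevalence, relative_risk, tmrel_prevalence):
    """Population attributable fraction for a categorical risk factor.

    prevalence / tmrel_prevalence: observed and counterfactual (TMREL)
    exposure shares per category; relative_risk: RR per category.
    """
    observed = sum(p * rr for p, rr in zip(prevalence, relative_risk))
    counterfactual = sum(p * rr for p, rr in zip(tmrel_prevalence, relative_risk))
    return (observed - counterfactual) / observed

# Toy example: 30% exposed at RR 2.0; counterfactual is zero exposure.
paf = paf_categorical([0.7, 0.3], [1.0, 2.0], [1.0, 0.0])
attributable_dalys = paf * 1_000_000  # apply the PAF to total DALYs
print(round(paf, 3), round(attributable_dalys))  # 0.231 230769
```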
Findings: In 2017, 34.1 million (95% uncertainty interval [UI] 33.3–35.0) deaths and 1.21 billion (1.14–1.28) DALYs were attributable to GBD risk factors. Globally, 61.0% (59.6–62.4) of deaths and 48.3% (46.3–50.2) of DALYs were attributed to the GBD 2017 risk factors. When ranked by risk-attributable DALYs, high systolic blood pressure (SBP) was the leading risk factor, accounting for 10.4 million (9.39–11.5) deaths and 218 million (198–237) DALYs, followed by smoking (7.10 million [6.83–7.37] deaths and 182 million [173–193] DALYs), high fasting plasma glucose (6.53 million [5.23–8.23] deaths and 171 million [144–201] DALYs), high body-mass index (BMI; 4.72 million [2.99–6.70] deaths and 148 million [98.6–202] DALYs), and short gestation for birthweight (1.43 million [1.36–1.51] deaths and 139 million [131–147] DALYs). In total, risk-attributable DALYs declined by 4.9% (3.3–6.5) between 2007 and 2017. In the absence of demographic changes (ie, population growth and ageing), changes in risk exposure and risk-deleted DALYs would have led to a 23.5% decline in DALYs during that period. Conversely, in the absence of changes in risk exposure and risk-deleted DALYs, demographic changes would have led to an 18.6% increase in DALYs during that period. The ratios of observed risk exposure levels to exposure levels expected based on SDI (O/E ratios) increased globally for unsafe drinking water and household air pollution between 1990 and 2017. This result suggests that development is occurring more rapidly than are changes in the underlying risk structure in a population. Conversely, nearly universal declines in O/E ratios for smoking and alcohol use indicate that, for a given SDI, exposure to these risks is declining. In 2017, the leading Level 4 risk factor for age-standardised DALY rates was high SBP in four super-regions: central Europe, eastern Europe, and central Asia; north Africa and Middle East; south Asia; and southeast Asia, east Asia, and Oceania. The leading risk factor in the high-income super-region was smoking, in Latin America and Caribbean was high BMI, and in sub-Saharan Africa was unsafe sex. O/E ratios for unsafe sex in sub-Saharan Africa were notably high, and those for alcohol use in north Africa and the Middle East were notably low.
Interpretation: By quantifying levels and trends in exposures to risk factors and the resulting disease burden, this assessment offers insight into where past policy and programme efforts might have been successful and highlights current priorities for public health action. Decreases in behavioural, environmental, and occupational risks have largely offset the effects of population growth and ageing, in relation to trends in absolute burden. Conversely, the combination of increasing metabolic risks and population ageing will probably continue to drive the increasing trends in non-communicable diseases at the global level, which presents both a public health challenge and opportunity. We see considerable spatiotemporal heterogeneity in levels of risk exposure and risk-attributable burden. Although levels of development underlie some of this heterogeneity, O/E ratios show risks for which countries are overperforming or underperforming relative to their level of development. As such, these ratios provide a benchmarking tool to help to focus local decision making. Our findings reinforce the importance of both risk exposure monitoring and epidemiological research to assess causal connections between risks and health outcomes, and they highlight the usefulness of the GBD study in synthesising data to draw comprehensive and robust conclusions that help to inform good policy and strategic health planning.
Baricitinib in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial and updated meta-analysis
Background: We aimed to evaluate the use of baricitinib, a Janus kinase (JAK) 1–2 inhibitor, for the treatment of patients admitted to hospital with COVID-19. Methods: This randomised, controlled, open-label, platform trial (Randomised Evaluation of COVID-19 Therapy [RECOVERY]) is assessing multiple possible treatments in patients hospitalised with COVID-19 in the UK. Eligible and consenting patients were randomly allocated (1:1) to either usual standard of care alone (usual care group) or usual care plus baricitinib 4 mg once daily by mouth for 10 days or until discharge if sooner (baricitinib group). The primary outcome was 28-day mortality assessed in the intention-to-treat population. A meta-analysis was done, which included the results from the RECOVERY trial and all previous randomised controlled trials of baricitinib or other JAK inhibitors in patients hospitalised with COVID-19. The RECOVERY trial is registered with ISRCTN (50189673) and ClinicalTrials.gov (NCT04381936) and is ongoing. Findings: Between Feb 2 and Dec 29, 2021, 8156 of 10 852 enrolled patients were randomly allocated to receive usual care plus baricitinib versus usual care alone. At randomisation, 95% of patients were receiving corticosteroids and 23% were receiving tocilizumab (with planned use within the next 24 h recorded for a further 9%). Overall, 514 (12%) of 4148 patients allocated to baricitinib versus 546 (14%) of 4008 patients allocated to usual care died within 28 days (age-adjusted rate ratio 0·87; 95% CI 0·77–0·99; p=0·028). This 13% proportional reduction in mortality was somewhat smaller than that seen in a meta-analysis of eight previous trials of a JAK inhibitor (involving 3732 patients and 425 deaths), in which allocation to a JAK inhibitor was associated with a 43% proportional reduction in mortality (rate ratio 0·57; 95% CI 0·45–0·72). Including the results from RECOVERY in an updated meta-analysis of all nine completed trials (involving 11 888 randomly assigned patients and 1485 deaths), allocation to baricitinib or another JAK inhibitor was associated with a 20% proportional reduction in mortality (rate ratio 0·80; 95% CI 0·72–0·89; p<0·0001). In RECOVERY, there was no significant excess in death or infection due to non-COVID-19 causes and no significant excess of thrombosis or other safety outcomes. Interpretation: In patients hospitalised with COVID-19, baricitinib significantly reduced the risk of death, but the size of benefit was somewhat smaller than that suggested by previous trials. The total randomised evidence to date suggests that JAK inhibitors (chiefly baricitinib) reduce mortality in patients hospitalised for COVID-19 by about one-fifth. Funding: UK Research and Innovation (Medical Research Council) and National Institute for Health Research.
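Pooling trial-level rate ratios of this kind is conventionally done by inverse-variance weighting on the log scale. A minimal fixed-effect sketch is shown below, with illustrative inputs rather than the trial data:

```python
import math

def pool_rate_ratios(ratios, cis):
    """Fixed-effect inverse-variance pooling of rate ratios.

    ratios: point estimates; cis: (lower, upper) 95% CIs. Each standard
    error is recovered from the CI width on the log scale.
    """
    weights, weighted_logs = [], []
    for rr, (lo, hi) in zip(ratios, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se**2
        weights.append(w)
        weighted_logs.append(w * math.log(rr))
    pooled_log = sum(weighted_logs) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Toy example with two hypothetical trials (not the published data).
print(pool_rate_ratios([0.87, 0.57], [(0.77, 0.99), (0.45, 0.72)]))
```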
Veto Players in Post-Conflict DDR Programs: Evidence from Nepal and the DRC
Under what conditions are Disarmament, Demobilization and Reintegration (DDR) programs successfully implemented following intrastate conflict? Previous research is dominated by under-theorized case studies that lack the ability to detect the precise factors and mechanisms that lead to successful DDR. In this article, we draw on game theory and ask how the number of veto players, their policy distance, and their internal cohesion affect DDR implementation. Using empirical evidence from Nepal and the Democratic Republic of Congo, we show that the number of veto players, rather than their distance and cohesion, explains the (lack of) implementation of DDR.
Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States
Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
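The probabilistic error referred to here is typically quantified with the weighted interval score (WIS) used by the Forecast Hub. A minimal sketch of its building block, the interval score for a single central prediction interval, follows (our simplification; the full WIS averages scores across several intervals plus the median):

```python
def interval_score(lower, upper, observed, alpha):
    """Interval score for a central (1 - alpha) prediction interval.

    Penalizes interval width, plus 2/alpha times any amount by which
    the observation falls outside the interval. Lower is better.
    """
    score = upper - lower
    if observed < lower:
        score += (2.0 / alpha) * (lower - observed)
    elif observed > upper:
        score += (2.0 / alpha) * (observed - upper)
    return score

# A 90% interval of [100, 200] weekly deaths, with 230 observed:
print(interval_score(100, 200, 230, alpha=0.10))  # 100 + 20*30 = 700.0
```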
CAGI, the Critical Assessment of Genome Interpretation, establishes progress and prospects for computational genetic variant interpretation methods
Background:
The Critical Assessment of Genome Interpretation (CAGI) aims to advance the state-of-the-art for computational prediction of genetic variant impact, particularly where relevant to disease. The five complete editions of the CAGI community experiment comprised 50 challenges, in which participants made blind predictions of phenotypes from genetic data, and these were evaluated by independent assessors.
Results:
Performance was particularly strong for clinical pathogenic variants, including some difficult-to-diagnose cases, and extends to interpretation of cancer-related variants. Missense variant interpretation methods were able to estimate biochemical effects with increasing accuracy. Assessment of methods for regulatory variants and complex trait disease risk was less definitive and indicates performance potentially suitable for auxiliary use in the clinic.
Conclusions:
Results show that while current methods are imperfect, they have major utility for research and clinical applications. Emerging methods and increasingly large, robust datasets for training and assessment promise further progress ahead.
Mapping geographical inequalities in access to drinking water and sanitation facilities in low-income and middle-income countries, 2000-17
Background: Universal access to safe drinking water and sanitation facilities is an essential human right, recognised in the Sustainable Development Goals as crucial for preventing disease and improving human wellbeing. Comprehensive, high-resolution estimates are important to inform progress towards achieving this goal. We aimed to produce high-resolution geospatial estimates of access to drinking water and sanitation facilities. Methods: We used a Bayesian geostatistical model and data from 600 sources across more than 88 low-income and middle-income countries (LMICs) to estimate access to drinking water and sanitation facilities on continuous continent-wide surfaces from 2000 to 2017, and aggregated results to policy-relevant administrative units. We estimated mutually exclusive and collectively exhaustive subcategories of facilities for drinking water (piped water on or off premises, other improved facilities, unimproved, and surface water) and sanitation facilities (septic or sewer sanitation, other improved, unimproved, and open defecation) with use of ordinal regression. We also estimated the number of diarrhoeal deaths in children younger than 5 years attributed to unsafe facilities and estimated deaths that were averted by increased access to safe facilities in 2017, and analysed geographical inequality in access within LMICs. Findings: Across LMICs, access to both piped water and improved water overall increased between 2000 and 2017, with progress varying spatially. For piped water, the safest water facility type, access increased from 40·0% (95% uncertainty interval [UI] 39·4–40·7) to 50·3% (50·0–50·5), but was lowest in sub-Saharan Africa, where access to piped water was mostly concentrated in urban centres. Access to both sewer or septic sanitation and improved sanitation overall also increased across all LMICs during the study period. For sewer or septic sanitation, access was 46·3% (95% UI 46·1–46·5) in 2017, compared with 28·7% (28·5–29·0) in 2000. Although some units improved access to the safest drinking water or sanitation facilities since 2000, a large absolute number of people continued to not have access in several units with high access to such facilities (>80%) in 2017. More than 253 000 people did not have access to sewer or septic sanitation facilities in the city of Harare, Zimbabwe, despite 88·6% (95% UI 87·2–89·7) access overall. Many units were able to transition from the least safe facilities in 2000 to safe facilities by 2017; for units in which populations primarily practised open defecation in 2000, 686 (95% UI 664–711) of the 1830 (1797–1863) units transitioned to the use of improved sanitation. Geographical disparities in access to improved water across units decreased in 76·1% (95% UI 71·6–80·7) of countries from 2000 to 2017, and in 53·9% (50·6–59·6) of countries for access to improved sanitation, but remained evident subnationally in most countries in 2017. Interpretation: Our estimates, combined with geospatial trends in diarrhoeal burden, identify where efforts to increase access to safe drinking water and sanitation facilities are most needed. By highlighting areas with successful approaches or in need of targeted interventions, our estimates can enable precision public health to effectively progress towards universal access to safe water and sanitation.
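The ordinal-regression step can be summarised as a cumulative link over the ordered facility categories; a generic formulation is sketched below (notation is ours, omitting the Bayesian geostatistical priors and spatial covariance):

```latex
% Cumulative probit over K ordered facility types, e.g. for water:
% surface water < unimproved < other improved < piped
P(Y_{s,t} \le k) = \Phi\!\left(\theta_k - \mu_{s,t}\right),
\qquad \theta_1 < \theta_2 < \dots < \theta_{K-1}
```

Here $\mu_{s,t}$ is the latent access level at location $s$ in year $t$, and the ordered cutpoints $\theta_k$ ensure that the $K$ category probabilities, obtained by differencing consecutive cumulative probabilities, are mutually exclusive and sum to one, matching the mutually exclusive and collectively exhaustive subcategories described above.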
