53 research outputs found

    Setting Fair Incentives to Maximize Improvement

    We consider the problem of helping agents improve by setting goals. Given a set of target skill levels, we assume each agent will try to improve from their initial skill level to the closest target level within reach (or do nothing if no target level is within reach). We consider two models: the common improvement capacity model, where agents have the same limit on how much they can improve, and the individualized improvement capacity model, where agents have individualized limits. Our goal is to optimize the target levels for social welfare and fairness objectives, where social welfare is defined as the total amount of improvement, and we consider fairness objectives when the agents belong to different underlying populations. We prove algorithmic, learning, and structural results for each model. A key technical challenge of this problem is the non-monotonicity of social welfare in the set of target levels, i.e., adding a new target level may decrease the total amount of improvement; agents who previously tried hard to reach a distant target now have a closer target to reach and hence improve less. This especially presents a challenge when considering multiple groups because optimizing target levels in isolation for each group and outputting the union may result in arbitrarily low improvement for a group, failing the fairness objective. Considering these properties, we provide algorithms for optimal and near-optimal improvement for both social welfare and fairness objectives. These algorithmic results work for both the common and individualized improvement capacity models. Furthermore, despite the non-monotonicity property and interference of the target levels, we show a placement of target levels exists that is approximately optimal for the social welfare of each group. 
Unlike the algorithmic results, this structural statement holds only in the common improvement capacity model, and we provide counterexamples to it in the individualized improvement capacity model. Finally, we extend our algorithms to learning settings where we have only sample access to the initial skill levels of agents.
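The non-monotonicity described above is easy to reproduce in a few lines. The sketch below is a simplified reading of the common improvement capacity model (function and variable names are illustrative, and it assumes agents only move upward to targets above their current level): adding a nearer target lowers total improvement.

```python
def total_improvement(skills, targets, capacity):
    """Each agent improves to the closest reachable target above
    their initial skill level, or does nothing if none is in reach."""
    total = 0
    for s in skills:
        reachable = [t for t in targets if s < t <= s + capacity]
        if reachable:
            total += min(reachable) - s  # the closest target wins
    return total

# Adding a target can decrease social welfare:
skills, capacity = [0, 0], 10
print(total_improvement(skills, [10], capacity))     # both stretch to 10 -> 20
print(total_improvement(skills, [2, 10], capacity))  # both settle for 2 -> 4
```

This is exactly the failure mode the abstract names: the agents who previously tried hard to reach the distant target now have a closer target and improve less.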

    Censored data considerations and analytical approaches for salivary bioscience data

Left censoring in salivary bioscience data occurs when salivary analyte determinations fall below the lower limit of an assay’s measurement range. Conventional statistical approaches for addressing censored values (i.e., recoding as missing, substituting, or extrapolating values) may introduce systematic bias. While specialized censored data statistical approaches (i.e., Maximum Likelihood Estimation, Regression on Ordered Statistics, Kaplan-Meier, and general Tobit regression) are available, these methods are rarely implemented in biobehavioral studies that examine salivary biomeasures, and their application to salivary data analysis may be hindered by their sensitivity to skewed data distributions, outliers, and sample size. This study compares descriptive statistics, correlation coefficients, and regression parameter estimates generated via conventional and specialized censored data approaches using salivary C-reactive protein data. We assess differences in statistical estimates across approaches and across two levels of censoring (9% and 15%) and examine the sensitivity of our results to sample size. Overall, findings were similar across conventional and censored data approaches, but the implementation of specialized censored data approaches was more efficient (i.e., required few manipulations of the raw analyte data) and more appropriate. Based on our review of the findings, we outline preliminary recommendations to enable investigators to more efficiently and effectively reduce statistical bias when working with left-censored salivary biomeasure data.
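As a rough illustration of the comparison above, the sketch below contrasts a conventional substitution approach (LOD/2) with a censored maximum likelihood fit on synthetic lognormal data. The data, detection limit, and parameter values are invented for illustration; this is not the salivary CRP data or the exact procedure of the study.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
true_mu, lod = 1.0, 1.2                      # log-scale mean, detection limit
x = rng.lognormal(true_mu, 0.8, 500)
obs = np.where(x >= lod, x, np.nan)          # values below the LOD are censored

# Conventional approach: substitute LOD/2 for censored values.
mu_sub = np.log(np.where(np.isnan(obs), lod / 2, obs)).mean()

# Censored MLE: observed values contribute the log-density,
# censored values contribute log P(X < LOD).
def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                # keeps sigma positive
    ll = stats.norm.logpdf(np.log(obs[~np.isnan(obs)]), mu, sigma).sum()
    ll += np.isnan(obs).sum() * stats.norm.logcdf(np.log(lod), mu, sigma)
    return -ll

mu_mle = optimize.minimize(neg_loglik, [0.0, 0.0], method="Nelder-Mead").x[0]
print(f"substitution: {mu_sub:.2f}  censored MLE: {mu_mle:.2f}  true: {true_mu}")
```

The MLE uses all the information the censored observations carry (namely, that they fell below the LOD) instead of inventing values for them, which is the efficiency argument the abstract makes.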

    Correspondence Between Cytomegalovirus Immunoglobulin-G Levels Measured in Saliva and Serum

Human cytomegalovirus (HCMV) infects more than 80% of the global population. While mostly asymptomatic, HCMV infection can be serious among the immunocompromised, and it is implicated in chronic disease pathophysiology in adulthood. Large-scale minimally invasive HCMV screening could advance research and public health efforts to monitor infection prevalence and prevent or mitigate downstream risks associated with infection. We examine the utility of measuring HCMV immunoglobulin-G (IgG) levels in saliva as an index of serum levels. Matched serum and saliva samples from healthy adults (N = 98; 44% female; 51% white) were assayed for HCMV IgG, total salivary protein, and salivary markers related to oral inflammation, blood, and tissue integrity. We examine the serum-saliva association for HCMV IgG and assess the influence of participant characteristics and factors specific to the oral compartment (e.g., oral inflammation) on HCMV IgG levels and cross-specimen relations. We found a robust serum-saliva association for HCMV IgG, with serum antibody levels accounting for >60% of the variance in salivary levels. This relation remained after adjusting for key demographic and oral immune-related variables. Compared to the serum test, the salivary HCMV IgG test had 51% sensitivity and 97% specificity. With improvements in assay performance and sample optimization, HCMV antibody levels in oral fluids may be a useful proxy for serum levels.
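The sensitivity and specificity figures quoted above follow from the standard 2×2 agreement table between the saliva test and the serum reference. A minimal sketch (the inputs below are hypothetical, not the study's data):

```python
def sensitivity_specificity(test_pos, ref_pos):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    judging the new test against the reference standard."""
    pairs = list(zip(test_pos, ref_pos))
    tp = sum(t and r for t, r in pairs)          # both positive
    fn = sum(not t and r for t, r in pairs)      # missed positives
    tn = sum(not t and not r for t, r in pairs)  # both negative
    fp = sum(t and not r for t, r in pairs)      # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: saliva test vs serum reference for four participants.
print(sensitivity_specificity([True, False, False, True],
                              [True, True, False, False]))  # -> (0.5, 0.5)
```

With the study's numbers, a 51% sensitivity means roughly half of serum-positive participants were also flagged by the saliva test, while the 97% specificity means almost no serum-negative participant was falsely flagged.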

    Calibration and Validation of Daisy Model for Sunflower under Partial Root-Zone Drying

Due to increased water consumption and the depletion of water resources, deficit irrigation is an optimal strategy for cultivation, usually applied through the methods of Deficit Irrigation (DI), Regulated Deficit Irrigation (RDI), and Partial Root-zone Drying irrigation (PRD). In the PRD method, only one side of the plant is irrigated in each irrigation interval. Under these conditions, on the irrigated side of the plant, the roots absorb enough water and grow, so that there is no change in the amount of the plant’s photosynthesis. There are several models, including WOFOST (Van Diepen et al., 1989; Boogaard et al., 1998), EPIC (Jones et al., 1991), AquaCrop (Steduto et al., 2009), and STICS (Brisson et al., 2003), that can simulate crop yield under different soil conditions, climates, irrigation schedules, and agricultural management practices (Hashemi et al., 2018). These models, however, simulate PRD irrigation as if it were the DI method. Daisy is the first model to differentiate between the results of the two methods (Hansen et al., 1990; Hansen et al., 1991); it is a semi-empirical model that uses the Richards equation (Richards, 1931) to simulate soil water content and empirical equations to simulate crop yield parameters. The PRD sub-model in Daisy was developed and upgraded based on data from potato cultivation under PRD irrigation (Liu et al., 2008; Plauborg et al., 2010). Since this sub-model was developed only for the potato, the aim of the present study was to calibrate and validate two parameters, the stomatal slope factor (m) and the specific leaf weight modifier (LeafAIMod), in the PRD sub-model so that the Daisy model can simulate sunflower under PRD irrigation.

    Behavioral and psychosocial factors related to mental distress among medical students

Introduction: Physicians die by suicide at rates higher than the general population, with the increased risk beginning in medical school. To better understand why, this study examined the prevalence of mental distress (e.g., depressive symptoms and suicide risk) and behavioral and psychosocial risk factors for distress, as well as the associations between mental distress and risk factors, among a sample of medical students in the pre-COVID-19 era. Methods: Students enrolled in a large California medical school in 2018–2019 (N = 134; 52% female) completed questionnaires assessing sociodemographic characteristics, family history of depression and suicide, health behaviors, and psychosocial wellbeing. Assessment scores indexing mental distress (e.g., depressive symptoms, thoughts of suicide in the past 12 months, suicide risk, and history of suicidality) and risk factors (e.g., stress, subjective sleep quality, alcohol use, impostor feelings, and bill payment difficulty) were compared across biological sex using chi-squared tests, and associations between mental distress and risk factors were determined through logistic regression. Results: Elevated mental distress indicators were observed relative to the general public (e.g., 16% positive depression screen, 17% thought about suicide in the previous 12 months, 10% positive suicide risk screen, and 34% history of suicidality), as were elevated risk factors [e.g., 55% moderate or high stress, 95% at least moderate impostor feelings, 59% poor sleep quality, 50% positive screen for hazardous drinking (more likely in females), and 25% difficulty paying bills]. A positive depression screen was associated with higher stress, higher impostor feelings, poorer sleep quality, and difficulty paying bills. Suicidal ideation in the previous 12 months, suicide risk, and a history of suicidality were independently associated with higher levels of impostor feelings. Discussion: Higher scores on assessments of depressive symptoms and suicidal thoughts and behaviors were related to several individual-level and potentially modifiable risk factors (e.g., stress, impostor feelings, sleep quality, and bill payment difficulties). Future research is needed to inform customized screening and resources for the wellbeing of the medical community. However, the modification of individual-level risk factors is likely limited by the larger medical culture and systems, suggesting that successful interventions to mitigate suicide risk for medical providers need to address multiple socio-ecological levels.
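The study's associations were estimated with logistic regression; as a simplified stand-in, an unadjusted odds ratio from a 2×2 table conveys the same directional idea. The counts below are hypothetical, chosen only to show the computation, not the study's data.

```python
def odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    """Unadjusted odds ratio from a 2x2 table, e.g. positive depression
    screen (case) by poor vs good sleep quality (exposure)."""
    return (exp_cases * unexp_controls) / (exp_controls * unexp_cases)

# Hypothetical counts: OR > 1 means the exposure raises the odds of a case.
print(round(odds_ratio(20, 30, 10, 40), 2))  # -> 2.67
```

A fitted logistic regression generalizes this by adjusting each exposure's odds ratio for the other covariates in the model, which is why the abstract can call the impostor-feelings associations "independent."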

    Risk of incident cardiovascular diseases at national and subnational levels in Iran from 2000 to 2016 and projection through 2030: Insights from Iran STEPS surveys.

Background: Cardiovascular disease (CVD) is the leading cause of death in developing countries. CVD risk stratification guides health policy toward evidence-based decisions. Aim: To provide a current picture and future trend of CVD risk in the adult Iranian population. Methods: Nationally representative datasets from the 2005, 2006, 2007, 2008, 2009, 2011, and 2016 STEPwise approach to non-communicable diseases risk factor surveillance (STEPS) studies were used to generate the 10-year and 30-year risks of CVD based on the Framingham, Globorisk, and World Health Organization (WHO) risk estimation models. The trend of CVD risk was calculated from 2000 until 2016 and projected to 2030. Results: In 2016, based on the Framingham model, 14.0% of Iranians aged 30 to 74 were at high risk (≥20%) of CVD in the next 10 years (8.0% among females, 20.7% among males). Among those aged 25 to 59, 12.7% had a ≥45% risk of CVD in the coming 30 years (9.2% among females, 16.6% among males). In 2016, CVD risk was higher among urban inhabitants. Age-standardized Framingham 10-year CVD risk will increase by 32.2% and 19% from 2000 to 2030 in females and males, respectively. Eastern provinces had the lowest risk and northern provinces the greatest. Conclusions: This study showed that CVD risk increased from 2000 to 2016 in Iran. Without further risk factor modification, this trend will continue until 2030. We have identified populations at higher risk of CVD to guide future interventions.
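The headline figures above are population shares whose model-predicted risk crosses a cutoff. A minimal sketch of that tally (the risk scores below are illustrative, not the STEPS data; the ≥20% cutoff is the high-risk definition used above):

```python
def share_at_high_risk(risks, threshold=0.20):
    """Fraction of predicted 10-year CVD risks at or above the cutoff."""
    return sum(r >= threshold for r in risks) / len(risks)

# Five hypothetical predicted risks; two meet the >=20% cutoff.
print(share_at_high_risk([0.05, 0.12, 0.22, 0.31, 0.08]))  # -> 0.4
```

The underlying per-person risk predictions come from the Framingham, Globorisk, or WHO equations, which combine age, sex, blood pressure, cholesterol, smoking, and diabetes status; reproducing those equations is beyond this sketch.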

    Contextualizing the impact of prenatal alcohol and tobacco exposure on neurodevelopment in a South African birth cohort: an analysis from the socioecological perspective

Background: Alcohol and tobacco are known teratogens. Historically, more severe prenatal alcohol exposure (PAE) and prenatal tobacco exposure (PTE) have been examined as the principal predictors of neurodevelopmental alterations, with little incorporation of lower doses or ecological contextual factors that can also impact neurodevelopment, such as socioeconomic resources (SER) or adverse childhood experiences (ACEs). Here, a novel analytical approach informed by a socio-ecological perspective was used to examine the associations between SER, PAE and/or PTE, and ACEs, and their effects on neurodevelopment. Methods: N = 313 mother-child dyads were recruited from a prospective birth cohort with maternal report of PAE and PTE, and cross-sectional structural brain neuroimaging of the children was acquired via a 3T scanner at ages 8–11 years. In utero SER was measured by maternal education, household income, and home utility availability. The children's ACEs were measured by self-report assisted by the researcher. PAE was grouped into early exposure (<12 weeks), continued exposure (≥12 weeks), and no-exposure controls. PTE was grouped into exposed and non-exposed controls. Results: Greater access to SER during pregnancy was associated with fewer ACEs (maternal education: β = −0.293, p = 0.01; phone access: β = −0.968, p = 0.05). PTE partially mediated the association between SER and ACEs: greater SER reduced the likelihood of PTE, which was positively associated with ACEs (β = 1.110, p = 0.01). SER was associated with alterations in superior frontal (β = −1336.036, q = 0.046), lateral orbitofrontal (β = −513.865, q = 0.046), and caudal anterior cingulate (β = −222.982, q = 0.046) volumes, with phone access negatively associated with all three brain volumes. Access to water was positively associated with superior frontal volume (β = 1569.527, q = 0.013). PTE was associated with smaller volumes of the lateral orbitofrontal (β = −331.000, q = 0.033) and nucleus accumbens (β = −34.800, q = 0.033) regions. Conclusion: Research on neurodevelopment following community levels of PAE and PTE should more regularly consider the ecological context to accelerate understanding of teratogenic outcomes. Further research is needed to replicate this novel conceptual approach with varying PAE and PTE patterns, to disentangle the interplay between dose and community- and individual-level risk factors on neurodevelopment.

Global, regional, and national cancer incidence, mortality, years of life lost, years lived with disability, and disability-adjusted life-years for 29 cancer groups, 1990 to 2017: a systematic analysis for the Global Burden of Disease Study

Importance: Cancer and other noncommunicable diseases (NCDs) are now widely recognized as a threat to global development. The latest United Nations high-level meeting on NCDs reaffirmed this observation and also highlighted the slow progress in meeting the 2011 Political Declaration on the Prevention and Control of Noncommunicable Diseases and the third Sustainable Development Goal. A lack of situational analyses, priority setting, and budgeting has been identified as a major obstacle to achieving these goals. All of these tasks require information on local cancer epidemiology, which the Global Burden of Disease (GBD) study is uniquely poised to provide. Objective: To describe the cancer burden for 29 cancer groups in 195 countries from 1990 through 2017 to provide the data needed for cancer control planning. Evidence Review: We used GBD study estimation methods to describe cancer incidence, mortality, years lived with disability, years of life lost, and disability-adjusted life-years (DALYs). Results are presented at the national level as well as by Socio-demographic Index (SDI), a composite indicator of income, educational attainment, and total fertility rate. We also analyzed the influence of the epidemiological vs the demographic transition on cancer incidence. Findings: In 2017, there were 24.5 million incident cancer cases worldwide (16.8 million without nonmelanoma skin cancer [NMSC]) and 9.6 million cancer deaths. The majority of cancer DALYs came from years of life lost (97%), and only 3% came from years lived with disability. The odds of developing cancer were lowest in the low SDI quintile (1 in 7) and highest in the high SDI quintile (1 in 2) for both sexes. In 2017, the most common incident cancers in men were NMSC (4.3 million incident cases); tracheal, bronchus, and lung (TBL) cancer (1.5 million incident cases); and prostate cancer (1.3 million incident cases).
The most common causes of cancer deaths and DALYs for men were TBL cancer (1.3 million deaths and 28.4 million DALYs), liver cancer (572,000 deaths and 15.2 million DALYs), and stomach cancer (542,000 deaths and 12.2 million DALYs). For women in 2017, the most common incident cancers were NMSC (3.3 million incident cases), breast cancer (1.9 million incident cases), and colorectal cancer (819,000 incident cases). The leading causes of cancer deaths and DALYs for women were breast cancer (601,000 deaths and 17.4 million DALYs), TBL cancer (596,000 deaths and 12.6 million DALYs), and colorectal cancer (414,000 deaths and 8.3 million DALYs). Conclusions and Relevance: The national epidemiological profiles of cancer burden in the GBD study show large heterogeneities, which reflect different exposures to risk factors, economic settings, lifestyles, and access to care and screening. The GBD study can be used by policy makers and other stakeholders to develop and improve national and local cancer control in order to achieve the global targets and improve equity in cancer care. © 2019 American Medical Association. All rights reserved.
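The YLL/YLD decomposition reported in the Findings follows the standard GBD identity DALY = YLL + YLD. A small sketch of that arithmetic (the numbers below are illustrative only, not GBD estimates):

```python
def daly_split(yll, yld):
    """DALYs are years of life lost plus years lived with disability;
    also return the YLL share of the total."""
    total = yll + yld
    return total, yll / total

# A 97/3 split mirrors the fatal-vs-nonfatal proportions reported above.
print(daly_split(yll=97.0, yld=3.0))  # -> (100.0, 0.97)
```

The dominance of YLL in cancer DALYs is what distinguishes cancer from conditions such as musculoskeletal disorders, whose burden is carried mostly by years lived with disability.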