29 research outputs found

    Do Anti-Bullying Laws Reduce Youth Violence?

    This study is the first to comprehensively examine the effect of state anti-bullying laws (ABLs) on youth violence. Using data from a variety of sources, including the Youth Risk Behavior Surveys, Uniform Crime Reports, and newly collected data on school shootings, we find that the enforcement of strict, comprehensive school district anti-bullying policies is associated with a 7 to 13 percent reduction in school violence and an 8 to 12 percent reduction in bullying. Our results also show that anti-bullying policy mandates are associated with a reduction in school shooting deaths and violent crime arrests among minor teens. A causal interpretation of our results is supported by falsification tests on older young adults for whom ABLs do not bind.

    Longer-Run Effects of Antipoverty Policies on Disadvantaged Neighborhoods

    We estimate the longer-run effects of minimum wages, the Earned Income Tax Credit, and welfare on key indicators of economic self-sufficiency in disadvantaged neighborhoods. We find that the longer-run effects of the EITC are to increase employment and to reduce poverty and public assistance. We also find some evidence that higher welfare benefits had longer-run adverse effects, and quite robust evidence that tighter welfare time limits reduce poverty and public assistance in the longer run. The evidence on the long-run effects of the minimum wage on poverty and public assistance is not robust, with some evidence pointing to reductions and some to increases.

    Bone mineral density among men and women aged 35 to 50 years

    © 2019 American Osteopathic Association. Context: Osteoporosis is characterized by low bone mineral density (BMD) and has been thought to be a major health concern only for postmenopausal women. However, osteoporosis and its risk factors have been greatly understudied in the middle-aged and male populations. Objective: To assess the likelihood of low BMD and its association with related risk factors in early–middle-aged (defined in this study as 35-50 years) men and women. Methods: Eligible men and women completed a questionnaire assessing calcium intake, hours per week of exercise, and other related risk factors associated with osteoporosis and osteopenia. The primary outcome variable, BMD, was obtained using dual-energy x-ray absorptiometry scans taken at the femoral neck, trochanter, intertrochanteric crest, total femur, and lumbar spine. Results: Of the 173 participants in this study, 23 men (28%) and 24 women (26%) had osteopenia at the femoral neck. In men, there was a significant negative correlation between exercise and femoral neck BMD (r=−0.296, P=.01). In women, correlation analyses showed significant positive correlations between exercise and BMD of the trochanter (r=0.329, P=.003), intertrochanteric crest (r=0.285, P=.01), total femur (r=0.30, P=.01), and lumbar spine (r=0.29, P=.01). Conclusions: Osteopenia was found in more than 25% of both male and female participants, suggesting that more osteoporosis screening and prevention programs need to be targeted to persons in the studied age group, because osteopenia can lead to osteoporosis.

    Do Published Cooking Temperatures Correspond with Consumer and Chef Perceptions of Steak Degrees of Doneness?

    The objective of this study was to assess consumer and chef perceptions and knowledge of beef degrees of doneness (DOD), as well as to measure changes in cooked color over time related to DOD. Steaks from strip loins (M. longissimus lumborum) from each of 5 quality treatments were used for this study. Steaks were cooked to an endpoint temperature of either very-rare (54°C), rare (60°C), medium-rare (63°C), medium (71°C), well-done (77°C), or very well-done (82°C). L*, a*, and b* were evaluated at 0, 1, 2, 3, 6, 9, and 12 min post-cutting, and digital pictures were taken immediately on an internal surface of the steak. Digital surveys for the evaluation of the images of the cooked steaks were created for consumers and chefs. There were time × DOD interactions (P < 0.05) for DOD responses for steak pictures evaluated by consumers or chefs. Consumers identified the DOD of cooked steaks as the DOD that corresponds to published endpoint temperatures 27 to 35% of the time. Chefs typically identified the DOD as 1 DOD higher than that to which the steaks were cooked for steaks cooked to medium or less, and 1 DOD lower for steaks cooked to well-done and higher. This indicates that differences exist in the perceptions of DOD between culinary professionals and consumers, which may contribute to decreased consumer satisfaction when ordering steaks in a restaurant.

    Visual Degree of Doneness Impacts Beef Palatability for Consumers with Different Degree of Doneness Preferences

    The objective of this study was to determine the impact on beef palatability perceptions when consumers with varying degree of doneness (DOD) preferences are served steaks cooked to multiple DOD. Paired Low Choice strip loin steaks were randomly assigned to a DOD of either rare (60°C), medium-rare (63°C), medium (71°C), medium-well (74°C), or well-done (77°C). Consumer panelists were prescreened for DOD preference (rare, medium, or well-done) prior to sensory panels and were assigned to panels based on their DOD preference. In the first round of testing, consumers were served 1 sample from each of the 5 DOD under low-intensity red incandescent light to mask any DOD differences among samples. In round 2 of testing, consumers were fed the paired samples cooked to the same DOD under white incandescent lights. There were no (P > 0.05) consumer DOD preference × steak DOD interactions or consumer DOD preference effects for tenderness, juiciness, and flavor ratings when steaks were evaluated under both lighting types. Within the white-lighting testing, there was a consumer DOD preference × steak DOD interaction (P < 0.05) for overall palatability among DOD: consumers who preferred steaks cooked to rare and medium rated steaks lower (P < 0.05) for overall palatability as DOD increased. Regardless of DOD preference, consumer sensory ratings decreased (P < 0.05) when steaks were cooked above the consumer’s preferred DOD, whereas sensory ratings improved (P < 0.05) when steaks were served below the consumers’ preferences. These results indicate that overcooking steaks has the greatest negative impact on beef palatability perception; thus, foodservice should err on the side of undercooking steaks to preserve, and potentially improve, eating satisfaction.

    Mortality and pulmonary complications in patients undergoing surgery with perioperative SARS-CoV-2 infection: an international cohort study

    Background: The impact of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) on postoperative recovery needs to be understood to inform clinical decision making during and after the COVID-19 pandemic. This study reports 30-day mortality and pulmonary complication rates in patients with perioperative SARS-CoV-2 infection. Methods: This international, multicentre, cohort study at 235 hospitals in 24 countries included all patients undergoing surgery who had SARS-CoV-2 infection confirmed within 7 days before or 30 days after surgery. The primary outcome measure was 30-day postoperative mortality and was assessed in all enrolled patients. The main secondary outcome measure was pulmonary complications, defined as pneumonia, acute respiratory distress syndrome, or unexpected postoperative ventilation. Findings: This analysis includes 1128 patients who had surgery between Jan 1 and March 31, 2020, of whom 835 (74·0%) had emergency surgery and 280 (24·8%) had elective surgery. SARS-CoV-2 infection was confirmed preoperatively in 294 (26·1%) patients. 30-day mortality was 23·8% (268 of 1128). Pulmonary complications occurred in 577 (51·2%) of 1128 patients; 30-day mortality in these patients was 38·0% (219 of 577), accounting for 81·7% (219 of 268) of all deaths. In adjusted analyses, 30-day mortality was associated with male sex (odds ratio 1·75 [95% CI 1·28–2·40], p<0·0001), age 70 years or older versus younger than 70 years (2·30 [1·65–3·22], p<0·0001), American Society of Anesthesiologists grades 3–5 versus grades 1–2 (2·35 [1·57–3·53], p<0·0001), malignant versus benign or obstetric diagnosis (1·55 [1·01–2·39], p=0·046), emergency versus elective surgery (1·67 [1·06–2·63], p=0·026), and major versus minor surgery (1·52 [1·01–2·31], p=0·047). Interpretation: Postoperative pulmonary complications occur in half of patients with perioperative SARS-CoV-2 infection and are associated with high mortality.
Thresholds for surgery during the COVID-19 pandemic should be higher than during normal practice, particularly in men aged 70 years and older. Consideration should be given to postponing non-urgent procedures and promoting non-operative treatment to delay or avoid the need for surgery. Funding: National Institute for Health Research (NIHR), Association of Coloproctology of Great Britain and Ireland, Bowel and Cancer Research, Bowel Disease Research Foundation, Association of Upper Gastrointestinal Surgeons, British Association of Surgical Oncology, British Gynaecological Cancer Society, European Society of Coloproctology, NIHR Academy, Sarcoma UK, Vascular Society for Great Britain and Ireland, and Yorkshire Cancer Research.
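The headline percentages in this abstract are simple proportions of the reported counts. As a quick consistency check (a sketch using only the counts stated above, not the study data), each rate can be recomputed directly:

```python
# Counts reported in the abstract (COVIDSurg cohort).
deaths, patients = 268, 1128         # 30-day deaths / enrolled patients
pulm, pulm_deaths = 577, 219         # pulmonary complications / deaths among them

mortality = deaths / patients        # 30-day mortality
pulm_rate = pulm / patients          # pulmonary complication rate
pulm_mortality = pulm_deaths / pulm  # mortality among patients with complications
death_share = pulm_deaths / deaths   # complications' share of all deaths

print(f"{mortality:.1%} {pulm_rate:.1%} {pulm_mortality:.1%} {death_share:.1%}")
# → 23.8% 51.2% 38.0% 81.7%
```

All four recomputed rates match the values quoted in the abstract.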

    The impact of surgical delay on resectability of colorectal cancer: An international prospective cohort study

    AIM: The SARS-CoV-2 pandemic has provided a unique opportunity to explore the impact of surgical delays on cancer resectability. This study aimed to compare resectability for colorectal cancer patients undergoing delayed versus non-delayed surgery. METHODS: This was an international prospective cohort study of consecutive colorectal cancer patients with a decision for curative surgery (January-April 2020). Surgical delay was defined as an operation taking place more than 4 weeks after treatment decision, in a patient who did not receive neoadjuvant therapy. A subgroup analysis explored the effects of delay in elective patients only. The impact of longer delays was explored in a sensitivity analysis. The primary outcome was complete resection, defined as curative resection with an R0 margin. RESULTS: Overall, 5453 patients from 304 hospitals in 47 countries were included, of whom 6.6% (358/5453) did not receive their planned operation. Of the 4304 operated patients without neoadjuvant therapy, 40.5% (1744/4304) were delayed beyond 4 weeks. Delayed patients were more likely to be older, male, and more comorbid, and to have a higher body mass index, rectal cancer, and early-stage disease. Delayed patients had higher unadjusted rates of complete resection (93.7% vs. 91.9%, P = 0.032) and lower rates of emergency surgery (4.5% vs. 22.5%, P < 0.001). After adjustment, delay was not associated with a lower rate of complete resection (OR 1.18, 95% CI 0.90-1.55, P = 0.224), which was consistent in elective patients only (OR 0.94, 95% CI 0.69-1.27, P = 0.672). Longer delays were not associated with poorer outcomes. CONCLUSION: One in 15 colorectal cancer patients did not receive their planned operation during the first wave of COVID-19. Surgical delay did not appear to compromise resectability, raising the hypothesis that any reduction in long-term survival attributable to delays is likely to be due to micro-metastatic disease.

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively.
Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570.
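The reported posterior probabilities of harm can be roughly recovered from the summary statistics alone. The following is an illustrative sketch, not the trial's actual bayesian cumulative logistic model: it assumes a normal posterior on the log odds ratio scale, fitted to the ACE inhibitor group's median OR of 0.77 and 95% credible interval of 0.58 to 1.06, and asks how much posterior mass falls below OR = 1 (i.e., worse outcomes):

```python
import math

# Summary statistics reported for the ACE inhibitor group (vs. control).
median_or = 0.77
ci_low, ci_high = 0.58, 1.06

# Illustrative assumption: a normal posterior on the log-OR scale, where the
# median gives the mean and the 95% CrI spans about +/- 1.96 standard deviations.
mu = math.log(median_or)
sigma = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# P(OR < 1) = P(log OR < 0), evaluated with the normal CDF via erf.
z = (0 - mu) / sigma
p_harm = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(f"P(worsened outcomes) ~ {p_harm:.1%}")  # close to the reported 94.9%
```

The small gap between this approximation and the published 94.9% reflects the normal shortcut; the trial's actual estimate comes from a bayesian cumulative logistic regression over organ support–free days.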

    Does an Introduction of a Paid Parental Leave Policy Affect Maternal Labor Market Outcomes in the Short Run? Evidence from Australia’s Paid Parental Leave Scheme

    This paper studies how the introduction of paid parental leave (PPL) affects maternal labor market outcomes in the short run. Using a reform in Australia, the PPL scheme, which gave the primary caregiver of a child born or adopted on or after January 1, 2011, $672.70 a week for a maximum of 18 weeks, this paper develops theoretical predictions of the effect of PPL on maternal labor market outcomes and tests these predictions using confidential data from the Australian Pregnancy and Employment Transitions Survey. The theoretical results imply that after the introduction of PPL, hours of work in the pre-birth period should decrease for mothers who will qualify for PPL and increase for mothers who are attempting to qualify for PPL. Post-birth, the theoretical results imply that more mothers are out of work and on leave than would have been in the absence of PPL. The empirical results suggest that the PPL scheme had no significant effect on labor market outcomes either pre- or post-birth.
