
    Incentives and Children's Dietary Choices: A Field Experiment in Primary Schools

    We conduct a field experiment in 31 primary schools in England to test the effectiveness of different temporary incentive schemes, a standard individual-based incentive scheme and a competitive scheme, at increasing the choice and consumption of healthy items at lunchtime. The individual scheme has a weak positive effect that masks significantly different effects by age, whereas all students respond positively to the competitive scheme. For our sample of interest, the competitive scheme increases the choice of healthy items by 33% and the consumption of healthy items by 48%, twice and three times as much as in the individual incentive scheme, respectively. The positive effects generally carry over to the week immediately following the treatment, but we find little evidence of any effects six months later. Our results show that incentives can work, at least temporarily, to increase healthy eating, but that there are large differences in effectiveness between schemes. Furthermore, it is important to analyse effects at the individual level, as average effects appear to mask significant heterogeneous effects that are predicted by the health literature.

    The geometry of reaction norms yields insights on classical fitness functions for Great Lakes salmon.

    Life history theory examines how characteristics of organisms, such as age and size at maturity, may vary through natural selection as evolutionary responses that optimize fitness. Here we ask how predictions of age and size at maturity differ among the three classical fitness functions for semelparous species: the intrinsic rate of natural increase r, the net reproductive rate R0, and the reproductive value Vx. We show that different choices of fitness functions can lead to very different predictions of species behavior. In one's efforts to understand an organism's behavior and to develop effective conservation and management policies, the choice of fitness function matters. The central ingredient of our approach is the maturation reaction norm (MRN), which describes how optimal age and size at maturation vary with growth rate or mortality rate. We develop a practical geometric construction of MRNs that allows us to include different growth functions (linear growth and nonlinear von Bertalanffy growth in length) and develop two-dimensional MRNs useful for quantifying growth-mortality trade-offs. We relate our approach to Beverton-Holt life history invariants and to the Stearns-Koella categorization of MRNs. We conclude with a detailed discussion of life history parameters for Great Lakes Chinook Salmon and demonstrate that age and size at maturity are consistent with predictions using R0 (but not r or Vx) as the underlying fitness function.
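The abstract's central point, that different fitness functions predict different optimal maturation ages, can be made concrete with a toy calculation. The sketch below is not the paper's model: the von Bertalanffy parameters, mortality rate, and cubic fecundity scaling are entirely hypothetical, and the function names are illustrative. For a semelparous life history (one reproductive event at age a), it finds the maturation age maximizing R0 versus the age maximizing r from the Euler-Lotka relation:

```python
import math

def length(a, Linf=100.0, k=0.3):
    # von Bertalanffy growth in length (hypothetical parameters)
    return Linf * (1.0 - math.exp(-k * a))

def fecundity(a):
    # assume clutch size scales with body volume, i.e. length cubed
    return 1e-3 * length(a) ** 3

def R0(a, M=0.2):
    # net reproductive rate for a semelparous species:
    # survive to age a under constant mortality M, then reproduce once
    return math.exp(-M * a) * fecundity(a)

def r_intrinsic(a, M=0.2):
    # Euler-Lotka for a single breeding age:
    # 1 = exp(-r*a) * exp(-M*a) * b(a)  =>  r = (ln b(a) - M*a) / a
    return (math.log(fecundity(a)) - M * a) / a

ages = [x / 10 for x in range(1, 301)]  # grid of candidate maturation ages
a_R0 = max(ages, key=R0)          # age maximizing net reproductive rate
a_r = max(ages, key=r_intrinsic)  # age maximizing intrinsic rate of increase
```

With these made-up parameters, maximizing r favours much earlier maturation than maximizing R0, illustrating why the choice of fitness function changes the predicted MRN.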

    Multi-Agent Interactions for Ambient Assisted Living


    Hard and soft news: A review of concepts, operationalizations and key findings

    Over 30 years, a large body of research on what is often called ‘hard’ and ‘soft news’ has accumulated in communication studies. However, there is no consensus about what hard and soft news exactly is, or how it should be defined or measured. Moreover, the concept has not been clearly differentiated from or systematically related to concepts addressing very similar phenomena – tabloidization and ‘infotainment’. Consequently, the results of various studies are hard to compare and different scientific discourses on related issues remain unconnected. Against this backdrop, this article offers a conceptual analysis of the concept based on studies in English and other languages. We identify key dimensions of the concept and make suggestions for a standardized definition and multi-dimensional measurement of harder and softer news. In doing so, we propose to distinguish thematic, focus and style features as basic dimensions that – in their combination – make up harder and softer types of news.

    Non-inferiority trials: are they inferior? A systematic review of reporting in major medical journals.

    OBJECTIVE: To assess the adequacy of reporting of non-inferiority trials alongside the consistency and utility of currently recommended analyses and guidelines. DESIGN: Review of randomised clinical trials that used a non-inferiority design, published between January 2010 and May 2015 in medical journals with an impact factor >10 (JAMA Internal Medicine, Archives of Internal Medicine, PLOS Medicine, Annals of Internal Medicine, BMJ, JAMA, Lancet and New England Journal of Medicine). DATA SOURCES: Ovid (MEDLINE). METHODS: We searched for non-inferiority trials and assessed the following: choice of non-inferiority margin and justification of the margin; power and significance level for sample size; the patient population used and how it was defined; any missing-data methods used and assumptions declared; and any sensitivity analyses used. RESULTS: A total of 168 trial publications were included. Most trials concluded non-inferiority (132; 79%). The non-inferiority margin was reported for 98% (164), but fewer than half reported any justification for the margin (77; 46%). While most chose two different analyses (91; 54%), most commonly intention-to-treat (ITT) or modified ITT together with per-protocol, a large number of articles conducted and reported only one analysis (65; 39%), most commonly the ITT analysis. There was a lack of clarity or inconsistency between the type I error rate and the corresponding CIs for 73 (43%) articles. Missing data were rarely considered, with 99 (59%) not declaring whether imputation techniques were used. CONCLUSIONS: Reporting and conduct of non-inferiority trials are inconsistent and do not follow the recommendations in available statistical guidelines, which are not wholly consistent themselves. Authors should clearly describe the methods used and provide clear descriptions of and justifications for their design and primary analysis. Failure to do so risks misleading conclusions being drawn, with consequent effects on clinical practice.
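The decision rule whose reporting this review audits is commonly stated in terms of a confidence interval against the margin: non-inferiority is concluded when the upper confidence bound for the excess risk under the new treatment lies below the pre-specified margin. The sketch below shows that rule for a risk difference with a simple Wald interval; the trial counts and the 10% margin are invented for illustration, and real analyses would justify the margin and interval method as the review recommends:

```python
import math

def noninferiority_test(events_new, n_new, events_std, n_std, margin):
    # risk difference (new minus standard) with a Wald confidence interval;
    # non-inferiority is concluded if the upper bound of the two-sided 95% CI
    # (equivalently, a one-sided test at alpha = 0.025) stays below the margin
    p_new = events_new / n_new
    p_std = events_std / n_std
    diff = p_new - p_std
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    z = 1.959963984540054  # standard normal quantile for a 95% two-sided CI
    upper = diff + z * se
    return diff, upper, upper < margin

# hypothetical trial: failure risk 90/500 vs 80/500, 10% absolute margin
diff, upper, ni = noninferiority_test(90, 500, 80, 500, margin=0.10)
```

Note how the conclusion depends entirely on the margin: the same data with a 5% margin would not support non-inferiority, which is why unjustified margins (reported in fewer than half the reviewed trials) are a serious weakness.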

    Cost-effectiveness analysis of 3-D computerized tomography colonography versus optical colonoscopy for imaging symptomatic gastroenterology patients.

    BACKGROUND: When symptomatic gastroenterology patients have an indication for colonic imaging, clinicians have a choice between optical colonoscopy (OC) and computerized tomography colonography with three-dimensional reconstruction (3-D CTC). 3-D CTC provides a minimally invasive and rapid evaluation of the entire colon, and it can be an efficient modality for diagnosing symptoms. It allows for a more targeted use of OC, which is associated with a higher risk of major adverse events and higher procedural costs. A case can be made for 3-D CTC as a primary test for colonic imaging followed, if necessary, by targeted therapeutic OC; however, the relative long-term costs and benefits of introducing 3-D CTC as a first-line investigation are unknown. AIM: The aim of this study was to assess the cost effectiveness of 3-D CTC versus OC for colonic imaging of symptomatic gastroenterology patients in the UK NHS. METHODS: We used a Markov model to follow a cohort of 100,000 symptomatic gastroenterology patients, aged 50 years or older, and estimate the expected lifetime outcomes, life years (LYs) and quality-adjusted life years (QALYs), and costs (£, 2010-2011) associated with 3-D CTC and OC. Sensitivity analyses were performed to assess the robustness of the base-case cost-effectiveness results to variation in input parameters and methodological assumptions. RESULTS: 3-D CTC provided a similar number of LYs (7.737 vs 7.739) and QALYs (7.013 vs 7.018) per individual compared with OC, and it was associated with substantially lower mean costs per patient (£467 vs £583), leading to a positive incremental net benefit. After accounting for the overall uncertainty, the probability of 3-D CTC being cost effective was around 60%, at typical willingness-to-pay values of £20,000-£30,000 per QALY gained. CONCLUSION: 3-D CTC is a cost-saving and cost-effective option for colonic imaging of symptomatic gastroenterology patients compared with OC.
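The mechanics of a Markov cohort model like the one used here can be shown with a deliberately tiny sketch. All of the following are hypothetical and chosen only to illustrate the method, not the published model: the three states, the annual transition probabilities, the QALY weights, the costs, and the 3.5% discount rate applied to both outcomes and costs:

```python
# minimal Markov cohort model sketch (all parameters hypothetical)
states = ["well", "cancer", "dead"]
P = {  # annual transition probabilities from each state
    "well":   {"well": 0.97, "cancer": 0.01, "dead": 0.02},
    "cancer": {"well": 0.00, "cancer": 0.80, "dead": 0.20},
    "dead":   {"well": 0.00, "cancer": 0.00, "dead": 1.00},
}
utility = {"well": 0.85, "cancer": 0.60, "dead": 0.0}  # QALY weight per year
cost = {"well": 50.0, "cancer": 8000.0, "dead": 0.0}   # annual cost (GBP)

def run_cohort(cycles=40, discount=0.035):
    # start the whole cohort in the "well" state, then advance it one
    # annual cycle at a time, accumulating discounted QALYs and costs
    dist = {"well": 1.0, "cancer": 0.0, "dead": 0.0}
    qalys = total_cost = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t  # discount factor for cycle t
        qalys += d * sum(dist[s] * utility[s] for s in states)
        total_cost += d * sum(dist[s] * cost[s] for s in states)
        dist = {s2: sum(dist[s1] * P[s1][s2] for s1 in states)
                for s2 in states}
    return qalys, total_cost

q, c = run_cohort()
```

Running the same cohort under two strategies (here, 3-D CTC first versus OC first, each with its own transition probabilities and test costs) and comparing the discounted totals yields the incremental net benefit the study reports.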

    Rethinking non-inferiority: a practical trial design for optimising treatment duration.

    Background Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results A total sample size of ~500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.
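The modelling step of such a design can be sketched with a first-degree fractional polynomial (FP1) fit. The code below is illustrative only: the arm durations and cure proportions are simulated numbers, not trial data, and it shows only the FP1 family (y = b0 + b1*d^p over a small set of candidate powers, with p = 0 read as log d), not the paper's full comparison against splines:

```python
import math

# simulated duration-response data: cure probability rising and then
# flattening with treatment duration (hypothetical numbers)
durations = [8, 10, 12, 14, 16, 18, 20]
response = [0.70, 0.80, 0.86, 0.90, 0.92, 0.935, 0.945]

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # conventional FP1 candidate powers

def transform(d, p):
    # power transform of duration; by convention p = 0 means log(d)
    return math.log(d) if p == 0 else d ** p

def fit_fp1(p):
    # ordinary least squares for y = b0 + b1 * d^p (a first-degree
    # fractional polynomial); returns residual sum of squares and fit
    x = [transform(d, p) for d in durations]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(response) / n
    b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, response))
          / sum((xi - xbar) ** 2 for xi in x))
    b0 = ybar - b1 * xbar
    rss = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, response))
    return rss, b0, b1

# select the power with the smallest residual sum of squares
best = min(POWERS, key=lambda p: fit_fp1(p)[0])
```

The fitted curve, rather than a single non-inferiority verdict, is then the trial output: a clinician can read off the shortest duration whose predicted response stays within an acceptable distance of the maximum.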

    The botanical biofiltration of VOCs with active airflow: is removal efficiency related to chemical properties?

    Botanical biofiltration using active green walls is showing increasing promise as a viable method for the filtration of volatile organic compounds (VOCs) from ambient air; however, there is a high level of heterogeneity reported amongst VOC removal efficiencies, and the reasons for these observations have yet to be explained. Comparisons of removal efficiencies amongst studies are also difficult due to the use of many different VOCs, and systems that have been tested under different conditions. The current work describes a procedure to determine whether some of these differences may be related to the chemical properties of the VOCs themselves. This work used an active green wall system to test the single pass removal efficiency (SPRE) of nine different VOCs (acetone, benzene, cyclohexane, ethanol, ethyl acetate, hexane, isopentane, isopropanol and toluene) and explored which chemical properties were meaningful predictor variables of their biofiltration efficiencies. Ethanol was removed most efficiently (average SPRE of 96.34% ± 1.61), while benzene was least efficiently removed (average SPRE of 19.76% ± 2.93). Multiple stepwise linear regression was used to determine that the dipole moment and molecular mass were significant predictors of VOC SPRE, in combination accounting for 54.6% of the variability in SPREs amongst VOCs. The octanol-water partition coefficient, proton affinity, Henry's law constant and vapour pressure were not significant predictors of SPRE. The most influential predictor variable was the dipole moment, alone accounting for 49.8% of the SPRE variability. The model thus allows for an estimation of VOC removal efficiency based on a VOC's chemical properties, and supports the idea that system optimisation could be achieved through methods that promote both VOC partitioning into the biofilter's aqueous phase, and substrate development to enhance adsorption.
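The single-predictor step of such an analysis (dipole moment alone explaining roughly half of the SPRE variance) amounts to a simple linear regression with a coefficient of determination. The sketch below uses invented (dipole moment, SPRE) pairs that only loosely echo the study's direction of effect, that more polar VOCs are removed more efficiently; the numbers are not the measured data and the resulting R² will not match the reported 49.8%:

```python
# hypothetical (dipole moment in debye, SPRE in %) pairs; NOT measured data
data = [
    (0.00, 19.8),  # benzene-like: non-polar, poorly removed
    (0.00, 25.0),  # cyclohexane-like
    (0.36, 35.0),  # toluene-like
    (1.66, 75.0),  # isopropanol-like
    (1.69, 96.3),  # ethanol-like
    (1.78, 80.0),  # ethyl acetate-like
    (2.88, 90.0),  # acetone-like
]

# ordinary least squares for SPRE = intercept + slope * dipole_moment
n = len(data)
xbar = sum(x for x, _ in data) / n
ybar = sum(y for _, y in data) / n
sxx = sum((x - xbar) ** 2 for x, _ in data)
sxy = sum((x - xbar) * (y - ybar) for x, y in data)
slope = sxy / sxx
intercept = ybar - slope * xbar

# R^2: share of SPRE variance explained by dipole moment alone
ss_tot = sum((y - ybar) ** 2 for _, y in data)
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in data)
r2 = 1.0 - ss_res / ss_tot
```

The stepwise procedure in the paper then asks whether adding a second property (molecular mass) raises the explained variance enough to justify the extra term.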

    The impact of heavy-quark loops on LHC dark matter searches

    If only tree-level processes are included in the analysis, LHC monojet searches give weak constraints on the dark matter-proton scattering cross section arising from the exchange of a new heavy scalar or pseudoscalar mediator with Yukawa-like couplings to quarks. In this letter we calculate the constraints on these interactions from the CMS 5.0/fb and ATLAS 4.7/fb searches for jets with missing energy, including the effects of heavy-quark loops. We find that the inclusion of such contributions leads to a dramatic increase in the predicted cross section and therefore a significant improvement of the bounds from LHC searches. Comment: 12 pages, 1 table, 3 figures; v2: extended discussion and improved relic density calculation, matches published version.