Porcine reproductive and respiratory syndrome virus (PRRSV) in GB pig herds: farm characteristics associated with heterogeneity in seroprevalence
Background: The between- and within-herd variability of porcine reproductive and respiratory syndrome virus (PRRSV) antibodies was investigated in a cross-sectional study of 103 British pig herds conducted in 2003–2004. Fifty pigs from each farm were tested for anti-PRRSV antibodies using ELISA. A binomial logistic model was used to investigate management risk factors for farms with and without PRRSV-seropositive pigs, and multilevel statistical models were used to investigate variability in pigs' log ELISA IRPC (relative index × 100) in positive herds.
Results: Thirty-five herds (34.0%) were seronegative, 41 (39.8%) were seropositive and 27 (26.2%) were vaccinated. Herds were more likely to be seronegative if they had < 250 sows (OR 3.86 (95% CI 1.46, 10.19)) and if the nearest pig herd was ≥ 2 miles away (OR 3.42 (95% CI 1.29, 9.12)). The mean log IRPC in seropositive herds was 3.02 (range 0.83–5.58). Sixteen seropositive herds had only seropositive adult pigs. In these herds, pigs had -0.06 (95% CI -0.10, -0.01) lower log IRPC for every mile increase in distance to the nearest pig unit, and -0.56 (95% CI -1.02, -0.10) lower log IRPC when quarantine facilities were present. For 25 herds with seropositive young stock and adults, lower log IRPC was associated with isolating purchased stock for ≥ 6 days (coefficient -0.46, 95% CI -0.81, -0.11), requesting ≥ 48 hours 'pig-free time' from humans (coefficient -0.44, 95% CI -0.79, -0.10) and purchasing gilts (coefficient -0.61, 95% CI -0.92, -0.29).
Conclusion: These patterns are consistent with PRRSV failing to persist indefinitely on some infected farms, with fadeout more likely in smaller herds with little or no reintroduction of infectious stock. Persistence of infection may be associated with large herds in pig-dense regions with repeated reintroduction of infection.
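As a hedged illustration of the modelling approach this abstract describes, the sketch below fits a binomial logistic model of herd seronegativity against the two management factors reported above and converts the coefficients to odds ratios with 95% confidence intervals. The file name and column names are hypothetical placeholders, not the study's actual variables.

```python
# Illustrative sketch (not the authors' code): binomial logistic model of
# herd seronegativity. Column names and the input file are hypothetical;
# predictors are assumed to be coded as 0/1 indicators.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("herd_survey.csv")  # hypothetical herd-level data

model = smf.logit(
    "seronegative ~ herd_size_lt_250 + nearest_herd_ge_2mi", data=df
).fit()

# Exponentiate coefficients and CI bounds to get odds ratios,
# the form reported in the abstract (e.g. OR 3.86 for < 250 sows).
or_ci = np.exp(model.conf_int())
or_ci["OR"] = np.exp(model.params)
print(or_ci)
```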
Impact of school lunch type on nutritional quality of English children's diets.
OBJECTIVE: Nutrient and food standards exist for school lunches in English primary schools, although packed lunches brought from home are not regulated. The aim of the present study was to determine nutritional and dietary differences by lunch type. DESIGN: A cross-sectional survey was carried out in 2007, assessing diet using the Child and Diet Evaluation Tool (CADET), a validated 24 h estimated food diary. The data were analysed to determine nutritional and dietary intakes over the whole day by lunch type: school meal or packed lunch. SETTING: Fifty-four primary schools across England. SUBJECTS: Children (n 2709) aged 6-8 years. RESULTS: Children having a packed lunch consumed on average 11·0 g more total sugars (95 % CI 6·6, 15·3 g) and 101 mg more Na (95 % CI 29, 173 mg) over the whole day. Conversely, children having a school meal consumed, on average, 4·0 g more protein (95 % CI 2·3, 5·7 g), 0·9 g more fibre (NSP; 95 % CI 0·5, 1·3 g) and 0·4 mg more Zn (95 % CI 0·1, 0·6 mg). There was no difference in daily energy intake by lunch type. Children having a packed lunch were more likely to consume snacks and sweetened drinks, while children having a school meal were more likely to consume different types of vegetables and drink water over the whole day. CONCLUSIONS: Compared with children having a school meal, children taking a packed lunch to school consumed a lower-quality diet over the whole day, including higher levels of sugar and Na and fewer vegetables. These findings support the introduction of policies that increase school meal uptake.
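To make the form of these comparisons concrete, here is a minimal, hypothetical sketch of a two-sample mean difference with a 95% confidence interval, the statistic reported throughout this abstract (e.g. 11·0 g more total sugars). It uses synthetic data and ignores the clustering of children within schools that a full analysis of a survey like this would need to address.

```python
# Minimal sketch, not the study's analysis: mean difference in daily total
# sugars between lunch types with a 95% CI. Data are synthetic; a proper
# analysis would account for clustering of children within schools.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
packed = rng.normal(85, 20, 1500)  # hypothetical daily total sugars (g), packed lunch
school = rng.normal(74, 20, 1200)  # hypothetical daily total sugars (g), school meal

diff = packed.mean() - school.mean()
se = np.sqrt(packed.var(ddof=1) / len(packed) + school.var(ddof=1) / len(school))
dof = len(packed) + len(school) - 2  # simple approximation; fine at these sample sizes
half_width = stats.t.ppf(0.975, dof) * se
print(f"difference = {diff:.1f} g, 95% CI ({diff - half_width:.1f}, {diff + half_width:.1f})")
```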
Formative evaluation of the usability and acceptability of myfood24 among adolescents: a UK online dietary assessment tool
Background: Myfood24 is a new online 24 h dietary assessment tool developed for use among the UK population. Limited information is available on the usability and acceptability of such tools. Hence this study aimed to determine the usability and acceptability of myfood24 among British adolescents (11–18 years) before and after improvements were made.
Methods: A total of 84 adolescents were involved in two stages. In stage I (beta-version of myfood24), 14 adolescents were recruited, 7 of whom (group 1) were asked to enter standardized tasks in a testing room with screen-capture software. The remaining 7 adolescents (group 2) were asked to report their previous food intake using myfood24 at home. All participants then completed a usability and acceptability questionnaire. Stage II was carried out after amendments were made to the live-version of myfood24, in which 70 adolescents were asked to enter their food intake for two days and then complete the same questionnaire. Thematic analysis was conducted on observer comments and open-ended questions.
Results: Navigation, presentation errors and failure to find functions were the main usability issues identified in the beta-version. Significant improvements were found in the usability and acceptability of most functions after implementing features such as a spell checker, an auto-fill option and 'mouse hover' help for some functions. Adolescents' perceptions of searching for food items, selecting food portion sizes and making a list were significantly improved in the live-version. The mean completion time of myfood24 fell from 31 (SD = 6) minutes in the beta-version to 16 (SD = 5) minutes in the live-version. The mean System Usability Scale (SUS) score of myfood24 improved from 66/100 (95 % CI 60, 73) in the beta-version to 74/100 (95 % CI 71, 77) in the live-version, which is considered 'good'. Of the adolescents in stage II, 41 % preferred myfood24 to the interviewer-administered 24 h recall because myfood24 was quicker, easier to use and gave them privacy when reporting dietary intake.
Conclusion: Adolescents' feedback helped improve the usability and acceptability of the final version of myfood24. myfood24 appears to support adolescents in reporting their dietary intake, which may improve the overall quality of adolescents' self-reported dietary information.
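The SUS figures quoted above (66/100 rising to 74/100) come from the standard System Usability Scale scoring rule, which is fixed by the instrument rather than by this study. A minimal sketch of that rule, with a hypothetical response vector:

```python
# Standard SUS scoring: ten items on a 1-5 scale; odd items contribute
# (score - 1), even items contribute (5 - score); the sum is scaled by 2.5
# to give a 0-100 score. The example responses are hypothetical.
def sus_score(responses):
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # -> 80.0
```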
The frequency and duration of Salmonella-macrophage adhesion events determine infection efficiency.
Salmonella enterica causes a range of important diseases in humans and in a variety of animal species. The ability of bacteria to adhere to, invade and survive within host cells plays an important role in the pathogenesis of Salmonella infections. In systemic salmonellosis, macrophages constitute a niche for the proliferation of bacteria within the host organism. Salmonella enterica serovar Typhimurium is flagellated, and the frequency with which this bacterium collides with a cell is important for infection efficiency. We investigated how bacterial motility affects infection efficiency, using a combination of population-level macrophage infection experiments and direct imaging of single-cell infection events, comparing wild-type and motility mutants. Non-motile and aflagellate bacterial strains, in contrast to wild-type bacteria, collide less frequently with macrophages, are in contact with the cell for less time and infect less frequently. Run-biased Salmonella also collide less frequently with macrophages but maintain contact with macrophages for longer than wild-type strains and infect the cells more readily. Our results suggest that uptake of S. Typhimurium by macrophages depends on the duration of contact between the bacterium and the cell, in addition to the frequency with which the bacteria collide with the cell. SA was supported by an Oliver Gatty studentship, and this work was funded by EU-ITN Transpol (PC) and BBSRC Research Development Fellowship BB/H021930/1 (JAW and CEB).
Prevalence of physical frailty, including risk factors, up to 1 year after hospitalisation for COVID-19 in the UK: a multicentre, longitudinal cohort study.
Background
The scale of COVID-19 and its well-documented long-term sequelae support a need to understand long-term outcomes, including frailty.
Methods
This prospective cohort study recruited adults who had survived hospitalisation with clinically diagnosed COVID-19 across 35 sites in the UK (PHOSP-COVID). The burden of frailty was objectively measured using Fried's Frailty Phenotype (FFP). The primary outcome was the prevalence of each FFP group (robust: no FFP criteria; pre-frail: one or two criteria; frail: three or more criteria) at 5 months and 1 year after discharge from hospital. For inclusion in the primary analysis, participants required complete outcome data for three of the five FFP criteria. Longitudinal changes across frailty domains are reported at 5 months and 1 year post-hospitalisation, along with risk factors for frailty status. Patient-perceived recovery and health-related quality of life (HRQoL) were rated retrospectively for the pre-COVID-19 period and prospectively at the 5-month and 1-year visits. This study is registered with ISRCTN, number ISRCTN10980107.
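The FFP grouping used here is a simple counting rule over the five Fried criteria. A minimal sketch of that rule follows; the criterion names are the standard Fried domains, and the input encoding is hypothetical.

```python
# Fried Frailty Phenotype grouping as described above:
# robust = 0 criteria met, pre-frail = 1-2, frail = 3+.
from typing import Mapping

CRITERIA = ("weight_loss", "exhaustion", "low_activity", "slowness", "weakness")

def ffp_group(criteria_met: Mapping[str, bool]) -> str:
    n = sum(criteria_met.get(c, False) for c in CRITERIA)
    if n == 0:
        return "robust"
    return "pre-frail" if n <= 2 else "frail"

print(ffp_group({"exhaustion": True, "slowness": True}))  # -> 'pre-frail'
```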
Findings
Between March 5, 2020, and March 31, 2021, 2419 participants with FFP data were enrolled. Mean age was 57.9 (SD 12.6) years, 933 (38.6%) were female, and 429 (17.7%) had received invasive mechanical ventilation. 1785 participants had measures at both timepoints, of whom 240 (13.4%), 1138 (63.8%) and 407 (22.8%) were frail, pre-frail and robust, respectively, at 5 months, compared with 123 (6.9%), 1046 (58.6%) and 616 (34.5%) at 1 year. Factors associated with pre-frailty or frailty were invasive mechanical ventilation, older age, female sex, and greater social deprivation. Frail participants had a larger reduction in HRQoL compared with before their COVID-19 illness and were less likely to describe themselves as recovered.
Interpretation
Physical frailty and pre-frailty are common following hospitalisation with COVID-19. Improvement in frailty was seen between 5 and 12 months, although two-thirds of the population remained pre-frail or frail. This suggests that comprehensive assessment and interventions targeting pre-frailty and frailty are required beyond the initial illness.
Funding
UK Research and Innovation and National Institute for Health Research
Pseudomonas stutzeri NT-I: optimal conditions for growth and selenate reduction
In this study, Pseudomonas stutzeri NT-I growth and selenate reduction were examined using aerobic batch experiments. Optimal growth conditions were determined in a mineral salt medium in the presence of background selenium. Optimal conditions for the reduction of selenate to selenite and elemental selenium were identified using harvested cells in a mineral salt medium. The reduction profiles of selenium were monitored using selenite as the indicator species. A glucose- and nitrogen-independent maximum biomass concentration of 0.64 g/L dry cell weight was measured for all glucose concentrations above 2 g/L, signifying the presence of a population density control mechanism. Optimal growth conditions for the culture were obtained at a pH of 7, a temperature of 37 °C, a salinity of 10–20 g/L NaCl and a background selenium concentration of 5 mM. Optimal selenium reduction rates were observed at a temperature of 37 °C, pH 7–8 and a salinity of less than 5 g/L NaCl. The similarity of the conditions for maximum growth and selenium reduction rates provides evidence that optimal operation can be achieved for both parameters simultaneously, a requirement for continuous operation. The microbe was capable of practically complete reduction of up to 4 mM selenate in less than 3 h of operation, translating to a volumetric reduction rate of between 0.2 mM/h (for 0.5 mM selenate) and 1.33 mM/h (for 4 mM selenate). The increasing mass-based reduction rates, between 0.006 mmol/(g·h) (for 0.5 mM selenate) and 0.1 mmol/(g·h) (for 4 mM selenate), indicate that the increased reduction rate was a result of both increased biomass and increased biomass activity with increased selenate concentration. Results from the study demonstrate the potential of the organism Pseudomonas stutzeri NT-I for the biological remediation of selenate and its subsequent removal from the environment.
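As a hedged illustration of the rate arithmetic above: the volumetric rate is the selenate concentration reduced over the elapsed time, and a mass-based rate divides that by the dry-cell biomass concentration. The biomass value below is a hypothetical placeholder, since the harvested-cell density behind the mass-based figures is not stated here.

```python
# Illustrative arithmetic, not the authors' code.
def volumetric_rate(delta_mM: float, hours: float) -> float:
    """Selenate reduced per unit time, in mM/h."""
    return delta_mM / hours

def mass_based_rate(vol_rate_mM_h: float, biomass_g_L: float) -> float:
    """Volumetric rate normalised by dry-cell biomass: (mmol/L/h)/(g/L) = mmol/(g.h)."""
    return vol_rate_mM_h / biomass_g_L

print(volumetric_rate(4.0, 3.0))    # ~1.33 mM/h, matching the 4 mM case above
print(mass_based_rate(1.33, 13.3))  # hypothetical biomass -> ~0.1 mmol/(g.h)
```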
Understanding the science of portion control and the art of downsizing
Offering large portions of high-energy-dense (HED) foods increases overall intake in children and adults. This is known as the portion size effect (PSE). It is robust, reliable and enduring. Over time, the PSE may facilitate overeating and ultimately a positive energy balance. Therefore, it is important to understand what drives the PSE and what might be done to counter the effects of an environment promoting large portions, especially in children. Explanations for the PSE are many and diverse, ranging from consumer error in estimating portion size to simple heuristics such as cleaning the plate or eating in accordance with consumption norms. However, individual characteristics and hedonic processes influence the PSE, suggesting a more complex explanation than error or heuristics. Here, PSE studies are reviewed to identify interventions that can be used to downsize portions of HED foods, with a focus on children who are still learning about social norms for portion size. Although the scientific evidence for the PSE is robust, there is still a need for creative downsizing solutions to facilitate portion control as children and adolescents establish their eating habits.
Non-Milk Extrinsic Sugars Intake and Food and Nutrient Consumption Patterns among Adolescents in the UK National Diet and Nutrition Survey, Years 2008–16
The revised guidelines from the Department of Health (DoH) in the UK state that mean population intakes of free sugars should be below 5% of total energy (TE) consumption. However, very few studies have assessed the impact of this recommendation on diet quality in the UK. We explored the dietary patterns and micronutrient intakes of British adolescents with low intakes of non-milk extrinsic sugars (NMES) (similar, though not identical, to free sugars, with slight differences in the categorisation of fruit sugars from dried, stewed or canned fruit and smoothies), using the National Diet and Nutrition Survey Rolling Programme, years 1–8 (NDNS RP). The sample included 2587 adolescents aged 11–18 years. Four percent (112) of adolescents reported NMES intakes at or below 5% of TE. The odds of an adolescent being categorised as a low-sugar consumer (≤5% TE from NMES) were significantly lower with higher intakes of sweetened drinks, fruit juice, cakes, biscuits, sugar and sweet spreads, chocolate confectionery and sugar confectionery, and significantly higher with higher intakes of pasta and rice, wholemeal and brown bread, and fish. Across the five categories of NMES intake, micronutrient intakes were lowest for those consuming either ≤5% TE or more than 20% TE from NMES, and optimal for those consuming between 10% and 15% of energy from NMES. These findings confirm the difficulty of meeting the recommended free sugars intake for adolescents. Care needs to be taken to ensure adequate micronutrient consumption in those adhering to the revised guidelines on free sugars.
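As a hedged illustration of the categorisation used above, the sketch below converts grams of NMES and total energy into %TE and assigns one of five intake bands. The 3.75 kcal/g (16 kJ/g) factor is the UK convention for carbohydrate expressed as monosaccharide equivalents; the function names and band labels are for illustration only, not taken from the study.

```python
# Illustrative sketch: %TE from NMES and banding into five intake categories.
def nmes_percent_te(nmes_g: float, total_energy_kcal: float) -> float:
    return 100 * nmes_g * 3.75 / total_energy_kcal  # UK factor: 3.75 kcal/g

def nmes_band(pct: float) -> str:
    if pct <= 5:  return "<=5% TE (low-sugar consumer)"
    if pct <= 10: return "5-10% TE"
    if pct <= 15: return "10-15% TE"
    if pct <= 20: return "15-20% TE"
    return ">20% TE"

pct = nmes_percent_te(60, 2000)  # 60 g NMES on a 2000 kcal/day diet
print(f"{pct:.1f}% TE -> {nmes_band(pct)}")  # 11.2% TE -> 10-15% TE
```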
The prevention of glucocorticoid‐induced osteoporosis in patients with immune thrombocytopenia receiving steroids: a British Society for Haematology Good Practice Paper
Methodology: This Good Practice Paper was compiled according to the British Society for Haematology (BSH) process at http://www.b-s-h.org.uk/guidelines/proposing-and-writing-a-new-bsh-guideline/. The BSH produces Good Practice Papers to recommend good practice in areas where there is a limited evidence base but for which a degree of consensus or uniformity is likely to be beneficial to patient care. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) nomenclature was used to evaluate levels of evidence and to assess the strength of recommendations. The GRADE criteria can be found at http://www.gradeworkinggroup.org