
    Self-affirmation improves self-control over snacking among participants low in eating self-efficacy

    Objective: Individuals low in eating self-efficacy are at particular risk of engaging in unhealthy eating behaviours, including the consumption of high-calorie snacks. The elevated levels of snacking displayed by these individuals can largely be attributed to their experiencing low self-control over the avoidance of such foods (Hankonen, Kinnunen, Absetz, & Jallinoja, 2014). Interventions are thus required to boost self-control over snacking among those low in eating self-efficacy. Self-affirmation has been shown to boost self-control among individuals with depleted resources in other domains (Schmeichel & Vohs, 2009). The purpose of the current study was to test the hypothesis that a self-affirmation manipulation would similarly increase self-control over snacking for individuals low in eating self-efficacy. Methods: At baseline, participants (N = 70) completed measures of dietary restraint and eating self-efficacy. In the main study, participants completed either a self-affirmation or a control task immediately before undertaking a joystick category judgment task that assessed self-control over snacking. Results: Hierarchical multiple regression analysis revealed the predicted significant interaction between eating self-efficacy and self-affirmation, demonstrating that self-affirmation moderated the association between eating self-efficacy and self-control over snacking. Johnson-Neyman regions of significance confirmed that, for participants low in eating self-efficacy, the self-affirmation manipulation resulted in higher levels of self-control. Unexpectedly, however, for participants high in eating self-efficacy the self-affirmation manipulation was associated with lower levels of self-control. Conclusions: Findings supported the hypothesis that a self-affirmation manipulation would boost self-control over snacking among individuals low in eating self-efficacy.
Self-affirmation may thus provide a useful technique for strengthening self-control in relation to the avoidance of unhealthy foods among individuals who find it difficult to manage challenging dietary situations.
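The moderation analysis described in this abstract (an eating self-efficacy × condition interaction) can be sketched in outline. The following is a minimal illustration with simulated data, not the study's analysis: the variable names, effect sizes, and noise level are all assumptions chosen to produce a crossover interaction like the one reported.

```python
import numpy as np

# Simulated data (assumed, for illustration only): a continuous moderator,
# a binary condition, and an outcome with a crossover interaction.
rng = np.random.default_rng(0)
n = 200
efficacy = rng.normal(size=n)            # centred "eating self-efficacy" score
affirmed = rng.integers(0, 2, size=n)    # 1 = self-affirmation condition
self_control = (0.5 * efficacy                      # main effect of moderator
                - 0.8 * efficacy * affirmed         # simulated interaction
                + rng.normal(scale=0.3, size=n))    # residual noise

# Moderated regression: outcome on intercept, moderator, condition,
# and their product term (the interaction tested in the abstract).
X = np.column_stack([np.ones(n), efficacy, affirmed, efficacy * affirmed])
beta, *_ = np.linalg.lstsq(X, self_control, rcond=None)
print("interaction coefficient:", beta[3])  # negative here, by construction
```

A significant product-term coefficient is what licenses probing the interaction further, for example with Johnson-Neyman regions of significance as the study did.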

    Understanding how front-line staff use patient experience data for service improvement: an exploratory case study evaluation

    Background and aim: The NHS collects a large amount of data on patient experience, but there are concerns that it does not use this information to improve care. This study explored whether and how front-line staff use patient experience data for service improvement. Methods: Phase 1 – secondary analysis of existing national survey data, and a new survey of NHS trust patient experience leads. Phase 2 – case studies in six medical wards using ethnographic observations and interviews. A baseline and a follow-up patient experience survey were conducted on each ward, supplemented by in-depth interviews. Following an initial learning community to discuss approaches to learning from and improving patient experience, teams developed and implemented their own interventions. Emerging findings from the ethnographic research were shared formatively. Phase 3 – dissemination, including an online guide for NHS staff. Key findings: Phase 1 – an analysis of staff and inpatient survey results for all 153 acute trusts in England was undertaken, and 57 completed surveys were obtained from patient experience leads. The most commonly cited barrier to using patient experience data was a lack of staff time to examine the data (75%), followed by cost (35%), lack of staff interest/support (21%) and too many data (21%). Trusts were grouped in a matrix of high, medium and low performance across several indices to inform case study selection. Phase 2 – in every site, staff undertook quality improvement projects using a range of data sources. The number and scale of these varied, as did the extent to which they drew directly on patient experience data, and the extent of involvement of patients. Before-and-after surveys of patient experience showed little statistically significant change. Making sense of patient experience ‘data’: staff were engaged in a process of sense-making from a range of formal and informal sources of intelligence.
Survey data remain the most commonly recognised and used form of data. ‘Soft’ intelligence, such as patient stories, informal comments and the daily ward experiences of staff, patients and family, also fed into staff’s improvement plans, but staff and the wider organisation may not recognise these as ‘data’, and staff may lack confidence in using them for improvement. Staff could not always point to a specific source of patient experience ‘data’ that led to a particular project, and sometimes reported acting on what they felt they already knew needed changing. Staff experience as a route to improving patient experience: some sites focused on staff motivation and experience on the assumption that this would improve patient experience through indirect cultural and attitudinal change, and by making staff feel empowered and supported. Staff participants identified several potential interlinked mechanisms: (1) motivated staff provide better care, (2) staff who feel taken seriously are more likely to be motivated, (3) involvement in quality improvement is itself motivating and (4) improving patient experience can directly improve staff experience. ‘Team-based capital’ in NHS settings: we propose ‘team-based capital’ as a key mechanism linking the contexts in our case studies to the observed outcomes. ‘Capital’ is the extent to which staff command varied practical, organisational and social resources that enable them to set agendas, drive processes and implement change. These include not just material or economic resources, but also status, time, space, relational networks and influence. Teams involving a range of clinical and non-clinical staff from multiple disciplines and levels of seniority could assemble a greater range of capital; progress was generally greater when the team included individuals from the patient experience office. Phase 3 – an online guide for NHS staff was produced in collaboration with The Point of Care Foundation.
Limitations: This was an ethnographic study of how and why NHS front-line staff do or do not use patient experience data for quality improvement. It was not designed to demonstrate whether particular types of patient experience data or quality improvement approaches are more effective than others. Future research: developing and testing interventions focused specifically on staff but with patient experience as the outcome, with a health economics component; studies focusing on the effect of team composition and diversity on the impact and scope of patient-centred quality improvement; and research into using unstructured feedback and soft intelligence.

    Planned early delivery or expectant management for late preterm pre-eclampsia (PHOENIX): a randomised controlled trial

    © 2019 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license. Background: In women with late preterm pre-eclampsia, the optimal time to initiate delivery is unclear because limitation of maternal disease progression needs to be balanced against infant complications. The aim of this trial was to determine whether planned earlier initiation of delivery reduces maternal adverse outcomes without substantial worsening of neonatal or infant outcomes, compared with expectant management (usual care), in women with late preterm pre-eclampsia. Methods: In this parallel-group, non-masked, multicentre, randomised controlled trial done in 46 maternity units across England and Wales, we compared planned delivery versus expectant management (usual care) with individual randomisation in women with late preterm pre-eclampsia from 34 to less than 37 weeks' gestation and a singleton or dichorionic diamniotic twin pregnancy. The co-primary maternal outcome was a composite of maternal morbidity or recorded systolic blood pressure of at least 160 mm Hg, with a superiority hypothesis. The co-primary perinatal outcome was a composite of perinatal deaths or neonatal unit admission up to infant hospital discharge, with a non-inferiority hypothesis (non-inferiority margin of 10% difference in incidence). Analyses were by intention to treat, together with a per-protocol analysis for the perinatal outcome. The trial was prospectively registered with the ISRCTN registry, ISRCTN01879376. The trial is closed to recruitment but follow-up is ongoing. Findings: Between Sept 29, 2014, and Dec 10, 2018, 901 women were recruited. 450 women (448 women and 471 infants analysed) were allocated to planned delivery and 451 women (451 women and 475 infants analysed) to expectant management.
The incidence of the co-primary maternal outcome was significantly lower in the planned delivery group (289 [65%] women) than in the expectant management group (338 [75%] women; adjusted relative risk 0·86, 95% CI 0·79–0·94; p=0·0005). The incidence of the co-primary perinatal outcome by intention to treat was significantly higher in the planned delivery group (196 [42%] infants) than in the expectant management group (159 [34%] infants; 1·26, 1·08–1·47; p=0·0034). The results from the per-protocol analysis were similar. There were nine serious adverse events in the planned delivery group and 12 in the expectant management group. Interpretation: There is strong evidence that planned delivery reduces maternal morbidity and severe hypertension compared with expectant management, with more neonatal unit admissions related to prematurity but no indicators of greater neonatal morbidity. This trade-off should be discussed with women with late preterm pre-eclampsia to allow shared decision making on timing of delivery. Funding: National Institute for Health Research Health Technology Assessment Programme.
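As a quick arithmetic check, the crude relative risks implied by the counts in this abstract can be recomputed directly. This is a sketch of our own, not trial code; the paper reports *adjusted* relative risks, so the crude figures only approximate the published 0·86 and 1·26.

```python
# Crude (unadjusted) relative risks from the counts reported in the abstract.
def relative_risk(events_treat, n_treat, events_control, n_control):
    """Risk in the planned-delivery arm divided by risk in the expectant arm."""
    return (events_treat / n_treat) / (events_control / n_control)

# Co-primary maternal outcome: 289/448 vs 338/451 women analysed
maternal_rr = relative_risk(289, 448, 338, 451)
# Co-primary perinatal outcome: 196/471 vs 159/475 infants analysed
perinatal_rr = relative_risk(196, 471, 159, 475)

print(f"crude maternal RR  = {maternal_rr:.2f}")   # close to the adjusted 0.86
print(f"crude perinatal RR = {perinatal_rr:.2f}")  # close to the adjusted 1.26
```

The closeness of the crude and adjusted estimates reflects the balance achieved by individual randomisation.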

    SARS-CoV-2-specific immune responses and clinical outcomes after COVID-19 vaccination in patients with immune-suppressive disease

    Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) immune responses and infection outcomes were evaluated in 2,686 patients with varying immune-suppressive disease states after administration of two Coronavirus Disease 2019 (COVID-19) vaccines. Overall, 255 of 2,204 (12%) patients failed to develop anti-spike antibodies, and a further 600 of 2,204 (27%) patients generated only low levels (<380 AU ml⁻¹). Vaccine failure rates were highest in ANCA-associated vasculitis on rituximab (21/29, 72%), hemodialysis on immunosuppressive therapy (6/30, 20%) and solid organ transplant recipients (20/81, 25% and 141/458, 31%). SARS-CoV-2-specific T cell responses were detected in 513 of 580 (88%) patients, with lower T cell magnitude or proportion in hemodialysis, allogeneic hematopoietic stem cell transplantation and liver transplant recipients (versus healthy controls). Humoral responses against Omicron (BA.1) were reduced, although cross-reactive T cell responses were sustained in all participants for whom these data were available. BNT162b2 was associated with higher antibody but lower cellular responses compared to ChAdOx1 nCoV-19 vaccination. We report 474 SARS-CoV-2 infection episodes, including 48 individuals with hospitalization or death from COVID-19. Decreased magnitude of both the serological and the T cell response was associated with severe COVID-19. Overall, we identified clinical phenotypes that may benefit from targeted COVID-19 therapeutic strategies.

    Socializing One Health: an innovative strategy to investigate social and behavioral risks of emerging viral threats

    In an effort to strengthen global capacity to prevent, detect, and control infectious diseases in animals and people, the United States Agency for International Development’s (USAID) Emerging Pandemic Threats (EPT) PREDICT project funded development of regional, national, and local One Health capacities for early disease detection, rapid response, disease control, and risk reduction. From the outset, the EPT approach was inclusive of social science research methods designed to understand the contexts and behaviors of communities living and working at human-animal-environment interfaces considered high-risk for virus emergence. Using qualitative and quantitative approaches, PREDICT behavioral research aimed to identify and assess a range of socio-cultural behaviors that could be influential in zoonotic disease emergence, amplification, and transmission. This broad approach to behavioral risk characterization enabled us to identify and characterize human activities that could be linked to the transmission dynamics of new and emerging viruses. This paper provides a discussion of the implementation of a social science approach within a zoonotic surveillance framework. We conducted in-depth ethnographic interviews and focus groups to better understand the individual- and community-level knowledge, attitudes, and practices that potentially put participants at risk for zoonotic disease transmission from the animals they live and work with, across six interface domains. When we asked highly exposed individuals (e.g., bushmeat hunters, wildlife or guano farmers) about the risk they perceived in their occupational activities, most did not perceive it to be risky, whether because it was normalized by years (or generations) of doing such an activity, or due to lack of information about potential risks.
Integrating the social sciences allows investigation of the specific human activities that are hypothesized to drive disease emergence, amplification, and transmission, in order to better substantiate behavioral disease drivers, along with the social dimensions of infection and transmission dynamics. Understanding these dynamics is critical to achieving health security (protection from threats to health), which requires investments in both collective and individual health security. Incorporating behavioral sciences into zoonotic disease surveillance allowed us to push toward fuller community integration and engagement, and toward dialogue and implementation of recommendations for disease prevention and improved health security.

    Non-invasive diagnostic tests for Helicobacter pylori infection

    BACKGROUND: Helicobacter pylori (H pylori) infection has been implicated in a number of malignancies and non-malignant conditions including peptic ulcers, non-ulcer dyspepsia, recurrent peptic ulcer bleeding, unexplained iron deficiency anaemia, idiopathic thrombocytopenic purpura, and colorectal adenomas. The confirmatory diagnosis of H pylori is by endoscopic biopsy, followed by histopathological examination using haematoxylin and eosin (H & E) stain or special stains such as Giemsa stain and Warthin-Starry stain. Special stains are more accurate than H & E stain. There is significant uncertainty about the diagnostic accuracy of non-invasive tests for diagnosis of H pylori. OBJECTIVES: To compare the diagnostic accuracy of urea breath test, serology, and stool antigen test, used alone or in combination, for diagnosis of H pylori infection in symptomatic and asymptomatic people, so that eradication therapy for H pylori can be started. SEARCH METHODS: We searched MEDLINE, Embase, the Science Citation Index and the National Institute for Health Research Health Technology Assessment Database on 4 March 2016. We screened references in the included studies to identify additional studies. We also conducted citation searches of relevant studies, most recently on 4 December 2016. We did not restrict studies by language or publication status, or whether data were collected prospectively or retrospectively. SELECTION CRITERIA: We included diagnostic accuracy studies that evaluated at least one of the index tests (urea breath test using isotopes such as 13C or 14C, serology and stool antigen test) against the reference standard (histopathological examination using H & E stain, special stains or immunohistochemical stain) in people suspected of having H pylori infection. DATA COLLECTION AND ANALYSIS: Two review authors independently screened the references to identify relevant studies and independently extracted data.
We assessed the methodological quality of studies using the QUADAS-2 tool. We performed meta-analysis by using the hierarchical summary receiver operating characteristic (HSROC) model to estimate and compare SROC curves. Where appropriate, we used bivariate or univariate logistic regression models to estimate summary sensitivities and specificities. MAIN RESULTS: We included 101 studies involving 11,003 participants, of which 5839 participants (53.1%) had H pylori infection. The prevalence of H pylori infection in the studies ranged from 15.2% to 94.7%, with a median prevalence of 53.7% (interquartile range 42.0% to 66.5%). Most of the studies (57%) included participants with dyspepsia and 53 studies excluded participants who recently had proton pump inhibitors or antibiotics. There was at least an unclear risk of bias or unclear applicability concern for each study. Of the 101 studies, 15 compared the accuracy of two index tests and two studies compared the accuracy of three index tests. Thirty-four studies (4242 participants) evaluated serology; 29 studies (2988 participants) evaluated stool antigen test; 34 studies (3139 participants) evaluated urea breath test-13C; 21 studies (1810 participants) evaluated urea breath test-14C; and two studies (127 participants) evaluated urea breath test but did not report the isotope used. The thresholds used to define test positivity and the staining techniques used for histopathological examination (reference standard) varied between studies. Due to sparse data for each threshold reported, it was not possible to identify the best threshold for each test. Using data from 99 studies in an indirect test comparison, there was statistical evidence of a difference in diagnostic accuracy between urea breath test-13C, urea breath test-14C, serology and stool antigen test (P = 0.024).
The diagnostic odds ratios for urea breath test-13C, urea breath test-14C, serology, and stool antigen test were 153 (95% confidence interval (CI) 73.7 to 316), 105 (95% CI 74.0 to 150), 47.4 (95% CI 25.5 to 88.1) and 45.1 (95% CI 24.2 to 84.1). The sensitivity (95% CI), estimated at a fixed specificity of 0.90 (median from studies across the four tests), was 0.94 (95% CI 0.89 to 0.97) for urea breath test-13C, 0.92 (95% CI 0.89 to 0.94) for urea breath test-14C, 0.84 (95% CI 0.74 to 0.91) for serology, and 0.83 (95% CI 0.73 to 0.90) for stool antigen test. This implies that on average, given a specificity of 0.90 and prevalence of 53.7% (median specificity and prevalence in the studies), out of 1000 people tested for H pylori infection, there will be 46 false positives (people without H pylori infection who will be diagnosed as having H pylori infection). In this hypothetical cohort, urea breath test-13C, urea breath test-14C, serology, and stool antigen test will give 30 (95% CI 15 to 58), 42 (95% CI 30 to 58), 86 (95% CI 50 to 140), and 89 (95% CI 52 to 146) false negatives, respectively (people with H pylori infection for whom the diagnosis of H pylori will be missed). Direct comparisons were based on few head-to-head studies. The ratios of diagnostic odds ratios (DORs) were 0.68 (95% CI 0.12 to 3.70; P = 0.56) for urea breath test-13C versus serology (seven studies), and 0.88 (95% CI 0.14 to 5.56; P = 0.84) for urea breath test-13C versus stool antigen test (seven studies). The 95% CIs of these estimates overlap with those of the ratios of DORs from the indirect comparison. Data were limited or unavailable for meta-analysis of other direct comparisons.
AUTHORS' CONCLUSIONS: In people without a history of gastrectomy and those who have not recently had antibiotics or proton pump inhibitors, urea breath tests had high diagnostic accuracy, while serology and stool antigen tests were less accurate for diagnosis of Helicobacter pylori infection. This is based on an indirect test comparison (with potential for bias due to confounding), as evidence from direct comparisons was limited or unavailable. The thresholds used for these tests were highly variable and we were unable to identify specific thresholds that might be useful in clinical practice. We need further comparative studies of high methodological quality to obtain more reliable evidence of relative accuracy between the tests. Such studies should be conducted prospectively in a representative spectrum of participants and clearly reported to ensure low risk of bias. Most importantly, studies should prespecify and clearly report thresholds used, and should avoid inappropriate exclusions.
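The hypothetical-cohort arithmetic in the results above (1000 people, specificity fixed at 0.90, prevalence 53.7%) can be reproduced in a few lines. This is our own re-derivation, not code from the review; because it uses the rounded summary sensitivities, some false-negative counts differ slightly from the review's exact figures, which were computed from unrounded estimates.

```python
# Hypothetical-cohort arithmetic: fixed specificity 0.90, median prevalence
# 53.7%, cohort of 1000 people tested for H pylori infection.

def cohort_errors(sensitivity, specificity=0.90, prevalence=0.537, n=1000):
    """Return (false positives, false negatives) rounded to whole people."""
    infected = prevalence * n            # people with H pylori infection
    uninfected = n - infected            # people without infection
    fp = uninfected * (1 - specificity)  # wrongly diagnosed as infected
    fn = infected * (1 - sensitivity)    # missed diagnoses
    return round(fp), round(fn)

# Rounded summary sensitivities at specificity 0.90, as reported above
for test, sens in [("urea breath test-13C", 0.94),
                   ("urea breath test-14C", 0.92),
                   ("serology", 0.84),
                   ("stool antigen test", 0.83)]:
    fp, fn = cohort_errors(sens)
    print(f"{test}: {fp} false positives, ~{fn} false negatives per 1000")
```

Note that the false-positive count (46) is the same for every test because specificity is held fixed; only the false-negative count varies with sensitivity.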

    Reflective and non-reflective antecedents of health-related behaviour: exploring the relative contributions of impulsivity and implicit self-control to the prediction of dietary behaviour

    Objectives. This study (N = 139) explored whether two measures that capture non-reflective processing (viz. a self-report measure of impulsivity and a behavioural measure of implicit self-control) would contribute to the prediction of dietary behaviour over and above the cognitive predictors specified by the theory of planned behaviour (TPB). Methods. Four dimensions of impulsivity were measured at Time 1. Implicit self-control was measured at Time 2, alongside TPB predictors relating to the avoidance of high-calorie snacks. At Time 3, participants reported their snacking behaviour over the previous 2 weeks. Results. Both impulsivity and implicit self-control significantly contributed to the prediction of snacking behaviour over and above the TPB predictors. Conclusions. It was concluded that the predictive utility of models such as the TPB might be augmented by the inclusion of variables that capture non-reflective information processing.