The politicisation of evaluation: constructing and contesting EU policy performance
Although systematic policy evaluation has been conducted for decades and has been growing strongly within the European Union (EU) institutions and in the member states, it remains largely underexplored in the political science literature. Extant work in political science and public policy typically focuses on elements such as agenda setting, policy shaping, decision making, or implementation rather than evaluation. Although individual pieces of research on evaluation in the EU have started to emerge, most often regarding policy “effectiveness” (one criterion among many in evaluation), a more structured approach is currently missing. This special issue aims to address this gap in political science by focusing on four focal points: evaluation institutions (including rules and cultures), evaluation actors and interests (including competencies, power, roles and tasks), evaluation design (including research methods and theories, and their impact on policy design and legislation), and, finally, evaluation purpose and use (including the relationships between discourse and scientific evidence, political attitudes and strategic use). The special issue considers how each of these elements contributes to an evolving governance system in the EU, in which evaluation plays an increasingly important role in decision making.
Socioeconomic position across the lifecourse & allostatic load: data from the West of Scotland Twenty-07 cohort study
Background: We examined how socioeconomic position (SEP) across the lifecourse (three critical periods, social mobility, and accumulation over time) is associated with allostatic load (a measure of cumulative physiological burden). Methods: Data are from the West of Scotland Twenty-07 Study, with respondents aged 35 (n = 740), 55 (n = 817) and 75 (n = 483). SEP measures representing childhood, the transition to adulthood, and adulthood were used. Allostatic load was produced by summing nine binary biomarker scores (1 = in the highest-risk quartile). Linear regressions were fitted for each of the lifecourse models, with model fits compared using partial F-tests. Results: For those aged 35 and 55, higher SEP was associated with lower allostatic load (no association in the 75-year-olds). The accumulation model (more time spent with higher SEP) had the best model fit in those aged 35 (b = -0.50, 95% CI = -0.68, -0.32, P = 0.002) and 55 (b = -0.31, 95% CI = -0.49, -0.12, P < 0.001). However, the relative contributions of each life stage differed, with adulthood SEP less strongly associated with allostatic load. Conclusions: Long-term, accumulated higher SEP was associated with lower allostatic load (less physiological burden). However, the transition to adulthood may represent a particularly sensitive period for SEP to affect allostatic load. © 2014 Robertson et al.; licensee BioMed Central Ltd.
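As a rough illustration of the scoring and model-comparison steps described in this abstract, the sketch below builds an allostatic load score by summing nine binary highest-risk-quartile flags and then compares nested lifecourse regressions with a partial F-test. It uses synthetic data and hypothetical variable names (biomarker_1 ... biomarker_9, sep_childhood, sep_transition, sep_adulthood), not the Twenty-07 study's actual variables or model specification.

```python
# Hypothetical sketch of the allostatic-load construction and lifecourse
# model comparison described above; data and variable names are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 740  # size of the age-35 cohort quoted in the abstract
biomarkers = [f"biomarker_{i}" for i in range(1, 10)]  # hypothetical names

# Synthetic stand-in data: nine continuous biomarkers and three binary
# SEP indicators (1 = more advantaged) for the three life stages.
df = pd.DataFrame({m: rng.normal(size=n) for m in biomarkers})
for sep in ["sep_childhood", "sep_transition", "sep_adulthood"]:
    df[sep] = rng.integers(0, 2, size=n)

# Allostatic load: sum of nine binary scores, 1 = in the highest-risk quartile.
risk_flags = pd.DataFrame(
    {m: (df[m] >= df[m].quantile(0.75)).astype(int) for m in biomarkers}
)
df["al"] = risk_flags.sum(axis=1)

# Nested lifecourse models compared with a partial F-test: a critical-period
# model (childhood SEP only) versus a model that also includes the
# transition-to-adulthood and adulthood measures.
m_critical = smf.ols("al ~ sep_childhood", data=df).fit()
m_accumulation = smf.ols(
    "al ~ sep_childhood + sep_transition + sep_adulthood", data=df
).fit()
print(anova_lm(m_critical, m_accumulation))  # partial F-test for the added terms
```

In practice, the direction of the "highest-risk" quartile would be reversed for biomarkers where low values carry the risk, and each age cohort would be modelled separately.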
Caring for quality of care: symbolic violence and the bureaucracies of audit.
BACKGROUND: This article considers the moral notion of care in the context of Quality of Care discourses. Whilst care has clear normative implications for the delivery of health care, it is less clear how Quality of Care, something that is centrally involved in the governance of UK health care, relates to practice. DISCUSSION: This paper presents a social and ethical analysis of Quality of Care in the light of the moral notion of care and Bourdieu's conception of symbolic violence. We argue that Quality of Care bureaucracies show significant potential for symbolic violence, or the domination of practice and health care professionals. This generates problematic, and unintended, consequences that can displace the goals of practice. SUMMARY: Quality of Care bureaucracies may have unintended consequences for the practice of health care. Consistent with feminist conceptions of care, Quality of Care 'audits' should be reconfigured so as to offer a more nuanced and responsive form of evaluation.
Attention or instruction: do sustained attentional abilities really differ between high and low hypnotisable persons?
Previous research has suggested that highly hypnotisable participants (‘highs’) are more sensitive to the bistability of ambiguous figures—as evidenced by reporting more perspective changes of a Necker cube—than low hypnotisable participants (‘lows’). This finding has been interpreted as supporting the hypothesis that highs have more efficient sustained attentional abilities than lows. However, the higher report of perspective changes in highs compared with lows may instead reflect the implementation of different expectation-based strategies, arising from demand characteristics that are construed differently according to one’s level of hypnotisability. Highs, but not lows, might interpret an instruction to report perspective changes as an instruction to report many changes. Using a Necker cube as our bistable stimulus, we manipulated demand characteristics by giving specific information to participants of different hypnotisability levels: participants were either told that previous research had shown that people of similar hypnotisability to theirs were very good at switching perspective, told that such people were very good at maintaining perspective, or given no information. Our results show that highs, but neither lows nor mediums, were strongly influenced by the information given. However, highs were no better at maintaining the same perspective than participants with lower hypnotisability. Taken together, these findings favour the view that the higher sensitivity of highs, in comparison to lows, to the bistability of ambiguous figures reflects the implementation of different strategies.
Detection and attribution of human influence on regional precipitation
Understanding how human influence on climate is affecting precipitation around the world is immensely important for defining mitigation policies and for adaptation planning. Yet despite increasing evidence for the influence of climate change on global patterns of precipitation, and expectations that significant changes in regional precipitation should already have occurred as a result of human influence on climate, compelling evidence of anthropogenic fingerprints on regional precipitation is obscured by observational and modelling uncertainties and is likely to remain so, using current methods, for years to come. This is in spite of substantial ongoing improvements in models, new reanalyses and a satellite record that spans more than thirty years. If we are to quantify how human-induced climate change is affecting the regional water cycle, we need to consider novel ways of identifying the effects of natural and anthropogenic influences on precipitation that take full advantage of our physical expectations.
25-Hydroxyvitamin D and pre-clinical alterations in inflammatory and hemostatic markers: a cross sectional analysis in the 1958 British Birth Cohort
BACKGROUND: Vitamin D deficiency has been suggested as a cardiovascular risk factor, but little is known about underlying mechanisms or associations with inflammatory or hemostatic markers. Our aim was to investigate the association of 25-hydroxyvitamin D [25(OH)D, a measure of vitamin D status] concentrations with pre-clinical variations in markers of inflammation and hemostasis. METHODOLOGY/PRINCIPAL FINDINGS: Serum concentrations of 25(OH)D, C-reactive protein (CRP), fibrinogen, D-dimer, tissue plasminogen activator (tPA) antigen, and von Willebrand factor (vWF) were measured in a large population-based study of British whites (aged 45 y). Participants for the current investigation were restricted to individuals free of drug-treated cardiovascular disease (n = 6538). Adjusted for sex and month, 25(OH)D was inversely associated with all outcomes, with marker concentrations lower for participants with 25(OH)D ≥ 75 nmol/l compared with < 25 nmol/l. D-dimer concentrations were lower for participants with 25(OH)D 50-90 nmol/l compared to others (quadratic term p = 0.01). We also examined seasonal variation in hemostatic and inflammatory markers, and evaluated the contribution of 25(OH)D to the observed patterns using mediation models. tPA concentrations varied by season (p = 0.02), and much of this pattern was related to fluctuations in 25(OH)D concentrations (p ≤ 0.001). Some evidence of a seasonal variation was also observed for fibrinogen, D-dimer and vWF (p < 0.05 for all), with 25(OH)D mediating some of the pattern for fibrinogen and D-dimer, but not vWF. CONCLUSIONS: Current vitamin D status was associated with tPA concentrations, and to a lesser degree with fibrinogen and D-dimer, suggesting that vitamin D status/intake may be important for maintaining antithrombotic homeostasis.
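To make the mediation step concrete, here is a minimal, hypothetical sketch (synthetic data, not the cohort's actual models) of the basic logic behind the mediation comparison described above: check whether the seasonal term for a marker such as tPA is attenuated after adjusting for 25(OH)D.

```python
# Illustrative mediation check on synthetic data: season -> 25(OH)D -> tPA.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 6538  # analytic sample size quoted in the abstract
month = rng.integers(1, 13, size=n)
season = np.cos(2 * np.pi * (month - 7) / 12)        # harmonic peaking in summer
vitd = 50 + 15 * season + rng.normal(0, 10, size=n)  # 25(OH)D tracks season
tpa = 10 - 0.05 * vitd + rng.normal(0, 1, size=n)    # tPA depends on 25(OH)D

df = pd.DataFrame({"season": season, "vitd": vitd, "tpa": tpa})

total = smf.ols("tpa ~ season", data=df).fit()          # total seasonal effect
direct = smf.ols("tpa ~ season + vitd", data=df).fit()  # effect net of 25(OH)D
print("seasonal coefficient, total model:  ", round(total.params["season"], 3))
print("seasonal coefficient, with mediator:", round(direct.params["season"], 3))
# A marked attenuation after adjusting for 25(OH)D is consistent with mediation.
```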
Aging brain from a network science perspective: Something to be positive about?
To better understand age differences in brain function and behavior, the current study applied network science to model functional interactions between brain regions. We observed a shift in network topology whereby, for older adults, subcortical and cerebellar structures overlapping with the Salience network had more connectivity to the rest of the brain, coupled with fragmentation of large-scale cortical networks such as the Default and Fronto-Parietal networks. Additionally, greater integration of the dorsal medial thalamus and red nucleus in the Salience network was associated with greater satisfaction with life for older adults, which is consistent with theoretical predictions of age-related increases in emotion regulation that are thought to help maintain well-being and life satisfaction in late adulthood. With regard to cognitive abilities, greater ventral medial prefrontal cortex coherence with its topological neighbors in the Default network was associated with faster processing speed. Results suggest that the large-scale organizing properties of the brain differ with normal aging, and this perspective may offer novel insight into age-related differences in cognitive function and well-being. © 2013 Voss et al.
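As an illustration of the kind of connectivity measure discussed here, the sketch below computes, from a functional-connectivity (correlation) matrix, the mean connectivity of a set of nodes assigned to one network (for example, the Salience network) to all nodes outside it. The node count, time series, and network assignment are arbitrary stand-ins, not the study's atlas or pipeline.

```python
# Rough sketch: "connectivity to the rest of the brain" for one node set,
# computed from a synthetic functional-connectivity matrix.
import numpy as np

rng = np.random.default_rng(2)
n_nodes = 90
ts = rng.normal(size=(200, n_nodes))       # stand-in for regional time series
fc = np.corrcoef(ts, rowvar=False)         # functional-connectivity matrix
np.fill_diagonal(fc, 0.0)                  # ignore self-connections

salience = np.arange(0, 10)                # hypothetical Salience-network nodes
others = np.setdiff1d(np.arange(n_nodes), salience)

# Mean between-network connectivity: average edge weight from each Salience
# node to every node outside the network (one value per node).
between = fc[np.ix_(salience, others)].mean(axis=1)
print("Salience-to-rest connectivity per node:", np.round(between, 3))
print("Network-level mean:", round(between.mean(), 3))
```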
Long-latency modulation of motor cortex excitability by ipsilateral posterior inferior frontal gyrus and pre-supplementary motor area
The primary motor cortex (M1) is strongly influenced by several frontal regions. Dual-site transcranial magnetic stimulation (dsTMS) has highlighted the timing of early (<40 ms) prefrontal/premotor influences over M1. Here we used dsTMS to investigate, for the first time, longer-latency causal interactions of the posterior inferior frontal gyrus (pIFG) and pre-supplementary motor area (pre-SMA) with M1 at rest. A suprathreshold test stimulus (TS) was applied over M1, producing a motor-evoked potential (MEP) in the relaxed hand. Either a subthreshold or a suprathreshold conditioning stimulus (CS) was administered over ipsilateral pIFG/pre-SMA sites before the TS at different CS-TS inter-stimulus intervals (ISIs: 40-150 ms). Independently of intensity, CS over pIFG and pre-SMA (but not over a control site) inhibited MEPs at an ISI of 40 ms. The CS over pIFG produced a second peak of inhibition at an ISI of 150 ms. Additionally, facilitatory modulations were found at an ISI of 60 ms with supra- but not subthreshold CS intensities. These findings suggest differential modulatory roles of pIFG and pre-SMA in M1 excitability. In particular, the pIFG, but not the pre-SMA, exerts intensity-dependent modulatory influences over M1 within the explored time window of 40-150 ms, evidencing fine-tuned control of M1 output.
Genome-wide meta-analysis identifies six novel loci associated with habitual coffee consumption.
Coffee, a major dietary source of caffeine, is among the most widely consumed beverages in the world and has received considerable attention regarding health risks and benefits. We conducted a genome-wide (GW) meta-analysis of predominantly regular-type coffee consumption (cups per day) among up to 91 462 coffee consumers of European ancestry, with top single-nucleotide polymorphisms (SNPs) followed up in ~30 062 and 7964 coffee consumers of European and African-American ancestry, respectively. Studies from both stages were combined in a trans-ethnic meta-analysis. Confirmed loci were examined for putative functional and biological relevance. Eight loci, including six novel loci, met GW significance (log10 Bayes factor (BF) > 5.64) with per-allele effect sizes of 0.03-0.14 cups per day. Six are located in or near genes potentially involved in pharmacokinetics (ABCG2, AHR, POR and CYP1A2) and pharmacodynamics (BDNF and SLC6A4) of caffeine. Two map to GCKR and MLXIPL, genes related to metabolic traits but lacking known roles in coffee consumption. Enhancer and promoter histone marks populate the regions of many confirmed loci, and several potential regulatory SNPs are highly correlated with the lead SNP of each. SNP alleles near GCKR, MLXIPL, BDNF and CYP1A2 that were associated with higher coffee consumption have previously been associated with smoking initiation, higher adiposity and fasting insulin and glucose, but lower blood pressure and favorable lipid, inflammatory and liver enzyme profiles (P < 5 × 10⁻⁸). Our genetic findings among European and African-American adults reinforce the role of caffeine in mediating habitual coffee consumption and may point to molecular mechanisms underlying inter-individual variability in the pharmacological and health effects of coffee.
Setting an Optimal α That Minimizes Errors in Null Hypothesis Significance Tests
Null hypothesis significance testing has been under attack in recent years, partly owing to the arbitrary nature of setting α (the decision-making threshold and probability of Type I error) at a constant value, usually 0.05. If the goal of null hypothesis testing is to present conclusions in which we have the highest possible confidence, then the only logical decision-making threshold is the value that minimizes the probability (or occasionally, the cost) of making errors. Setting α to minimize the combination of Type I and Type II error at a critical effect size can easily be accomplished for traditional statistical tests by calculating the α associated with the minimum average of α and β at the critical effect size. This technique also has the flexibility to incorporate prior probabilities of null and alternative hypotheses and/or relative costs of Type I and Type II errors, if known. Using an optimal α results in stronger scientific inferences because it estimates and minimizes both Type I errors and relevant Type II errors for a test. It also results in greater transparency concerning assumptions about relevant effect size(s) and the relative costs of Type I and II errors. By contrast, the use of α = 0.05 results in arbitrary decisions about which effect sizes will likely be considered significant, if real, and in arbitrary amounts of Type II error for meaningful potential effect sizes. We cannot identify a rationale for continuing to use α = 0.05 arbitrarily for null hypothesis significance tests in any field when it is possible to determine an optimal α.
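As a sketch of the procedure this abstract describes, the code below finds, for a two-sided two-sample z-test, the α that minimizes the average of α and β at a chosen critical effect size. The effect size and sample size are illustrative, and equal weights are used; prior probabilities or error costs could replace the simple average in the objective.

```python
# Grid search for the alpha minimizing (alpha + beta) / 2 at a critical
# effect size, for a two-sided two-sample z-test (illustrative parameters).
import numpy as np
from scipy.stats import norm

def optimal_alpha(d_critical: float, n_per_group: int,
                  grid=np.linspace(1e-4, 0.5, 5000)):
    """Return (alpha, beta) minimizing the average of alpha and beta."""
    se = np.sqrt(2.0 / n_per_group)       # SE of the standardized mean difference
    ncp = d_critical / se                 # noncentrality at the critical effect size
    z_crit = norm.ppf(1 - grid / 2)       # two-sided critical values for each alpha
    # Type II error: probability the test statistic stays inside (-z_crit, z_crit)
    beta = norm.cdf(z_crit - ncp) - norm.cdf(-z_crit - ncp)
    avg_error = (grid + beta) / 2         # equal weights; costs/priors could go here
    i = np.argmin(avg_error)
    return grid[i], beta[i]

alpha_opt, beta_opt = optimal_alpha(d_critical=0.5, n_per_group=50)
print(f"optimal alpha = {alpha_opt:.4f}, beta at that alpha = {beta_opt:.4f}")
```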
