
    Probabilistic Causality And The Foundations Of Modern Science

    Van Fraassen, in The Scientific Image, uses Reichenbach's Principle of Common Cause to state the realist's position on scientific research strategies. Van Fraassen then claims, on the basis of his version of Bell's argument, that quantum mechanics is a counterexample to the doctrine of realism. This construal of realism is not adequate, and a more adequate formulation is developed in the first chapter of this thesis. The essential strategy of science, it is argued, is to aim for a greater unification of its concepts. An attempt is made in chapter 1 to capture this notion using the formal ideas of the so-called semantic view of theories as developed by Sneed and others. The result is that unification should be construed as the achievement of simple connections across theory applications (formally construed by Sneed as constraints on classes of models). Less formally, the idea is developed as cross-situational invariance, robustness or resiliency. Two crucial difficulties with probabilistic causality are identified in chapter 2. One is the old problem of capturing the temporal asymmetry of cause, while the other is a problem of definability when conditional probabilities are 0 or 1. A better theory of causality is developed in chapter 3 under the guidance of the realist principles as explicated in chapter 1. This theory follows the basic ideas of von Wright's manipulability account of causality, or the intervention account (after H. Simon). The method of path analysis is thereby re-interpreted. Finally, van Fraassen's original charge that quantum mechanics is a counterexample to realism is confronted in chapter 5, after formulating some of the conceptual difficulties with quantum statistics in a general way in chapter 4. The realist must give up the principle of locality, but it is argued that this need not be a high price for the realist. For it is possible to represent the quantum phenomena in terms of a model of two-way causation that achieves the cross-situational invariance desired by the realist without violating the relativistic taboo on causal action along space-like paths.
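
    For reference, the screening-off condition at the heart of Reichenbach's Principle of Common Cause, in its standard textbook formulation (not necessarily the notation used in the thesis itself): a common cause C of a correlation P(A \wedge B) > P(A)\,P(B) is required to satisfy

        P(A \wedge B \mid C) = P(A \mid C)\, P(B \mid C)
        P(A \wedge B \mid \neg C) = P(A \mid \neg C)\, P(B \mid \neg C)
        P(A \mid C) > P(A \mid \neg C), \qquad P(B \mid C) > P(B \mid \neg C)

    The definability problem identified in chapter 2 concerns precisely these conditional probabilities in the limiting cases where they take the values 0 or 1.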

    Nitrous oxide and methane in the Atlantic Ocean between 50 degrees North and 52 degrees South: Latitudinal distribution and sea-to-air flux

    We discuss nitrous oxide (N2O) and methane (CH4) distributions in 49 vertical profiles covering the upper 300 m of the water column along two 13,500 km transects between 50°N and 52°S during the Atlantic Meridional Transect (AMT) programme (AMT cruises 12 and 13). Vertical N2O profiles were amenable to analysis on the basis of common features coincident with Longhurst provinces. In contrast, CH4 showed no such pattern. The most striking feature of the latitudinal depth distributions was a well-defined “plume” of exceptionally high N2O concentrations coincident with very low levels of CH4, located between 23.5°N and 23.5°S; this feature reflects the upwelling of deep waters containing N2O derived from nitrification, as identified by an analysis of N2O, apparent oxygen utilization (AOU) and NO3⁻, and presumably depleted in CH4 by bacterial oxidation. Sea-to-air emission fluxes for a region equivalent to 42% of the Atlantic Ocean surface area were in the range 0.40–0.68 Tg N2O yr⁻¹ and 0.81–1.43 Tg CH4 yr⁻¹. Based on contemporary estimates of the global ocean source strengths of atmospheric N2O and CH4, the Atlantic Ocean could account for 6–15% and 4–13%, respectively, of these source totals. Given that the Atlantic Ocean accounts for around 20% of the global ocean surface, on a unit area basis it appears that the Atlantic may be a slightly weaker source of atmospheric N2O than other ocean regions, but it could make a somewhat larger contribution to marine-derived atmospheric CH4 than previously thought.
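
    As a rough illustration of how sea-to-air fluxes of this kind are typically estimated (a minimal Python sketch using a generic quadratic wind-speed gas-transfer parameterization; the wind products, transfer-velocity coefficients and solubility data actually used on AMT 12 and 13 are not given in the abstract, so every number below is a placeholder):

        # Minimal sketch: sea-to-air gas flux F = k * (Cw - Ceq), where Cw is the
        # measured dissolved concentration and Ceq the concentration in equilibrium
        # with the overlying atmosphere. All inputs are illustrative.

        def gas_transfer_velocity(u10_m_s: float, schmidt_number: float) -> float:
            """Quadratic wind-speed parameterization, in cm/h (the coefficient 0.251
            is one published choice, used here only as an example)."""
            return 0.251 * u10_m_s ** 2 * (660.0 / schmidt_number) ** 0.5

        def sea_to_air_flux(c_water_nmol_per_l: float, c_equil_nmol_per_l: float,
                            u10_m_s: float, schmidt_number: float) -> float:
            """Flux in umol m^-2 d^-1; positive values are ocean-to-atmosphere."""
            k_m_per_d = gas_transfer_velocity(u10_m_s, schmidt_number) * 0.01 * 24.0
            delta_c = c_water_nmol_per_l - c_equil_nmol_per_l  # nmol/L == umol/m^3
            return k_m_per_d * delta_c

        # Hypothetical tropical N2O supersaturation
        print(sea_to_air_flux(9.0, 6.5, u10_m_s=7.0, schmidt_number=600.0))

    Integrating per-area fluxes like this over the surveyed region and converting by molar mass is what yields regional totals in Tg yr⁻¹.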

    Mediated amperometric immunosensing using single walled carbon nanotube forests

    A prototype amperometric immunosensor was evaluated based on the adsorption of antibodies onto perpendicularly oriented assemblies of single wall carbon nanotubes called SWNT forests. The forests were self-assembled from oxidatively shortened SWNTs onto Nafion/iron oxide coated pyrolytic graphite electrodes. The nanotube forests were characterized using atomic force microscopy and resonance Raman spectroscopy. Anti-biotin antibody strongly adsorbed to the SWNT forests. In the presence of a soluble mediator, the detection limit for horseradish peroxidase (HRP) labeled biotin was 2.5 pmol ml⁻¹ (2.5 nM). Unlabelled biotin was detected in a competitive approach with a detection limit of 16 nmol ml⁻¹ (16 μM) and a relative standard deviation of 12%. The immunosensor showed low non-specific adsorption of biotin-HRP (approx. 0.1%) when blocked with bovine serum albumin. This immunosensing approach using high surface area, patternable, conductive SWNT assemblies may eventually prove useful for nano-biosensing arrays.
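
    As an illustration of how detection limits of this kind are commonly estimated from an amperometric calibration curve (a generic 3-sigma sketch with made-up numbers, not the calibration data reported in the paper):

        import numpy as np

        # Hypothetical calibration: HRP-biotin concentration (nM) vs. mediated
        # catalytic current (nA). Values are illustrative only.
        conc_nM = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
        current_nA = np.array([1.1, 4.0, 7.2, 15.9, 30.8, 61.5])

        slope, intercept = np.polyfit(conc_nM, current_nA, 1)  # linear fit

        blank_sd_nA = 0.5  # standard deviation of repeated blank measurements

        # Common 3-sigma criterion: lowest concentration whose signal exceeds
        # the blank by three standard deviations of the blank noise.
        lod_nM = 3.0 * blank_sd_nA / slope
        print(f"slope = {slope:.2f} nA/nM, detection limit ~ {lod_nM:.1f} nM")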

    Translation of evidence-based Assistive Technologies into stroke rehabilitation: Users' perceptions of the barriers and opportunities

    Background: Assistive Technologies (ATs), defined as "electrical or mechanical devices designed to help people recover movement", demonstrate clinical benefits in upper limb stroke rehabilitation; however, translation into clinical practice is poor. Uptake is dependent on a complex relationship between all stakeholders. Our aim was to understand patients', carers' (P&Cs) and healthcare professionals' (HCPs) experience and views of upper limb rehabilitation and ATs, to identify barriers and opportunities critical to the effective translation of ATs into clinical practice. This work was conducted in the UK, which has a state-funded healthcare system, but the findings have relevance to all healthcare systems. Methods: Two structurally comparable questionnaires, one for P&Cs and one for HCPs, were designed, piloted and completed anonymously. Wide distribution of the questionnaires provided data from HCPs with experience of stroke rehabilitation and P&Cs who had experience of stroke. Questionnaires were designed based on themes identified from four focus groups held with HCPs and P&Cs, and piloted with a sample of HCPs (N = 24) and P&Cs (N = 8), eight of whom (four HCPs and four P&Cs) had been involved in the development. Results: 292 HCP and 123 P&C questionnaires were analysed. 120 (41%) of HCP and 79 (64%) of P&C respondents had never used ATs. Most views were common to both groups, citing lack of information and access to ATs as the main reasons for not using them. Both HCPs (N = 53 [34%]) and P&Cs (N = 21 [47%]) cited Functional Electrical Stimulation (FES) as the most frequently used AT. Research evidence was rated by HCPs as the most important factor in the design of an ideal technology, yet the ATs they used or prescribed were not supported by research evidence. P&Cs rated ease of set-up and comfort more highly. Conclusion: Key barriers to the translation of ATs into clinical practice are lack of knowledge, education, awareness and access. Perceptions about arm rehabilitation post-stroke are similar between HCPs and P&Cs. Based on our findings, improvements in AT design, pragmatic clinical evaluation, better knowledge and awareness, and improvement in provision of services will contribute to better and cost-effective upper limb stroke rehabilitation. © 2014 Hughes et al.; licensee BioMed Central Ltd.

    Cold War Fictions

    This chapter offers a detailed reading of McEwan’s 2012 novel Sweet Tooth as a highly self-conscious and allusive literary spy thriller of the Cold War era, one which invites renewed attention to the Cold War themes, ideas and literary strategies that have been important in his work since the late 1970s, the period in which the novel is set. These flourished especially in the two novels written around the fall of the Berlin Wall, The Innocent and Black Dogs, which also receive extended treatment here. In McEwan’s reworking of the Cold War spy thriller as postmodern literary fiction we find, it is argued, a recurrent fascination with misunderstandings and readjustments in emotional and political relations between the sexes as an analogy for Cold War politics, and vice versa. Added to this, McEwan increasingly packs his fictions with informed literary debate that constitutes a profound exploration of literary genres and of the complex relationship between author and reader.

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background: A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods: Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and the single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall’s tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results: A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6 to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was found to be a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion: We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and it can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
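
    To illustrate the kind of discrimination analysis described here (a minimal sketch with synthetic data, assuming scikit-learn is available; the grade-specific event rates are invented and are not the CholeS figures):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)

        # Synthetic cohort: Nassar-style difficulty grades 1-5 for 1000 patients,
        # with a binary outcome (e.g. conversion to open surgery) whose risk
        # rises with grade. Numbers are illustrative only.
        grades = rng.integers(1, 6, size=1000)
        risk_by_grade = {1: 0.01, 2: 0.02, 3: 0.06, 4: 0.20, 5: 0.45}
        converted = rng.random(1000) < np.array([risk_by_grade[int(g)] for g in grades])

        # The ordinal grade itself is used as the predictive score
        print("AUROC:", roc_auc_score(converted, grades))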

    A highly mutagenised barley (cv. Golden Promise) TILLING population coupled with strategies for screening-by-sequencing

    Background: We developed and characterised a highly mutagenised TILLING population of the barley (Hordeum vulgare) cultivar Golden Promise. Golden Promise is the 'reference' genotype for barley transformation, and a primary objective of using this cultivar was to be able to genetically complement observed mutations directly in order to prove gene function. Importantly, a reference genome assembly of Golden Promise has also recently been developed. As our primary interest was to identify mutations in genes involved in meiosis and recombination, to characterise the population we focused on a set of 46 genes from the literature that are possible meiosis gene candidates. Results: Sequencing 20 plants from the population using whole exome capture revealed that the mutation density in this population is high (one mutation every 154 kb), and consequently even in this small number of plants we identified several interesting mutations. We also recorded some issues with seed availability and germination. We subsequently designed and applied a simple two-dimensional pooling strategy to identify mutations in varying numbers of specific target genes by Illumina short-read pooled-amplicon sequencing and subsequent deconvolution. In parallel we assembled a collection of semi-sterile mutants from the population and used a custom exome capture array targeting the 46 candidate meiotic genes to identify potentially causal mutations. Conclusions: We developed a highly mutagenised barley TILLING population in the transformation-competent cultivar Golden Promise. We used novel and cost-efficient screening approaches to successfully identify a broad range of potentially deleterious variants that were subsequently validated by Sanger sequencing. These resources, combined with a high-quality genome reference sequence, open new possibilities for efficient functional gene validation.
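
    The two-dimensional pooling idea can be sketched as follows (a simplified illustration with hypothetical plant identifiers; the actual plate layout, pool sizes and variant-calling pipeline are not detailed in the abstract): each plant contributes DNA to exactly one row pool and one column pool, so a variant called in a single row pool and a single column pool points to the plant at their intersection.

        # Minimal sketch of 2D pool deconvolution for pooled-amplicon sequencing.

        def deconvolve(grid, positive_rows, positive_cols):
            """Return candidate plants for a variant seen in the given pools.
            The call is unambiguous only when exactly one row pool and one
            column pool are positive for the variant."""
            return [grid[r][c] for r in positive_rows for c in positive_cols]

        # Hypothetical 4 x 4 plate of mutagenised Golden Promise lines
        grid = [[f"GP_M{r}{c}" for c in range(4)] for r in range(4)]

        print(deconvolve(grid, positive_rows=[2], positive_cols=[1]))
        # -> ['GP_M21']  (single candidate)

        print(deconvolve(grid, positive_rows=[0, 3], positive_cols=[1]))
        # -> ['GP_M01', 'GP_M31']  (ambiguous; needs follow-up genotyping)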

    [Comment] Redefine statistical significance

    The lack of reproducibility of scientific studies has caused growing concern over the credibility of claims of new discoveries based on “statistically significant” findings. There has been much progress toward documenting and addressing several causes of this lack of reproducibility (e.g., multiple testing, P-hacking, publication bias, and under-powered studies). However, we believe that a leading cause of non-reproducibility has not yet been adequately addressed: statistical standards of evidence for claiming discoveries in many fields of science are simply too low. Associating “statistically significant” findings with P < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural and reporting problems. For fields where the threshold for defining statistical significance is P < 0.05, we propose a change to P < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields. Results that would currently be called “significant” but do not meet the new threshold should instead be called “suggestive.” While statisticians have long known the relative weakness of using P ≈ 0.05 as a threshold for discovery, and the proposal to lower it to 0.005 is not new (1, 2), a critical mass of researchers now endorse this change. We restrict our recommendation to claims of discovery of new effects. We do not address the appropriate threshold for confirmatory or contradictory replications of existing claims. We also do not advocate changes to discovery thresholds in fields that have already adopted more stringent standards (e.g., genomics and high-energy physics research; see Potential Objections below). We also restrict our recommendation to studies that conduct null hypothesis significance tests. We have diverse views about how best to improve reproducibility, and many of us believe that other ways of summarizing the data, such as Bayes factors or other posterior summaries based on clearly articulated model assumptions, are preferable to P-values. However, changing the P-value threshold is simple and might quickly achieve broad acceptance.
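
    A back-of-the-envelope sketch of the kind of calculation that motivates the proposal (a simple screening model with assumed prior odds and power, not the Bayes-factor analysis developed in the comment itself):

        # Share of "significant" results that are false positives when a fraction
        # `prior_true` of tested hypotheses are real effects, real effects are
        # detected with probability `power`, and nulls cross the threshold with
        # probability alpha. All inputs are assumptions for illustration.

        def false_positive_share(alpha: float, power: float, prior_true: float) -> float:
            false_pos = alpha * (1.0 - prior_true)
            true_pos = power * prior_true
            return false_pos / (false_pos + true_pos)

        # Assumed: prior odds of a true effect 1:10, power 0.8
        for alpha in (0.05, 0.005):
            share = false_positive_share(alpha, power=0.8, prior_true=1 / 11)
            print(f"alpha = {alpha}: ~{share:.0%} of significant findings are false positives")

    Under these assumed numbers, lowering the threshold from 0.05 to 0.005 cuts the false positive share among declared discoveries from roughly 38% to about 6%.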