
    Reason is King and Science is his Crown: A Study of French Science-Fiction for the Dissemination of Philosophical Thought

    This thesis explores the didactic use of French science-fiction during the seventeenth and eighteenth centuries for the portrayal and dissemination of each period's philosophical theories. Studying science-fiction novels from these centuries allows a comparison of seventeenth- and eighteenth-century dissemination methods, to determine whether the foundational seventeenth-century methods were retained or were modified to reflect the change in philosophical attitudes. Exploration of this topic contributes to a greater understanding of French Enlightenment theory, offers analysis of relatively unstudied novels in the science-fiction genre, and proposes a novel approach to “proto” science-fiction literature by connecting the previously separate genres of science-fiction and philosophy during the Enlightenment. The trends within the seventeenth century show dominant authoritative representations through analogical examples, authoritative ideological figures, and an emphasis on logically sustained arguments. The eighteenth-century trends focus on logical yet passionate attitudes, burlesque scenarios, and authoritative actions that exemplify Enlightenment ideologies. The five œuvres analysed therefore show a conservation of didactic and authoritative dissemination methods during this philosophically evolutionary period.

    Sequential Implementation of Monte Carlo Tests with Uniformly Bounded Resampling Risk

    This paper introduces an open-ended sequential algorithm for computing the p-value of a test using Monte Carlo simulation. It guarantees that the resampling risk, the probability of reaching a different decision than the one based on the theoretical p-value, is uniformly bounded by an arbitrarily small constant. Previously suggested sequential or non-sequential algorithms using a bounded sample size do not have this property. Although the algorithm is open-ended, the expected number of steps is finite, except when the p-value lies exactly on the threshold between rejecting and not rejecting. The algorithm is suitable as a standard for implementing tests that require (re-)sampling. It can also be used in other situations: to check whether a test is conservative, to iteratively implement double-bootstrap tests, and to determine the sample size required for a certain power.
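    The abstract does not give the paper's boundary construction, but the general flavour of a sequential Monte Carlo p-value can be sketched as follows. The function name, batch size, and the use of a Clopper-Pearson interval as a stopping rule are illustrative choices of ours; in particular, this naive stopping rule does not deliver the paper's uniform bound on the resampling risk.

```python
import numpy as np
from scipy import stats

def sequential_mc_pvalue(stat_obs, simulate_stat, alpha=0.05, eps=1e-3,
                         batch=100, max_sims=1_000_000, rng=None):
    """Sequential Monte Carlo p-value with an early-stopping rule.

    `simulate_stat(rng)` is an assumed user-supplied callable returning one
    test statistic simulated under the null.  After each batch we form a
    Clopper-Pearson interval for the true p-value and stop once the
    interval lies entirely below or above `alpha`, i.e. once the
    accept/reject decision is resolved.  (The paper's algorithm uses a
    different boundary construction to bound the resampling risk uniformly.)
    """
    rng = np.random.default_rng(rng)
    exceed, n = 0, 0
    while n < max_sims:
        sims = np.array([simulate_stat(rng) for _ in range(batch)])
        exceed += int(np.sum(sims >= stat_obs))
        n += batch
        lo = stats.beta.ppf(eps / 2, exceed, n - exceed + 1) if exceed > 0 else 0.0
        hi = stats.beta.ppf(1 - eps / 2, exceed + 1, n - exceed) if exceed < n else 1.0
        if hi < alpha or lo > alpha:
            break
    return (exceed + 1) / (n + 1), n  # p-value estimate and simulations used
```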

    State-dependent kernel selection for conditional sampling of graphs

    This article introduces new efficient algorithms for two problems: sampling graphs conditional on vertex degrees (unweighted graphs) and conditional on vertex strengths (weighted graphs). The resulting conditional distributions provide the basis for exact tests on social networks and two-way contingency tables. The algorithms are able to sample conditional on the presence or absence of an arbitrary set of edges. Existing samplers based on MCMC or sequential importance sampling are generally not scalable; their efficiency can degrade in large graphs with complex patterns of known edges. MCMC methods usually require explicit computation of a Markov basis to navigate the state space, which is computationally intensive even for small graphs. Our samplers do not require a Markov basis and are efficient in both sparse and dense settings. The key idea is to select a Markov kernel carefully on the basis of the current state of the chain. We demonstrate the utility of our methods on a real network and a contingency table. Supplementary materials for this article are available online.
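    For orientation, here is a minimal sketch of the classical degree-preserving double edge swap that MCMC samplers of this kind build on. The function and its rejection rules are illustrative only; they do not implement the paper's state-dependent kernel selection or its conditioning on known edges.

```python
import random

def double_edge_swap(edge_list, n_steps, seed=None):
    """Degree-preserving double edge swap MCMC on a simple undirected graph.

    `edge_list` holds (u, v) tuples.  Each step picks two distinct edges
    (a, b) and (c, d) and proposes rewiring them to (a, d) and (c, b);
    proposals creating a self-loop or a duplicate edge are rejected, so
    all vertex degrees stay exactly as they are.
    """
    rng = random.Random(seed)
    edges = [tuple(sorted(e)) for e in edge_list]
    edge_set = set(edges)
    for _ in range(n_steps):
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if a == d or c == b or new1 == new2 or new1 in edge_set or new2 in edge_set:
            continue  # reject: would create a self-loop or duplicate edge
        edge_set.difference_update((edges[i], edges[j]))
        edge_set.update((new1, new2))
        edges[i], edges[j] = new1, new2
    return edges
```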

    spcadjust: an R package for adjusting for estimation error in control charts

    In practical applications of control charts, the in-control state and the corresponding chart parameters are usually estimated from past in-control data, and the resulting estimation error needs to be accounted for. In this paper we present an R package, spcadjust, which implements a bootstrap-based method for adjusting monitoring schemes to take this estimation error into account. By bootstrapping the past data, the method guarantees, with a certain probability, a given conditional performance of the chart. In spcadjust the method is implemented for various types of Shewhart, CUSUM and EWMA charts, various performance criteria, and both parametric and non-parametric bootstrap schemes. In addition to the basic charts, charts based on linear and logistic regression models for risk-adjusted monitoring are included, and it is easy for the user to add further charts. Use of the package is demonstrated by examples.
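    spcadjust itself is an R package; the Python sketch below only illustrates the underlying bootstrap-adjustment idea for a one-sided Shewhart chart with a per-observation false-alarm criterion under a normality assumption. The function name and all details are ours, not the package's API.

```python
import numpy as np
from scipy import stats

def adjusted_shewhart_limit(x, alpha=0.001, coverage=0.9, n_boot=2000, rng=None):
    """Bootstrap-adjusted multiplier for a one-sided Shewhart chart.

    The chart signals when a new observation exceeds mean + c * sd, with
    mean and sd estimated from past in-control data `x`.  The naive choice
    c = z_{1-alpha} ignores estimation error.  Here we treat the estimates
    as the truth, re-estimate from bootstrap resamples, compute the
    multiplier each resample would have needed to hit the nominal
    false-alarm rate alpha, and return the `coverage` quantile of those
    multipliers as the adjusted c.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    mu_hat, sd_hat = x.mean(), x.std(ddof=1)
    z = stats.norm.ppf(1 - alpha)
    needed = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        mu_b, sd_b = xb.mean(), xb.std(ddof=1)
        # multiplier so that mu_b + c * sd_b reaches the "true" alpha-quantile
        needed[b] = (mu_hat + z * sd_hat - mu_b) / sd_b
    c_adj = np.quantile(needed, coverage)
    return mu_hat, sd_hat, c_adj  # signal when an observation > mu_hat + c_adj * sd_hat
```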

    Impact of established clubs on probability of survival in top leagues

    Football leagues across the world apply the European promotion-relegation model, in which the best teams in the highest-ranking minor league are promoted to the major league, and the worst teams in the major league are relegated in return. This paper proposes a simple statistical model that calculates the probability of non-established clubs avoiding relegation, by assuming the existence of a cohort of established clubs which are rarely, if ever, relegated. It uses three data items: the total number of clubs, the number of established clubs and the number of clubs relegated each season. It is the number of established clubs that is critical, rather than which clubs are so categorised. For illustrative purposes the model was applied to the first twenty-one seasons of the English Football Premier League (EFPL). The means of the model and the observed distributions for the key EFPL cohorts of seven and eleven established clubs were not statistically significantly different, suggesting that the model reasonably reflects the observed distribution for each size of established group. Moreover, the probability of a non-established club surviving eight seasons, assuming no established clubs, was 600% greater than if there were a cohort of eleven established EFPL clubs. This demonstrates that the projected probability of survival will be greatly overestimated unless a cohort of established clubs is assumed. Any club in such a major league, particularly a newly promoted one, can use this model to calculate the probability of avoiding relegation and thereby obtain a more sensitive assessment of risk.
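    The abstract does not spell out the model, but a minimal reading consistent with the quoted figures is that the relegation places fall uniformly at random among the non-established clubs each season, independently across seasons. Under that assumption the survival probability is a simple power, as in the hypothetical sketch below, which roughly reproduces the "600% greater" comparison for a 20-club league relegating three clubs per season.

```python
def survival_probability(total_clubs, established, relegated_per_season, seasons):
    """P(a non-established club avoids relegation for `seasons` seasons),
    assuming the relegation places fall uniformly at random among the
    non-established clubs and independently from season to season."""
    at_risk = total_clubs - established
    return (1 - relegated_per_season / at_risk) ** seasons

# 20-club league, 3 relegated per season, survival over 8 seasons
p_none = survival_probability(20, 0, 3, 8)     # ~0.27 with no established clubs
p_eleven = survival_probability(20, 11, 3, 8)  # ~0.04 with eleven established clubs
print(round(p_none / p_eleven, 1))             # ~7.0, i.e. roughly 600% greater
```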

    Importance subsampling: Improving power system planning under climate-based uncertainty

    Recent studies indicate that the effects of inter-annual climate-based variability in power system planning are significant and that long samples of demand and weather data (spanning multiple decades) should be considered. At the same time, modelling renewable generation such as solar and wind requires high temporal resolution to capture fluctuations in output levels. In many realistic power system models, using long samples at high temporal resolution is computationally infeasible. This paper introduces a novel subsampling approach, referred to as importance subsampling, allowing the use of multiple decades of demand and weather data in power system planning models at reduced computational cost. The methodology can be applied to a wide class of optimisation-based power system simulations. A test case is performed on a model of the United Kingdom created with the open-source modelling framework Calliope and 36 years of hourly demand and wind data. Standard data-reduction approaches, such as using individual years or clustering into representative days, lead to significant errors in estimates of optimal system design. Furthermore, the resulting power systems suffer supply capacity shortages, raising questions of generation capacity adequacy. In contrast, importance subsampling yields accurate estimates of optimal system design at greatly reduced computational cost, with the resulting power systems able to meet demand across all 36 years of demand and weather scenarios.
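    The precise importance-subsampling scheme is not described in the abstract; the sketch below only conveys the general idea of over-representing difficult hours and reweighting the sample, using net demand (demand minus wind output) as an assumed importance measure. All names and parameters are illustrative.

```python
import numpy as np

def importance_subsample(net_demand, n_sample, extreme_frac=0.2, rng=None):
    """Pick `n_sample` hours for a planning model, over-representing the
    hours with the highest net demand and attaching to each sampled hour
    the number of original hours it represents, so that weighted
    statistics still match the full record.
    """
    rng = np.random.default_rng(rng)
    net_demand = np.asarray(net_demand, dtype=float)
    n_total = net_demand.size
    order = np.argsort(net_demand)[::-1]            # hours sorted by net demand
    n_extreme = int(extreme_frac * n_sample)
    extreme_idx = order[:n_extreme]                 # most demanding hours kept as-is
    rest_idx = rng.choice(order[n_extreme:], size=n_sample - n_extreme, replace=False)
    idx = np.concatenate([extreme_idx, rest_idx])
    weights = np.empty(n_sample)
    weights[:n_extreme] = 1.0                       # each extreme hour counts once
    weights[n_extreme:] = (n_total - n_extreme) / (n_sample - n_extreme)
    return idx, weights                             # weights sum to n_total
```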

    A Bayesian methodology for systemic risk assessment in financial networks

    We develop a Bayesian methodology for systemic risk assessment in financial networks such as the interbank market. Nodes represent participants in the network and weighted directed edges represent liabilities. Often, for every participant, only the total liabilities and total assets within this network are observable, whereas systemic risk assessment needs the individual liabilities. We propose a model for the individual liabilities which, following a Bayesian approach, we condition on the observed total liabilities and assets and, potentially, on certain observed individual liabilities. We construct a Gibbs sampler to generate samples from this conditional distribution. These samples can be used in stress testing, giving probabilities for the outcomes of interest. As one application, we derive default probabilities of individual banks and discuss their sensitivity with respect to the prior information included in the network model. An R package implementing the methodology is provided.
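    The paper's Gibbs sampler targets a specific Bayesian model; the sketch below shows only one generic ingredient such a sampler can use, namely updates that move liability mass around a cycle of four entries so that every bank's total liabilities (row sums) and total assets (column sums) are preserved. The flat conditional for the shift is a placeholder, not the paper's model.

```python
import numpy as np

def margin_preserving_sweep(L, n_moves, rng=None):
    """Random updates on a liabilities matrix that leave every row sum
    (total liabilities) and column sum (total assets) unchanged.

    Each move picks debtors i != k and creditors j != l (skipping the
    diagonal), then shifts an amount d around the cycle
        L[i, j] += d,  L[k, l] += d,  L[i, l] -= d,  L[k, j] -= d,
    with d drawn uniformly over the range keeping all entries >= 0.
    """
    rng = np.random.default_rng(rng)
    n = L.shape[0]
    L = L.copy()
    for _ in range(n_moves):
        i, k = rng.choice(n, size=2, replace=False)
        j, l = rng.choice(n, size=2, replace=False)
        if i == j or i == l or k == j or k == l:
            continue  # would touch a diagonal (self-liability) entry
        lo = -min(L[i, j], L[k, l])
        hi = min(L[i, l], L[k, j])
        if hi <= lo:
            continue  # no room to move mass on this cycle
        d = rng.uniform(lo, hi)
        L[i, j] += d; L[k, l] += d
        L[i, l] -= d; L[k, j] -= d
    return L
```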

    Compound Poisson models for weighted networks with applications in finance

    We develop a modelling framework for estimating and predicting weighted network data. The edge weights in weighted networks often arise from aggregating individual relationships between the nodes. Motivated by this, we introduce a modelling framework for weighted networks based on the compound Poisson distribution. To allow for heterogeneity between the nodes, we use a regression approach for the model parameters. We test the new framework on two types of financial networks: a network of financial institutions in which the edge weights represent exposures from trading Credit Default Swaps, and a network of countries in which the edge weights represent cross-border lending. The compound Poisson-Gamma distributions with regression fit the data well in both situations. We illustrate how this framework can be used to predict unobserved edges and their weights in a partially observed network, which is relevant, for example, for assessing systemic risk in financial networks.
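    A compound Poisson-Gamma edge weight is easy to simulate, which makes the modelling idea concrete: the number of underlying individual exposures is Poisson, each exposure is Gamma, and their sum is the observed weight (zero when the count is zero, so absent edges receive positive probability). The regression stub for the Poisson rate uses placeholder covariates and coefficients, not quantities from the paper.

```python
import numpy as np

def compound_poisson_gamma_weight(lam, shape, scale, rng=None):
    """One edge weight: a Poisson(lam) number of individual exposures,
    each an independent Gamma(shape, scale) amount, summed.  A zero
    count yields weight 0, i.e. no edge."""
    rng = np.random.default_rng(rng)
    return rng.gamma(shape, scale, size=rng.poisson(lam)).sum()

def edge_rate(x_i, x_j, beta0=-1.0, beta1=0.5):
    """Schematic log-link regression for the Poisson rate of the edge
    between nodes with hypothetical covariates x_i and x_j; the
    coefficients are placeholders, not estimates from the paper."""
    return np.exp(beta0 + beta1 * (x_i + x_j))
```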

    Biodesalination: an emerging technology for targeted removal of Na+ and Cl− from seawater by cyanobacteria

    Although desalination by membrane processes is a possible solution to the problem of freshwater supply, the related cost and energy demands prohibit its use on a global scale. Hence, there is an emerging need for alternative, energy- and cost-efficient methods of water desalination. Cyanobacteria are oxygen-producing, photosynthetic bacteria that actively grow in vast blooms in both fresh and seawater bodies. Moreover, cyanobacteria can grow with minimal nutrient requirements and under natural sunlight. Taking these observations together, a consortium of five British universities was formed to test the principle of using cyanobacteria as ion exchangers for the specific removal of Na+ and Cl− from seawater. The project began with the isolation and characterisation of candidate strains, with a central focus on their potential to be osmotically and ionically adaptable. The selection panel resulted in the identification of two euryhaline strains, one of freshwater origin (Synechocystis sp. strain PCC 6803) and one of marine origin (Synechococcus sp. strain PCC 7002) (Robert Gordon University, Aberdeen). Other work packages were as follows. Genetic manipulations potentially allowed the expression of a light-driven, Cl−-selective pump in both strains, thereby enhancing the bioaccumulation of specific ions within the cell (University of Glasgow). Characterisation of surface properties under different salinities (University of Sheffield) ensured that cell–liquid separation efficiency would be maximised post-treatment; the secretion of mucopolysaccharides into the medium during cell growth was also monitored. Work at Newcastle University focused on the social acceptance of this scenario, together with an assessment of the potential risks through the generation and application of a Hazard Analysis and Critical Control Points plan. Finally, researchers at Imperial College London designed the process, from biomass production to water treatment, and developed a model photobioreactor. This multimodal approach has produced promising first results, and further optimisation is expected to lead to mass scaling of the process.