383 research outputs found

    The Blockchain as a Narrative Technology: Investigating the Social Ontology and Normative Configurations of Cryptocurrencies

    In this paper, we engage in a philosophical investigation of how blockchain technologies such as cryptocurrencies can mediate our social world. Emerging blockchain-based decentralised applications have the potential to transform our financial system, our bureaucracies and our models of governance. We construct an ontological framework of "narrative technologies" that allows us to show how these technologies, like texts, can configure our social reality. Drawing on the work of Ricoeur, and responding to the work of Searle as well as to work in postphenomenology and STS, we show how blockchain technologies bring about a process of emplotment: an organisation of characters and events. First, we show how blockchain technologies actively configure plots such as financial transactions by rendering them increasingly rigid. Second, we show how they configure abstractions from the world of action by replacing human interactions with automated code. Third, we investigate the role of people's interpretative distances towards blockchain technologies, discussing the importance of greater public involvement with their application in different realms of social life.


    Modeling association between DNA copy number and gene expression with constrained piecewise linear regression splines

    DNA copy number and mRNA expression are widely used data types in cancer studies, which, when combined, provide more insight than either does separately. Whereas the existing literature fixes the form of the relationship between these two markers a priori, in this paper we model their association. We employ piecewise linear regression splines (PLRS), which combine good interpretability with sufficient flexibility to identify any plausible type of relationship. The specification of the model leads to estimation and model selection in a constrained, nonstandard setting. We provide methodology for testing the effect of DNA on mRNA and for choosing the appropriate model. Furthermore, we present a novel approach to obtaining reliable confidence bands for constrained PLRS, which incorporates model uncertainty. The procedures are applied to colorectal and breast cancer data. Common assumptions are found to be potentially misleading for biologically relevant genes; more flexible models may bring more insight into the interaction between the two markers. Comment: Published at http://dx.doi.org/10.1214/12-AOAS605 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
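    The constrained-spline idea can be made concrete with a small sketch. The snippet below is a minimal illustration, not the PLRS methodology itself: it fits a one-knot piecewise linear model of expression on copy number under a non-decreasing constraint. The knot location, the simulated data and the choice of constraint are all assumptions for illustration; the paper additionally handles model selection, testing and confidence bands.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Simulated stand-ins for one gene: x = DNA copy number (log2 ratio),
# y = mRNA expression. Real data would replace these arrays.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.8, 120)
y = (1.0 + 0.9 * np.minimum(x, 0.5) + 0.2 * np.maximum(x - 0.5, 0.0)
     + rng.normal(0.0, 0.3, 120))

knot = 0.5  # assumed knot location (PLRS also selects this)
# Parameterize by per-segment slopes so that simple box constraints
# (both slopes >= 0) enforce a non-decreasing piecewise linear fit.
A = np.column_stack([np.ones_like(x),
                     np.minimum(x, knot),         # slope left of the knot
                     np.maximum(x - knot, 0.0)])  # slope right of the knot
res = lsq_linear(A, y, bounds=([-np.inf, 0.0, 0.0], np.inf))
intercept, slope_left, slope_right = res.x
```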

    Normalized, Segmented or Called aCGH Data?

    Array comparative genomic hybridization (aCGH) is a high-throughput lab technique for measuring genome-wide chromosomal copy numbers. Data from aCGH experiments require extensive pre-processing, which consists of three steps: normalization, segmentation and calling. Each of these pre-processing steps yields a different data set: normalized data, segmented data, and called data. Publications using aCGH base their findings on data from all stages of the pre-processing, so there is no consensus on which should be used for further downstream analysis. Such a consensus is, however, important for the correct reporting of findings and for the comparison of results across studies. We discuss several issues that should be taken into account when deciding which data to use. We express the belief that called data are best used, but would welcome opposing views.
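    As a rough illustration of the three stages, here is a toy Python sketch. The median-centring, the one-split segmentation and the fixed calling thresholds are all assumptions for illustration; production pipelines use dedicated methods at each stage (e.g. circular binary segmentation for the segmentation step).

```python
import numpy as np

def normalize(log_ratios):
    # Median-centre the raw log2 ratios (toy stand-in for full normalization).
    return log_ratios - np.median(log_ratios)

def segment(values):
    # One-split binary segmentation: place a single breakpoint where it
    # most reduces the squared error, then return per-segment means.
    n = len(values)
    best_split, best_sse = None, np.sum((values - values.mean()) ** 2)
    for s in range(2, n - 1):
        left, right = values[:s], values[s:]
        sse = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        if sse < best_sse:
            best_split, best_sse = s, sse
    if best_split is None:
        return np.full(n, values.mean())
    return np.concatenate([np.full(best_split, values[:best_split].mean()),
                           np.full(n - best_split, values[best_split:].mean())])

def call(segmented, loss_cut=-0.3, gain_cut=0.3):
    # Discretize segment means into loss (-1), normal (0), gain (+1).
    return np.where(segmented < loss_cut, -1, np.where(segmented > gain_cut, 1, 0))

# Toy profile: 50 normal probes followed by 30 gained probes.
raw = np.r_[np.random.normal(0.0, 0.1, 50), np.random.normal(0.6, 0.1, 30)]
called = call(segment(normalize(raw)))
```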

    The spectral condition number plot for regularization parameter evaluation

    Many modern statistical applications call for the estimation of a covariance (or precision) matrix in settings where the number of variables exceeds the number of observations. There exists a broad class of ridge-type estimators that employ regularization to cope with the resulting singularity of the sample covariance matrix. These estimators depend on a penalty parameter, and choosing its value can be hard: selection procedures may be computationally infeasible, or tenable only for a restricted set of ridge-type estimators. Here we introduce a simple graphical tool, the spectral condition number plot, for informed heuristic penalty parameter assessment. The proposed tool is computationally friendly and can be employed for the full class of ridge-type covariance (precision) estimators.
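    The tool itself is easy to reproduce in a few lines. The sketch below assumes the archetypal ridge estimator S + λI as a stand-in for the broader class treated in the paper: it traces the spectral condition number of the regularized estimate over a penalty grid, and the bend in the resulting curve provides the heuristic penalty assessment.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n, p = 25, 100                       # fewer observations than variables
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)          # singular sample covariance (p > n)

# Spectral condition number of S(lambda) = S + lambda * I over a grid.
lambdas = np.logspace(-4, 2, 60)
conds = []
for lam in lambdas:
    eigvals = np.linalg.eigvalsh(S + lam * np.eye(p))
    conds.append(eigvals.max() / eigvals.min())

plt.loglog(lambdas, conds)
plt.xlabel("penalty parameter $\\lambda$")
plt.ylabel("spectral condition number")
plt.title("Spectral condition number plot")
plt.show()
```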

    Exploiting flux ratio anomalies to probe warm dark matter in future large-scale surveys

    Flux ratio anomalies in strongly gravitationally lensed quasars constitute a unique way to probe the abundance of non-luminous dark matter haloes, and hence the nature of dark matter. In this paper, we identify double-imaged quasars as a statistically efficient probe of dark matter, since they are 20 times more abundant than quadruply imaged quasars. Using N-body simulations that include realistic baryonic feedback, we measure the full distribution of flux ratios in doubly imaged quasars for cold (CDM) and warm dark matter (WDM) cosmologies. Through this method, we fold in two key systematics: quasar variability and line-of-sight structures. We find that WDM cosmologies predict a ~6 per cent difference in the cumulative distribution functions of flux ratios relative to CDM, with CDM predicting many more small ratios. Finally, we estimate that ~600 doubly imaged quasars will need to be observed in order to unambiguously discern between CDM and the two WDM models studied here. Such sample sizes will be easily within reach of future large-scale surveys such as Euclid. In preparation for these survey data, we need to pin down the scale of the uncertainties in modelling lens galaxies and their substructure in simulations, and to build a strong understanding of the selection function of observed lensed quasars.
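    To make the sample-size logic tangible, here is a hedged Monte Carlo sketch. The beta distributions below are invented stand-ins for the simulated flux-ratio distributions (chosen only so their CDFs differ at roughly the few-per-cent level), and a two-sample KS test stands in for the paper's actual statistical comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical stand-ins for the CDM and WDM flux-ratio distributions;
# the small shape shift mimics a few-per-cent CDF difference.
def draw_cdm(n):
    return rng.beta(2.0, 5.0, n)

def draw_wdm(n):
    return rng.beta(2.2, 5.0, n)

def detection_rate(n, trials=200, alpha=0.05):
    # Fraction of mock surveys of n doubles in which a two-sample KS test
    # distinguishes the two cosmologies at significance level alpha.
    hits = sum(stats.ks_2samp(draw_cdm(n), draw_wdm(n)).pvalue < alpha
               for _ in range(trials))
    return hits / trials

for n in (100, 300, 600, 1000):
    print(n, detection_rate(n))
```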

    Better prediction by use of co-data: Adaptive group-regularized ridge regression

    For many high-dimensional studies, additional information on the variables, such as (genomic) annotation or external p-values, is available. In the context of binary and continuous prediction, we develop a method for adaptive group-regularized (logistic) ridge regression, which makes structural use of such 'co-data'. Here, 'groups' refer to a partition of the variables according to the co-data. We derive empirical Bayes estimates of group-specific penalties, which possess several nice properties: i) they are analytical; ii) they adapt to the informativeness of the co-data for the data at hand; iii) only one global penalty parameter requires tuning by cross-validation. In addition, the method allows the use of multiple types of co-data at little extra computational effort. We show that the group-specific penalties may lead to a larger distinction between 'near-zero' and relatively large regression parameters, which facilitates post-hoc variable selection. The method, termed GRridge, is implemented in an easy-to-use R package. It is demonstrated on two cancer genomics studies, both of which concern the discrimination of precancerous cervical lesions from normal cervix tissue using methylation microarray data. For both examples, GRridge clearly improves on the predictive performance of ordinary logistic ridge regression and the group lasso. In addition, we show that for the second study the relatively good predictive performance is maintained when selecting only 42 variables. Comment: 15 pages, 2 figures. Supplementary Information available on the first author's web site.
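    GRridge itself is an R package, with the group penalties estimated by empirical Bayes. As a minimal sketch of how group-specific penalties enter the ridge criterion, the Python snippet below solves the linear (non-logistic) case with hand-fixed penalties, using the rescaling trick that reduces group-wise ridge to ordinary ridge; everything here is an illustrative assumption, not the GRridge API.

```python
import numpy as np

def group_ridge(X, y, groups, lams):
    # Ridge with group-specific penalties lam_g: minimize
    #   ||y - X beta||^2 + sum_j lam_{g(j)} * beta_j^2.
    # Rescaling column j by 1/sqrt(lam_{g(j)}) reduces this to ordinary
    # ridge with unit penalty; transform the solution back afterwards.
    scale = 1.0 / np.sqrt(np.array([lams[g] for g in groups]))
    Xs = X * scale
    p = Xs.shape[1]
    # Solve (Xs'Xs + I) gamma = Xs'y, then beta = scale * gamma.
    gamma = np.linalg.solve(Xs.T @ Xs + np.eye(p), Xs.T @ y)
    return scale * gamma

# Toy use: the first group is lightly penalized, the second heavily,
# as if co-data had flagged the first five variables as informative.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta_true = np.r_[np.ones(5), np.zeros(5)]
y = X @ beta_true + rng.standard_normal(50)
beta_hat = group_ridge(X, y, groups=[0] * 5 + [1] * 5, lams={0: 0.1, 1: 10.0})
```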