
    Borel sets with large squares

    For a cardinal mu we give a sufficient condition (*)_mu (involving ranks measuring existence of independent sets) for: [(**)_mu] if a Borel set B subseteq R x R contains a mu-square (i.e. a set of the form A x A, |A| = mu) then it contains a 2^{aleph_0}-square and even a perfect square, and also for [(***)_mu] if psi in L_{omega_1, omega} has a model of cardinality mu then it has a model of cardinality continuum generated in a nice, absolute way. Assuming MA + 2^{aleph_0} > mu for transparency, those three conditions ((*)_mu, (**)_mu and (***)_mu) are equivalent, and by this we get e.g.: for all alpha < omega_1, 2^{aleph_0} >= aleph_alpha => not (**)_{aleph_alpha}, and also min {mu : (*)_mu}, if < 2^{aleph_0}, has cofinality aleph_1. We deal also with Borel rectangles and related model-theoretic problems.
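
    As a reading aid, the square property can be restated formally; this is a paraphrase of the abstract's (**)_mu, not the paper's exact wording:

        % (**)_mu, paraphrased: a Borel set containing a mu-square
        % already contains a perfect square (hence a 2^{aleph_0}-square).
        \[
        (**)_\mu:\quad
        \forall\, \text{Borel } B \subseteq \mathbb{R}\times\mathbb{R}\;
        \Bigl[\, \exists A\ \bigl(|A| = \mu \ \wedge\ A \times A \subseteq B\bigr)
        \;\Longrightarrow\;
        \exists \text{ perfect } P\ \bigl(P \times P \subseteq B\bigr) \Bigr].
        \]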

    Club guessing and the universal models

    We survey the use of club guessing and other pcf constructs in the context of showing that a given partially ordered class of objects does not have a largest, or a universal, element.
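
    For orientation, the notion of universality at play here is the standard one (a textbook definition, not wording from the survey itself): a member of the class is universal at a given cardinality if every member of that cardinality embeds into it.

        % Standard definition of a universal model in a class K at cardinality lambda:
        \[
        M \text{ is universal in } K_\lambda
        \;\iff\;
        M \in K_\lambda \ \wedge\
        \forall N \in K_\lambda\ \exists\, f\colon N \hookrightarrow M \ \text{(an embedding)}.
        \]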

    Choosing effective methods for design diversity - How to progress from intuition to science

    Design diversity is a popular defence against design faults in safety critical systems. Design diversity is at times pursued by simply isolating the development teams of the different versions, but it is presumably better to "force" diversity, by appropriate prescriptions to the teams. There are many ways of forcing diversity. Yet, managers who have to choose a cost-effective combination of these have little guidance except their own intuition. We argue the need for more scientifically based recommendations, and outline the problems with producing them. We focus on what we think is the standard basis for most recommendations: the belief that, in order to produce failure diversity among versions, project decisions should aim at causing "diversity" among the faults in the versions. We attempt to clarify what these beliefs mean, in which cases they may be justified and how they can be checked or disproved experimentally.
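
    The belief discussed here, that "diverse" faults should translate into diverse failures, can be made concrete with a toy Monte Carlo sketch. Everything below (the demand space, the fault model, the overlap parameter) is invented for illustration and is not the paper's model:

        import random

        random.seed(0)
        DEMANDS = range(10_000)    # hypothetical input (demand) space
        FAULT_SIZE = 200           # demands on which a version's faults cause failure

        def failure_region(correlated_with=None, overlap=0.8):
            """Pick the set of demands on which one version fails.

            If correlated_with is given, a fraction `overlap` of the region is
            copied from the other version, modelling versions whose faults are
            'similar' rather than diverse."""
            if correlated_with is None:
                return set(random.sample(DEMANDS, FAULT_SIZE))
            shared = set(random.sample(sorted(correlated_with), int(overlap * FAULT_SIZE)))
            rest = set(random.sample(DEMANDS, FAULT_SIZE - len(shared)))
            return shared | rest

        def p_coincident_failure(correlated, pairs=20_000):
            """Estimate P(both versions fail on the same randomly chosen demand)."""
            hits = 0
            for _ in range(pairs):
                a = failure_region()
                b = failure_region(correlated_with=a) if correlated else failure_region()
                demand = random.choice(DEMANDS)
                hits += demand in a and demand in b
            return hits / pairs

        print("independent fault locations:", p_coincident_failure(correlated=False))
        print("overlapping fault locations:", p_coincident_failure(correlated=True))

    With independently placed faults the coincident-failure estimate is roughly the product of the individual failure probabilities; with overlapping faults it is far higher, which is the intuition behind trying to force fault diversity.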

    A methodology for collecting valid software engineering data

    An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To insure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50% of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.
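
    A minimal sketch of the kind of change-data record and concurrent validation step described above; the field names and the consistency rules are invented for illustration and are not the paper's actual data collection form:

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class ChangeReport:
            """One filled-in change report (hypothetical schema)."""
            change_id: str
            reason: str                 # e.g. "fault correction", "requirements change"
            fault_type: Optional[str]   # only meaningful for fault corrections
            effort_hours: float

        def validate(report: ChangeReport) -> List[str]:
            """Concurrent validation: return problems that should trigger an
            interview with the person who supplied the data."""
            problems = []
            if report.reason == "fault correction" and report.fault_type is None:
                problems.append("fault correction reported without a fault type")
            if report.reason != "fault correction" and report.fault_type is not None:
                problems.append("fault type given for a non-fault change")
            if report.effort_hours <= 0:
                problems.append("implausible effort value")
            return problems

        reports = [
            ChangeReport("C-017", "fault correction", None, 2.5),
            ChangeReport("C-018", "requirements change", None, 4.0),
        ]
        for r in reports:
            for problem in validate(r):
                print(r.change_id, "->", problem)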

    climatology

    Energy and momentum deposition from planetary-scale Rossby waves as well as from small-scale gravity waves (GWs) largely control stratospheric dynamics. Interactions between these different wave types, however, complicate the quantification of their individual contribution to the overall dynamical state of the middle atmosphere. In state-of-the-art general circulation models (GCMs), the majority of the GW spectrum cannot be resolved and therefore has to be parameterised. This is commonly implemented in two discrete schemes, one for GWs that originate from flow over orographic obstacles and one for all other kinds of GWs (non-orographic GWs). In this study, we attempt to gain a deeper understanding of the interactions of resolved with parameterised wave driving and of their influence on the stratospheric zonal winds and on the Brewer–Dobson circulation (BDC). For this, we set up a GCM time slice experiment with two sensitivity simulations: one without orographic GWs and one without non-orographic GWs. Our findings include an acceleration of the polar vortices, which has historically been one of the main reasons for including explicit GW parameterisations in GCMs. Further, we find inter-hemispheric differences in BDC changes when omitting GWs that can be explained by wave compensation and amplification effects. These are partly evoked through local changes in the refractive properties of the atmosphere caused by the omitted GW drag and a thereby increased planetary wave propagation. However, non-local effects on the flow can act to suppress vertical wave fluxes into the stratosphere for a very strong polar vortex. Moreover, we study mean age of stratospheric air to investigate the impact of missing GWs on tracer transport. On the basis of this analysis, we suggest that the larger ratio of planetary waves to GWs leads to enhanced horizontal mixing, which can have a large impact on stratospheric tracer distributions.
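
    A basic diagnostic behind statements such as "acceleration of the polar vortices" is the difference in climatological zonal-mean zonal wind between the control run and a sensitivity run. Below is a minimal sketch with xarray; the file names and the CMIP-style variable name "ua" are assumptions, not the study's actual output layout:

        import xarray as xr

        # File and variable names are placeholders for the time-slice output.
        ctrl   = xr.open_dataset("control_timeslice.nc")
        no_oro = xr.open_dataset("no_orographic_gw.nc")

        def zonal_mean_u(ds):
            """Climatological zonal-mean zonal wind (averaged over time and longitude)."""
            return ds["ua"].mean(dim=["time", "lon"])

        # Wind response to switching off the orographic GW parameterisation;
        # positive differences at high winter latitudes indicate a stronger vortex.
        du = zonal_mean_u(no_oro) - zonal_mean_u(ctrl)
        print(du.sel(lat=slice(60, 90)).max().values)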

    CONTEMPORARY MISOGYNY: LAURA RIDING, WILLIAM EMPSON AND THE CRITICS – A SURVEY OF MIS-HISTORY

    This essay examines three books: A Survey of Modernist Poetry, by Laura Riding and Robert Graves, Seven Types of Ambiguity by William Empson, and William Empson: Among the Mandarins by John Haffenden. It shows how and why Laura Riding was the original author of the interpretation of Shakespeare’s Sonnet 129 in A Survey of Modernist Poetry, which provided the idea behind Empson’s understanding of ‘ambiguity’, which in turn was highly significant to the subsequent development of ‘New Criticism’. It examines the history of A Survey of Modernist Poetry since its first publication in 1927, its treatment by critics and reviewers, and its mistakenly being described as a book by Robert Graves up to the present day, as epitomized in John Haffenden’s biography. It also indicates that modernist or post-modernist literary criticism from 1927 onwards would have been significantly different had numerous critics, Empson among them, as well as other poets and authors, given more attention to the work of Laura Riding than to that of Robert Graves.

    Spark deployment and performance evaluation on the MareNostrum supercomputer

    In this paper we present a framework to enable data-intensive Spark workloads on MareNostrum, a petascale supercomputer designed mainly for compute-intensive applications. As far as we know, this is the first attempt to investigate optimized deployment configurations of Spark on a petascale HPC setup. We detail the design of the framework and present some benchmark data to provide insights into the scalability of the system. We examine the impact of different configurations including parallelism, storage and networking alternatives, and we discuss several aspects of executing Big Data workloads on a computing system that is based on the compute-centric paradigm. Further, we derive conclusions aiming to pave the way towards systematic and optimized methodologies for fine-tuning data-intensive applications on large clusters, with an emphasis on parallelism configurations.
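
    The parallelism-related knobs such a study varies are exposed through Spark's standard configuration interface; the sketch below uses arbitrary placeholder values, not the settings actually evaluated on MareNostrum:

        from pyspark.sql import SparkSession

        # Placeholder numbers: executor count, cores, memory and the node-local
        # scratch directory for shuffle data are typical tuning knobs on an HPC
        # deployment of Spark.
        spark = (
            SparkSession.builder
            .appName("hpc-spark-sketch")
            .config("spark.executor.instances", "64")
            .config("spark.executor.cores", "16")
            .config("spark.executor.memory", "24g")
            .config("spark.default.parallelism", "2048")   # tasks per stage
            .config("spark.local.dir", "/scratch/tmp")     # local storage for shuffles
            .getOrCreate()
        )

        # Trivial workload just to exercise the configured parallelism.
        rdd = spark.sparkContext.parallelize(range(10**6), spark.sparkContext.defaultParallelism)
        print(rdd.sum())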

    Cloud chamber laboratory investigations into scattering properties of hollow ice particles

    Copyright 2015 The Authors. Published by Elsevier Ltd. This is an open access article under the CC-BY license (http://creativecommons.org/licenses/by/4.0/). Date of Acceptance: 16/02/2015. Measurements are presented of the phase function, P11, and asymmetry parameter, g, of five ice clouds created in a laboratory cloud chamber. At −7 °C, two clouds were created: one comprised entirely of solid columns, and one comprised entirely of hollow columns. Similarly, at −15 °C, two clouds were created: one consisting of solid plates and one consisting of hollow plates. At −30 °C, only hollow particles could be created within the constraints of the experiment. The resulting cloud at −30 °C contained short hollow columns and thick hollow plates. During the course of each experiment, the cloud properties were monitored using a Cloud Particle Imager (CPI). In addition to this, ice crystal replicas were created using formvar resin. By examining the replicas under an optical microscope, two different internal structures were identified. The internal and external facets were measured and used to create geometric particle models with realistic internal structures. Theoretical results were calculated using both Ray Tracing (RT) and Ray Tracing with Diffraction on Facets (RTDF). Experimental and theoretical results are compared to assess the impact of internal structure on P11 and g and the applicability of RT and RTDF for hollow columns.
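
    The asymmetry parameter g measured here is the P11-weighted mean cosine of the scattering angle. A small numerical sketch of that calculation follows; the tabulated phase function is synthetic Henyey-Greenstein test data, not the chamber measurements:

        import numpy as np

        def asymmetry_parameter(theta, p11):
            """g = <cos(theta)>, weighting cos(theta) by P11 over solid angle.

            theta : scattering angles in radians (0..pi)
            p11   : phase function tabulated at those angles
            """
            w = p11 * np.sin(theta)                       # solid-angle weight
            return np.trapz(w * np.cos(theta), theta) / np.trapz(w, theta)

        # Check with a Henyey-Greenstein phase function, whose asymmetry
        # parameter equals the chosen g by construction.
        theta = np.linspace(0.0, np.pi, 1801)
        g_true = 0.8
        p_hg = (1 - g_true**2) / (1 + g_true**2 - 2 * g_true * np.cos(theta)) ** 1.5
        print(asymmetry_parameter(theta, p_hg))           # approximately 0.8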