
    Exercising judgments in the world: a consideration of Cicero's theoretical writings on politics and their continuing relevance to International Political Theory

    This project seeks to show International Political Theorists that Cicero’s theoretical writings on politics are of continuing relevance to their research. I argue that this field of study would benefit a great deal from holding a conception of politics which is more personal than current frameworks presuppose, and that his theoretical writings on politics provide an excellent basis upon which to investigate real-world problems in politics, whether domestic, international, or global. In building a conception of politics which is (partly) personal, the project begins with a review of some literature in International Political Theory and a few neighbouring fields related to exercising judgments in the world, before moving on in the second chapter to address the literature reviewed through Hannah Arendt’s theoretical writings on politics. Arendt’s writings allow for the development of several terms used in the first chapter which are touchstones of the civic republican tradition, such as tradition, authority, and persona, while also preparing the way for a consideration of Cicero’s writings. The third chapter, as well as developing numerous arguments from the first two, provides an account of Cicero’s framework of civic virtues as he articulates them in the De Officiis, and the final chapter carries out the same task in relation to his theoretical framework of res publica as articulated in the De Re Publica.

    Vestige: Maximum likelihood phylogenetic footprinting

    BACKGROUND: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring divergence, Vestige allows the definition of a phylogenetic footprint to be expanded to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. RESULTS: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. CONCLUSION: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genomes of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational processes, DNA repair, and selection can be evaluated both spatially (along a sequence alignment) and temporally (for each branch of the tree), providing visual indicators of the attributes and functions of DNA sequences.
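    The core idea, scoring how much less a region has diverged than a neutral expectation, can be illustrated with a toy sliding-window calculation. The sketch below is not the Vestige or PyEvolve API; the window size, step, and neutral-rate input are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a toy sliding-window footprinting score, not the
# Vestige/PyEvolve API. Window size, step, and the neutral rate are assumptions.
from itertools import combinations

def pairwise_divergence(seqs):
    """Mean proportion of mismatched, ungapped columns over all sequence pairs."""
    total, count = 0.0, 0
    for a, b in combinations(seqs, 2):
        cols = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
        if cols:
            total += sum(x != y for x, y in cols) / len(cols)
            count += 1
    return total / count if count else 0.0

def footprint_scores(alignment, neutral_rate, window=50, step=10):
    """Score each window as observed divergence / neutral divergence.

    Values well below 1 flag putatively conserved (functional) segments.
    """
    length = len(alignment[0])
    scores = []
    for start in range(0, length - window + 1, step):
        seqs = [s[start:start + window] for s in alignment]
        scores.append((start, pairwise_divergence(seqs) / neutral_rate))
    return scores
```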

    Only VpreB1, but not VpreB2, is expressed at levels which allow normal development of B cells

    The surrogate light chain (SLC) consists of the polypeptides λ5 and, in the mouse, either VpreB1 or VpreB2. SLC associates with BILL-Cadherin and other glycoproteins to form the pro-B cell receptor (pro-BCR) at the pre-BI cell stage, and with the immunoglobulin μ heavy chain to form the pre-BCR at the pre-BII cell stage. The function of the pro-BCR, if any, is unknown, whereas the pre-BCR is crucial for the proliferative expansion of pre-BII cells. To shed light on the functional properties of VpreB1 and VpreB2 in vivo, mice with either one or two VpreB1, or one or two VpreB2, alleles have been investigated. We show that B cell development in mice with two VpreB1 alleles is indistinguishable from that of normal mice. In contrast, mice with two VpreB2 alleles show an ∼1.6-fold increase in pre-BI and a 35% decrease in pre-BII cell numbers, while mice with only one VpreB2 allele show a reduction in B cell development, manifested as a 2-fold enrichment of pre-BI cells and a 75% reduction in pre-BII cells. However, such a gene dosage effect is not observed for VpreB1. Our results suggest that the difference between VpreB1- and VpreB2-deficient mice is due to lower VpreB2 protein expression, which limits the formation of pre-BCRs and thereby the number of large, cycling pre-BII cells.

    Thermodynamic Matrix Exponentials and Thermodynamic Parallelism

    Thermodynamic computing exploits fluctuations and dissipation in physical systems to efficiently solve various mathematical problems. For example, it was recently shown that certain linear algebra problems can be solved thermodynamically, leading to an asymptotic speedup scaling with the matrix dimension. The origin of this "thermodynamic advantage" has not yet been fully explained, and it is not clear what other problems might benefit from it. Here we provide a new thermodynamic algorithm for exponentiating a real matrix, with applications in simulating linear dynamical systems. We describe a simple electrical circuit involving coupled oscillators, whose thermal equilibration can implement our algorithm. We also show that this algorithm provides an asymptotic speedup that is linear in the dimension. Finally, we introduce the concept of thermodynamic parallelism to explain this speedup, stating that thermodynamic noise provides a resource leading to effective parallelization of computations, and we hypothesize this as a mechanism to explain thermodynamic advantage more generally. Comment: 14 pages, 5 figures
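    For reference, the target problem can be stated concretely: exponentiating A propagates the linear dynamical system dx/dt = Ax via x(t) = exp(At) x(0). The sketch below is a conventional digital computation of that quantity, included only to make the problem concrete; it is not the circuit-based thermodynamic algorithm described here, and the test matrix, time horizon, and stability shift are assumptions.

```python
# Digital reference for the target problem, not the thermodynamic algorithm itself:
# propagate the linear system dx/dt = A x via x(t) = exp(A t) x(0).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) - 2.0 * np.eye(n)  # diagonal shift for stability (assumption)
x0 = rng.standard_normal(n)

t = 0.5
x_t = expm(A * t) @ x0  # exact propagation of the linear dynamics to time t
print(x_t)
```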

    Comparison of Forced ENSO-Like Hydrological Expressions in Simulations of the Preindustrial and Mid-Holocene

    Using the water isotope- and vapor source distribution (VSD) tracer-enabled Goddard Institute for Space Studies ModelE-R, we examine changing El Niño-Southern Oscillation (ENSO)-like expressions in the hydrological cycle in a suite of model experiments. We apply strong surface temperature anomalies associated with composite observed El Niño and La Niña events as surface boundary conditions to preindustrial and mid-Holocene model experiments in order to investigate ENSO-like expressions in the hydrological cycle under varying boundary conditions. We find distinct simulated hydrological anomalies associated with El Niño-like ("ENSOWARM") and La Niña-like ("ENSOCOOL") conditions, and the region-specific VSD tracers show hydrological differences across the Pacific basin between El Niño-like and La Niña-like events. The application of ENSOCOOL forcings does not produce climatological anomalies that are equal but opposite to those of the ENSOWARM experiment: the isotopic anomalies associated with ENSOWARM conditions are generally stronger than those with ENSOCOOL, and the spatial patterns of change are distinct. Also, when the same ENSO-like surface temperature anomalies are imposed on the mid-Holocene, the hydrological response is muted relative to the preindustrial. Mid-Holocene changes in moisture sources to the analyzed regions across the Pacific reveal potentially complex relationships between ENSO-like conditions and boundary conditions. Given the complex impacts of ENSO-like conditions on various aspects of the hydrological cycle, we suggest that proxy record insights into paleo-ENSO variability are most likely to be robust when synthesized from a network of many spatially diverse archives, which can account for the potential nonstationarity of ENSO teleconnections under different boundary conditions.
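    As a rough illustration of what a composite surface temperature forcing looks like in practice, the sketch below averages monthly SST anomaly fields over months classified as El Niño-like or La Niña-like by an index such as Niño 3.4. The function name, array layout, and thresholds are assumptions for illustration, not the boundary-condition protocol used in these experiments.

```python
# Hypothetical illustration of building composite SST anomaly fields from a
# monthly index; thresholds, array layout, and names are assumptions only.
import numpy as np

def composite_anomalies(sst_anom, nino34, warm_thresh=1.0, cool_thresh=-1.0):
    """sst_anom: (time, lat, lon) monthly SST anomalies; nino34: (time,) index.

    Returns the mean anomaly field over El Nino-like months and over La Nina-like months.
    """
    warm = sst_anom[nino34 >= warm_thresh].mean(axis=0)
    cool = sst_anom[nino34 <= cool_thresh].mean(axis=0)
    return warm, cool
```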

    Modeling Insights into Deuterium Excess as an Indicator of Water Vapor Source Conditions

    Deuterium excess (d) is interpreted in conventional paleoclimate reconstructions as a tracer of conditions in the oceanic source regions where precipitation originates, such as temperature. Previous studies have adopted co-isotopic approaches to estimate past changes in both site and oceanic source temperatures for ice core sites using empirical relationships derived from conceptual distillation models, particularly Mixed Cloud Isotopic Models (MCIMs). However, the relationship between d and oceanic surface conditions remains unclear in past contexts. We investigate this climate-isotope relationship for sites in Greenland and Antarctica using multiple simulations of the water isotope-enabled Goddard Institute for Space Studies (GISS) ModelE-R general circulation model and apply a novel suite of model vapor source distribution (VSD) tracers to assess d as a proxy for source temperature variability under a range of climatic conditions. Simulated average source temperatures determined by the VSDs are compared to synthetic source temperature estimates calculated using MCIM equations linking d to source region conditions. We show that although deuterium excess is generally a faithful tracer of source temperatures as estimated by the MCIM approach, large discrepancies in the isotope-climate relationship occur around Greenland during the Last Glacial Maximum simulation, when precipitation seasonality and moisture source regions were notably different from present. This identified sensitivity in d as a source temperature proxy suggests that quantitative climate reconstructions from deuterium excess should be treated with caution for some sites when boundary conditions are significantly different from the present day. In addition, the exclusion of humidity and other evaporative source changes from MCIM regressions may limit the ability to quantify source temperature fluctuations from deuterium excess in some instances.
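    For context, deuterium excess is conventionally defined from co-measured isotope ratios as d = δD - 8·δ¹⁸O (in per mil). A minimal sketch of that definition follows; the helper name and example values are illustrative.

```python
# Conventional deuterium-excess definition, d = deltaD - 8 * delta18O (per mil);
# the helper name and example values are illustrative only.
def deuterium_excess(delta_d, delta_18o):
    """Return d-excess (per mil) from deltaD and delta18O (both per mil)."""
    return delta_d - 8.0 * delta_18o

# Example: deltaD = -280 per mil and delta18O = -36 per mil give d = 8 per mil.
print(deuterium_excess(-280.0, -36.0))
```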

    Flavoured jets with exact anti-$k_t$ kinematics and tests of infrared and collinear safety

    We propose extensions of the anti-$k_t$ and Cambridge/Aachen hierarchical jet clustering algorithms that are designed to retain the exact jet kinematics of these algorithms, while providing an infrared-and-collinear-safe definition of jet flavour at any fixed order in perturbation theory. Central to our approach is a new technique called Interleaved Flavour Neutralisation (IFN), whereby the treatment of flavour is integrated with, but distinct from, the kinematic clustering. IFN allows flavour information to be meaningfully accessed at each stage of the clustering sequence, which enables a consistent assignment of flavour both to individual jets and to their substructure. We validate the IFN approach using a dedicated framework for fixed-order tests of infrared and collinear safety, which also reveals unanticipated issues in earlier approaches to flavoured jet clustering. We briefly explore the phenomenological impact of IFN with anti-$k_t$ jets for benchmark tasks at the Large Hadron Collider. Comment: 36 pages, 27 figures, 1 table, code available from https://github.com/jetflav/IFNPlugi
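    The kinematic side these extensions preserve is standard anti-$k_t$ clustering, which repeatedly merges whichever pairwise or particle-beam distance is smallest. A minimal sketch of those textbook distance measures follows; it shows only the generic anti-$k_t$ metric, not the IFN flavour-neutralisation step, and the radius R = 0.4 is an arbitrary choice.

```python
# Sketch of the standard anti-kt distance measures that the kinematic clustering
# retains; this is generic textbook anti-kt, not the IFN flavour-neutralisation step.
import math

def antikt_dij(pt_i, y_i, phi_i, pt_j, y_j, phi_j, R=0.4):
    """Pairwise distance d_ij = min(1/pt_i^2, 1/pt_j^2) * dR_ij^2 / R^2."""
    dphi = math.pi - abs(abs(phi_i - phi_j) % (2.0 * math.pi) - math.pi)
    dr2 = (y_i - y_j) ** 2 + dphi ** 2
    return min(pt_i ** -2, pt_j ** -2) * dr2 / R ** 2

def antikt_diB(pt_i):
    """Particle-beam distance d_iB = 1/pt_i^2."""
    return pt_i ** -2

# Clustering repeatedly takes the smallest of all d_ij and d_iB: a smallest d_ij
# merges particles i and j, while a smallest d_iB promotes particle i to a jet.
```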

    Thermodynamic Computing System for AI Applications

    Recent breakthroughs in artificial intelligence (AI) algorithms have highlighted the need for novel computing hardware in order to truly unlock the potential of AI. Physics-based hardware, such as thermodynamic computing, has the potential to provide a fast, low-power means to accelerate AI primitives, especially generative AI and probabilistic AI. In this work, we present the first continuous-variable thermodynamic computer, which we call the stochastic processing unit (SPU). Our SPU is composed of RLC circuits serving as unit cells on a printed circuit board, with 8 unit cells that are all-to-all coupled via switched capacitances. It can be used for either sampling or linear algebra primitives, and we demonstrate Gaussian sampling and matrix inversion on our hardware. The latter represents the first thermodynamic linear algebra experiment. We also illustrate the applicability of the SPU to uncertainty quantification for neural network classification. We envision that this hardware, when scaled up in size, will have significant impact on accelerating various probabilistic AI applications. Comment: 26 pages, 22 figures
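    The matrix inversion primitive mentioned here can be understood through a well-known thermodynamic principle: for a symmetric positive-definite matrix A, the stationary covariance of the overdamped Langevin dynamics dx = -A x dt + sqrt(2) dW equals A^{-1}, so sampling the equilibrated system and taking a sample covariance estimates the inverse. The sketch below is a purely digital simulation of that principle under assumed step sizes and sample counts; it is not the analog SPU hardware or its circuit dynamics.

```python
# Digital simulation of the thermodynamic principle behind matrix inversion, not
# the analog SPU: for symmetric positive-definite A, the stationary covariance of
# dx = -A x dt + sqrt(2) dW equals A^{-1}. Step size and sample counts are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive-definite test matrix

dt, burn_in, n_steps = 1e-3, 10_000, 200_000
x = np.zeros(n)
samples = []
for step in range(burn_in + n_steps):
    x += -A @ x * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n)  # Euler-Maruyama step
    if step >= burn_in:
        samples.append(x.copy())

A_inv_est = np.cov(np.array(samples).T)   # sample covariance approximates A^{-1}
print(np.linalg.norm(A_inv_est - np.linalg.inv(A)))
```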