24 research outputs found

    Deconstructing radical democracy: articulation, representation and being-with-others

    Get PDF
    This paper addresses the contribution of deconstruction to democratic theory. It critically considers the usefulness of the conceptual distinction between “politics” and “the political” as a means of interpreting deconstruction’s relation to political questions. In particular, it critically engages with the inflection of deconstructive themes in the theory of radical democracy (RD) developed by Laclau and Mouffe. It is argued that this approach ontologizes the politics/political distinction and conflates two distinct senses of otherness. This is registered in the prevalence of spatial tropes in this approach. The spatialization of key issues in political theory leads to a diminished sensitivity to the variegated temporalities through which solidarity and conflict, unity and multiplicity are negotiated. This is discussed with reference to the concept of articulation. By reducing temporality to a metaphysics of contingency, RD converges with a voluntaristic decisionism in its account of hegemony and political authority. The paper proceeds to a critical consideration of the interpretation of “undecidability” in RD, and of the elective affinity between this approach and the fascist critique of liberal democracy associated with Carl Schmitt. This discussion sets the scene for an alternative reading of the political significance of the theme of undecidability in Derrida’s thought. This reading focuses on the problem of negotiating two equally compelling forms of responsibility: the urgent responsibility to act in the world, and the patient responsibility to acknowledge otherness. By discussing the complex temporising associated with the theme of undecidability in deconstruction, the paper argues for a reassessment of the normative value of the concept of representation as it has developed in modern democratic theory. It develops an understanding of undecidability that points beyond the undeconstructed decisionism shared by both Schmitt and RD towards an account of the opening of public spaces of deliberation, deferral, and decision. More broadly, the paper is concerned with the moral limits of a prevalent spatialized interpretation of key themes in the poststructuralist canon, including difference, alterity, and otherness.

    Commissioning and performance of the CMS silicon strip tracker with cosmic ray muons

    Get PDF
    This is the pre-print version of the article; the official published version can be accessed from the link below. Copyright @ 2010 IOP. During autumn 2008, the Silicon Strip Tracker was operated with the full CMS experiment in a comprehensive test, in the presence of the 3.8 T magnetic field produced by the CMS superconducting solenoid. Cosmic ray muons were detected in the muon chambers and used to trigger the readout of all CMS sub-detectors. About 15 million events with a muon in the tracker were collected. The efficiencies of hit and track reconstruction were measured to be higher than 99% and consistent with expectations from Monte Carlo simulation. This article details the commissioning and performance of the Silicon Strip Tracker with cosmic ray muons. This work is supported by FMSR (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES (Croatia); RPF (Cyprus); Academy of Sciences and NICPB (Estonia); Academy of Finland, ME, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NKTH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); NRF (Korea); LAS (Lithuania); CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); PAEC (Pakistan); SCSR (Poland); FCT (Portugal); JINR (Armenia, Belarus, Georgia, Ukraine, Uzbekistan); MST and MAE (Russia); MSTDS (Serbia); MICINN and CPAN (Spain); Swiss Funding Agencies (Switzerland); NSC (Taipei); TUBITAK and TAEK (Turkey); STFC (United Kingdom); DOE and NSF (USA).

    Transferability of Zr-Zr interatomic potentials

    Full text link
    Tens of Zr interatomic potentials (force fields) have been developed to enable atomic-scale simulations of Zr alloys. These can provide critical insight into the in-reactor behaviour of nuclear fuel cladding and structural components, but the results are strongly sensitive to the choice of potential, and to date there has been no consistent evaluation of these potentials. We provide a comprehensive comparison of 13 popular Zr potentials and assess their ability to reproduce key physical, mechanical, structural and thermodynamic properties of Zr. We assess the lattice parameters, thermal expansion, melting point, volume-energy response, allotropic phase stability, elastic properties, and point defect energies, and compare them to experimental and ab initio values. No potential was found to outperform all others on all aspects, but for every metric considered here, at least one potential was found to provide reliable results. Older embedded-atom method (EAM) potentials tend to excel in 2-3 metrics each, but at the cost of poorer transferability. The two highest-performing potentials overall, with complementary strengths and weaknesses, were the 2021 angular-dependent potential of Smirnova and Starikov (Comp. Mater. Sci. 197, 110581) and the 2019 embedded-atom method potential of Wimmer et al. (J. Nucl. Mater. 532, 152055). All potentials trained through machine learning algorithms proved to have lower overall accuracy, and less transferability, than simpler and computationally faster potentials available. Point defect structures and energies are where the greatest divergence and least accuracy are observed. We created maps that will help modellers select the most suitable potential for a specific application, and which may help identify areas of improvement in future potentials. Comment: 28 pages, 10 figures
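
    As a hedged illustration of one of the metrics compared here, the sketch below computes an unrelaxed vacancy formation energy with ASE. It is not the paper's actual workflow: the potential file name 'Zr.eam.alloy', the supercell size, and the lattice constants are placeholder assumptions, and a real evaluation would also relax the defected cell.

        from ase.build import bulk
        from ase.calculators.eam import EAM

        # Placeholder potential file; any tabulated Zr EAM setfl file would do here.
        calc = EAM(potential='Zr.eam.alloy')

        # 96-atom hcp supercell; a=3.232 A, c=5.147 A are experimental Zr values.
        atoms = bulk('Zr', 'hcp', a=3.232, c=5.147) * (4, 4, 3)
        atoms.calc = calc
        n = len(atoms)
        e_bulk = atoms.get_potential_energy()

        # Remove one atom to create a vacancy (left unrelaxed for brevity).
        vac = atoms.copy()
        del vac[0]
        vac.calc = calc
        e_vac = vac.get_potential_energy()

        # E_f = E(defected, N-1 atoms) - (N-1)/N * E(perfect, N atoms)
        e_form = e_vac - (n - 1) / n * e_bulk
        print(f'Unrelaxed vacancy formation energy: {e_form:.3f} eV')

    Comparing e_form across potential files against ab initio reference values is, in essence, one row of the selection maps described above.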

    The MaNGA FIREFLY Value-Added-Catalogue: resolved stellar populations of 10,010 nearby galaxies

    Full text link
    We present the MaNGA FIREFLY Value-Added-Catalogue (VAC) - a catalogue of ~3.7 million spatially resolved stellar population properties across 10,010 nearby galaxies from the final data release of the MaNGA survey. The full spectral fitting code firefly is employed to derive parameters such as stellar ages, metallicities, stellar and remnant masses, star formation histories, star formation rates and dust attenuation. In addition to Voronoi-binned measurements, our VAC also provides global properties, such as central values and radial gradients. Two variants of the VAC are available, presenting the results from fits using the M11-MILES and the novel MaStar stellar population models. MaStar allows the fit to be constrained over the whole MaNGA wavelength range, extends the age-metallicity parameter space, and uses empirical spectra from the same instrument as MaNGA. The fits employing MaStar models find on average slightly younger ages, higher mass-weighted metallicities and smaller colour excesses. These differences are reduced when the wavelength ranges and template grids are matched. We further report that FIREFLY stellar masses are systematically lower by ~0.3 dex than masses from the MaNGA PCA and Pipe3D VACs, but match masses from the NSA best, with only a ~0.1 dex difference. Finally, we show that FIREFLY stellar ages correlate with the spectral index age indicators Hδ_A and D_n(4000), though with a clear additional metallicity dependence. Comment: 20 pages, 16 figures (+appendix). Accepted for publication in MNRAS. The accepted version now also includes star formation rates and performance tests. The MaNGA FIREFLY VAC is publicly available at the SDSS webpage https://www.sdss.org/dr17/manga/manga-data/manga-firefly-value-added-catalog and at ICG Portsmouth's website http://www.icg.port.ac.uk/manga-firefly-va
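
    For readers unfamiliar with the break index, a minimal sketch of the narrow-band D_n(4000) definition (mean flux in the 4000-4100 A band over the 3850-3950 A band, following Balogh et al. 1999) is given below. The spectrum is a synthetic placeholder; this is an illustrative assumption, not the FIREFLY pipeline itself.

        import numpy as np

        def dn4000(wave_aa, flux_nu):
            """Mean F_nu in 4000-4100 A divided by mean F_nu in 3850-3950 A."""
            red = (wave_aa >= 4000.0) & (wave_aa <= 4100.0)
            blue = (wave_aa >= 3850.0) & (wave_aa <= 3950.0)
            return flux_nu[red].mean() / flux_nu[blue].mean()

        # Toy rest-frame spectrum with an artificial step at 4000 A.
        wave = np.linspace(3800.0, 4200.0, 400)
        flux = np.where(wave < 4000.0, 1.0, 1.5)
        print(dn4000(wave, flux))  # -> 1.5 for this toy spectrum

    Note the index is defined on F_nu; a spectrum tabulated in F_lambda would need converting (F_nu ∝ λ² F_λ) before taking the ratio.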

    The performance of the CMS muon detector in proton-proton collisions at √s = 7 TeV at the LHC

    Get PDF
    The performance of all subsystems of the CMS muon detector has been studied by using a sample of proton-proton collision data at √s = 7 TeV collected at the LHC in 2010 that corresponds to an integrated luminosity of approximately 40 pb⁻¹. The measured distributions of the major operational parameters of the drift tube (DT), cathode strip chamber (CSC), and resistive plate chamber (RPC) systems met the design specifications. The spatial resolution per chamber was 80–120 μm in the DTs, 40–150 μm in the CSCs, and 0.8–1.2 cm in the RPCs. The time resolution achievable was 3 ns or better per chamber for all three systems. The efficiency for reconstructing hits and track segments originating from muons traversing the muon chambers was in the range 95–98%. The CSC and DT systems provided muon track segments for the CMS trigger with over 96% efficiency, and identified the correct triggering bunch crossing in over 99.5% of such events. The measured performance is well reproduced by Monte Carlo simulation of the muon system down to the level of individual channel response. The results confirm the high efficiency of the muon system, the robustness of the design against hardware failures, and its effectiveness in the discrimination of backgrounds.

    DUNE Offline Computing Conceptual Design Report

    No full text
    This document describes Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), in particular, the conceptual design of the offline computing needed to accomplish its physics goals. Our emphasis in this document is the development of the computing infrastructure needed to acquire, catalog, reconstruct, simulate and analyze the data from the DUNE experiment and its prototypes. In this effort, we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves and to provide computing that achieves the physics goals of the DUNE experiment.

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    No full text
    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
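
    The simulator itself is not reproduced here, but the Python-to-CUDA pattern the abstract describes can be sketched with Numba. In the hypothetical kernel below, each thread handles one toy ionization segment and scatters its charge onto a pixel grid with an atomic add; all names, the geometry, and the charge model are invented for illustration.

        import numpy as np
        from numba import cuda

        @cuda.jit
        def deposit_charge(seg_x, seg_y, seg_q, pitch, n_cols, pixel_q):
            i = cuda.grid(1)  # one thread per ionization segment
            if i < seg_x.size:
                col = int(seg_x[i] // pitch)
                row = int(seg_y[i] // pitch)
                # Atomic add: many segments may drift onto the same pixel.
                cuda.atomic.add(pixel_q, row * n_cols + col, seg_q[i])

        n_seg = 100_000
        rng = np.random.default_rng(0)
        seg_x = cuda.to_device(rng.uniform(0.0, 30.0, n_seg))  # cm
        seg_y = cuda.to_device(rng.uniform(0.0, 30.0, n_seg))
        seg_q = cuda.to_device(rng.uniform(0.0, 1.0, n_seg))   # arbitrary charge units
        pixel_q = cuda.to_device(np.zeros(100 * 100))          # 100x100 pixels, 0.3 cm pitch

        threads = 256
        blocks = (n_seg + threads - 1) // threads
        deposit_charge[blocks, threads](seg_x, seg_y, seg_q, 0.3, 100, pixel_q)
        image = pixel_q.copy_to_host().reshape(100, 100)

    Because every segment is processed independently, this workload parallelizes trivially, which is what makes the reported four-orders-of-magnitude speed-up over a serial CPU loop plausible.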
