
    Growth factor in f(T) gravity

    We derive the evolution equation of the growth factor for the matter over-density perturbation in f(T) gravity. As an example, we investigate its behavior in the power-law model at small redshift and compare it to the predictions of ΛCDM and of dark energy with the same equation of state in the framework of Einstein's general relativity. We find that the perturbation in f(T) gravity grows more slowly than in Einstein's general relativity if ∂f/∂T > 0, due to the effectively weakened gravity.
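    A schematic version of this result, for orientation: the form below is the standard sub-horizon growth equation together with the effective coupling commonly quoted for f(T) gravity, not necessarily the paper's exact conventions, with f_T denoting ∂f/∂T:

        \ddot{\delta} + 2H\dot{\delta} - 4\pi G_{\mathrm{eff}}\,\rho_m\,\delta = 0,
        \qquad G_{\mathrm{eff}} = \frac{G}{1 + f_T},
        \qquad f_T \equiv \frac{\partial f}{\partial T}

    In this form, f_T > 0 gives G_eff < G, so the over-density grows more slowly than in ΛCDM, consistent with the behavior described above.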

    Size Doesn't Matter: Towards a More Inclusive Philosophy of Biology

    Philosophers of biology, along with everyone else, generally perceive life to fall into two broad categories, the microbes and macrobes, and then pay most of their attention to the latter. ‘Macrobe’ is the word we propose for larger life forms, and we use it as part of an argument for microbial equality. We suggest that taking more notice of microbes – the dominant life form on the planet, both now and throughout evolutionary history – will transform some of the philosophy of biology’s standard ideas on ontology, evolution, taxonomy and biodiversity. We set out a number of recent developments in microbiology – including biofilm formation, chemotaxis, quorum sensing and gene transfer – that highlight microbial capacities for cooperation and communication and break down conventional thinking that microbes are solely or primarily single-celled organisms. These insights also bring new perspectives to the levels-of-selection debate, as well as to discussions of the evolution and nature of multicellularity, and to neo-Darwinian understandings of evolutionary mechanisms. We show how these revisions lead to further complications for microbial classification and the philosophies of systematics and biodiversity. Incorporating microbial insights into the philosophy of biology will challenge many of its assumptions, but also give greater scope and depth to its investigations.

    Construction status and prospects of the Hyper-Kamiokande project

    The Hyper-Kamiokande project combines a 258-kton water Cherenkov detector with a 1.3-MW high-intensity neutrino beam from the Japan Proton Accelerator Research Complex (J-PARC). The inner detector, with a 186-kton fiducial volume, is viewed by 20-inch photomultiplier tubes (PMTs) and multi-PMT modules, providing state-of-the-art Cherenkov ring reconstruction with thresholds of a few MeV. The project is expected to enable precision neutrino oscillation studies, in particular of neutrino CP violation, as well as nucleon decay searches and low-energy neutrino astronomy. In 2020, the project was officially approved and construction of the far detector started at Kamioka. In 2021, the excavation of the access tunnel and the initial mass production of the newly developed 20-inch PMTs also started. In this paper, we present a basic overview of the project and the latest updates on its construction status; operation is expected to commence in 2027.

    Prospects for neutrino astrophysics with Hyper-Kamiokande

    Hyper-Kamiokande is a multi-purpose next-generation neutrino experiment. The detector is a two-layered cylindrical ultra-pure water tank, 64 m in height and 71 m in diameter. The inner detector will be surrounded by tens of thousands of twenty-inch photosensors and multi-PMT modules to detect water Cherenkov radiation from charged particles, providing a fiducial volume of 188 kt. This detection technique was established by Kamiokande and Super-Kamiokande. As the successor of these experiments, Hyper-K will be located deep underground, 600 m below Mt. Tochibora at Kamioka in Japan, to reduce cosmic-ray backgrounds. Besides the physics program with accelerator neutrinos, atmospheric neutrinos and proton decay, neutrino astrophysics is an important research topic for Hyper-K. With its rich physics research program, Hyper-K will play a critical role at the next neutrino physics frontier. It will also provide important information via astrophysical neutrino measurements, i.e., solar neutrinos, supernova burst neutrinos and supernova relic neutrinos. Here, we discuss the physics potential of Hyper-K neutrino astrophysics.

    Comparative Molecular Analysis of Gastrointestinal Adenocarcinomas

    We analyzed 921 adenocarcinomas of the esophagus, stomach, colon, and rectum to examine shared and distinguishing molecular characteristics of gastrointestinal tract adenocarcinomas (GIACs). Hypermutated tumors were distinct regardless of cancer type and comprised those enriched for insertions/deletions, representing microsatellite instability cases with epigenetic silencing of MLH1 in the context of CpG island methylator phenotype, plus tumors with elevated single-nucleotide variants associated with mutations in POLE. Tumors with chromosomal instability were diverse, with gastroesophageal adenocarcinomas harboring fragmented genomes associated with genomic doubling and distinct mutational signatures. We identified a group of tumors in the colon and rectum lacking hypermutation and aneuploidy, termed genome stable, and enriched in DNA hypermethylation and mutations in KRAS, SOX9, and PCBP1. Liu et al. analyze 921 gastrointestinal (GI) tract adenocarcinomas and find that hypermutated tumors are enriched for insertions/deletions, upper GI tumors with chromosomal instability harbor fragmented genomes, and a group of genome-stable colorectal tumors are enriched in mutations in SOX9 and PCBP1.

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version: the simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
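    As an illustration of the pattern described above (a Python function compiled into a CUDA kernel with Numba and launched over many pixels in parallel), a minimal sketch is given below. This is not the experiment's actual code: the kernel and array names (induced_current_kernel, drift_charge, weights, current) and the shapes are hypothetical, and the real simulator models the detector microphysics in far more detail.

        # Minimal sketch of the Numba-CUDA pattern: a Python function compiled to a
        # GPU kernel, with one thread per (pixel, time-sample) element. All names and
        # shapes are illustrative, not taken from the DUNE simulation code.
        import numpy as np
        from numba import cuda

        @cuda.jit
        def induced_current_kernel(drift_charge, weights, current):
            # Global 2D thread indices: one thread per (pixel, time sample).
            ipix, it = cuda.grid(2)
            if ipix < current.shape[0] and it < current.shape[1]:
                total = 0.0
                for iq in range(drift_charge.shape[0]):   # loop over drifting charges
                    total += drift_charge[iq] * weights[iq, ipix, it]
                current[ipix, it] = total

        # Host-side driver: build toy inputs, pick a 2D launch configuration, run the kernel.
        n_charges, n_pixels, n_samples = 256, 1000, 100
        drift_charge = np.random.rand(n_charges).astype(np.float32)
        weights = np.random.rand(n_charges, n_pixels, n_samples).astype(np.float32)
        current = np.zeros((n_pixels, n_samples), dtype=np.float32)

        threads = (16, 16)
        blocks = ((n_pixels + threads[0] - 1) // threads[0],
                  (n_samples + threads[1] - 1) // threads[1])
        induced_current_kernel[blocks, threads](drift_charge, weights, current)

    The point of the pattern is that the per-pixel work is independent, so each GPU thread fills one output element; Numba handles the host-to-device transfers of the NumPy arrays at launch.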

    The Physics of the B Factories


    The ATLAS trigger system for LHC Run 3 and trigger performance in 2022

    The ATLAS trigger system is a crucial component of the ATLAS experiment at the LHC. It is responsible for selecting events in line with the ATLAS physics programme. This paper presents an overview of the changes to the trigger and data acquisition system during the second long shutdown of the LHC, and shows the performance of the trigger system and its components in proton-proton collisions during the 2022 commissioning period, as well as its expected performance in proton-proton and heavy-ion collisions for the remainder of the third LHC data-taking period (2022–2025).

    Measurement of the VH, H → ττ process with the ATLAS detector at 13 TeV

    A measurement of the Standard Model Higgs boson produced in association with a W or Z boson and decaying into a pair of τ-leptons is presented. This search is based on proton-proton collision data collected at √s = 13 TeV by the ATLAS experiment at the LHC, corresponding to an integrated luminosity of 140 fb−1. For the Higgs boson candidate, only final states with at least one τ-lepton decaying hadronically (τ → hadrons + ν_τ) are considered. For the vector bosons, only leptonic decay channels are considered: Z → ℓℓ and W → ℓν_ℓ, with ℓ = e, μ. An excess of events over the expected background is found with an observed (expected) significance of 4.2 (3.6) standard deviations, providing evidence of the Higgs boson produced in association with a vector boson and decaying into a pair of τ-leptons. The ratio of the measured cross-section to the Standard Model prediction is μ^ττ_VH = 1.28 +0.30/−0.29 (stat.) +0.25/−0.21 (syst.). This result represents the most accurate measurement of the VH(ττ) process achieved to date.

    Differential cross-sections for events with missing transverse momentum and jets measured with the ATLAS detector in 13 TeV proton-proton collisions

    Measurements of inclusive, differential cross-sections for the production of events with missing transverse momentum in association with jets in proton-proton collisions at √s = 13 TeV are presented. The measurements are made with the ATLAS detector using an integrated luminosity of 140 fb−1 and include measurements of dijet distributions in a region in which vector-boson fusion processes are enhanced. They are unfolded to correct for detector resolution and efficiency within the fiducial acceptance, and are designed to allow robust comparisons with a wide range of theoretical predictions. A measurement of differential cross-sections for the Z → νν process is made. The measurements are generally well described by Standard Model predictions, except for the dijet invariant mass distribution. Auxiliary measurements of the hadronic system recoiling against isolated leptons and photons are also made in the same phase space. Ratios between the measured distributions are then derived, to take advantage of cancellations in modelling effects and in some of the major systematic uncertainties. These measurements are sensitive to new phenomena and provide a mechanism for easily setting constraints on phenomenological models. To illustrate the robustness of the approach, these ratios are compared with two common dark matter models, where the constraints derived from the measurement are comparable to those set by dedicated detector-level searches.