62 research outputs found

    Coupling of thermal and mass diffusion in regular binary thermal lattice-gases

    We have constructed a regular binary thermal lattice-gas in which thermal diffusion and mass diffusion are coupled and form two nonpropagating diffusive modes. The power spectrum is shown to be similar in structure to that of real fluids, in which the central peak becomes a combination of coupled entropy and concentration contributions. Our theoretical findings for the power spectra are confirmed by computer simulations performed on this model. Comment: 5 pages including 3 figures in RevTeX
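    As an illustrative aside (not part of the abstract above, and using generic notation rather than the authors'), the coupling described here is the standard picture in which temperature and concentration fluctuations obey two linearly coupled diffusion equations; diagonalizing the diffusion matrix yields two non-propagating modes, each contributing a Lorentzian centred at zero frequency to the power spectrum:

        \partial_t\,\delta T = D_{TT}\,\nabla^2\delta T + D_{Tc}\,\nabla^2\delta c,
        \qquad
        \partial_t\,\delta c = D_{cT}\,\nabla^2\delta T + D_{cc}\,\nabla^2\delta c,

        S(k,\omega) \;\propto\; \sum_{\pm} A_{\pm}\,
        \frac{D_{\pm}k^{2}}{\omega^{2} + \left(D_{\pm}k^{2}\right)^{2}},

    where D_± are the eigenvalues of the diffusion matrix and A_± are the weights of the coupled entropy and concentration contributions to the central peak.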

    Lattice Gas Automata for Reactive Systems

    Reactive lattice gas automata provide a microscopic approach to the dynamics of spatially distributed reacting systems. After introducing the subject within the wider framework of lattice gas automata (LGA) as a microscopic approach to the phenomenology of macroscopic systems, we describe the reactive LGA in terms of a simple physical picture to show how an automaton can be constructed to capture the essentials of a reactive molecular dynamics scheme. The statistical mechanical theory of the automaton is then developed for diffusive transport and for reactive processes, and a general algorithm is presented for reactive LGA. The method is illustrated by considering applications to bistable and excitable media, oscillatory behavior in reactive systems, chemical chaos and pattern formation triggered by Turing bifurcations. The reactive lattice gas scheme is contrasted with related cellular automaton methods and the paper concludes with a discussion of future perspectives. Comment: to appear in PHYSICS REPORTS, 81 RevTeX pages; uuencoded gzipped PostScript file; figures available from [email protected] or [email protected]
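    To make the construction above concrete, the following is a minimal, self-contained Python sketch of one reactive lattice-gas time step (propagation, collision, probabilistic reaction). The lattice, collision rule and reaction rule are simplified placeholders chosen for illustration; they are not the specific automaton rules developed in the paper.

        # Toy reactive lattice-gas automaton: two species (A, B) on a periodic
        # square lattice, four velocity channels per species, occupation numbers
        # in {0, 1}.  Each step is propagation -> collision -> reaction.
        import numpy as np

        rng = np.random.default_rng(0)
        L, NCH = 64, 4                                   # lattice side, velocity channels
        SHIFTS = [(0, 1), (0, -1), (1, 0), (-1, 0)]      # +x, -x, +y, -y as (row, col)

        # n[species, channel, row, col]
        n = (rng.random((2, NCH, L, L)) < 0.2).astype(np.uint8)

        def propagate(n):
            """Move every particle one site along its velocity (periodic boundaries)."""
            out = np.empty_like(n)
            for c, (dr, dc) in enumerate(SHIFTS):
                out[:, c] = np.roll(n[:, c], shift=(dr, dc), axis=(1, 2))
            return out

        def collide(n):
            """At randomly chosen nodes, rotate all velocities by 90 degrees
            (a crude, particle-conserving stand-in for detailed collision rules)."""
            rotate = rng.random((L, L)) < 0.5
            rotated = n[:, [3, 2, 0, 1]]                 # +x->+y, +y->-x, -x->-y, -y->+x
            return np.where(rotate, rotated, n)

        def react(n, p=0.1):
            """Placeholder local reaction A + B -> 2B: where both species are present
            at a node, convert one A into a B with probability p."""
            has_a = n[0].sum(axis=0) > 0
            has_b = n[1].sum(axis=0) > 0
            fire = has_a & has_b & (rng.random((L, L)) < p)
            for c in range(NCH):
                convert = fire & (n[0, c] == 1) & (n[1, c] == 0)
                n[0, c][convert] = 0
                n[1, c][convert] = 1
                fire &= ~convert                         # convert at most one A per node
            return n

        for _ in range(10):                              # a few automaton steps
            n = react(collide(propagate(n)))
        print("A:", int(n[0].sum()), " B:", int(n[1].sum()))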

    Priorities for health economic methodological research: Results of an expert consultation

    Background: The importance of economic evaluation in decision making is growing with increasing budgetary pressures on health systems. Diverse economic evidence is available for a range of interventions across national contexts within Europe, but little attention has been given to identifying evidence gaps that, if filled, could contribute to more efficient allocation of resources. One objective of the Research Agenda for Health Economic Evaluation project is to determine the most important methodological evidence gaps for the ten highest-burden conditions in the European Union (EU), and to suggest ways of filling these gaps. Methods: The highest-burden conditions in the EU by Disability-Adjusted Life Years were determined using the Global Burden of Disease study. Clinical interventions were identified for each condition based on published guidelines, and economic evaluations indexed in MEDLINE were mapped to each intervention. A panel of public health and health economics experts discussed the evidence during a workshop and identified evidence gaps. Results: The literature analysis contributed to identifying cross-cutting methodological and technical issues, which were considered by the expert panel to derive methodological research priorities. Conclusions: The panel suggests a research agenda for health economics which incorporates the use of real-world evidence in the assessment of new and existing interventions; increased understanding of cost-effectiveness according to patient characteristics beyond the “-omics” approach to inform both investment and disinvestment decisions; methods for assessment of complex interventions; improved cross-talk between economic evaluations from health and other sectors; early health technology assessment; and standardized, transferable approaches to economic modeling.

    GaAs-Based Superluminescent Light-Emitting Diodes with 290-nm Emission Bandwidth by Using Hybrid Quantum Well/Quantum Dot Structures

    A high-performance superluminescent light-emitting diode (SLD) based upon a hybrid quantum well (QW)/quantum dot (QD) active element is reported and is assessed with regard to the resolution obtainable in an optical coherence tomography system. We report on the appearance of strong emission from a higher-order optical transition of the QW in a hybrid QW/QD structure. This additional emission-broadening method contributes significantly to obtaining a 3-dB linewidth of 290 nm centered at 1200 nm, with an output power of 2.4 mW at room temperature.
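    For context, and as a rough estimate only (it assumes a Gaussian source spectrum, which the broadened hybrid QW/QD spectrum is not), the axial resolution of an optical coherence tomography system scales inversely with the source bandwidth:

        \Delta z \;=\; \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda}
        \;\approx\; 0.44 \times \frac{(1200\ \mathrm{nm})^{2}}{290\ \mathrm{nm}}
        \;\approx\; 2.2\ \mu\mathrm{m},

    which is why a 290-nm bandwidth is attractive for micrometre-scale OCT imaging.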

    Cost-Effectiveness of Magnetic Resonance Imaging with a New Contrast Agent for the Early Diagnosis of Alzheimer's Disease

    Background: Used as contrast agents for brain magnetic resonance imaging (MRI), markers for beta-amyloid deposits might allow early diagnosis of Alzheimer’s disease (AD). We evaluated the cost-effectiveness of such a diagnostic test, MRI+CLP (contrastophore-linker-pharmacophore), should it become clinically available. Methodology/Principal Findings: We compared the cost-effectiveness of MRI+CLP to that of standard diagnosis using currently available cognition tests and of standard MRI, and investigated the impact of a hypothetical treatment effective in early AD. The primary analysis was based on the current French context for 70-year-old patients with Mild Cognitive Impairment (MCI). In alternative “screen and treat” scenarios, we analyzed the consequences of systematic screenings of over-60 individuals (either population-wide or restricted to the ApoE4 genotype population). We used a Markov model of AD progression; model parameters, as well as incurred costs and quality-of-life weights in France, were taken from the literature. We performed univariate and probabilistic multivariate sensitivity analyses. The base-case preferred strategy was the standard MRI diagnosis strategy. In the primary analysis, however, MRI+CLP could become the preferred strategy under a wide array of scenarios involving lower cost and/or higher sensitivity or specificity. By contrast, in the “screen and treat” analyses, the probability of MRI+CLP becoming the preferred strategy remained lower than 5%. Conclusions/Significance: It is thought that anti-beta-amyloid compounds might halt the development of dementia…
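    As an illustration of the kind of model described above (a Markov cohort model of AD progression with discounted costs and quality-of-life weights), the Python sketch below compares two diagnostic strategies and computes an incremental cost-effectiveness ratio. Every state, transition probability, cost, utility and discount rate is a made-up placeholder, not a value from the study.

        # Toy Markov cohort model: a 70-year-old MCI cohort progresses annually
        # through AD states; a new diagnostic (MRI+CLP) plus a hypothetical early
        # treatment slows MCI -> mild AD conversion.  All numbers are placeholders.
        import numpy as np

        STATES = ["MCI", "mild AD", "moderate AD", "severe AD", "dead"]

        P_standard = np.array([          # annual transition probabilities (rows sum to 1)
            [0.80, 0.15, 0.00, 0.00, 0.05],
            [0.00, 0.70, 0.22, 0.00, 0.08],
            [0.00, 0.00, 0.65, 0.25, 0.10],
            [0.00, 0.00, 0.00, 0.80, 0.20],
            [0.00, 0.00, 0.00, 0.00, 1.00],
        ])
        P_new = P_standard.copy()
        P_new[0] = [0.87, 0.08, 0.00, 0.00, 0.05]        # slower conversion under early treatment

        cost = np.array([2000., 8000., 15000., 30000., 0.])   # annual cost per state (EUR)
        utility = np.array([0.75, 0.60, 0.45, 0.25, 0.00])    # quality-of-life weights
        test_cost = 700.                                       # one-off cost of the MRI+CLP test

        def run(P, years=20, discount=0.035):
            """Discounted expected cost and QALYs for a cohort starting in MCI."""
            dist = np.array([1., 0., 0., 0., 0.])
            total_cost = total_qaly = 0.0
            for t in range(years):
                d = 1.0 / (1.0 + discount) ** t
                total_cost += d * (dist @ cost)
                total_qaly += d * (dist @ utility)
                dist = dist @ P
            return total_cost, total_qaly

        c_std, q_std = run(P_standard)
        c_new, q_new = run(P_new)
        icer = (c_new + test_cost - c_std) / (q_new - q_std)
        print(f"ICER of MRI+CLP vs standard diagnosis: {icer:,.0f} EUR per QALY gained")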

    A portrait of the Higgs boson by the CMS experiment ten years after the discovery

    A Correction to this paper has been published (18 October 2023): https://doi.org/10.1038/s41586-023-06164-8.
    Data availability: Tabulated results are provided in the HEPData record for this analysis. Release and preservation of data used by the CMS Collaboration as the basis for publications is guided by the CMS data preservation, re-use and open access policy.
    Code availability: The CMS core software is publicly available on GitHub (https://github.com/cms-sw/cmssw).
    In July 2012, the ATLAS and CMS collaborations at the CERN Large Hadron Collider announced the observation of a Higgs boson at a mass of around 125 gigaelectronvolts. Ten years later, and with the data corresponding to the production of a 30-times larger number of Higgs bosons, we have learnt much more about the properties of the Higgs boson. The CMS experiment has observed the Higgs boson in numerous fermionic and bosonic decay channels, established its spin–parity quantum numbers, determined its mass and measured its production cross-sections in various modes. Here the CMS Collaboration reports the most up-to-date combination of results on the properties of the Higgs boson, including the most stringent limit on the cross-section for the production of a pair of Higgs bosons, on the basis of data from proton–proton collisions at a centre-of-mass energy of 13 teraelectronvolts. Within the uncertainties, all these observations are compatible with the predictions of the standard model of elementary particle physics. Much evidence points to the fact that the standard model is a low-energy approximation of a more comprehensive theory. Several of the standard model issues originate in the sector of Higgs boson physics. An order of magnitude larger number of Higgs bosons, expected to be examined over the next 15 years, will help deepen our understanding of this crucial sector.
    Funding: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES and BNSF (Bulgaria); CERN; CAS, MoST, and NSFC (China); MINCIENCIAS (Colombia); MSES and CSF (Croatia); RIF (Cyprus); SENESCYT (Ecuador); MoER, ERC PUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRI (Greece); NKFIH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MES and NSC (Poland); FCT (Portugal); MESTD (Serbia); MCIN/AEI and PCTI (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); MHESI and NSTDA (Thailand); TUBITAK and TENMAK (Turkey); NASU (Ukraine); STFC (United Kingdom); DOE and NSF (USA). Individuals have received support from the Marie-Curie programme and the European Research Council and Horizon 2020 Grant, contract Nos. 675440, 724704, 752730, 758316, 765710, 824093, 884104, and COST Action CA16108 (European Union); the Leventis Foundation; the Alfred P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation à la Recherche dans l’Industrie et dans l’Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the “Excellence of Science – EOS” – be.h project n. 30820817; the Beijing Municipal Science & Technology Commission, No. 
Z191100007219010; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Stavros Niarchos Foundation (Greece); the Deutsche Forschungsgemeinschaft (DFG), under Germany’s Excellence Strategy – EXC 2121 “Quantum Universe” – 390833306, and under project number 400140256 - GRK2497; the Hungarian Academy of Sciences, the New National Excellence Program - ÚNKP, the NKFIH research grants K 124845, K 124850, K 128713, K 128786, K 129058, K 131991, K 133046, K 138136, K 143460, K 143477, 2020-2.2.1-ED-2021-00181, and TKP2021-NKTA-64 (Hungary); the Council of Science and Industrial Research, India; the Latvian Council of Science; the Ministry of Education and Science, project no. 2022/WK/14, and the National Science Center, contracts Opus 2021/41/B/ST2/01369 and 2021/43/B/ST2/01552 (Poland); the Fundação para a Ciência e a Tecnologia, grant CEECIND/01334/2018 (Portugal); the National Priorities Research Program by Qatar National Research Fund; MCIN/AEI/10.13039/501100011033, ERDF “a way of making Europe”, and the Programa Estatal de Fomento de la Investigación Científica y Técnica de Excelencia María de Maeztu, grant MDM-2017-0765 and Programa Severo Ochoa del Principado de Asturias (Spain); the Chulalongkorn Academic into Its 2nd Century Project Advancement Project, and the National Science, Research and Innovation Fund via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation, grant B05F650021 (Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA)

    Measurement of the differential tt¯ production cross section as a function of the jet mass and extraction of the top quark mass in hadronic decays of boosted top quarks

    Data Availability: This manuscript has no associated data or the data will not be deposited. [Authors’ comment: Release and preservation of data used by the CMS Collaboration as the basis for publications is guided by the CMS policy as stated in https://cms-docdb.cern.ch/cgibin/PublicDocDB/RetrieveFile?docid=6032&filename=CMSDataPolicyV1.2.pdf&version=2.]
    A measurement of the jet mass distribution in hadronic decays of Lorentz-boosted top quarks is presented. The measurement is performed in the lepton + jets channel of top quark pair production (tt¯) events, where the lepton is an electron or muon. The products of the hadronic top quark decay are reconstructed using a single large-radius jet with transverse momentum greater than 400 GeV. The data were collected with the CMS detector at the LHC in proton-proton collisions and correspond to an integrated luminosity of 138 fb−1. The differential tt¯ production cross section as a function of the jet mass is unfolded to the particle level and is used to extract the top quark mass. The jet mass scale is calibrated using the hadronic W boson decay within the large-radius jet. The uncertainties in the modelling of the final state radiation are reduced by studying angular correlations in the jet substructure. These developments lead to a significant increase in precision, and a top quark mass of 173.06 ± 0.84 GeV.
    Funding: SCOAP3.
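    As a schematic illustration of the final step described above (extracting a mass from an unfolded, binned jet-mass distribution), the Python sketch below performs a chi-square scan over mass hypotheses using entirely synthetic templates and pseudo-data; it is not the CMS analysis, binning, uncertainty model or fit.

        # Toy chi-square scan: compare a binned "measured" jet-mass shape with
        # templates generated for different assumed top masses and pick the best fit.
        import numpy as np

        edges = np.linspace(120., 230., 12)              # jet-mass bin edges (GeV), made up
        centres = 0.5 * (edges[:-1] + edges[1:])

        def template(mtop, width=18.0):
            """Toy normalized jet-mass shape peaking near the assumed top mass."""
            shape = np.exp(-0.5 * ((centres - mtop) / width) ** 2)
            return shape / shape.sum()

        rng = np.random.default_rng(1)
        data = template(172.8) + rng.normal(0.0, 0.004, centres.size)   # pseudo-measurement
        cov = np.diag(np.full(centres.size, 0.004 ** 2))                # toy covariance

        def chi2(mtop):
            r = data - template(mtop)
            return float(r @ np.linalg.solve(cov, r))

        scan = np.arange(168.0, 178.0, 0.05)
        values = np.array([chi2(m) for m in scan])
        best = scan[np.argmin(values)]
        within = scan[values <= values.min() + 1.0]      # chi2_min + 1 interval
        print(f"extracted mass: {best:.2f} GeV "
              f"(interval {within.min():.2f}-{within.max():.2f} GeV)")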

    Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service

    A preprint version of the article is available at: arXiv:2402.15366v2 [physics.ins-det], https://arxiv.org/abs/2402.15366. Comments: Replaced with the published version. Added the journal reference and the DOI. All the figures and tables can be found at https://cms-results.web.cern.ch/cms-results/public-results/publications/MLG-23-001 (CMS Public Pages). Report numbers: CMS-MLG-23-001, CERN-EP-2023-303.
    Data Availability: No datasets were generated or analyzed during the current study.
    Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In the studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. This approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing the throughput performance. We emphasize that the SONIC approach enables high coprocessor usage and the portability to run workflows on different types of coprocessors.
    Funding: SCOAP3. Open access funding provided by CERN (European Organization for Nuclear Research).
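    The as-a-service pattern described above can be illustrated with a short client-side sketch: the CPU workflow prepares inputs and ships each ML inference request to a remote, GPU-backed inference server. NVIDIA Triton is used here purely as a representative inference server, and the server address, model name and tensor names are hypothetical; none of this is taken from the paper. Requires the tritonclient[grpc] Python package.

        # Client-side sketch of inference as a service: heavy ML inference is
        # delegated to a remote coprocessor-backed server while the main workflow
        # stays on the CPU.
        import numpy as np
        import tritonclient.grpc as grpcclient

        client = grpcclient.InferenceServerClient(url="gpu-server.example.org:8001")

        batch = np.random.rand(16, 128).astype(np.float32)   # placeholder input features

        inp = grpcclient.InferInput("INPUT__0", list(batch.shape), "FP32")
        inp.set_data_from_numpy(batch)
        out = grpcclient.InferRequestedOutput("OUTPUT__0")

        # Only this call blocks on the CPU side; the model itself runs on the remote
        # GPU, so many CPU clients can share a small pool of coprocessors.
        result = client.infer(model_name="jet_tagger", inputs=[inp], outputs=[out])
        scores = result.as_numpy("OUTPUT__0")
        print("received inference output with shape", scores.shape)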

    Accuracy versus precision in boosted top tagging with the ATLAS detector

    The identification of top quark decays where the top quark has a large momentum transverse to the beam axis, known as top tagging, is a crucial component in many measurements of Standard Model processes and searches for beyond the Standard Model physics at the Large Hadron Collider. Machine learning techniques have improved the performance of top tagging algorithms, but the size of the systematic uncertainties for all proposed algorithms has not been systematically studied. This paper presents the performance of several machine learning based top tagging algorithms on a dataset constructed from simulated proton-proton collision events measured with the ATLAS detector at √s = 13 TeV. The systematic uncertainties associated with these algorithms are estimated through an approximate procedure that is not meant to be used in a physics analysis, but is appropriate for the level of precision required for this study. The most performant algorithms are found to have the largest uncertainties, motivating the development of methods to reduce these uncertainties without compromising performance. To enable such efforts in the wider scientific community, the datasets used in this paper are made publicly available.
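    As a brief illustration of the performance metric typically quoted for such taggers (and the quantity traded off against systematic uncertainty in this paper), the Python sketch below computes background rejection at a fixed signal efficiency from per-jet tagger scores; the score distributions are synthetic.

        # Background rejection (1 / false-positive rate) at a fixed signal efficiency,
        # computed from synthetic per-jet tagger scores.
        import numpy as np

        rng = np.random.default_rng(0)
        sig = rng.normal(0.8, 0.15, 100_000)     # scores for true top-quark jets
        bkg = rng.normal(0.4, 0.20, 100_000)     # scores for background (QCD) jets

        def rejection_at(eff):
            """Background rejection at the threshold keeping a fraction `eff` of signal."""
            threshold = np.quantile(sig, 1.0 - eff)
            fpr = np.mean(bkg >= threshold)
            return np.inf if fpr == 0 else 1.0 / fpr

        for eff in (0.5, 0.8):
            print(f"rejection at {eff:.0%} signal efficiency: {rejection_at(eff):.1f}")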