56,168 research outputs found

    Host Galaxy Properties and Hubble Residuals of Type Ia Supernovae from the Nearby Supernova Factory

    Full text link
    We examine the relationship between Type Ia Supernova (SN Ia) Hubble residuals and the properties of their host galaxies using a sample of 115 SNe Ia from the Nearby Supernova Factory (SNfactory). We use host galaxy stellar masses and specific star-formation rates fitted from photometry for all hosts, as well as gas-phase metallicities for a subset of 69 star-forming (non-AGN) hosts, to show that the SN Ia Hubble residuals correlate with each of these host properties. With these data we find new evidence for a correlation between SN Ia intrinsic color and host metallicity. When we combine our data with those of other published SN Ia surveys, we find the difference between mean SN Ia brightnesses in low and high mass hosts is 0.077 ± 0.014 mag. When viewed in narrow (0.2 dex) bins of host stellar mass, the data reveal apparent plateaus of Hubble residuals at high and low host masses with a rapid transition over a short mass range (9.8 ≤ log(M_*/M_Sun) ≤ 10.4). Although metallicity has been a favored interpretation for the origin of the Hubble residual trend with host mass, we illustrate how dust in star-forming galaxies and mean SN Ia progenitor age both evolve along the galaxy mass sequence, thereby presenting equally viable explanations for some or all of the observed SN Ia host bias. Comment: 20 pages, 11 figures, accepted for publication in ApJ
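
    A minimal sketch of the host-mass step analysis the abstract describes, using synthetic placeholder data rather than the paper's pipeline: Hubble residuals are averaged in narrow 0.2 dex bins of host stellar mass, and a smooth logistic step is fitted across the transition region. The ~0.077 mag step size is taken from the abstract; all arrays and fit settings below are illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic stand-ins for the paper's data: host masses and Hubble residuals.
    rng = np.random.default_rng(0)
    log_mass = rng.uniform(8.5, 11.5, 500)            # log10(M*/M_Sun)
    truth = np.where(log_mass > 10.1, -0.077, 0.0)    # ~0.077 mag step (abstract)
    resid = truth + rng.normal(0.0, 0.15, 500)        # add intrinsic scatter (mag)

    # Mean residual in narrow (0.2 dex) bins of host stellar mass.
    edges = np.arange(8.5, 11.7, 0.2)
    idx = np.digitize(log_mass, edges) - 1
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([resid[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(len(centers))])

    def logistic_step(m, low, high, m0, width):
        """Two plateaus joined by a smooth transition centered at mass m0."""
        return low + (high - low) / (1.0 + np.exp(-(m - m0) / width))

    good = np.isfinite(means)
    popt, _ = curve_fit(logistic_step, centers[good], means[good],
                        p0=[0.0, -0.08, 10.1, 0.1])
    print(f"fitted step: {popt[1] - popt[0]:+.3f} mag at log(M*/M_Sun) = {popt[2]:.2f}")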

    ASCR/HEP Exascale Requirements Review Report

    Full text link
    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude greater -- and in some cases more -- than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. Comment: 77 pages, 13 figures; draft report, subject to further revision

    Spartan Daily, March 19, 1985

    Get PDF
    Volume 84, Issue 35

    Parsel: A (De-)compositional Framework for Algorithmic Reasoning with Language Models

    Full text link
    Despite recent success in large language model (LLM) reasoning, LLMs struggle with hierarchical multi-step reasoning tasks like generating complex programs. For these tasks, humans often start with a high-level algorithmic design and implement each part gradually. We introduce Parsel, a framework enabling automatic implementation and validation of complex algorithms with code LLMs, taking hierarchical function descriptions in natural language as input. We show that Parsel can be used across domains requiring hierarchical reasoning, including program synthesis, robotic planning, and theorem proving. We show that LLMs generating Parsel solve more competition-level problems in the APPS dataset, resulting in pass rates that are over 75% higher than prior results from directly sampling AlphaCode and Codex, while often using a smaller sample budget. We also find that LLM-generated robotic plans using Parsel as an intermediate language are more than twice as likely to be considered accurate as directly generated plans. Lastly, we explore how Parsel addresses LLM limitations and discuss how Parsel may be useful for human programmers. Comment: new quantitative details
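
    The (de-)compositional idea the abstract describes can be illustrated with a small sketch; this is not the actual Parsel implementation, and every name here is a hypothetical placeholder. A program is a tree of natural-language function descriptions, a code LLM (stubbed below) proposes candidate implementations for each node, and a composed program is accepted only if it passes the attached tests.

    from dataclasses import dataclass, field
    from itertools import product

    @dataclass
    class Fn:
        name: str
        desc: str                                    # natural-language spec
        children: list = field(default_factory=list)
        candidates: list = field(default_factory=list)

    def nodes(fn):
        """Depth-first list of all functions in the decomposition tree."""
        return [fn] + [n for c in fn.children for n in nodes(c)]

    def generate(fn):
        """Stub for a code-LLM call that would sample implementations of fn.desc."""
        fn.candidates = [f"def {fn.name}(x):\n    return x  # placeholder body"]

    def assemble(tree, pick):
        """Concatenate one chosen candidate per function into a single program."""
        return "\n".join(n.candidates[pick[n.name]] for n in reversed(nodes(tree)))

    def passes(program, tests):
        """Validate a composed program against (input, expected-output) tests."""
        scope = {}
        exec(program, scope)
        return all(scope["solve"](x) == y for x, y in tests)

    tree = Fn("solve", "Return the input unchanged.")
    for fn in nodes(tree):
        generate(fn)
    # Search over every combination of per-function candidates.
    for combo in product(*[range(len(n.candidates)) for n in nodes(tree)]):
        pick = dict(zip([n.name for n in nodes(tree)], combo))
        if passes(assemble(tree, pick), tests=[(1, 1), ("a", "a")]):
            print("accepted a passing composition")
            break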

    Measurement of the inclusive isolated prompt photon cross section in pp collisions at √s=7 TeV with the ATLAS detector

    Get PDF
    A measurement of the cross section for the inclusive production of isolated prompt photons in pp collisions at a center-of-mass energy √s=7 TeV is presented. The measurement covers the pseudorapidity ranges |ηγ|<1.37 and 1.52≤|ηγ|<1.81 in the transverse energy range 15≤ETγ<100 GeV. The results are based on an integrated luminosity of 880 nb-1, collected with the ATLAS detector at the Large Hadron Collider. Photon candidates are identified by combining information from the calorimeters and from the inner tracker. Residual background in the selected sample is estimated from data based on the observed distribution of the transverse isolation energy in a narrow cone around the photon candidate. The results are compared to predictions from next-to-leading-order perturbative QCD calculations.
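
    A minimal sketch of the isolation variable described above, with illustrative toy arrays and cone size rather than ATLAS data or software: the transverse isolation energy is the scalar sum of calorimeter transverse energy inside a narrow cone ΔR = sqrt(Δη² + Δφ²) around the photon candidate.

    import numpy as np

    def isolation_et(photon_eta, photon_phi, cell_eta, cell_phi, cell_et, r_cone=0.4):
        """Sum calorimeter-cell E_T inside a cone of radius r_cone around the photon."""
        dphi = np.abs(cell_phi - photon_phi)
        dphi = np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)  # wrap phi into [0, pi]
        dr = np.hypot(cell_eta - photon_eta, dphi)
        return cell_et[dr < r_cone].sum()

    # Toy calorimeter cells around a photon candidate at (eta, phi) = (0.5, 1.0).
    rng = np.random.default_rng(1)
    eta = rng.uniform(-2.5, 2.5, 1000)
    phi = rng.uniform(-np.pi, np.pi, 1000)
    et = rng.exponential(0.5, 1000)  # GeV
    print(f"transverse isolation energy = {isolation_et(0.5, 1.0, eta, phi, et):.2f} GeV")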

    Cyber security investigation for Raspberry Pi devices

    Get PDF
    Big Data applications on the cloud are growing rapidly. When the cloud is attacked, investigation relies on digital forensic evidence. This paper proposes data collection via Raspberry Pi devices in a healthcare setting. The significance of this work is that it could be expanded into a digital device array that takes big data security issues into account, with many potential impacts in the health area. The field of Digital Forensics Science has been tagged as a reactive science by some, who believe that research and study in the field often arise from the need to respond to events that bring about the need for investigation; this work was carried out as proactive research that adds knowledge to the field of Digital Forensic Science. The Raspberry Pi is a cost-effective, pocket-sized computer that has gained global recognition since its development in 2008, with widespread usage of the device for different computing purposes. The Raspberry Pi can potentially be a cyber security device that relates to forensic investigation in the near future. This work used a systematic approach to study the structure and operation of the device, and established security issues that its widespread usage can pose in settings such as healthcare or smart cities. Furthermore, its evidential information applied in security will be useful in the event that the device becomes a subject of digital forensic investigation in the foreseeable future. In healthcare systems, PII (personally identifiable information) is a very important issue. When a Raspberry Pi plays a processing role, its security is vital; consequently, digital forensic investigation of Raspberry Pis becomes necessary.
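
    As an illustration of the evidential-integrity step such an investigation relies on (this is not code from the paper, and the image filename is hypothetical): a forensic acquisition of a Raspberry Pi SD card is typically followed by recording a cryptographic hash of the image, so the evidence can later be shown to be unaltered.

    import hashlib

    def hash_image(path, algo="sha256", chunk=1 << 20):
        """Stream a disk image in 1 MiB chunks and return its hex digest."""
        h = hashlib.new(algo)
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    # e.g. after imaging the card with `dd if=/dev/sdX of=pi_sdcard.img bs=4M`
    # (device name hypothetical); record the digest in the case notes.
    print(hash_image("pi_sdcard.img"))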

    Sixth Annual Users' Conference

    Get PDF
    Conference papers and presentation outlines which address the use of the Transportable Applications Executive (TAE) and its various applications programs are compiled. Emphasis is given to the design of the user interface and image processing workstation in general. Alternate ports of TAE and TAE subsystems are also covered.

    JUNO Conceptual Design Report

    Get PDF
    The Jiangmen Underground Neutrino Observatory (JUNO) is proposed to determine the neutrino mass hierarchy using an underground liquid scintillator detector. It is located 53 km away from both the Yangjiang and Taishan Nuclear Power Plants in Guangdong, China. The experimental hall, spanning more than 50 meters, is under a granite mountain with over 700 m of overburden. Within six years of running, the detection of reactor antineutrinos can resolve the neutrino mass hierarchy at a confidence level of 3-4σ, and determine the neutrino oscillation parameters sin²θ₁₂, Δm²₂₁, and |Δm²ₑₑ| to an accuracy of better than 1%. The JUNO detector can also be used to study terrestrial and extra-terrestrial neutrinos and new physics beyond the Standard Model. The central detector contains 20,000 tons of liquid scintillator in an acrylic sphere 35 m in diameter. ~17,000 508-mm-diameter PMTs with high quantum efficiency provide ~75% optical coverage. The current choice of the liquid scintillator is linear alkyl benzene (LAB) as the solvent, plus PPO as the scintillation fluor and a wavelength shifter (bis-MSB). The number of detected photoelectrons per MeV is larger than 1,100, and the energy resolution is expected to be 3% at 1 MeV. The calibration system is designed to deploy multiple sources to cover the entire energy range of reactor antineutrinos, and to achieve full-volume position coverage inside the detector. The veto system is used for muon detection and for the study and reduction of muon-induced backgrounds. It consists of a water Cherenkov detector and a Top Tracker system. The readout system, the detector control system, and the offline system ensure efficient and stable data acquisition and processing. Comment: 328 pages, 211 figures
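
    A back-of-envelope consistency check, not taken from the report: if the energy resolution is dominated by photoelectron counting statistics, the fractional resolution is roughly 1/sqrt(N_pe), so more than 1,100 detected photoelectrons per MeV is indeed consistent with the quoted ~3% at 1 MeV.

    import math

    def stochastic_resolution(npe_per_mev, energy_mev):
        """Poisson-limited fractional energy resolution at a given energy."""
        return 1.0 / math.sqrt(npe_per_mev * energy_mev)

    print(f"{stochastic_resolution(1100, 1.0):.1%} at 1 MeV")  # ~3.0%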

    Spartan Daily, April 19, 1985

    Get PDF
    Volume 84, Issue 52