
    Theory of Electron-Phonon Dynamics in Insulating Nanoparticles

    We discuss the rich vibrational dynamics of nanometer-scale semiconducting and insulating crystals as probed by localized electronic impurity states, with an emphasis on nanoparticles that are only weakly coupled to their environment. Two principal regimes of electron-phonon dynamics are distinguished, and a brief survey of vibrational-mode broadening mechanisms is presented. Recent work on the effects of mechanical interaction with the environment is discussed.
    Comment: RevTeX

    Variability in visualization of latent fingermarks developed with 1,2-indanedione–zinc chloride

    Amino acid variability in sweat may affect the ability of amino acid-sensitive fingermark reagents to successfully develop all latent fingermarks within a large population. There has been some speculation that age, gender, or prior activity may be the cause for differences in the amino acid profile within a population. Latent fingermarks from 120 donors were collected and treated with 1,2-indanedione–zinc chloride. Grades were given to treated samples based upon their initial color and resultant luminescent properties. Degradation of developed prints over three years was also assessed by regrading all samples and comparing the results to the initial grades. Statistical analyses, such as the Mann-Whitney U test, revealed a correlation between the grade and the age of the developed print, the age of the donor, and the washing of hands. However, no link was found between the food consumption or gender of the donor and the grade.
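    The Mann-Whitney U test used above compares two groups of ordinal grades without assuming normality. A minimal pure-Python sketch of the statistic, on invented grade data (the donor groups and grades here are hypothetical, not values from the study):

```python
# Minimal Mann-Whitney U statistic on hypothetical fingermark grades
# (integer development grades for two donor groups); illustrative only.

def rank_with_ties(values):
    """Return 1-based ranks for a list, averaging tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # average of the 1-based tied ranks
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(sample_a, sample_b):
    combined = list(sample_a) + list(sample_b)
    ranks = rank_with_ties(combined)
    n_a, n_b = len(sample_a), len(sample_b)
    r_a = sum(ranks[:n_a])                # rank sum of group A
    u_a = r_a - n_a * (n_a + 1) / 2.0     # U statistic for group A
    u_b = n_a * n_b - u_a                 # complementary statistic
    return min(u_a, u_b)

# Hypothetical development grades: washed vs. unwashed hands
washed = [0, 1, 1, 2, 2, 3]
unwashed = [2, 3, 3, 4, 4, 4]
print(mann_whitney_u(washed, unwashed))  # small U suggests differing grades
```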

    On the Verge of One Petabyte - the Story Behind the BaBar Database System

    The BaBar database has pioneered the use of a commercial ODBMS within the HEP community. The unique object-oriented architecture of Objectivity/DB has made it possible to manage over 700 terabytes of production data generated since May '99, making the BaBar database the world's largest known database. The ongoing development includes new features, addressing the ever-increasing luminosity of the detector as well as other changing physics requirements. Significant efforts are focused on reducing space requirements and operational costs. The paper discusses our experience with developing a large-scale database system, emphasizing universal aspects which may be applied to any large-scale system, independently of the underlying technology used.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 6 pages. PSN MOKT01

    The Millennium Galaxy Catalogue: The connection between close pairs and asymmetry; implications for the galaxy merger rate

    We compare the use of galaxy asymmetry and pair proximity for measuring galaxy merger fractions and rates for a volume-limited sample of 3184 galaxies with -21 < M(B) - 5 log h < -18 mag. and 0.010 < z < 0.123 drawn from the Millennium Galaxy Catalogue. Our findings are that: (i) Galaxies in close pairs are generally more asymmetric than isolated galaxies, and the degree of asymmetry increases for closer pairs. At least 35% of close pairs (with projected separation of less than 20 h^{-1} kpc and velocity difference of less than 500 km s^{-1}) show significant asymmetry and are therefore likely to be physically bound. (ii) Among asymmetric galaxies, we find that at least 80% are either interacting systems or merger remnants. However, a significant fraction of galaxies initially identified as asymmetric are contaminated by nearby stars or are fragmented by the source extraction algorithm. Merger rates calculated via asymmetry indices need careful attention in order to remove the above sources of contamination, but are very reliable once this is carried out. (iii) Close pairs and asymmetries represent two complementary methods of measuring the merger rate. Galaxies in close pairs identify future mergers, occurring within the dynamical friction timescale, while asymmetries are sensitive to the immediate pre-merger phase and identify remnants. (iv) The merger fraction derived via the close pair fraction and asymmetries is about 2%, for a merger rate of (5.2 +/- 1.0) x 10^{-4} h^3 Mpc^{-3} Gyr^{-1}. These results are marginally consistent with theoretical simulations (depending on the merger timescale), but imply a flat evolution of the merger rate with redshift up to z ~ 1.
    Comment: 10 pages, 10 figures, emulateapj format. ApJ, accepted
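    The conversion from a close-pair fraction to a volume merger rate is simple arithmetic once a number density and a merger timescale are adopted. A back-of-the-envelope sketch; the number density and timescale below are illustrative assumptions, not values from the paper:

```python
# Back-of-the-envelope volume merger rate from a close-pair fraction.
# n_gal and t_merge are assumed placeholder values, not from the paper.

f_pair = 0.02      # fraction of galaxies in merging close pairs
n_gal = 0.013      # galaxy number density, h^3 Mpc^-3 (assumed)
t_merge = 0.5      # dynamical-friction merger timescale, Gyr (assumed)

# Each merging pair produces one merger, so halve the per-galaxy fraction.
merger_rate = 0.5 * f_pair * n_gal / t_merge  # h^3 Mpc^-3 Gyr^-1
print(f"merger rate ~ {merger_rate:.1e} h^3 Mpc^-3 Gyr^-1")
```

    Plugging in a measured pair fraction with a better-motivated timescale (and its uncertainty) is what turns the observed 2% fraction into a rate of order 10^{-4} h^3 Mpc^{-3} Gyr^{-1}.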

    The Origin of Neutral Hydrogen Clouds in Nearby Galaxy Groups: Exploring the Range Of Galaxy Interactions

    We combine high resolution N-body simulations with deep observations of neutral hydrogen (HI) in nearby galaxy groups in order to explore two well-known theories of HI cloud formation: HI stripping by galaxy interactions and dark matter minihalos with embedded HI gas. This paper presents new data from three galaxy groups, Canes Venatici I, NGC 672, and NGC 45, and assembles data from our previous galaxy group campaign to generate a rich HI cloud archive to compare to our simulated data. We find no HI clouds in the Canes Venatici I, NGC 672, or NGC 45 galaxy groups. We conclude that HI clouds in our detection space are most likely to be generated through recent, strong galaxy interactions. We find no evidence of HI clouds associated with dark matter halos above M_HI = 10^6 M_Sun, within +/- 700 km/s of galaxies, and within 50 kpc projected distance of galaxies.
    Comment: 35 pages, 10 figures, AJ accepted

    Optimization of Software on High Performance Computing Platforms for the LUX-ZEPLIN Dark Matter Experiment

    High Energy Physics experiments like the LUX-ZEPLIN dark matter experiment face unique challenges when running their computation on High Performance Computing resources. In this paper, we describe some strategies to optimize memory usage of simulation codes with the help of profiling tools. We employed this approach and achieved a memory reduction of 10-30%. While this has been performed in the context of the LZ experiment, it has wider applicability to other HEP experimental codes that face these challenges on modern computer architectures.
    Comment: Contribution to Proceedings of CHEP 2019, Nov 4-8, Adelaide, Australia
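    The profiling-guided workflow described above can be illustrated in miniature. The LZ simulation codes are not Python, but the standard-library tracemalloc module shows the same idea of locating allocation hotspots before optimizing; the build_hits function is a hypothetical stand-in for a simulation step, not code from the experiment:

```python
# Sketch: find memory allocation hotspots with Python's tracemalloc
# before optimizing. build_hits is a made-up stand-in for a simulation
# step that allocates one object per simulated detector hit.
import tracemalloc

def build_hits(n):
    # Deliberately wasteful: one dict per hit. A real optimization pass
    # might replace this with packed arrays once profiling flags it.
    return [{"x": i * 0.1, "y": i * 0.2, "energy": i * 1.5} for i in range(n)]

tracemalloc.start()
hits = build_hits(100_000)
snapshot = tracemalloc.take_snapshot()
tracemalloc.stop()

# Report the top allocation sites by total size; the biggest entries
# are the candidates for memory optimization.
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```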

    External Quality Assessment Schemes for Biomarker Testing in Oncology: Comparison of Performance between Formalin-Fixed, Paraffin-Embedded-Tissue and Cell-Free Tumor DNA in Plasma

    Liquid biopsies have emerged as a useful addition to tissue biopsies in molecular pathology. The literature has shown lower laboratory performance when a new method of variant analysis is introduced. This study evaluated the differences in variant analysis between tissue and plasma samples after the introduction of liquid biopsy in molecular analysis. Data from a pilot external quality assessment scheme for the detection of molecular variants in plasma samples and from external quality assessment schemes for the detection of molecular variants in tissue samples were collected. Laboratory performance and error rates by sample were compared between matrices for variants present in both scheme types. Results showed lower overall performance [65.6% (n = 276) versus 89.2% (n = 1607)] and higher error rates [21.0% to 43.5% (n = 138) versus 8.7% to 16.7% (n = 234 to 689)] for the detection of variants in plasma compared to tissue, respectively. In the plasma samples, performance was decreased for variants with an allele frequency of 1% compared to 5% [56.5% (n = 138) versus 74.6% (n = 138)]. The implementation of liquid biopsy in the detection of circulating tumor DNA in plasma was associated with poor laboratory performance. It is important both to apply optimal detection methods and to extensively validate new methods for testing circulating tumor DNA before treatment decisions are made.
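    The size of the gap between the quoted plasma and tissue performance rates can be gauged with a standard two-proportion z-test. A minimal sketch using the headline counts above; reconstructing success counts by rounding the percentages is an assumption, and the paper itself uses other statistics:

```python
# Two-proportion z-test comparing plasma vs. tissue scheme performance.
# Success counts are reconstructed from the quoted percentages (assumed).
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 65.6% of 276 plasma results vs. 89.2% of 1607 tissue results correct
z = two_proportion_z(round(0.656 * 276), 276, round(0.892 * 1607), 1607)
print(f"z = {z:.1f}")  # strongly negative: plasma performance is lower
```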

    Galaxy and Mass Assembly (GAMA): merging galaxies and their properties

    We derive the close pair fractions and volume merger rates for galaxies in the Galaxy and Mass Assembly (GAMA) survey with −23 < Mr < −17 (ΩM = 0.27, ΩΛ = 0.73, H0 = 100 km s−1 Mpc−1) at 0.01 < z < 0.22 (look-back time of <2 Gyr). The merger fraction is approximately 1.5 per cent Gyr−1 at all luminosities (assuming 50 per cent of pairs merge) and the volume merger rate is ≈3.5 × 10−4 Mpc−3 Gyr−1. We examine how the merger rate varies by luminosity and morphology. Dry mergers (between red/spheroidal galaxies) are found to be uncommon and to decrease with decreasing luminosity. Fainter mergers are wet, between blue/discy galaxies. Damp mergers (one of each type) follow the average of dry and wet mergers. In the brighter luminosity bin (−23 < Mr < −20), the merger rate evolution is flat, irrespective of colour or morphology, out to z ∼ 0.2. The makeup of the merging population does not appear to change over this redshift range. Galaxy growth by major mergers appears comparatively unimportant, and dry mergers are unlikely to be significant in the buildup of the red sequence over the past 2 Gyr. We compare the colour, morphology, environmental density and degree of activity (BPT class; Baldwin, Phillips & Terlevich) of galaxies in pairs to those of more isolated objects in the same volume. Galaxies in close pairs tend to be both redder and slightly more spheroid dominated than the comparison sample. We suggest that this may be due to ‘harassment’ in multiple previous passes prior to the current close interaction. Galaxy pairs do not appear to prefer significantly denser environments. There is no evidence of an enhancement in the AGN fraction in pairs, compared to other galaxies in the same volume.

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material.
    Comment: 72 pages

    Accounts from developers of generic health state utility instruments explain why they produce different QALYs: a qualitative study

    Purpose and setting: Despite the label generic health state utility instruments (HSUIs), empirical evidence shows that different HSUIs generate different estimates of Health-Related Quality of Life (HRQoL) in the same person. Once a HSUI is used to generate a QALY, the difference between HSUIs is often ignored, and decision-makers act as if 'a QALY is a QALY is a QALY'. Complementing evidence that different generic HSUIs produce different empirical values, this study addresses an important gap by exploring how HSUIs differ, and the processes that produced these differences. Fifteen developers of six generic HSUIs used for estimating the quality-of-life component of QALYs -- the Quality of Well-Being (QWB) scale; the 15 Dimension instrument (15D); the Health Utilities Index (HUI); the EuroQol EQ-5D; the Short Form-6 Dimension (SF-6D); and the Assessment of Quality of Life (AQoL) -- were interviewed in 2012-2013. Principal findings: We identified key factors involved in shaping each instrument, and the rationale for similarities and differences across measures. While HSUIs have a common purpose, they are distinctly discrete constructs. Developers recalled complex developmental processes, grounded in unique histories, and these backgrounds help to explain the different pathways taken at key decision points during HSUI development. The bases for the HSUIs were commonly not conceptually equivalent: differently valued concepts and goals drove instrument design and development, according to each HSUI's defined purpose. Developers drew from different sources of knowledge to develop their measure depending on their conceptualisation of HRQoL. Major conclusions/contribution to knowledge: We generated and analysed first-hand accounts of the development of the HSUIs to provide insight, beyond face value, into how and why such instruments differ. The findings enhance our understanding of why the six instruments developed the way they did, from the perspective of key developers of those instruments. Importantly, we provide additional, original explanation for why a QALY is not a QALY is not a QALY.