
    Identification of galaxy cluster substructures with the Caustic method

    We investigate the power of the caustic technique for identifying substructures of galaxy clusters from optical redshift data alone. The caustic technique is designed to estimate the mass profile of galaxy clusters to radii well beyond the virial radius, where dynamical equilibrium does not hold. Two by-products of this technique are the identification of the cluster members and the identification of the cluster substructures. We test the caustic technique as a substructure detector on two samples of 150 mock redshift surveys of clusters; the clusters are extracted from a large cosmological $N$-body simulation of a $\Lambda$CDM model and have masses of $M_{200} \sim 10^{14}\,h^{-1} M_{\odot}$ and $M_{200} \sim 10^{15}\,h^{-1} M_{\odot}$ in the two samples. We limit our analysis to substructures identified in the simulation with masses larger than $10^{13}\,h^{-1} M_{\odot}$. With mock redshift surveys with 200 galaxies within $3R_{200}$, (1) the caustic technique recovers $\sim 30$-$50\%$ of the real substructures, and (2) $\sim 15$-$20\%$ of the substructures identified by the caustic technique correspond to real substructures of the central cluster, the remaining fraction being low-mass substructures, groups or substructures of clusters in the surrounding region, or chance alignments of unrelated galaxies. These encouraging results show that the caustic technique is a promising approach for investigating the complex dynamics of galaxy clusters. Comment: 13 pages, 15 figures. Accepted for publication in Ap
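
    The completeness and purity figures quoted above ($\sim 30$-$50\%$ of real substructures recovered; $\sim 15$-$20\%$ of detections matching real central-cluster substructures) are the standard detector-versus-truth bookkeeping quantities. Below is a toy Python sketch of how such fractions could be tallied; the IDs and the matches mapping are hypothetical, and the cross-matching step that produces them is not described in the abstract.

        # Toy illustration (not the paper's pipeline): completeness and purity
        # of a substructure detector, given a detected-to-true matching.
        def completeness_and_purity(true_subs, detected_subs, matches):
            matched_true = {matches[d] for d in detected_subs if matches.get(d) is not None}
            completeness = len(matched_true) / len(true_subs)        # fraction of real substructures recovered
            n_genuine = sum(matches.get(d) is not None for d in detected_subs)
            purity = n_genuine / len(detected_subs)                  # fraction of detections that are real
            return completeness, purity

        # Example with made-up numbers:
        true_subs = ["A", "B", "C", "D"]
        detected_subs = ["d1", "d2", "d3"]
        matches = {"d1": "A", "d2": None, "d3": "C"}
        print(completeness_and_purity(true_subs, detected_subs, matches))  # (0.5, 0.666...)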

    Phonon Thermodynamics versus Electron-Phonon Models

    Applying the path integral formalism we study the equilibrium thermodynamics of the phonon field both in the Holstein and in the Su-Schrieffer-Heeger models. The anharmonic cumulant series, which depend on the peculiar source currents of the e-ph models, have been computed versus temperature in the case of a low-energy oscillator. The cutoff in the series expansion has been determined, in the low-$T$ limit, using the constraint of the third law of thermodynamics. In the Holstein model, the free energy derivatives do not show any contribution ascribable to e-ph anharmonic effects. We find signatures of large e-ph anharmonicities in the Su-Schrieffer-Heeger model, mainly visible in the temperature-dependent peak displayed by the phonon heat capacity.
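
    For orientation, the free energy derivatives and third-law constraint mentioned above follow the standard thermodynamic relations; this is a generic LaTeX sketch, not the model-specific expressions derived in the paper:

        \[
        F(T) = -k_B T \ln Z(T), \qquad
        S(T) = -\frac{\partial F}{\partial T}, \qquad
        C_V(T) = -T\,\frac{\partial^2 F}{\partial T^2} .
        \]

    The third law requires $S(T) \to 0$ (and hence $C_V(T) \to 0$) as $T \to 0$; imposing this behavior on the truncated cumulant expansion is what fixes the cutoff in the low-$T$ limit.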

    The effect of a nucleating agent on lamellar growth in melt-crystallizing polyethylene oxide

    The effects of a (non-co-crystallizing) nucleating agent on the secondary nucleation rate and final lamellar thickness in isothermally melt-crystallizing polyethylene oxide are considered. SAXS reveals that lamellae formed in nucleated samples are thinner than in the pure samples crystallized at the same undercoolings. These results are in quantitative agreement with growth rate data obtained by calorimetry, and are interpreted as the effect of a local decrease of the basal surface tension, attributed mainly to nucleant molecules that have diffused out of the regions about to crystallize. Quantitative agreement with a simple lattice model allows for some interpretation of the mechanism. Comment: submitted to Journal of Applied Physics (first version on 22 Apr 2002)
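
    The link between a lower basal surface tension and thinner lamellae can be read off the standard Lauritzen-Hoffman/Gibbs-Thomson estimate for the initial lamellar thickness, quoted here as a generic relation rather than the paper's specific lattice model:

        \[
        \ell^{*} \;\approx\; \frac{2\,\sigma_e\, T_m^{0}}{\Delta h_f\, \Delta T} + \delta\ell ,
        \]

    where $\sigma_e$ is the basal (fold-surface) free energy, $T_m^{0}$ the equilibrium melting temperature, $\Delta h_f$ the heat of fusion per unit volume, $\Delta T = T_m^{0} - T_c$ the undercooling, and $\delta\ell$ a small correction. At a fixed undercooling, a local reduction of $\sigma_e$ directly lowers $\ell^{*}$, consistent with the thinner lamellae observed in the nucleated samples.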

    Genome-wide multi-parametric analysis of H2AX or γH2AX distributions during ionizing radiation-induced DNA damage response

    Background: After induction of DNA double strand breaks (DSBs), the DNA damage response (DDR) is activated. One of the earliest events in the DDR is the phosphorylation of serine 139 on the histone variant H2AX (γH2AX), catalyzed by phosphatidylinositol 3-kinase-related kinases. Despite being extensively studied, H2AX distribution[1] across the genome and γH2AX spreading around DSB sites[2] in the context of different chromatin compaction states or transcription are yet to be fully elucidated. Materials and methods: γH2AX was induced in human hepatocellular carcinoma cells (HepG2) by exposure to 10 Gy X-rays (250 kV, 16 mA). Samples were incubated for 0.5, 3, or 24 hours post-irradiation to investigate early, intermediate, and late stages of the DDR, respectively. Chromatin immunoprecipitation was performed to select H2AX-, H3-, and γH2AX-enriched chromatin fractions. Chromatin-associated DNA was then sequenced on the Illumina ChIP-Seq platform. HepG2 gene expression and histone modification (H3K36me3, H3K9me3) ChIP-Seq profiles were retrieved from Gene Expression Omnibus (accession numbers GSE30240 and GSE26386, respectively). Results: First, we combined G/C usage, gene content, gene expression, or histone modification profiles (H3K36me3, H3K9me3) to define genomic compartments characterized by different chromatin compaction states or transcriptional activity. Next, we investigated H3, H2AX, and γH2AX distributions in such defined compartments before and after exposure to ionizing radiation (IR) to study DNA repair kinetics during the DDR. Our sequencing results indicate that H2AX distribution followed H3 occupancy and, thus, the nucleosome pattern. The highest H2AX and H3 enrichment was observed in transcriptionally active compartments (euchromatin), while the lowest was found in low-G/C and gene-poor compartments (heterochromatin). Under physiological conditions, the body of highly and moderately transcribed genes was devoid of γH2AX, despite presenting high H2AX levels. γH2AX accumulation was observed in 5' or 3' flanking regions instead. The same genes showed a prompt γH2AX accumulation during the early stage of the DDR, which then decreased over time as the DDR proceeded. Finally, during the late stage of the DDR the residual γH2AX signal was entirely retained in heterochromatic compartments. At this stage, euchromatic compartments were completely devoid of γH2AX despite presenting high levels of non-phosphorylated H2AX. Conclusions: We show that γH2AX distribution ultimately depends on H2AX occupancy, the latter following H3 occupancy and, thus, the nucleosome pattern. Both H2AX and H3 levels were higher in actively transcribed compartments. However, γH2AX levels were remarkably low over the body of actively transcribed genes, suggesting that transcription levels antagonize γH2AX spreading. Moreover, repair processes did not take place uniformly across the genome; rather, DNA repair was affected by genomic location and transcriptional activity. We propose that the higher H2AX density in euchromatic compartments results in a high relative γH2AX concentration soon after the activation of the DDR, thus favoring the recruitment of the DNA repair machinery to those compartments. When the damage is repaired and γH2AX is removed, its residual fraction is retained in the heterochromatic compartments, which are then targeted and repaired at later times.
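
    A minimal Python sketch of the kind of per-compartment bookkeeping described above: γH2AX signal normalized to H2AX occupancy (and H2AX to H3) within genomic bins grouped by compartment. The DataFrame layout and column names are assumptions for illustration, not the paper's actual analysis code.

        # Hypothetical sketch: per-compartment enrichment from binned, depth-normalized counts.
        # Expected columns: 'compartment', 'H3', 'H2AX', 'gH2AX' (names assumed, not from the paper).
        import pandas as pd

        def compartment_enrichment(bins: pd.DataFrame) -> pd.DataFrame:
            eps = 1e-6  # guard against empty bins
            bins = bins.assign(
                h2ax_per_h3=bins["H2AX"] / (bins["H3"] + eps),       # H2AX occupancy vs. nucleosome density
                gh2ax_per_h2ax=bins["gH2AX"] / (bins["H2AX"] + eps)  # phosphorylated fraction of H2AX
            )
            return bins.groupby("compartment")[["h2ax_per_h3", "gh2ax_per_h2ax"]].median()

        # Comparing the result for 0.5 h, 3 h, and 24 h post-irradiation samples would contrast
        # early, intermediate, and late DDR profiles across euchromatic and heterochromatic bins.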

    Impact of Different Fecal Processing Methods on Assessments of Bacterial Diversity in the Human Intestine.

    The intestinal microbiota are integral to understanding the relationships between nutrition and health. Therefore, fecal sampling and processing protocols for metagenomic surveys should be sufficiently robust, accurate, and reliable to identify the microorganisms present. We investigated the use of different fecal preparation methods on the bacterial community structures identified in human stools. Complete stools were collected from six healthy individuals and processed according to the following methods: (i) randomly sampled fresh stool, (ii) fresh stool homogenized in a blender for 2 min, (iii) randomly sampled frozen stool, (iv) frozen stool homogenized in a blender for 2 min, or (v) homogenized in a pneumatic mixer for either 10, 20, or 30 min. High-throughput DNA sequencing of the 16S rRNA V4 regions of bacterial community DNA extracted from the stools showed that the fecal microbiota remained distinct between individuals, independent of processing method. Moreover, the different stool preparation approaches did not alter intra-individual bacterial diversity. Distinctions were found at the level of individual taxa, however. Stools that were frozen and then homogenized tended to have higher proportions of Faecalibacterium, Streptococcus, and Bifidobacterium and decreased quantities of Oscillospira, Bacteroides, and Parabacteroides compared to stools that were collected in small quantities and not mixed prior to DNA extraction. These findings indicate that certain taxa are at particular risk for under- or over-sampling due to protocol differences. Importantly, homogenization by any method significantly reduced the intra-individual variation in bacteria detected per stool. Our results confirm the robustness of fecal homogenization for microbial analyses and underscore the value of collecting and mixing large stool sample quantities in human nutrition intervention studies.
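
    The reduction in intra-individual variation reported above can be quantified, for example, as the mean pairwise Bray-Curtis dissimilarity between repeated subsamples of the same stool. A small Python sketch under that assumption (the array layout and metric choice are illustrative, not the paper's exact pipeline):

        # Hypothetical sketch: intra-individual variation between replicate subsamples of one stool.
        import numpy as np
        from itertools import combinations

        def bray_curtis(u, v):
            return np.abs(u - v).sum() / (u + v).sum()

        def intra_individual_variation(replicates: np.ndarray) -> float:
            # replicates: rows are repeated subsamples of one stool, columns are taxon counts
            rel = replicates / replicates.sum(axis=1, keepdims=True)   # relative abundances
            pairs = combinations(range(rel.shape[0]), 2)
            return float(np.mean([bray_curtis(rel[i], rel[j]) for i, j in pairs]))

        # Lower values for homogenized stools than for randomly sampled ones would reflect
        # the reduced intra-individual variation described above.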

    Natural Compression for Distributed Deep Learning

    Modern deep learning models are often trained in parallel over a collection of distributed machines to reduce training time. In such settings, communication of model updates among machines becomes a significant performance bottleneck, and various lossy update compression techniques have been proposed to alleviate this problem. In this work, we introduce a new, simple yet theoretically and practically effective compression technique: natural compression (NC). Our technique is applied individually to all entries of the to-be-compressed update vector and works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a "natural" way by ignoring the mantissa. We show that, compared to no compression, NC increases the second moment of the compressed vector by no more than the tiny factor $9/8$, which means that the effect of NC on the convergence speed of popular training algorithms, such as distributed SGD, is negligible. However, the communication savings enabled by NC are substantial, leading to a $3$-$4\times$ improvement in overall theoretical running time. For applications requiring more aggressive compression, we generalize NC to natural dithering, which we prove is exponentially better than the common random dithering technique. Our compression operators can be used on their own or in combination with existing operators for a more aggressive combined effect, and offer a new state of the art both in theory and in practice. Comment: 8 pages, 20 pages of Appendix, 6 Tables, 14 Figures
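
    A minimal Python sketch of the rounding rule described above: each nonzero entry is rounded, at random and in an unbiased way, to one of the two neighboring powers of two. This floating-point version uses log2 for clarity rather than the mantissa-dropping trick mentioned in the abstract, and it is an illustration, not the authors' reference implementation.

        # Natural compression sketch: unbiased randomized rounding to signed powers of two.
        import numpy as np

        def natural_compression(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
            out = np.zeros_like(x, dtype=np.float64)
            nz = x != 0
            mag = np.abs(x[nz])
            low = np.exp2(np.floor(np.log2(mag)))   # power of two just below (or equal to) |x|
            p_up = (mag - low) / low                # round up to 2*low with this probability
            rounded = np.where(rng.random(mag.shape) < p_up, 2.0 * low, low)
            out[nz] = np.sign(x[nz]) * rounded      # expectation equals x (unbiased)
            return out

        rng = np.random.default_rng(0)
        print(natural_compression(np.array([0.3, -1.7, 5.0, 0.0]), rng))
        # entries become signed powers of two (or stay zero)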