
    Yielding and irreversible deformation below the microscale: Surface effects and non-mean-field plastic avalanches

    Get PDF
    Nanoindentation techniques recently developed to measure the mechanical response of crystals under external loading conditions reveal new phenomena upon decreasing sample size below the microscale. At small length scales, material resistance to irreversible deformation depends on sample morphology. Here we study the mechanisms of yield and plastic flow in inherently small crystals under uniaxial compression. Discrete structural rearrangements emerge as series of abrupt discontinuities in stress-strain curves. We obtain the theoretical dependence of the yield stress on system size and geometry and elucidate the statistical properties of plastic deformation at such scales. Our results show that the absence of dislocation storage leads to crucial effects on the statistics of plastic events, ultimately affecting the universal scaling behavior observed at larger scales. Comment: Supporting Videos available at http://dx.plos.org/10.1371/journal.pone.002041

    Combining Shapley value and statistics to the analysis of gene expression data in children exposed to air pollution

    Get PDF
    Background: In gene expression analysis, statistical tests for differential gene expression provide lists of candidate genes having, individually, a sufficiently low p-value. However, the interpretation of each single p-value within complex systems involving several interacting genes is problematic. In parallel, over the last sixty years, game theory has been applied to political and social problems to assess the power of interacting agents in forcing a decision and, more recently, to represent the relevance of genes in response to certain conditions. Results: In this paper we introduce a bootstrap procedure to test the null hypothesis that each gene has the same relevance between two conditions, where relevance is represented by the Shapley value of a particular coalitional game defined on a microarray data set. This method, called Comparative Analysis of Shapley value (CASh for short), is applied to data on gene expression in children differentially exposed to air pollution. The results provided by CASh are compared with those from a parametric statistical test for differential gene expression. Both the lists of genes provided by CASh and by the t-test are informative enough to discriminate exposed subjects on the basis of their gene expression profiles. While many genes are selected by both CASh and the parametric test, the biological interpretation of the differences between the two selections is more interesting, suggesting a different reading of the main biological pathways of gene expression regulation in exposed individuals. A simulation study suggests that CASh offers more power than the t-test for detecting differential gene expression variability. Conclusion: CASh is successfully applied to gene expression analysis of a data set where the joint expression behavior of genes may be critical for characterizing the expression response to air pollution. We demonstrate a synergistic effect between coalitional games and statistics that resulted in a selection of genes with a potential impact on the regulation of complex pathways.
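The relevance measure above rests on the Shapley value of a coalitional game. As a minimal illustration of how such a value can be estimated in practice, here is a Monte Carlo permutation sketch in Python; the additive toy game, gene labels, and worths are hypothetical, and this is not the authors' implementation or their bootstrap test.

```python
import random

def shapley_monte_carlo(players, value, n_samples=2000, seed=0):
    """Monte Carlo estimate of the Shapley value of a coalitional game.

    players: list of player (gene) labels
    value:   characteristic function mapping a frozenset of players to a number
    Averages each player's marginal contribution over random orderings.
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = players[:]
        rng.shuffle(order)
        coalition = set()
        v_prev = value(frozenset(coalition))
        for p in order:
            coalition.add(p)
            v_curr = value(frozenset(coalition))
            phi[p] += v_curr - v_prev   # marginal contribution of p
            v_prev = v_curr
    return {p: s / n_samples for p, s in phi.items()}

# Toy additive game: each player's worth is fixed, so the Shapley value
# recovers it exactly (hypothetical numbers, not microarray data).
worth = {"g1": 1.0, "g2": 2.0, "g3": 3.0}
value = lambda S: sum(worth[p] for p in S)
phi = shapley_monte_carlo(list(worth), value)
```

For a real microarray game the characteristic function would encode joint expression behavior across genes, and the bootstrap would resample subjects to test equality of relevance between the two conditions.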

    Antipsychotics and Torsadogenic Risk: Signals Emerging from the US FDA Adverse Event Reporting System Database

    Get PDF
    Background: Drug-induced torsades de pointes (TdP) and related clinical entities represent a current regulatory and clinical burden. Objective: As part of the FP7 ARITMO (Arrhythmogenic Potential of Drugs) project, we explored the publicly available US FDA Adverse Event Reporting System (FAERS) database to detect signals of torsadogenicity for antipsychotics (APs). Methods: Four groups of events in decreasing order of drug-attributable risk were identified: (1) TdP, (2) QT-interval abnormalities, (3) ventricular fibrillation/tachycardia, and (4) sudden cardiac death. The reporting odds ratio (ROR) with 95% confidence interval (CI) was calculated through a cumulative analysis from group 1 to 4. For groups 1+2, the ROR was adjusted for age, gender, and concomitant drugs (e.g., antiarrhythmics) and stratified for AZCERT drugs, lists I and II (http://www.azcert.org, as of June 2011). A potential signal of torsadogenicity was defined if a drug met all the following criteria: (a) four or more cases in group 1+2; (b) significant ROR in group 1+2 that persists through the cumulative approach; (c) significant adjusted ROR for group 1+2 in the stratum without AZCERT drugs; (d) not included in AZCERT lists (as of June 2011). Results: Over the 7-year period, 37 APs were reported in 4,794 cases of arrhythmia: 140 (group 1), 883 (group 2), 1,651 (group 3), and 2,120 (group 4). Based on our criteria, the following potential signals of torsadogenicity were found: amisulpride (25 cases; adjusted ROR in the stratum without AZCERT drugs = 43.94, 95% CI 22.82-84.60), cyamemazine (11; 15.48, 6.87-34.91), and olanzapine (189; 7.74, 6.45-9.30). Conclusions: This pharmacovigilance analysis of the FAERS found three potential signals of torsadogenicity for drugs previously unknown for this risk.
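The reporting odds ratio used above is a standard disproportionality measure computed from a 2x2 report table. A minimal sketch of the ROR and its 95% confidence interval; the counts below are hypothetical and not taken from the FAERS analysis.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Disproportionality estimate from a 2x2 report table.

    a: reports of the drug of interest with the event of interest
    b: reports of the drug of interest with any other event
    c: reports of all other drugs with the event of interest
    d: reports of all other drugs with any other event
    Returns (ROR, lower 95% CI bound, upper 95% CI bound).
    """
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of ln(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

# Hypothetical counts for illustration only:
ror, lo, hi = reporting_odds_ratio(25, 975, 500, 499500)
```

A signal is conventionally flagged when the lower CI bound exceeds 1; the study above adds further criteria (case counts, persistence under the cumulative approach, and AZCERT stratification).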

    Discrete sources as the origin of the Galactic X-ray ridge emission

    Full text link
    An unresolved X-ray glow (at energies above a few kiloelectronvolts) was discovered about 25 years ago and found to be coincident with the Galactic disk: the Galactic ridge X-ray emission. This emission has a spectrum characteristic of a 10^8 K optically thin thermal plasma, with a prominent iron emission line at 6.7 keV. The gravitational well of the Galactic disk, however, is far too shallow to confine such a hot interstellar medium; instead, it would flow away at a velocity of a few thousand kilometres per second, exceeding the speed of sound in the gas. Replenishing the energy losses would require a source of 10^{43} erg/s, exceeding by orders of magnitude all plausible energy sources in the Milky Way. An alternative is that the hot plasma is bound to a multitude of faint sources, which is supported by the recently observed similarities in the X-ray and near-infrared surface brightness distributions (the latter traces the Galactic stellar distribution). Here we report that at energies of 6-7 keV, more than 80 per cent of the seemingly diffuse X-ray emission is resolved into discrete sources, probably accreting white dwarfs and coronally active stars. Comment: 16 pages, 3 figures. Draft version of the paper that will appear in Nature, Issue April 30, 200

    Holographic c-theorems in arbitrary dimensions

    Full text link
    We re-examine holographic versions of the c-theorem and entanglement entropy in the context of higher curvature gravity and the AdS/CFT correspondence. We select the gravity theories by tuning the gravitational couplings to eliminate non-unitary operators in the boundary theory and demonstrate that all of these theories obey a holographic c-theorem. In cases where the dual CFT is even-dimensional, we show that the quantity that flows is the central charge associated with the A-type trace anomaly. Here, unlike in conventional holographic constructions with Einstein gravity, we are able to distinguish this quantity from other central charges or the leading coefficient in the entropy density of a thermal bath. In general, we are also able to identify this quantity with the coefficient of a universal contribution to the entanglement entropy in a particular construction. Our results suggest that these coefficients appearing in entanglement entropy play the role of central charges in odd-dimensional CFTs. We conjecture a new c-theorem on the space of odd-dimensional field theories, which extends Cardy's proposal for even dimensions. Beyond holography, we are able to show that for any even-dimensional CFT, the universal coefficient appearing in the entanglement entropy which we calculate is precisely the A-type central charge. Comment: 62 pages, 4 figures, a few typos corrected

    C–O–H–S fluids and granitic magma : how S partitions and modifies CO2 concentrations of fluid-saturated felsic melt at 200 MPa

    Get PDF
    Author Posting. © The Author(s), 2011. This is the author's version of the work. It is posted here by permission of Springer for personal use, not for redistribution. The definitive version was published in Contributions to Mineralogy and Petrology 162 (2011): 849-865, doi:10.1007/s00410-011-0628-1. Hydrothermal volatile-solubility and partitioning experiments were conducted with fluid-saturated haplogranitic melt, H2O, CO2, and S in an internally heated pressure vessel at 900°C and 200 MPa; three additional experiments were conducted with iron-bearing melt. The run-product glasses were analyzed by electron microprobe, FTIR, and SIMS; they contain ≤ 0.12 wt% S, ≤ 0.097 wt% CO2, and ≤ 6.4 wt% H2O. Apparent values of log ƒO2 for the experiments at run conditions were computed from the S6+/(S6+ + S2-) ratio of the glasses, and they range from NNO-0.4 to NNO+1.4. The C-O-H-S fluid compositions at run conditions were computed by mass balance, and they contained 22-99 mol% H2O, 0-78 mol% CO2, 0-12 mol% S, and < 3 wt% alkalis. Eight S-free experiments were conducted to determine the H2O and CO2 concentrations of melt and fluid compositions and to compare them with prior experimental results for C-O-H fluid-saturated rhyolite melt, and the agreement is excellent. Sulfur partitions very strongly in favor of the fluid in all experiments, and the presence of S modifies the fluid compositions and, hence, the CO2 solubilities in the coexisting felsic melt. The square of the mole fraction of H2O in melt increases in a linear fashion, from 0.05 to 0.25, with the H2O concentration of the fluid. The mole fraction of CO2 in melt increases linearly, from 0.0003 to 0.0045, with the CO2 concentration of C-O-H-S fluids. Interestingly, the CO2 concentration in melts from relatively reduced runs (log ƒO2 ≤ NNO+0.3) that contain 2.5-7 mol% S in the fluid decreases significantly with increasing S in the system. This response to the changing fluid composition causes the H2O and CO2 solubility curve for C-O-H-S fluid-saturated haplogranitic melts at 200 MPa to shift to values near that modeled for C-O-H fluid-saturated, S-free rhyolite melt at 150 MPa. The concentration of S in haplogranitic melt increases in a linear fashion with increasing S in C-O-H-S fluids, but these data show significant dispersion that likely reflects the strong influence of ƒO2 on S speciation in melt and fluid. Importantly, the partitioning of S between fluid and melt does not vary with the H2O/(H2O+CO2) ratio of the fluid. The fluid-melt partition coefficients for H2O, CO2, and S and the atomic C/S ratios of the run-product fluids are virtually identical to thermodynamic constraints on volatile partitioning and the H, S, and C contents of pre-eruptive magmatic fluids and volcanic gases for subduction-related magmatic systems, confirming that our experiments are relevant to natural eruptive systems. This research was supported in part by National Science Foundation awards EAR 0308866 and EAR-0836741 to J.D.W
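The fluid compositions above were computed by mass balance between the fluid and the quenched glass. A minimal sketch of that idea for a single volatile species, assuming a simple two-phase (fluid + glass) balance; the function name and the numbers are illustrative, not the authors' calculation.

```python
def fluid_conc_by_mass_balance(c_bulk, c_glass, f_fluid):
    """Volatile concentration in the fluid phase from mass balance:

        c_bulk = f_fluid * c_fluid + (1 - f_fluid) * c_glass,

    solved for c_fluid. Concentrations in wt%; f_fluid is the mass
    fraction of the charge present as fluid.
    """
    return (c_bulk - (1.0 - f_fluid) * c_glass) / f_fluid

# Hypothetical charge: 5 wt% bulk H2O, 4.5 wt% dissolved in the glass,
# 10% of the charge mass present as fluid (illustrative numbers only).
c_h2o_fluid = fluid_conc_by_mass_balance(5.0, 4.5, 0.10)
```

Doing this for H2O, CO2, and S and renormalizing gives the mol% fluid compositions quoted in the abstract; a fluid-melt partition coefficient is then simply c_fluid / c_glass.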

    MICE: The muon ionization cooling experiment. Step I: First measurement of emittance with particle physics detectors

    Get PDF
    Copyright © 2011 APS. The Muon Ionization Cooling Experiment (MICE) is a strategic R&D project intended to demonstrate the only practical solution to providing the high-brilliance beams necessary for a neutrino factory or muon collider. MICE is under development at the Rutherford Appleton Laboratory (RAL) in the United Kingdom. It comprises a dedicated beamline to generate a range of input muon emittances and momenta, with time-of-flight and Cherenkov detectors to ensure a pure muon beam. The emittance of the incoming beam will be measured in the upstream magnetic spectrometer with a scintillating-fiber tracker. A cooling cell will then follow, alternating energy loss in liquid hydrogen (LH2) absorbers with RF cavity acceleration. A second spectrometer, identical to the first, and a second muon identification system will measure the outgoing emittance. In the 2010 run at RAL the muon beamline and most detectors were fully commissioned, and a first measurement of the emittance of the muon beam with particle physics (time-of-flight) detectors was performed. The analysis of these data was recently completed and is discussed in this paper. Future steps for MICE, in which beam emittance and emittance reduction (cooling) are to be measured with greater accuracy, are also presented. This work was supported by NSF grant PHY-0842798.
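As background to the emittance measurement described above, the one-dimensional RMS emittance of a beam can be estimated directly from particle coordinates. A minimal sketch with a toy Gaussian beam; the beam parameters are illustrative, not MICE values or the MICE analysis.

```python
import numpy as np

def rms_emittance(x, xp):
    """One-dimensional RMS emittance from particle positions x and
    angles xp, for a centred beam:

        eps = sqrt(<x^2><xp^2> - <x xp>^2)

    i.e. the square root of the determinant of the 2x2 covariance matrix.
    """
    cov = np.cov(np.vstack([x, xp]))
    return float(np.sqrt(np.linalg.det(cov)))

# Toy uncorrelated Gaussian beam (hypothetical parameters):
rng = np.random.default_rng(42)
x = rng.normal(0.0, 30.0, 100_000)    # positions, mm
xp = rng.normal(0.0, 5.0, 100_000)    # angles, mrad
eps = rms_emittance(x, xp)            # expect roughly 30 * 5 = 150 mm*mrad
```

The MICE spectrometers measure full phase-space coordinates per muon, so the same covariance construction extends to the 4D transverse (and 6D) emittance.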

    How Digital Are the Digital Humanities? An Analysis of Two Scholarly Blogging Platforms

    Get PDF
    In this paper we compare two academic networking platforms, HASTAC and Hypotheses, to show the distinct ways in which they serve specific communities in the Digital Humanities (DH) in different national and disciplinary contexts. After providing background information on both platforms, we apply co-word analysis and topic modeling to show thematic similarities and differences between the two sites, focusing particularly on how they frame DH as a new paradigm in humanities research. We find a much higher ratio of posts using humanities-related terms than their digital counterparts, suggesting a one-way dependency of digital humanities-related terms on the corresponding unprefixed labels. The results also show that the terms digital archive, digital literacy, and digital pedagogy are relatively independent of the respective unprefixed terms, and that digital publishing, digital libraries, and digital media show considerable cross-pollination between the specialized and the general term. The topic modeling reproduces these findings and reveals further differences between the two platforms. Our findings also indicate local differences in how the emerging field of DH is conceptualized and show dynamic topical shifts inside these respective contexts.
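Co-word analysis of the kind applied above starts from counts of how often term pairs co-occur within the same post. A minimal sketch, using hypothetical toy posts rather than actual HASTAC or Hypotheses data.

```python
from collections import Counter
from itertools import combinations

def coword_counts(documents):
    """Co-word analysis: count how often each unordered term pair
    co-occurs within a document (here, a blog post)."""
    pair_counts = Counter()
    for doc in documents:
        terms = sorted(set(doc.lower().split()))   # unique terms, ordered
        pair_counts.update(combinations(terms, 2)) # all unordered pairs
    return pair_counts

# Hypothetical toy posts for illustration only:
posts = [
    "digital pedagogy and pedagogy",
    "digital archive practice",
    "digital pedagogy workshop",
]
pairs = coword_counts(posts)
```

In a real pipeline the terms would be curated keywords (e.g. "digital pedagogy" vs. "pedagogy") rather than raw tokens, and the resulting co-occurrence matrix feeds the dependency comparison between prefixed and unprefixed labels.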

    Frustrated hierarchical synchronization and emergent complexity in the human connectome network

    Get PDF
    The spontaneous emergence of coherent behavior through synchronization plays a key role in neural function, and its anomalies often lie at the basis of pathologies. Here we employ a parsimonious (mesoscopic) approach to study, analytically and computationally, the synchronization (Kuramoto) dynamics on the actual human-brain connectome network. We elucidate the existence of a previously uncovered intermediate phase, placed between the standard synchronous and asynchronous phases, i.e. between order and disorder. This novel phase stems from the hierarchical modular organization of the connectome. Where one would expect a hierarchical synchronization process, we show that the interplay between structural bottlenecks and quenched intrinsic frequency heterogeneities at many different scales gives rise to frustrated synchronization, metastability, and chimera-like states, resulting in a very rich and complex phenomenology. We uncover the origin of the dynamic freezing behind these features by using spectral graph theory and discuss how the emerging complex synchronization patterns relate to the need for the brain to access, in a robust though flexible way, a large variety of functional attractors and dynamical repertoires without ad hoc fine-tuning to a critical point. We acknowledge financial support from J. de Andalucía, grant P09-FQM-4682, and we thank O. Sporns for providing us access to the human connectome data.
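The Kuramoto dynamics studied above can be sketched in a few lines. A minimal Euler-integration example on a toy ring network standing in for the connectome; the coupling strength, frequency distribution, and topology are illustrative only, not those of the paper.

```python
import numpy as np

def kuramoto_step(theta, omega, A, K, dt):
    """One Euler step of the Kuramoto model on a network with
    adjacency matrix A:

        dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)
    """
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta + dt * (omega + K * coupling)

def order_parameter(theta):
    """Global synchronization r in [0, 1]: r = |<exp(i*theta)>|."""
    return float(np.abs(np.exp(1j * theta).mean()))

# Toy ring network as a stand-in for the connectome (illustrative only):
rng = np.random.default_rng(0)
N = 64
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
theta = rng.uniform(0.0, 2.0 * np.pi, N)
omega = rng.normal(0.0, 0.1, N)    # quenched intrinsic frequency heterogeneity
for _ in range(2000):
    theta = kuramoto_step(theta, omega, A, K=2.0, dt=0.01)
r = order_parameter(theta)
```

Running the same dynamics on a hierarchical modular graph instead of a ring, and tracking r per module over time, is what exposes the frustrated, chimera-like intermediate regime described in the abstract.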