Evolution of shocks and turbulence in major cluster mergers
We performed a set of cosmological simulations of major mergers in galaxy
clusters to study the evolution of merger shocks and the subsequent injection
of turbulence in the post-shock region and in the intra-cluster medium (ICM).
The computations were done with the grid-based, adaptive mesh refinement hydro
code Enzo, using a specially designed refinement criterion for resolving
turbulent flows in the vicinity of shocks. A substantial amount of turbulent
energy is injected into the ICM by the major merger. Our simulations show that
the shock launched after a major merger develops an ellipsoidal shape and gets
broken by the interaction with the filamentary cosmic web around the merging
cluster. The size of the post-shock region along the direction of shock
propagation is about 300 kpc h^-1, and the turbulent velocity dispersion in
this region is larger than 100 km s^-1. Scaling analysis of the turbulence
energy with the cluster mass within our cluster sample is consistent with
M^(5/3), i.e. the scaling law for the thermal energy in the self-similar
cluster model. This clearly indicates the close relation between virialization
and injection of turbulence in the cluster evolution. We found that the
ratio of turbulent to total pressure in the cluster core exceeds 10% within
2 Gyr after the major merger, and that about 4 Gyr are needed for the core
to relax, which is substantially longer than typically assumed in the
turbulent re-acceleration models invoked to explain the statistics of
observed radio halos. Striking
similarities in the morphology and other physical parameters between our
simulations and the "symmetrical radio relics" found at the periphery of the
merging cluster A3376 are finally discussed. In particular, the interaction
between the merger shock and the filaments surrounding the cluster could
explain the presence of "notch-like" features at the edges of the double
relics.
Comment: 16 pages, 19 figures. Published in the Astrophysical Journal
(online); the printed version will be published on 1st January, 201
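The M^(5/3) scaling quoted above follows from the self-similar cluster model; as a brief reminder of the standard argument (not reproduced from the paper itself):

```latex
% Self-similar clusters are defined at a fixed mean overdensity:
M \propto R^{3} \quad\Rightarrow\quad R \propto M^{1/3}
% The virial theorem then sets the temperature:
k_{\rm B} T \propto \frac{G M}{R} \propto M^{2/3}
% Hence the thermal energy, and the turbulent energy if it is injected
% as a roughly constant fraction of it during virialization:
E_{\rm turb} \propto E_{\rm th} \propto M \, k_{\rm B} T \propto M^{5/3}
```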
Computational structure‐based drug design: Predicting target flexibility
The role of molecular modeling in drug design has experienced a significant revamp in the last decade. The increase in computational resources and molecular models, along with software developments, is finally introducing a competitive advantage in the early phases of drug discovery. Medium and small companies with a strong focus on computational chemistry are being created, some of which have introduced important leads into drug design pipelines. An important source of this success is the extraordinary development of faster and more efficient techniques for describing flexibility in three-dimensional structural molecular modeling. At different levels, from docking techniques to atomistic molecular dynamics, conformational sampling of both receptor and drug results in improved predictions, such as screening enrichment, discovery of transient cavities, etc. In this review article we perform an extensive analysis of these modeling techniques, dividing them into high and low throughput, and emphasizing their application to drug design studies. We conclude the review with a section describing our Monte Carlo method, PELE, recently highlighted as an outstanding advance in an international blind competition and in industrial benchmarks.
We acknowledge the BSC-CRG-IRB Joint Research Program in Computational Biology. This work was supported by a grant from the Spanish Government (CTQ2016-79138-R). J.I. acknowledges support from SVP-2014-068797, awarded by the Spanish Government.
Real-space analysis of branch point motion in architecturally complex polymers
By means of large-scale molecular dynamics simulations, we investigate branch point motion in pure branched polymers and in mixtures of stars and linear chains. We perform a purely geometrical density-based cluster analysis of the branch point trajectories and identify regions of strong localization (traps). Our results demonstrate that the branch point motion can be described as motion over a network of traps at the time scales corresponding to the reptation regime. Residence times within the traps are broadly distributed, extending even to times much longer than the side-arm relaxation time. The distributions of distances between consecutively visited traps are very similar for all the investigated branched polymers, even though tube dilation is much stronger in the star/linear mixtures than in the pure branched systems. Our analysis suggests that the diffusivity of the branch point introduced by hierarchical models must be understood as a parameter accounting for the effective friction associated with the relaxed side arm, rather than as the description of a hopping process with a precise time scale.
We acknowledge support from projects FP7-PEOPLE-2007-1-1-ITN (DYNACOP, EU), MAT2012-31088 (Spain), and IT654-13 (GV, Spain). We acknowledge the programs PRACE, HPC-Europa2 and ESMI (EU), and ICTS (Spain) for generous allocation of CPU time at GENCI (France), HLRS and FZJ-JSC (Germany), and CESGA (Spain).
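A purely geometrical, density-based trap identification can be sketched as follows (a minimal illustration assuming a simple voxel-occupancy criterion, not the authors' exact cluster analysis; the cell size and threshold are hypothetical parameters):

```python
import numpy as np

def find_traps(traj, cell=1.0, min_occupancy=50):
    """Identify 'traps' as spatial cells a trajectory visits unusually often.

    traj: (N, 3) array of branch-point positions sampled over time.
    Returns the set of voxel indices whose occupancy meets the threshold.
    """
    cells = np.floor(traj / cell).astype(int)            # voxelize space
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    return {tuple(c) for c, n in zip(uniq, counts) if n >= min_occupancy}

# Synthetic trajectory: long residence in one region, brief excursion elsewhere.
rng = np.random.default_rng(0)
stay = rng.normal(0.5, 0.05, size=(500, 3))              # localized segment
hop = rng.normal(5.5, 0.05, size=(30, 3))                # short visit elsewhere
traj = np.vstack([stay, hop])

traps = find_traps(traj, cell=1.0, min_occupancy=50)
```

The long-residence region is flagged as a trap, while the brief excursion falls below the occupancy threshold; residence-time statistics would then be read off from how long the trajectory stays inside each flagged cell.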
LIDT-DD: A new self-consistent debris disc model including radiation pressure and coupling collisional and dynamical evolution
In most current debris disc models, the dynamical and the collisional
evolutions are studied separately, with N-body and statistical codes,
respectively, because of stringent computational constraints. We present here
LIDT-DD, the first code able to mix both approaches in a fully self-consistent
way. Our aim is for it to be generic enough to be applied to any
astrophysical case where we expect dynamics and collisions to be deeply
interlocked with one another: planets in discs, violent massive breakups,
destabilized planetesimal belts, exozodiacal discs, etc. The code takes its
basic architecture from the LIDT3D algorithm developed by Charnoz et al.
(2012)
for protoplanetary discs, but has been strongly modified and updated in order
to handle the very constraining specificities of debris discs physics:
high-velocity fragmenting collisions, radiation-pressure affected orbits,
absence of gas, etc. In LIDT-DD, grains of a given size at a given location in
a disc are grouped into "super-particles", whose orbits are evolved with an
N-body code and whose mutual collisions are individually tracked and treated
using a particle-in-a-box prescription. To cope with the wide range of possible
dynamics, tracers are sorted and regrouped into dynamical families depending on
their orbits. The code retrieves the classical features known for debris discs,
such as the particle size distributions in unperturbed discs, the outer radial
density profiles (slope of -1.5) outside narrow collisionally active rings, and
the depletion of small grains in "dynamically cold" discs. The potential of the
new code is illustrated with the test case of the violent breakup of a massive
planetesimal within a debris disc. The main potential future applications of
the code are planet/disc interactions, and more generally any configurations
where dynamics and collisions are expected to be intricately connected.
Comment: Accepted for publication in A&A. 20 pages, 17 figures. Abstract
shortened for astro-ph.
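The sorting of tracers into dynamical families can be illustrated with a minimal sketch that bins super-particles by orbital elements (an assumption for illustration: only semi-major axis and eccentricity are used, with hypothetical bin widths; the actual LIDT-DD sorting criteria are richer):

```python
from collections import defaultdict

def group_into_families(tracers, da=0.25, de=0.125):
    """Group tracer super-particles into dynamical families by binning
    their orbital elements (a, e). Tracers landing in the same bin are
    treated as one family for the collision bookkeeping.
    """
    families = defaultdict(list)
    for i, (a, e) in enumerate(tracers):
        key = (int(a // da), int(e // de))   # (a-bin, e-bin) family label
        families[key].append(i)
    return families

# Three tracers: two on nearly identical orbits, one clearly distinct.
tracers = [(1.00, 0.01), (1.02, 0.02), (1.55, 0.30)]
fams = group_into_families(tracers)
```

Tracers 0 and 1 fall into the same family, tracer 2 into its own; collision rates would then be evaluated family-against-family with the particle-in-a-box prescription.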
Scalable Analysis, Verification and Design of IC Power Delivery
Due to recent aggressive process scaling into the nanometer regime, power delivery network design faces many challenges that set more stringent and specific requirements to the EDA tools. For example, from the perspective of analysis, simulation efficiency for large grids must be improved and the entire network with off-chip models and nonlinear devices should be able to be analyzed. Gated power delivery networks have multiple on/off operating conditions that need to be fully verified against the design requirements. Good power delivery network designs not only have to save the wiring resources for signal routing, but also need to have the optimal parameters assigned to various system components such as decaps, voltage regulators and converters. This dissertation presents new methodologies to address these challenging problems.
First, a novel parallel partitioning-based approach that provides a flexible, locality-aware network partitioning scheme is proposed for power grid static analysis. In addition, a fast combined CPU-GPU analysis engine that adopts a boundary-relaxation method to encompass several simulation strategies is developed to simulate power delivery networks with off-chip models and active circuits. These two proposed analysis approaches achieve scalable simulation runtime.
Then, for gated power delivery networks, the challenge brought by the large verification space is addressed by developing a strategy that efficiently identifies a number of candidates for the worst-case operating condition. The computation complexity is reduced from O(2^N) to O(N).
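The exponential-to-linear reduction is easiest to see on a toy model (an illustrative sketch, not the dissertation's actual algorithm): if the voltage drop at an observation node is a linear superposition of per-block contributions, the worst-case on/off pattern can be found by a linear scan rather than by enumerating all 2^N gating patterns. The contribution values below are hypothetical.

```python
import itertools

# Toy linear model: drop at a node is sum_i c[i] * s[i], with s[i] in {0, 1}
# encoding whether power-gated block i is switched on.
c = [0.3, -0.1, 0.7, 0.05, -0.2]   # hypothetical per-block drop contributions

# Brute force: O(2^N) enumeration of every on/off operating condition.
worst_bf = max(sum(ci * si for ci, si in zip(c, s))
               for s in itertools.product([0, 1], repeat=len(c)))

# Linear scan: O(N) -- by superposition, the worst case switches on
# exactly the blocks with positive contribution.
worst_fast = sum(ci for ci in c if ci > 0)
```

Both computations agree on the worst-case drop; the real verification problem additionally handles nonlinear devices and multiple observation nodes, which is why candidate identification rather than a single closed-form scan is needed.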
Finally, motivated by a proposed two-level hierarchical optimization, this dissertation presents a novel locality-driven partitioning scheme to facilitate divide-and-conquer-based scalable wire sizing for large power delivery networks. Simultaneous sizing of multiple partitions is allowed, which leads to substantial runtime improvement. Moreover, the electric interactions between active regulators/converters and passive networks, and their influence on key system design specifications, are analyzed comprehensively. With the derived design insights, the system-level co-design of a complete power delivery network is facilitated by an automatic optimization flow. Results show significant performance enhancement brought by the co-design.
Nonparametric Transient Classification using Adaptive Wavelets
Classifying transients based on multi band light curves is a challenging but
crucial problem in the era of GAIA and LSST since the sheer volume of
transients will make spectroscopic classification unfeasible. Here we present a
nonparametric classifier that uses the transient's light curve measurements to
predict its class given training data. It implements two novel components: the
first is the use of the BAGIDIS wavelet methodology - a characterization of
functional data using hierarchical wavelet coefficients. The second novelty is
the introduction of a ranked probability classifier on the wavelet coefficients
that handles both the heteroscedasticity of the data and the potential
non-representativeness of the training set. The ranked classifier is
simple and quick to implement while a major advantage of the BAGIDIS wavelets
is that they are translation invariant, hence they do not need the light curves
to be aligned to extract features. Further, BAGIDIS is nonparametric so it can
be used for blind searches for new objects. We demonstrate the effectiveness of
our ranked wavelet classifier against the well-tested Supernova Photometric
Classification Challenge dataset in which the challenge is to correctly
classify light curves as Type Ia or non-Ia supernovae. We train our ranked
probability classifier on the spectroscopically-confirmed subsample (which is
not representative) and show that it gives good results for all supernovae with
observed light curve timespans greater than 100 days (roughly 55% of the
dataset). For such data, we obtain a Ia efficiency of 80.5% and a purity of
82.4% yielding a highly competitive score of 0.49 whilst implementing a truly
"model-blind" approach to supernova classification. Consequently this approach
may be particularly suitable for the classification of astronomical transients
in the era of large synoptic sky surveys.
Comment: 14 pages, 8 figures. Published in MNRAS.
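As an illustration of hierarchical wavelet coefficients, a plain orthonormal Haar decomposition can be written in a few lines (a stand-in for the BAGIDIS unbalanced-Haar construction, which is more involved; `haar_coeffs` is a hypothetical helper, not part of the authors' pipeline):

```python
import numpy as np

def haar_coeffs(x):
    """Full orthonormal Haar decomposition of a length-2^k signal,
    returning coefficients ordered coarse-to-fine: overall average first,
    then detail coefficients at successively finer scales.
    """
    x = np.asarray(x, dtype=float)
    levels = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth part
        det = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail part
        levels.append(det)
        x = avg
    levels.append(x)                               # final approximation
    return np.concatenate(levels[::-1])

sig = np.arange(1.0, 9.0)        # toy "light curve" of length 8
coeffs = haar_coeffs(sig)
```

Because the transform is orthonormal, the coefficients preserve the signal's energy; a wavelet-domain distance between two light curves is then a distance between such coefficient vectors.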
Mass transfer in eccentric binaries: the new Oil-on-Water SPH technique
To measure the onset of mass transfer in eccentric binaries we have developed
a two-phase SPH technique. Mass transfer is important in the evolution of close
binaries, and a key issue is to determine the separation at which mass transfer
begins. The circular case is well understood and can be treated through the use
of the Roche formalism. To treat the eccentric case we use a newly-developed
two-phase system. The body of the donor star is made up of high-mass "water"
particles, whilst the atmosphere is modelled with low-mass "oil" particles.
Both sets of particles take part fully in SPH interactions. To test the
technique we model circular mass-transfer binaries containing a 0.6 Msun donor
star and a 1 Msun white dwarf; such binaries are thought to form cataclysmic
variable (CV) systems. We find that we can reproduce a reasonable CV
mass-transfer rate, and that our extended atmosphere gives a separation that is
too large by approximately 16%, although its pressure scale height is
considerably exaggerated. We use the technique to measure the semi-major axis
required for the onset of mass transfer in binaries with a mass ratio of q=0.6
and a range of eccentricities. Comparing to the value obtained by considering
the instantaneous Roche lobe at pericentre we find that the radius of the star
required for mass transfer to begin decreases systematically with increasing
eccentricity.
Comment: 9 pages, 8 figures, accepted by MNRAS.
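For the circular case mentioned above, the Roche formalism gives the onset separation directly; a widely used closed-form approximation is Eggleton's (1983) fitting formula, sketched here (the donor radius value is a placeholder, not taken from the paper):

```python
import math

def roche_lobe_fraction(q):
    """Eggleton (1983) approximation to the volume-equivalent Roche-lobe
    radius of the donor, R_L / a, for mass ratio q = M_donor / M_accretor."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

# Separation at which a donor of radius R_donor just fills its Roche lobe:
q = 0.6          # donor/accretor mass ratio, as in the abstract's test case
R_donor = 0.6    # hypothetical donor radius (in the same units as a_onset)
a_onset = R_donor / roche_lobe_fraction(q)
```

The SPH measurement in the paper is compared against exactly this kind of instantaneous Roche-lobe estimate (evaluated at pericentre for eccentric orbits).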