Simulating Solidification in Metals at High Pressure: The Drive to Petascale Computing
We investigate solidification in metal systems ranging in size from 64,000 to 524,288,000 atoms on the IBM BlueGene/L computer at LLNL. Using the newly developed ddcMD code, we achieve performance rates as high as 103 TFlops, with 101.7 TFlops sustained over a 7-hour run on 131,072 CPUs. We demonstrate superb strong and weak scaling. Our calculations are significant as they represent the first atomic-scale model of metal solidification to proceed, without finite size effects, from spontaneous nucleation and growth of solid out of the liquid, through the coalescence phase, and into the onset of coarsening. Thus, our simulations represent the first step towards an atomistic model of nucleation and growth that can directly link atomistic to mesoscopic length scales.
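As a rough illustration of the scaling claims (not code from the paper; the timing numbers below are hypothetical), strong-scaling efficiency compares the measured speedup on a fixed-size problem against ideal linear speedup, while weak-scaling efficiency checks that runtime stays flat as the problem grows with processor count:

    # Illustrative sketch of how strong- and weak-scaling efficiencies
    # like those reported for ddcMD are typically computed.

    def strong_scaling_efficiency(t_base, p_base, t, p):
        """Speedup relative to ideal when the total problem size is fixed."""
        speedup = t_base / t
        ideal = p / p_base
        return speedup / ideal

    def weak_scaling_efficiency(t_base, t):
        """Ideal weak scaling keeps runtime constant as work grows with p."""
        return t_base / t

    # Hypothetical timings (seconds) for a fixed-size problem:
    print(strong_scaling_efficiency(t_base=1000.0, p_base=1024, t=130.0, p=8192))
    # Hypothetical timings for a problem whose size grows with processor count:
    print(weak_scaling_efficiency(t_base=1000.0, t=1050.0))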
Recommended from our members
Beyond Finite Size Scaling in Solidification Simulations
Although computer simulation has played a central role in the study of nucleation and growth since the earliest molecular dynamics simulations almost 50 years ago, confusion surrounding the effect of finite size on such simulations has limited their applicability. Modeling solidification in molten tantalum on the BlueGene/L computer, we report here on the first atomistic simulation of solidification that verifies independence from finite size effects during the entire nucleation and growth process, up to the onset of coarsening. We show that finite size scaling theory explains the observed maximal grain sizes for systems up to about 8,000,000 atoms. For larger simulations, a cross-over from finite size scaling to more physical size-independent behavior is observed.
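The cross-over can be pictured with a simple toy model (assumed functional forms chosen purely for illustration, not the paper's actual fit): below the cross-over the maximal grain size is limited by the simulation box and grows as N^(1/3), while above it the grain size saturates at a physical, size-independent value:

    # Toy model of the finite-size-to-physical cross-over (assumed forms;
    # the prefactor a and the saturation value d_phys are arbitrary).
    import numpy as np

    def max_grain_size(n_atoms, a=0.5, d_phys=40.0):
        finite_size_limit = a * n_atoms ** (1.0 / 3.0)   # box-limited regime
        return np.minimum(finite_size_limit, d_phys)     # size-independent regime

    for n in [10**5, 10**6, 8 * 10**6, 10**8]:
        print(f"N = {n:>9d}: d_max ≈ {max_grain_size(n):.1f} (arb. units)")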
Non-magnetic impurities in two dimensional superconductors
A numerical approach to disordered 2D superconductors described by BCS mean field theory is outlined. The energy gap and the superfluid density at zero temperature and the quasiparticle density of states are studied. The method involves approximate self-consistent solutions of the Bogoliubov-de Gennes equations on finite square lattices. Where comparison is possible, the results of standard analytic approaches to this problem are reproduced. Detailed modeling of impurity effects is practical using this approach. The range of the impurity potential is shown to be of quantitative importance in the case of strong potential scatterers. We discuss the implications for experiments, such as the rapid suppression of superconductivity by Zn doping in copper-oxide superconductors.
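The self-consistent Bogoliubov-de Gennes iteration described here can be sketched compactly. The following is a minimal illustration, not the authors' code: it assumes an s-wave gap, nearest-neighbour hopping t = 1 on a small periodic square lattice, a single on-site non-magnetic impurity, and zero temperature, with all parameter values chosen arbitrarily:

    # Minimal sketch: self-consistent BdG on a periodic square lattice
    # (illustrative parameters, not taken from the paper).
    import numpy as np

    L, t, mu, g, V_imp = 10, 1.0, -0.5, 2.0, 5.0
    N = L * L

    def site(x, y):
        return (x % L) * L + (y % L)

    # Tight-binding Hamiltonian with periodic boundary conditions
    H0 = np.zeros((N, N))
    for x in range(L):
        for y in range(L):
            i = site(x, y)
            for j in (site(x + 1, y), site(x, y + 1)):
                H0[i, j] = H0[j, i] = -t
    H0 -= mu * np.eye(N)
    H0[site(L // 2, L // 2), site(L // 2, L // 2)] += V_imp  # non-magnetic impurity

    # Self-consistent solution of the BdG equations at T = 0
    delta = 0.5 * np.ones(N)            # initial guess for the local gap
    for it in range(200):
        H = np.block([[H0, np.diag(delta)], [np.diag(delta), -H0]])
        E, W = np.linalg.eigh(H)
        u, v = W[:N, :], W[N:, :]
        pos = E > 0                     # sum over positive-energy quasiparticles
        new_delta = g * np.sum(u[:, pos] * v[:, pos], axis=1)
        if np.max(np.abs(new_delta - delta)) < 1e-8:
            break
        delta = new_delta

    print(f"converged after {it + 1} iterations; mean gap = {delta.mean():.4f}")

Each iteration diagonalizes the BdG matrix and recomputes the local gap from the quasiparticle amplitudes until it stops changing; the spatial profile of the gap around the impurity site is then available for inspection.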
Patenting and licensing of university research: promoting innovation or undermining academic values?
Since the 1980s in the US and the 1990s in Europe, patenting and licensing activities by universities have massively increased. This is strongly encouraged by governments throughout the Western world. Many regard academic patenting as essential to achieve 'knowledge transfer' from academia to industry. This trend has far-reaching consequences for access to the fruits of academic research, and so the question arises whether the current policies are indeed promoting innovation or whether they are instead a symptom of a pro-intellectual property (IP) culture which is blind to adverse effects. Addressing this question requires both empirical analysis (how real is the link between academic patenting and licensing and 'development' of academic research by industry?) and normative assessment (which justifications are given for the current policies and to what extent do they threaten important academic values?). After illustrating the major rise of academic patenting and licensing in the US and Europe and commenting on the increasing trend of 'upstream' patenting and the focus on exclusive as opposed to non-exclusive licences, this paper will discuss five negative effects of these trends. Subsequently, the question as to why policymakers seem to ignore these adverse effects will be addressed. Finally, a number of proposals for improving university policies will be made.
NS1 Specific CD8(+) T-Cells with Effector Function and TRBV11 Dominance in a Patient with Parvovirus B19 Associated Inflammatory Cardiomyopathy
Background: Parvovirus B19 (B19V) is the most commonly detected virus in endomyocardial biopsies (EMBs) from patients with inflammatory cardiomyopathy (DCMi). Despite the importance of T-cells in antiviral defense, little is known about the role of B19V specific T-cells in this entity.
Methodology and Principal Findings: An exceptionally high B19V viral load in EMBs (115,091 viral copies/mg nucleic acids), peripheral blood mononuclear cells (PBMCs) and serum was measured in a DCMi patient at initial presentation, suggesting B19V viremia. The B19V viral load in EMBs had decreased substantially 6 and 12 months afterwards, and was not traceable in PBMCs and the serum at these times. Using pools of overlapping peptides spanning the whole B19V proteome, strong CD8(+) T-cell responses were elicited to the 10-amino-acid peptides SALKLAIYKA (19.7% of all CD8(+) cells) and QSALKLAIYK (10%), with additional weaker responses to GLCPHCINVG (0.71%) and LLHTDFEQVM (0.06%). Real-time RT-PCR of IFN gamma secretion-assay-enriched T-cells responding to the peptides SALKLAIYKA and GLCPHCINVG revealed a disproportionately high T-cell receptor Vbeta (TRBV) 11 expression in this population. Furthermore, dominant expression of type-1 (IFN gamma, IL2, IL27 and Tbet) and of cytotoxic T-cell markers (Perforin and Granzyme B) was found, whereas gene expression indicating type-2 (IL4, GATA3) and regulatory T-cells (FoxP3) was low.
Conclusions: Our results indicate that B19V Ag-specific CD8(+) T-cells with effector function are involved in B19V associated DCMi. In particular, a dominant role of TRBV11 and type-1/CTL effector cells in the T-cell mediated antiviral immune response is suggested. The persistence of B19V in the endomyocardium is a likely antigen source for the maintenance of CD8(+) T-cell responses to the identified epitopes.
Molecular Dynamics Simulations of Temperature Equilibration in Dense Hydrogen
The temperature equilibration rate in dense hydrogen (for both T_i > T_e and T_i < T_e) has been calculated with molecular dynamics simulations for temperatures between 10 and 600 eV and densities between 10^20/cc and 10^24/cc. Careful attention has been devoted to convergence of the simulations, including the role of semiclassical potentials. We find that for Coulomb logarithms L > 1, a model by Gericke-Murillo-Schlanges (GMS) [Gericke et al., PRE 65, 036418 (2002)] based on a T-matrix method and the approach by Brown-Preston-Singleton [Brown et al., Phys. Rep. 410, 237 (2005)] agree with the simulation data to within the error bars of the simulation. For smaller Coulomb logarithms, the GMS model is consistent with the simulation results. Landau-Spitzer models are consistent with the simulation data for L > 4.
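For orientation, the Coulomb logarithm L that organizes these comparisons can be estimated in the textbook way as L = ln(b_max/b_min), with the Debye length as b_max and the larger of the classical distance of closest approach and the thermal de Broglie wavelength as b_min. The following back-of-the-envelope sketch (a standard estimate, not the paper's molecular dynamics method; cutoff conventions vary between references) evaluates L across the quoted temperature and density range:

    # Textbook-style estimate of the Coulomb logarithm for a hydrogen
    # plasma in Gaussian-cgs units (cutoff choices are conventional and
    # differ between references; values are for orientation only).
    import numpy as np

    e = 4.8032e-10            # electron charge, statcoulomb
    kB_erg_per_eV = 1.6022e-12
    hbar = 1.0546e-27         # erg s
    m_e = 9.1094e-28          # g

    def coulomb_log(T_eV, n_cc):
        kT = T_eV * kB_erg_per_eV
        b_max = np.sqrt(kT / (8.0 * np.pi * n_cc * e**2))  # Debye length (e + i screening)
        b_classical = e**2 / kT                            # classical closest approach
        b_quantum = hbar / np.sqrt(m_e * kT)               # ~ thermal de Broglie wavelength
        return np.log(b_max / max(b_classical, b_quantum))

    for T, n in [(10, 1e20), (100, 1e22), (600, 1e24)]:
        print(f"T = {T:>3d} eV, n = {n:.0e} /cc: L ≈ {coulomb_log(T, n):.2f}")

In this estimate much of the quoted parameter range sits at small L, which is exactly the regime where the abstract reports that simple Landau-Spitzer models break down.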
Reviving calm technology in the e-tourism context
Tourism industry practitioners should understand the controversial nature of the proliferation of information and communication technology (ICT) to ensure that ICT solutions do not consume too much of their attention, thus jeopardizing consumer enjoyment of tourism services. The concept of calm technology, or calm design, serves this purpose. Calm design suggests that technology should quietly recede into the background and come into play with users when and if required, thus delivering and/or enhancing a desired experience. Although this concept is of relevance to e-tourism, it has never before been considered within it. This is where this paper contributes to knowledge: for the first time, it introduces calm design into the e-tourism context and critically evaluates the determinants of its broader adoption within the tourism industry. It positions calm design within the e-tourism realm, discusses its implications for customer service management, supply chain management and destination management, and discloses opportunities for future research.