Shell Model Monte Carlo method in the pn-formalism and applications to the Zr and Mo isotopes
We report on the development of a new shell-model Monte Carlo algorithm which
uses the proton-neutron formalism. Shell model Monte Carlo methods, within the
isospin formulation, have been successfully used in large-scale shell-model
calculations. The motivation for this work is to extend the feasibility of these
methods to shell-model studies involving non-identical proton and neutron
valence spaces. We show the viability of the new approach with some test
results. Finally, we use a realistic nucleon-nucleon interaction in the model
space described by (1p_1/2,0g_9/2) proton and
(1d_5/2,2s_1/2,1d_3/2,0g_7/2,0h_11/2) neutron orbitals above the Sr-88 core to
calculate ground-state energies, binding energies, B(E2) strengths, and to
study pairing properties of the even-even Zr (A = 90-104) and Mo (A = 92-106) isotope chains.
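For orientation, the schematic structure underlying any shell-model Monte Carlo calculation can be summarized in one line (a textbook-level sketch, not reproduced from the paper): the thermal trace of the many-body Hamiltonian is rewritten, via a Hubbard-Stratonovich transformation, as an integral over auxiliary fields of one-body propagators.

```latex
% Schematic Hubbard-Stratonovich decomposition behind SMMC (standard form;
% the paper's pn-formalism details are not reproduced here).
Z(\beta) \;=\; \mathrm{Tr}\, e^{-\beta \hat H}
         \;=\; \int \mathcal{D}[\sigma]\, G(\sigma)\, \mathrm{Tr}\, \hat U_\sigma ,
\qquad
\hat U_\sigma \;=\; \mathcal{T}\exp\!\Big[-\!\int_0^{\beta}\! d\tau\, \hat h\big(\sigma(\tau)\big)\Big]
```

Here G(σ) is a Gaussian weight and ĥ(σ) a one-body Hamiltonian; observables follow as weighted Monte Carlo averages over samples of σ. Roughly speaking, in a proton-neutron formalism the one-body propagation is organized separately over the proton and neutron valence spaces, which is what makes non-identical spaces like the one above tractable.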
A position sensitive phoswich hard X-ray detector system
A prototype position sensitive phoswich hard X-ray detector, designed for eventual astronomical usage, was tested in the laboratory. The scintillation crystal geometry was designed on the basis of a Monte Carlo simulation of the internal optics and includes a 3 mm thick NaI(Tl) primary X-ray detector which is actively shielded by a 20 mm thick CsI(Tl) scintillation crystal. This phoswich arrangement is viewed by a number of two-inch photomultipliers. Measured values of the positional and spectral resolution for incident X-ray photons are compared with calculations.
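Although the abstract does not spell out the reconstruction algorithm, a common way a multi-photomultiplier readout like this yields an interaction position is a simple signal-weighted centroid (Anger logic). The sketch below is purely illustrative; the PMT layout and pulse amplitudes are hypothetical and not taken from the paper.

```python
import numpy as np

# Illustrative Anger-logic centroid for a multi-PMT readout.
# pmt_xy: (N, 2) array of PMT centre positions; signals: (N,) pulse amplitudes.
def centroid_position(pmt_xy: np.ndarray, signals: np.ndarray) -> np.ndarray:
    """Estimate the X-ray interaction position as the signal-weighted
    centroid of the photomultiplier positions (hypothetical example)."""
    weights = signals / signals.sum()
    return weights @ pmt_xy

# Example: four PMTs on a square, with most light seen by the PMT at (+1, +1).
pmt_xy = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0]])
signals = np.array([10.0, 20.0, 20.0, 50.0])
print(centroid_position(pmt_xy, signals))  # -> position pulled towards (1, 1)
```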
Overscreening in 1D lattice Coulomb gas model of ionic liquids
Overscreening in the charge distribution of ionic liquids at electrified
interfaces is shown to proceed from purely electrostatic and steric
interactions in an exactly soluble one dimensional lattice Coulomb gas model.
Since this is not a mean-field effect, our results suggest that even in higher-dimensional systems overscreening could be accounted for by a more accurate treatment of the basic lattice Coulomb gas model, one that goes beyond the mean-field level of approximation, without any additional interactions.
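To make the kind of model the abstract refers to concrete, the sketch below sets up a minimal 1D lattice Coulomb gas next to a charged wall and samples it with Metropolis Monte Carlo. It is only an illustration: the exactly soluble treatment, the boundary conditions, and every parameter value here are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a 1D lattice Coulomb gas near a charged wall.
N = 60            # lattice sites
n_pairs = 15      # number of +/- ion pairs (remaining sites are vacancies)
sigma = 1.0       # surface charge on the left electrode
beta = 2.0        # inverse temperature in units of the coupling constant

def energy(q):
    """1D electrostatic energy: by Gauss's law the field in each gap equals the
    total charge to its left, and the energy is the sum of field^2 over gaps.
    The residual field at the right end stands in for the counter-electrode."""
    field = sigma + np.cumsum(q)
    return 0.5 * np.sum(field ** 2)

# Initial configuration: ions at random sites, single occupancy (the steric
# constraint is automatic because each site carries one charge value).
q = np.zeros(N)
sites = rng.choice(N, size=2 * n_pairs, replace=False)
q[sites[:n_pairs]] = +1.0
q[sites[n_pairs:]] = -1.0

E = energy(q)
profile = np.zeros(N)
n_meas = 0
for step in range(200_000):
    i, j = rng.integers(0, N, size=2)
    if q[i] == q[j]:
        continue
    q_new = q.copy()
    q_new[i], q_new[j] = q[j], q[i]            # swap the contents of two sites
    dE = energy(q_new) - E
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        q, E = q_new, E + dE
    if step > 50_000 and step % 100 == 0:      # crude equilibration + sampling
        profile += q
        n_meas += 1

profile /= n_meas
# Overscreening shows up as a first-layer charge that overcompensates sigma,
# followed by alternating-sign layers further from the wall.
print(np.round(profile[:10], 3))
```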
Perturbation theory for the effective diffusion constant in a medium of random scatterers
We develop perturbation theory and physically motivated resummations of the
perturbation theory for the problem of a tracer particle diffusing in a random
medium. The medium contains point scatterers distributed uniformly throughout the material. The tracer is a Langevin particle subjected to the quenched random force generated by the scatterers. Via our perturbative analysis we determine when the random potential can be approximated by a Gaussian random potential. We also develop a self-similar renormalisation group approach based on thinning out the scatterers; this scheme is similar to that used successfully for diffusion in Gaussian random potentials and agrees with known exact results. To assess the accuracy of this approximation scheme, its predictions are confronted with results obtained by numerical simulation.
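As a rough illustration of the kind of numerical simulation mentioned, the sketch below integrates an overdamped Langevin tracer in a quenched potential built from randomly placed Gaussian bumps and estimates an effective diffusion constant from the long-time mean-square displacement. The scatterer potential and all parameter values are assumptions for the sketch, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 2D periodic box with point scatterers, each contributing
# a repulsive Gaussian potential bump V = eps * exp(-r^2 / (2 w^2)).
L = 20.0                  # box size (periodic)
n_scat = 200              # number of scatterers
eps, w = 1.0, 0.5         # bump amplitude and width
D0, dt, n_steps = 1.0, 1e-3, 100_000

scat = rng.uniform(0.0, L, size=(n_scat, 2))

def force(x):
    """Minus the gradient of the summed Gaussian bumps, with minimum-image
    convention for the periodic boundaries."""
    d = x - scat
    d -= L * np.round(d / L)
    r2 = np.sum(d * d, axis=1)
    return (eps / w**2) * np.sum(np.exp(-r2 / (2 * w**2))[:, None] * d, axis=0)

x = np.array([L / 2, L / 2])
x_unwrapped = x.copy()
x0 = x_unwrapped.copy()
for _ in range(n_steps):
    # Overdamped Langevin (Euler-Maruyama) step with unit mobility.
    dx = force(x) * dt + np.sqrt(2 * D0 * dt) * rng.normal(size=2)
    x_unwrapped += dx
    x = (x + dx) % L

# Crude single-trajectory estimate of the effective diffusion constant; a real
# estimate would average over many tracers and disorder realisations.
msd = np.sum((x_unwrapped - x0) ** 2)
print("D_eff estimate:", msd / (4 * n_steps * dt))
```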
Current-induced nuclear-spin activation in a two-dimensional electron gas
Electrically detected nuclear magnetic resonance was studied in detail in a
two-dimensional electron gas as a function of current bias and temperature. We
show that applying a relatively modest dc-current bias, I_dc ~ 0.5 microAmps,
can induce a re-entrant and even enhanced nuclear spin signal compared with the
signal obtained under similar thermal equilibrium conditions at zero current
bias. Our observations suggest that dynamic nuclear spin polarization by small
current flow is possible in a two-dimensional electron gas, allowing for easy
manipulation of the nuclear spin by simple switching of a dc current.
Compliance Testing of Iowa's Skid-Mounted Sign Device
A wide variety of traffic control devices are used in work zones, some of which are not normally found on the roadside or in the traveled way outside of work zones. These devices are used to enhance the safety of work zones by controlling the traffic through these areas. Because of their placement, the traffic control devices themselves may be potentially hazardous to both workers and errant vehicles. The impact performance of many work zone traffic control devices is largely unknown, and to date limited crash testing has been conducted under the criteria of National Cooperative Highway Research Program (NCHRP) Report No. 350, Recommended Procedures for the Safety Performance Evaluation of Highway Features.
The objective of the study was to evaluate the safety performance of existing skid-mounted sign supports through full-scale crash testing. Two full-scale crash tests were conducted on skid-mounted sign supports to determine their safety performance according to the Test Level 3 (TL-3) criteria set forth in NCHRP Report No. 350. The safety performance evaluations indicate that these skid-mounted sign supports did not perform satisfactorily in the full-scale crash tests. The results of the crash tests were documented, and conclusions and recommendations pertaining to the safety performance of the existing work zone traffic control devices were made.
Graphene field-effect transistors based on boron nitride gate dielectrics
Graphene field-effect transistors are fabricated utilizing single-crystal
hexagonal boron nitride (h-BN), an insulating isomorph of graphene, as the gate
dielectric. The devices exhibit mobility values exceeding 10,000 cm^2/(V s) and
current saturation down to 500 nm channel lengths with intrinsic
transconductance values above 400 mS/mm. The work demonstrates the favorable
properties of h-BN as a gate dielectric for graphene FETs.
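For orientation only, the snippet below applies the standard linear-regime relation between transconductance and field-effect mobility, mu = L * g_m / (W * C_g * V_ds). The geometry, dielectric thickness, and bias are hypothetical illustration values, not the reported device parameters.

```python
# Back-of-the-envelope mobility from transconductance in the linear regime.
# All numbers below are assumed for illustration, except the 400 mS/mm figure
# quoted in the abstract.
EPS0 = 8.854e-12                 # vacuum permittivity, F/m
eps_hbn = 3.5                    # typical literature value for h-BN permittivity
t_hbn = 10e-9                    # assumed gate-dielectric thickness, m
L, W = 500e-9, 1e-6              # assumed channel length and width, m
V_ds = 0.1                       # assumed drain-source bias, V
g_m_per_mm = 0.4                 # 400 mS/mm, as quoted in the abstract

g_m = g_m_per_mm * (W / 1e-3)    # absolute transconductance for this width, S
C_g = EPS0 * eps_hbn / t_hbn     # gate capacitance per unit area, F/m^2
mu = L * g_m / (W * C_g * V_ds)  # field-effect mobility, m^2/(V s)
print(f"mu ≈ {mu * 1e4:.0f} cm^2/(V s)")
```

With these made-up numbers the estimate comes out in the several-thousand cm^2/(V s) range, i.e. the same order of magnitude as the mobilities quoted above, which is the point of the exercise.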
Novel Distances for Dollo Data
We investigate distances on binary (presence/absence) data in the context of
a Dollo process, where a trait can only arise once on a phylogenetic tree but
may be lost many times. We introduce a novel distance, the Additive Dollo
Distance (ADD), which is consistent for data generated under a Dollo model, and
show that it has some useful theoretical properties including an intriguing
link to the LogDet distance. Simulations of Dollo data are used to compare a
number of binary distances including ADD, LogDet, Nei-Li, and some simple, but
to our knowledge previously unstudied, variations on common binary distances.
The simulations suggest that ADD outperforms other distances on Dollo data.
Interestingly, we found that the LogDet distance performs poorly in the context
of a Dollo process, which may have implications for its use in connection with
conditioned genome reconstruction. We apply the ADD to two Diversity Arrays
Technology (DArT) datasets, one that broadly covers Eucalyptus species and one
that focuses on the Eucalyptus series Adnataria. We also reanalyse gene family
presence/absence data on bacteria from the COG database and compare the results
to previous phylogenies estimated using the conditioned genome reconstruction
approach.
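Since the Additive Dollo Distance itself is defined in the paper, the snippet below instead makes concrete the two-state LogDet (paralinear) distance it is compared against, computed directly from presence/absence vectors. Normalisation conventions vary between authors; this is one common choice and is not the ADD.

```python
import numpy as np

def logdet_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Two-state LogDet (paralinear) distance between binary (0/1) character
    vectors x and y of equal length; one common normalisation convention."""
    n = len(x)
    F = np.zeros((2, 2))
    for a, b in zip(x, y):
        F[a, b] += 1.0
    F /= n                                    # joint (divergence) matrix
    fx, fy = F.sum(axis=1), F.sum(axis=0)     # marginal state frequencies
    r = 2                                     # number of states
    return -(1.0 / r) * (np.log(np.linalg.det(F))
                         - 0.5 * (np.sum(np.log(fx)) + np.sum(np.log(fy))))

# Toy presence/absence data for two taxa.
x = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
y = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 0])
print(logdet_distance(x, y))
```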
SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks
Going deeper and wider in neural architectures improves accuracy, while limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desirable network architectures or nontrivially dissect a network across multiple GPUs. Both options distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime that enables network training far beyond the GPU DRAM capacity.
SuperNeurons features three memory optimizations, Liveness Analysis, Unified Tensor Pool, and Cost-Aware Recomputation, which together reduce the network-wide peak memory usage down to the maximal memory usage among layers. We also address the performance issues in these memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for training, but also dynamically allocates the memory for convolution workspaces to achieve high performance. Evaluations against Caffe, Torch, MXNet, and TensorFlow demonstrate that SuperNeurons trains networks at least 3.2432 times deeper than current ones, with leading performance. In particular, SuperNeurons can train
ResNet2500, which has 10^4 basic network layers, on a 12 GB K40c.
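The trade-off behind recomputation-style memory saving can be illustrated with a toy calculation: keeping every activation for the backward pass gives a peak memory equal to the sum over layers, while recomputing everything pushes the peak towards the largest individual layers. The layer sizes below are invented for illustration; this is a schematic model, not SuperNeurons' actual scheduler.

```python
# Toy comparison of peak activation memory under two extreme strategies.
# Per-layer activation sizes (MB) are made-up illustration values.
layer_mb = [820, 640, 512, 512, 256, 256, 128, 64]

# Keep-all strategy: every activation stays resident until the backward pass.
peak_keep_all = sum(layer_mb)

# Recompute-everything strategy: only the layer being (re)computed and its
# input need to be resident, so the peak is governed by single layers.
peak_recompute = max(a + b for a, b in zip(layer_mb, layer_mb[1:]))

print(f"keep-all peak:  {peak_keep_all} MB")
print(f"recompute peak: {peak_recompute} MB")
# A cost-aware scheme sits between these extremes, recomputing only the cheap
# layers and caching the expensive ones.
```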