2-D materials for ultra-scaled field-effect transistors: hundred candidates under the ab initio microscope
Thanks to their unique properties, single-layer 2-D materials appear to be
excellent candidates for extending Moore's scaling law beyond the currently
manufactured silicon FinFETs. However, the known 2-D semiconducting components,
essentially transition metal dichalcogenides, are still far from delivering the
expected performance. Based on a recent theoretical study that predicts the
existence of more than 1,800 exfoliable 2-D materials, we investigate here the
100 most promising contenders for logic applications. Their "current vs.
voltage" characteristics are simulated from first-principles, combining
density-functional theory and advanced quantum transport calculations. Both n-
and p-type configurations are considered, with gate lengths ranging from 15
down to 5 nm. From this unprecedented collection of electronic materials, we
identify 13 compounds with electron and hole currents potentially much higher
than in future Si FinFETs. The resulting database widely expands the design
space of 2-D transistors and provides original guidelines to the materials and
device engineering community.
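The screening workflow described above — winnowing a large pool of candidate 2-D semiconductors and ranking the survivors by expected ON-current — can be sketched as a simple filter-and-rank pass. Every material name, band gap, effective mass, and threshold below is an illustrative placeholder, not a value or criterion from the study:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    band_gap_eV: float  # DFT band gap, in eV (illustrative values)
    m_eff: float        # carrier effective mass, in units of m0

def suitable_for_logic(c, gap_min=0.4, gap_max=2.5):
    """Keep semiconductors whose gap permits switching but is not too wide."""
    return gap_min <= c.band_gap_eV <= gap_max

def figure_of_merit(c):
    """Crude proxy: lighter carriers inject faster, raising the ON-current."""
    return 1.0 / c.m_eff

candidates = [
    Candidate("MoS2", 1.8, 0.55),
    Candidate("graphene", 0.0, 0.01),   # gapless, so it cannot switch off
    Candidate("phosphorene", 1.5, 0.17),
]

ranked = sorted(
    (c for c in candidates if suitable_for_logic(c)),
    key=figure_of_merit,
    reverse=True,
)
print([c.name for c in ranked])  # → ['phosphorene', 'MoS2']
```

In the actual study this ranking step is replaced by full DFT plus quantum-transport simulation of each candidate's current-voltage characteristics.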
Quantum-centric Supercomputing for Materials Science: A Perspective on Challenges and Future Directions
Computational models are an essential tool for the design, characterization,
and discovery of novel materials. Hard computational tasks in materials science
stretch the limits of existing high-performance supercomputing centers,
consuming much of their simulation, analysis, and data resources. Quantum
computing, on the other hand, is an emerging technology with the potential to
accelerate many of the computational tasks needed for materials science. In
order to do that, the quantum technology must interact with conventional
high-performance computing in several ways: approximate results validation,
identification of hard problems, and synergies in quantum-centric
supercomputing. In this paper, we provide a perspective on how quantum-centric
supercomputing can help address critical computational problems in materials
science, the challenges to face in order to solve representative use cases, and
suggested new directions.
Comment: 60 pages, 14 figures; comments welcome
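One concrete pattern of quantum-classical interaction in quantum-centric supercomputing is the variational loop, in which a classical optimizer steers a parameterized quantum energy evaluation. A toy sketch, with the one-qubit Hamiltonian H = 0.5 Z + 0.3 X and the Ry(θ) ansatz chosen purely for illustration, and the quantum device replaced by its analytic expectation value:

```python
import math

def energy(theta):
    """<psi(theta)|H|psi(theta)> for H = 0.5 Z + 0.3 X with Ry(theta)|0>,
    which evaluates analytically to 0.5*cos(theta) + 0.3*sin(theta)."""
    return 0.5 * math.cos(theta) + 0.3 * math.sin(theta)

# Classical outer loop: a coarse grid scan stands in for a real optimizer
# that would submit circuits to quantum hardware at each iteration.
thetas = [2 * math.pi * k / 1000 for k in range(1000)]
best = min(thetas, key=energy)
print(round(energy(best), 3))  # → -0.583, i.e. -sqrt(0.5**2 + 0.3**2)
```

In practice the energy call is the expensive quantum step, while validation of its approximate results against conventional simulation runs on the classical side.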
Neutrino-driven Turbulent Convection and Standing Accretion Shock Instability in Three-Dimensional Core-Collapse Supernovae
We conduct a series of numerical experiments into the nature of
three-dimensional (3D) hydrodynamics in the postbounce stalled-shock phase of
core-collapse supernovae using 3D general-relativistic hydrodynamic simulations
of a progenitor star with a neutrino leakage/heating scheme. We
vary the strength of neutrino heating and find three cases of 3D dynamics: (1)
neutrino-driven convection, (2) initially neutrino-driven convection and
subsequent development of the standing accretion shock instability (SASI), (3)
SASI dominated evolution. This confirms previous 3D results of Hanke et al.
2013, ApJ 770, 66 and Couch & O'Connor 2014, ApJ 785, 123. We carry out
simulations with resolutions differing by up to a factor of 4 and
demonstrate that low resolution is artificially favorable for explosion in the
3D convection-dominated case, since it decreases the efficiency of energy
transport to small scales. Low resolution results in higher radial convective
fluxes of energy and enthalpy, more fully buoyant mass, and stronger neutrino
heating. In the SASI-dominated case, lower resolution damps SASI oscillations.
In the convection-dominated case, a quasi-stationary angular kinetic energy
spectrum develops in the heating layer. Like other 3D studies, we find a
spectral slope close to l^(-1) in the "inertial range," while theory and
local simulations argue for the Kolmogorov scaling l^(-5/3). We argue that
current 3D simulations do not resolve the inertial range of turbulence and are
affected by numerical viscosity up to the energy containing scale, creating a
"bottleneck" that prevents an efficient turbulent cascade.
Comment: 24 pages, 15 figures. Accepted for publication in The Astrophysical
Journal. Added one figure and made minor modifications to text according to
suggestions from the referee.
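The slope comparison in this kind of study reduces to fitting a power law E(l) ∝ l^p over the inertial range of the spectrum. A minimal sketch of such a fit, run on synthetic pure power-law spectra rather than simulation data:

```python
import math

def fit_slope(ls, Es):
    """Least-squares slope of log E against log l, i.e. the exponent p."""
    xs = [math.log(l) for l in ls]
    ys = [math.log(E) for E in Es]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

ls = list(range(10, 101))                  # stand-in "inertial range" of modes
kolmogorov = [l ** (-5 / 3) for l in ls]   # theoretically expected scaling
bottleneck = [l ** (-1.0) for l in ls]     # shallower slope seen in 3D runs
print(round(fit_slope(ls, kolmogorov), 2),
      round(fit_slope(ls, bottleneck), 2))  # → -1.67 -1.0
```

On real simulation spectra the fitted slope depends on where the fit window is placed, which is exactly the difficulty when numerical viscosity contaminates scales up to the energy-containing scale.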
Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems
Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, or Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations), which constitutes a community consensus document, as it reflects input from over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS); 2) a community-wide survey; and 3) nine expert panels (one per KE), each consisting on average of 10 non-team members from academia, government, and industry, who reviewed and updated content and prioritized gaps and actions.
The study envisions the development of a cyber-physical-social ecosystem composed of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost-effective, rapid, and revolutionary design of fit-for-purpose materials, components, and systems. Although the vision focused on aeronautics and space applications, other engineering communities (e.g., automotive and biomedical) are believed to be able to benefit from the proposed framework with only minor modifications. Finally, it is TTT's hope and desire that this vision provides the strategic guidance for both public and private research and development decision makers to make the proposed 2040 vision state a reality and thereby significantly advance the United States' global competitiveness.
Complexity, Emergent Systems and Complex Biological Systems: Complex Systems Theory and Biodynamics. [Edited book by I.C. Baianu, with listed contributors (2011)]
An overview is presented of system dynamics (the study of the behaviour of complex systems), dynamical systems in mathematics, dynamic programming in computer science and control theory, complex systems biology, neurodynamics, and psychodynamics.
Abstracts of the XL QUITEL Congress
Abstracts of the XL QUITEL Congress
Integrated Nested Laplace Approximations for Large-Scale Spatial-Temporal Bayesian Modeling
Bayesian inference tasks continue to pose a computational challenge. This
especially holds for spatial-temporal modeling where high-dimensional latent
parameter spaces are ubiquitous. The methodology of integrated nested Laplace
approximations (INLA) provides a framework for performing Bayesian inference
applicable to a large subclass of additive Bayesian hierarchical models. In
combination with the stochastic partial differential equations (SPDE) approach
it gives rise to an efficient method for spatial-temporal modeling. In this
work we build on the INLA-SPDE approach, by putting forward a performant
distributed memory variant, INLA-DIST, for large-scale applications. To perform
the arising computational kernel operations, consisting of Cholesky
factorizations, solving linear systems, and selected matrix inversions, we
present two numerical solver options, a sparse CPU-based library and a novel
blocked GPU-accelerated approach which we propose. We leverage the recurring
nonzero block structure in the arising precision (inverse covariance) matrices,
which allows us to employ dense subroutines within a sparse setting. Both
versions of INLA-DIST are highly scalable, capable of performing inference on
models with millions of latent parameters. We demonstrate their accuracy and
performance on synthetic as well as real-world climate dataset applications.
Comment: 22 pages, 14 figures
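The three kernel operations named above can be illustrated on a small dense stand-in for a structured precision matrix. The tridiagonal Q below (a 1-D random-walk-style prior) and all sizes are illustrative, and a real INLA solver would exploit sparsity and block structure rather than call dense routines:

```python
import numpy as np

n = 6
# Tridiagonal precision (inverse covariance) matrix, illustrative values;
# diagonal dominance makes it symmetric positive definite.
Q = (np.diag([2.5] * n)
     + np.diag([-1.0] * (n - 1), 1)
     + np.diag([-1.0] * (n - 1), -1))

L = np.linalg.cholesky(Q)             # 1) Cholesky factorization, Q = L L^T
b = np.ones(n)
x = np.linalg.solve(Q, b)             # 2) linear solve, e.g. for a posterior mean
marg_var = np.diag(np.linalg.inv(Q))  # 3) "selected inversion": only the
                                      #    diagonal (marginal variances) is kept
print(np.round(marg_var, 3))
```

At the scale of millions of latent parameters, step 3 in particular is done with selected-inversion algorithms that compute only the needed entries of Q^{-1} instead of forming the full dense inverse as here.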