Large-scale Reservoir Simulations on IBM Blue Gene/Q
This paper presents our work on simulating large-scale reservoir models on
IBM Blue Gene/Q and on studying the scalability of our parallel reservoir
simulators. An in-house black oil simulator has been implemented. It uses MPI
for communication and is capable of simulating reservoir models with hundreds
of millions of grid cells. Benchmarks show that our parallel simulator is
thousands of times faster than sequential simulators designed for
workstations and personal computers, and that it has excellent
scalability.
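The simulator itself is not public, so purely as an illustration of the communication pattern such an MPI grid code depends on, here is a minimal sketch of a one-dimensional domain decomposition with ghost-cell (halo) exchange between neighboring ranks. Everything here (array sizes, the diffusion-style update) is an invented stand-in, not the paper's black oil code; it assumes mpi4py and NumPy.

    # Minimal sketch: 1-D domain decomposition with halo exchange, the
    # communication pattern a distributed grid simulator relies on.
    # Run with e.g. `mpiexec -n 4 python halo.py` (requires mpi4py, NumPy).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 1_000_000 // size              # grid cells owned by this rank
    u = np.zeros(n_local + 2)                # one ghost cell on each side
    u[1:-1] = rank                           # dummy initial state

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for step in range(10):
        # exchange ghost cells with neighbors (no-op at the domain ends)
        comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[:1], source=left)
        # explicit diffusion update as a stand-in for the real physics
        u[1:-1] += 0.25 * (u[:-2] - 2.0 * u[1:-1] + u[2:])

Each rank only ever talks to its two neighbors, which is why this pattern scales to the machine sizes the paper targets.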
Great SCO2T! Rapid tool for carbon sequestration science, engineering, and economics
CO2 capture and storage (CCS) technology is likely to be widely deployed in
coming decades in response to major climate and economic drivers: CCS is part
of every clean energy pathway that limits global warming to 2°C or less and
receives significant CO2 tax credits in the United States. These drivers are
likely to stimulate capture, transport, and storage of hundreds of millions or
billions of tonnes of CO2 annually. A key part of the CCS puzzle will be
identifying and characterizing suitable storage sites for vast amounts of CO2.
We introduce a new software tool called SCO2T (Sequestration of CO2 Tool,
pronounced "Scott") to rapidly characterizing saline storage reservoirs. The
tool is designed to rapidly screen hundreds of thousands of reservoirs, perform
sensitivity and uncertainty analyses, and link sequestration engineering
(injection rates, reservoir capacities, plume dimensions) to sequestration
economics (costs constructed from around 70 separate economic inputs). We
describe the novel science developments supporting SCO2T including a new
approach to estimating CO2 injection rates and CO2 plume dimensions as well as
key advances linking sequestration engineering with economics. Next, we perform
a sensitivity and uncertainty analysis of geology combinations (including
formation depth, thickness, permeability, porosity, and temperature) to
understand the impact on carbon sequestration. Through the sensitivity analysis
we show that increasing depth and permeability can both lead to increased CO2
injection rates, increased storage potential, and reduced costs, while
increasing porosity increases reservoir capacity and thereby reduces costs
without impacting the injection rate (CO2 is injected at a constant pressure
in all cases).
Comment: CO2 capture and storage; carbon sequestration; reduced-order modeling; climate change; economics
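SCO2T itself and its roughly 70 economic inputs are not reproduced here, but the screening workflow the abstract describes (sweep geology combinations; score injectivity, capacity, and cost for each) has a simple generic shape. The sketch below uses toy stand-in relations: every function form and constant is invented for illustration and is not SCO2T's reduced-order model.

    # Toy screening sweep in the spirit of SCO2T: enumerate geology
    # combinations and score injectivity, capacity, and cost for each.
    # All relations and constants below are invented stand-ins.
    import itertools

    depths = [1000.0, 2000.0, 3000.0]        # m
    perms = [10.0, 100.0, 1000.0]            # mD
    poros = [0.10, 0.20, 0.30]               # fraction
    thickness = 50.0                         # m, held fixed here

    def injection_rate(depth, perm):
        # toy injectivity: grows with permeability and with the pressure
        # margin available at depth (constant-pressure injection)
        return 1e-4 * perm * depth

    def capacity(depth, poro):
        # toy capacity: pore volume times a CO2 density rising with depth
        return poro * thickness * (0.2 + 1e-4 * depth)

    def unit_cost(rate, cap):
        # toy $/tonne: fixed well cost amortized over the rate, plus a
        # term that shrinks as capacity grows
        return 100.0 / rate + 10.0 / cap

    for d, k, phi in itertools.product(depths, perms, poros):
        q, c = injection_rate(d, k), capacity(d, phi)
        print(f"depth={d:6.0f} m  perm={k:6.0f} mD  poro={phi:.2f}  "
              f"rate={q:7.1f}  cost={unit_cost(q, c):6.2f} $/t")

Even this toy version reproduces the qualitative trends the abstract reports: depth and permeability raise rates and cut costs, while porosity acts only through capacity.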
Toward bio-inspired information processing with networks of nano-scale switching elements
Unconventional computing explores multi-scale platforms connecting
molecular-scale devices into networks for the development of scalable
neuromorphic architectures, often based on new materials and components with
new functionalities. We review some work investigating the functionalities of
locally connected networks of different types of switching elements as
computational substrates. In particular, we discuss reservoir computing with
networks of nonlinear nanoscale components. In typical neuromorphic paradigms,
the network synaptic weights are adjusted as a result of a training/learning
process. In reservoir computing, the non-linear network acts as a dynamical
system mixing and spreading the input signals over a large state space, and
only a readout layer is trained. We illustrate the most important concepts with
a few examples, featuring memristor networks with time-dependent and
history-dependent resistances.
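The defining feature mentioned above, a fixed nonlinear network with only a trained readout, is easy to make concrete in software. Below is a minimal echo state network sketch: a standard software reservoir, not the memristor hardware the review discusses, with all sizes and constants chosen only for illustration.

    # Minimal echo state network: a fixed random reservoir mixes the input
    # into a high-dimensional state; only the linear readout is trained.
    import numpy as np

    rng = np.random.default_rng(0)
    n_res, n_in, T = 200, 1, 2000

    # fixed random weights: never trained
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0.0, 1.0, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

    u = rng.uniform(-1.0, 1.0, (T, n_in))            # input signal
    y = np.roll(u[:, 0], 1)                          # target: recall u(t-1)

    # run the reservoir: the nonlinear network mixes and spreads the
    # input over a large state space
    X = np.zeros((T, n_res))
    x = np.zeros(n_res)
    for t in range(T):
        x = np.tanh(W @ x + W_in @ u[t])
        X[t] = x

    # train only the linear readout (ridge regression)
    lam = 1e-6
    W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
    print("train MSE:", np.mean((X @ W_out - y) ** 2))

In hardware reservoirs the tanh network is replaced by the physical dynamics of the switching elements, but the training step stays exactly this cheap: one linear solve.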
Principles of Neuromorphic Photonics
In an age overrun with information, the ability to process reams of data has
become crucial. The demand for data will continue to grow as smart gadgets
multiply and become increasingly integrated into our daily lives.
Next-generation industries in artificial intelligence services and
high-performance computing are so far supported by microelectronic platforms.
These data-intensive enterprises rely on continual improvements in hardware.
Their prospects are running up against a stark reality: conventional
one-size-fits-all solutions offered by digital electronics can no longer
satisfy this need, as Moore's law (exponential hardware scaling),
interconnection density, and the von Neumann architecture reach their limits.
With its superior speed and reconfigurability, analog photonics can provide
some relief to these problems; however, complex applications of analog
photonics have remained largely unexplored due to the absence of a robust
photonic integration industry. Recently, the landscape for
commercially-manufacturable photonic chips has been changing rapidly and now
promises to achieve economies of scale previously enjoyed solely by
microelectronics.
The scientific community has set out to build bridges between the domains of
photonic device physics and neural networks, giving rise to the field of
\emph{neuromorphic photonics}. This article reviews the recent progress in
integrated neuromorphic photonics. We provide an overview of neuromorphic
computing, discuss the associated technology (microelectronic and photonic)
platforms and compare their performance metrics. We discuss photonic neural
network approaches and challenges for integrated neuromorphic photonic
processors while providing an in-depth description of photonic neurons and a
candidate interconnection architecture. We conclude with a future outlook on
neuro-inspired photonic processing.
Comment: 28 pages, 19 figures
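As a purely numerical illustration of the neuron model underlying such architectures, a weighted sum over optical channels followed by a saturating nonlinearity, consider the generic sketch below. It abstracts away all device physics; the function, names, and values are illustrative and are not the article's specific photonic neuron design.

    # Toy numerical model of a photonic-style neuron: each wavelength
    # channel carries one input, a filter bank applies signed weights,
    # and the summed detector output drives a saturating nonlinearity.
    import numpy as np

    def photonic_neuron(channel_powers, weights, bias=0.0):
        # weighted addition happens in the optical/electrical summation
        summed = np.dot(weights, channel_powers) + bias
        # saturable response as a stand-in for the electro-optic nonlinearity
        return np.tanh(summed)

    powers = np.array([0.2, 0.8, 0.1])    # input powers on three wavelengths
    weights = np.array([0.5, -1.0, 2.0])  # filter-bank weights (signed)
    print(photonic_neuron(powers, weights))

The appeal of the photonic version is that the weighted sum costs essentially no time or energy beyond propagation, which is where the speed advantage over digital electronics comes from.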
Training Images-Based Stochastic Simulation on Many-Core Architectures
In the past decades, multiple-point geostatistical (MPS) methods have been growing in popularity in various fields. Compared with traditional techniques, MPS techniques can characterize geological reality that commonly has complex structures, such as curvilinear and long-range channels, by using high-order statistics for pattern reconstruction. As a result, the computational burden is heavy, and sometimes the current algorithms cannot be applied to large-scale simulations. With the continuous development of hardware architectures, parallel implementation of MPS methods is an alternative way to improve performance. In this chapter, we review the basic elements of MPS methods and provide several parallel strategies on many-core architectures. The GPU-based parallel implementations of two efficient MPS methods, known as SNESIM and Direct Sampling, are detailed as examples.
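As an illustration of the serial core that such GPU implementations parallelize, here is a simplified, unconditional 2-D Direct Sampling sketch. The parameters and toy training image are illustrative; the real method adds conditioning data, multivariate distances, and the GPU parallelization strategies the chapter discusses.

    # Simplified serial core of Direct Sampling: for each unsimulated node,
    # scan the training image at random locations and paste the first value
    # whose neighborhood pattern is close enough to the data event.
    import numpy as np

    rng = np.random.default_rng(1)

    def direct_sampling(ti, shape, n_neigh=8, threshold=0.1, max_scan=2000):
        sim = np.full(shape, np.nan)
        # visit simulation nodes along a random path
        for p in rng.permutation(shape[0] * shape[1]):
            i, j = divmod(p, shape[1])
            known = np.argwhere(~np.isnan(sim))
            if len(known) == 0:
                # first node: paste a random training-image value
                sim[i, j] = ti[rng.integers(ti.shape[0]),
                               rng.integers(ti.shape[1])]
                continue
            # data event: the n_neigh closest already-simulated nodes
            order = np.argsort(np.abs(known - [i, j]).sum(axis=1))
            neigh = known[order[:n_neigh]]
            offs = neigh - [i, j]
            vals = sim[neigh[:, 0], neigh[:, 1]]
            best, best_dist = None, np.inf
            for _ in range(max_scan):
                # random scan of the training image
                c = np.array([rng.integers(ti.shape[0]),
                              rng.integers(ti.shape[1])])
                pts = offs + c
                if (pts < 0).any() or (pts >= ti.shape).any():
                    continue
                # categorical pattern distance: fraction of mismatches
                dist = np.mean(ti[pts[:, 0], pts[:, 1]] != vals)
                if dist < best_dist:
                    best, best_dist = ti[c[0], c[1]], dist
                if best_dist <= threshold:
                    break
            sim[i, j] = best if best is not None else vals[0]
        return sim

    ti = (rng.random((100, 100)) > 0.7).astype(float)  # toy training image
    print(direct_sampling(ti, (30, 30)))

The inner random scan is independent across candidate locations, which is exactly the structure that maps well onto GPU threads.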