Accelerating Atomic Orbital-based Electronic Structure Calculation via Pole Expansion and Selected Inversion
We describe how to apply the recently developed pole expansion and selected
inversion (PEXSI) technique to Kohn-Sham density functional theory (DFT)
electronic structure calculations that are based on atomic orbital
discretization. We give analytic expressions for evaluating the charge density,
the total energy, the Helmholtz free energy and the atomic forces (including
both the Hellmann-Feynman force and the Pulay force) without using the
eigenvalues and eigenvectors of the Kohn-Sham Hamiltonian. We also show how to
update the chemical potential without using Kohn-Sham eigenvalues. The
advantage of using PEXSI is that it has a much lower computational complexity
than that associated with the matrix diagonalization procedure. We demonstrate
the performance gain by comparing the timing of PEXSI with that of
diagonalization on insulating and metallic nanotubes. For these quasi-1D
systems, the complexity of PEXSI is linear with respect to the number of atoms.
This linear scaling can be observed in our computational experiments when the
number of atoms in a nanotube is larger than a few hundred. Both the wall
clock time and the memory requirement of PEXSI are modest. This makes it
possible to perform Kohn-Sham DFT calculations for 10,000-atom nanotubes with a
sequential implementation of the selected inversion algorithm. We also perform
an accurate geometry optimization calculation on a truncated (8,0)
boron-nitride nanotube system containing 1024 atoms. Numerical results indicate
that the use of PEXSI does not lead to a loss of the accuracy required in a
practical DFT calculation.
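As a rough illustration of the idea (a toy sketch only, not the actual PEXSI scheme, which uses an optimized pole set and selected rather than dense inversion), the Fermi-Dirac density matrix can be assembled from matrix inverses at complex poles instead of eigenpairs; the slowly converging Matsubara expansion is used here purely for simplicity:

```python
import numpy as np

# Toy illustration (not the actual PEXSI pole expansion): approximate the
# Fermi-Dirac density matrix via a truncated Matsubara pole sum, avoiding
# explicit diagonalization. The identity used is
#   f(eps) = 1/2 - 2kT * sum_l (eps - mu) / ((eps - mu)^2 + nu_l^2),
# with nu_l = (2l - 1) * pi * kT.

rng = np.random.default_rng(0)
n, kT, mu = 4, 0.5, 0.0
A = rng.standard_normal((n, n))
H = (A + A.T) / 2  # symmetric "Hamiltonian" in an orthonormal basis

# Reference: density matrix from explicit diagonalization.
eps, V = np.linalg.eigh(H)
f = 1.0 / (1.0 + np.exp((eps - mu) / kT))
P_diag = (V * f) @ V.T

# Pole-expansion estimate: each pole costs one matrix inverse
# (PEXSI replaces the dense inverse with *selected* inversion).
n_poles = 20000
P_pole = 0.5 * np.eye(n)
for l in range(1, n_poles + 1):
    z = mu + 1j * (2 * l - 1) * np.pi * kT  # Matsubara pole
    P_pole -= 2 * kT * np.linalg.inv(H - z * np.eye(n)).real

print(np.max(np.abs(P_pole - P_diag)))  # small truncation error
```

The pole count here is large only because the plain Matsubara expansion converges slowly; the point of pole-expansion methods is that carefully chosen contour poles reduce this to a few dozen inversions.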
Coordinated Multicast Beamforming in Multicell Networks
We study physical layer multicasting in multicell networks where each base
station, equipped with multiple antennas, transmits a common message using a
single beamformer to multiple users in the same cell. We investigate two
coordinated beamforming designs: the quality-of-service (QoS) beamforming and
the max-min SINR (signal-to-interference-plus-noise ratio) beamforming. The
goal of the QoS beamforming is to minimize the total power consumption while
guaranteeing that the received SINR at each user is above a predetermined
threshold. We present a necessary condition for the optimization problem to be
feasible. Then, based on decomposition theory, we propose a novel
decentralized algorithm to implement the coordinated beamforming with limited
information sharing among different base stations. The algorithm is guaranteed
to converge, and in most cases it converges to the optimal solution. The max-min
SINR (MMS) beamforming aims to maximize the minimum received SINR among all users
under per-base station power constraints. We show that the MMS problem and a
weighted peak-power minimization (WPPM) problem are inverse problems. Based on
this inversion relationship, we then propose an efficient algorithm to solve
the MMS problem in an approximate manner. Simulation results demonstrate
significant advantages of the proposed multicast beamforming algorithms over
conventional multicasting schemes. Comment: 10 pages, 9 figures.
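The inversion relationship between power minimization and max-min SINR can be seen in a deliberately simplified setting (an assumption for illustration: a single cell with no intercell interference and a fixed beamformer direction, so only the transmit power is optimized; the paper treats the much harder multicell case with per-base-station constraints):

```python
import numpy as np

# Simplified single-cell, interference-free sketch: with the beam
# direction w fixed, the QoS map gamma -> p*(gamma) and the max-min
# SINR map P -> gamma*(P) are inverses of each other.

rng = np.random.default_rng(1)
n_ant, n_users, sigma2 = 4, 6, 1.0
H = rng.standard_normal((n_users, n_ant)) + 1j * rng.standard_normal((n_users, n_ant))
w = rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)
w /= np.linalg.norm(w)              # fixed unit-norm beam direction

gains = np.abs(H @ w) ** 2          # per-user channel gains |h_k^H w|^2

def qos_power(gamma):
    """Minimum power so that p * gain_k / sigma2 >= gamma for every user."""
    return gamma * sigma2 / gains.min()

def maxmin_sinr(P):
    """Best achievable worst-user SINR under power budget P."""
    return P * gains.min() / sigma2

gamma = 2.0
p_star = qos_power(gamma)
print(maxmin_sinr(p_star))  # recovers gamma = 2.0
```

The worst user's gain controls both problems, which is why solving one at the right operating point solves the other; the paper's contribution is establishing and exploiting this relationship in the general coordinated multicell setting.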
Deep learning versus ℓ1-minimization for compressed sensing photoacoustic tomography
We investigate compressed sensing (CS) techniques for reducing the number of
measurements in photoacoustic tomography (PAT). High resolution imaging from CS
data requires particular image reconstruction algorithms. The most established
reconstruction techniques for that purpose use sparsity and ℓ1-minimization.
Recently, deep learning has emerged as a new paradigm for
CS and other inverse problems. In this paper, we compare a recently proposed
joint ℓ1-minimization algorithm with two deep learning methods, namely a
residual network and an approximate nullspace network. We present numerical
results showing that all developed techniques perform well for deterministic
sparse measurements as well as for random Bernoulli measurements. For the
deterministic sampling, deep learning shows more accurate results, whereas for
Bernoulli measurements the ℓ1-minimization algorithm performs best.
Comparing the implemented deep learning approaches, we show that the nullspace
network uniformly outperforms the residual network in terms of the mean squared
error (MSE). Comment: This work has been presented at the Joint Photoacoustics
Session of the 2018 IEEE International Ultrasonics Symposium, Kobe, October 22-25, 201
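A generic ℓ1-minimization recovery of this kind (an illustrative ISTA sketch under simplified assumptions, not the paper's joint algorithm) looks as follows for random Bernoulli measurements:

```python
import numpy as np

# Generic ISTA sketch for l1-minimization in compressed sensing:
# recover a sparse x from y = A x with a random Bernoulli matrix by
# iterating  x <- soft_threshold(x - t * A^T (A x - y), t * lam).

rng = np.random.default_rng(0)
n, m, k = 128, 60, 5
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # Bernoulli matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x_true

lam = 0.01
t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L with L = ||A||_2^2
x = np.zeros(n)
for _ in range(3000):
    g = x - t * A.T @ (A @ x - y)                           # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)   # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small rel. error
```

The soft-thresholding step is what enforces sparsity; the regularization weight `lam` and iteration count are illustrative choices, not tuned values from the paper.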
Highly efficient Bayesian joint inversion for receiver-based data and its application to lithospheric structure beneath the southern Korean Peninsula
With the deployment of extensive seismic arrays, systematic and efficient parameter and uncertainty estimation is of increasing importance and can provide reliable, regional models for crustal and upper-mantle structure. We present an efficient Bayesian method for the joint inversion of surface-wave dispersion and receiver-function data that combines trans-dimensional (trans-D) model selection in an optimization phase with subsequent rigorous parameter uncertainty estimation. Parameter and uncertainty estimation depend strongly on the chosen parametrization, such that meaningful regional comparison requires quantitative model selection that can be carried out efficiently at several sites. While significant progress has been made for model selection (e.g. trans-D inference) at individual sites, the lack of efficiency can prohibit application to large data volumes or cause questionable results due to lack of convergence. Studies that address large numbers of data sets have mostly ignored model selection in favour of more efficient/simple estimation techniques (i.e. focusing on uncertainty estimation but employing ad hoc model choices). Our approach consists of a two-phase inversion that combines trans-D optimization to select the most probable parametrization with subsequent Bayesian sampling for uncertainty estimation given that parametrization. The trans-D optimization is implemented here by replacing the likelihood function with the Bayesian information criterion (BIC). The BIC provides constraints on model complexity that facilitate the search for an optimal parametrization. Parallel tempering (PT) is applied as an optimization algorithm. After optimization, the optimal model choice is identified by the minimum BIC value from all PT chains. Uncertainty estimation is then carried out in fixed dimension. Data errors are estimated as part of the inference problem by a combination of empirical and hierarchical estimation.
Data covariance matrices are estimated from data residuals (the difference between prediction and observation) and periodically updated. In addition, a scaling factor for the covariance matrix magnitude is estimated as part of the inversion. The inversion is applied to both simulated and observed data that consist of phase- and group-velocity dispersion curves (Rayleigh wave), and receiver functions. The simulation results show that model complexity and important features are well estimated by the fixed-dimensional posterior probability density. Observed data for stations in different tectonic regions of the southern Korean Peninsula are considered. The results are consistent with published results, but important features are better constrained than in previous regularized inversions and are more consistent across the stations. For example, resolution of crustal and Moho interfaces, and absolute values and gradients of velocities in the lower crust and upper mantle, are better constrained.
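The role the BIC plays in replacing the likelihood for model selection can be illustrated with a minimal stand-in example (an assumption for illustration only: polynomial regression rather than the paper's seismic parametrizations, and direct least-squares fits rather than parallel tempering):

```python
import numpy as np

# Minimal sketch of BIC-based model selection: choose the polynomial
# degree for noisy data by minimizing BIC = n*ln(RSS/n) + k*ln(n),
# where k is the number of fitted parameters. The log(n) penalty
# constrains model complexity, as the BIC does in the paper's
# trans-D optimization phase.

rng = np.random.default_rng(0)
n = 200
x = np.linspace(-1.0, 1.0, n)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 1.5 * x**3 + 0.1 * rng.standard_normal(n)

def bic(deg):
    coeffs = np.polyfit(x, y, deg)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = deg + 1                       # number of fitted parameters
    return n * np.log(rss / n) + k * np.log(n)

best = min(range(1, 9), key=bic)
print(best)  # the cubic truth is preferred: prints 3
```

Lower-order models pay in misfit, higher-order models pay the per-parameter penalty, so the minimum-BIC model matches the complexity the data actually support.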
On Multi-Step Sensor Scheduling via Convex Optimization
Effective sensor scheduling requires the consideration of long-term effects
and thus optimization over long time horizons. Determining the optimal sensor
schedule, however, is equivalent to solving a binary integer program, which is
computationally demanding for long time horizons and many sensors. For linear
Gaussian systems, two efficient multi-step sensor scheduling approaches are
proposed in this paper. The first approach determines approximate but close to
optimal sensor schedules via convex optimization. The second approach combines
convex optimization with a branch-and-bound search for efficiently determining
the optimal sensor schedule. Comment: 6 pages, appeared in the proceedings of the 2nd International
Workshop on Cognitive Information Processing (CIP), Elba, Italy, June 201
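The combinatorial nature of the problem can be seen in a deliberately tiny example (an illustration under assumed toy dynamics, not the paper's method: exhaustive enumeration stands in for the binary integer program, and a greedy heuristic stands in for the cheaper approximate approaches):

```python
import numpy as np
from itertools import product

# Toy sensor scheduling for a 2-state linear Gaussian system: at each of
# T steps, choose one of two sensors (sensor i observes state i) to
# minimize the final error covariance trace. Exhaustive search over all
# 2^T schedules is the exact (exponential-cost) solution; a greedy
# one-step heuristic is a cheap approximation.

A = np.array([[0.95, 0.10], [0.0, 0.90]])
Q = 0.1 * np.eye(2)
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
R = 0.1
T = 8
P0 = np.eye(2)

def step(P, i):
    """One Kalman predict + update with sensor i."""
    P_pred = A @ P @ A.T + Q
    S = H[i] @ P_pred @ H[i].T + R
    K = P_pred @ H[i].T / S
    return P_pred - K @ H[i] @ P_pred

def final_cost(schedule):
    P = P0
    for i in schedule:
        P = step(P, i)
    return np.trace(P)

# Exact: enumerate all 2^T binary schedules.
opt = min(final_cost(s) for s in product([0, 1], repeat=T))

# Greedy: at each step pick the sensor with the best one-step cost.
P = P0
for _ in range(T):
    P = min((step(P, i) for i in (0, 1)), key=np.trace)
greedy = np.trace(P)

print(opt, greedy)  # greedy can never beat the exhaustive optimum
```

The 2^T enumeration is already 256 schedules for T = 8 and becomes infeasible for the long horizons and many sensors the abstract mentions, which is the motivation for the convex-relaxation and branch-and-bound approaches.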