Design Automation and Design Space Exploration for Quantum Computers
A major hurdle to the deployment of quantum linear systems algorithms and
recent quantum simulation algorithms lies in the difficulty of finding inexpensive
reversible circuits for arithmetic using existing hand-coded methods. Motivated
by recent advances in reversible logic synthesis, we synthesize arithmetic
circuits using classical design automation flows and tools. The combination of
classical and reversible logic synthesis enables the automatic design of large
components in reversible logic starting from well-known hardware description
languages such as Verilog. As a prototype example of our approach, we
automatically generate high-quality networks for the reciprocal, which is
necessary for quantum linear systems algorithms.
Comment: 6 pages, 1 figure, in 2017 Design, Automation & Test in Europe Conference & Exhibition (DATE 2017), Lausanne, Switzerland, March 27-31, 2017
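The abstract does not say which algorithm the reciprocal network implements; a standard hardware-friendly choice, and only an assumed illustration here, is fixed-point Newton-Raphson iteration. The sketch below shows the arithmetic in plain Python rather than Verilog (the function name, initial guess, and iteration count are illustrative, not the paper's design).

```python
# Illustrative sketch only (not the paper's circuit): Newton-Raphson
# reciprocal, the kind of arithmetic a Verilog description could express
# before classical and reversible logic synthesis map it to a network.

def reciprocal_newton(a: float, iterations: int = 5) -> float:
    """Approximate 1/a for a in [0.5, 1) via x_{k+1} = x_k * (2 - a * x_k)."""
    assert 0.5 <= a < 1.0, "input assumed pre-normalized to [0.5, 1)"
    x = 48.0 / 17.0 - (32.0 / 17.0) * a  # standard minimax initial guess
    for _ in range(iterations):
        x = x * (2.0 - a * x)  # quadratic convergence: error roughly squares
    return x

if __name__ == "__main__":
    for a in (0.5, 0.7, 0.99):
        print(a, reciprocal_newton(a), 1.0 / a)
```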
Parallel Algorithm for Solving Kepler's Equation on Graphics Processing Units: Application to Analysis of Doppler Exoplanet Searches
[Abridged] We present the results of a highly parallel Kepler equation solver
using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280
and the "Compute Unified Device Architecture" programming environment. We apply
this to evaluate a goodness-of-fit statistic (e.g., chi^2) for Doppler
observations of stars potentially harboring multiple planetary companions
(assuming negligible planet-planet interactions). We tested multiple
implementations using single precision, double precision, pairs of single
precision, and mixed precision arithmetic. We find that the vast majority of
computations can be performed using single precision arithmetic, with selective
use of compensated summation for increased precision. However, standard single
precision is not adequate for calculating the mean anomaly from the time of
observation and orbital period when evaluating the goodness-of-fit for real
planetary systems and observational data sets. Using all double precision, our
GPU code outperforms a similar code using a modern CPU by a factor of over 60.
Using mixed precision, our GPU code provides a speed-up factor of over 600
when evaluating N_sys > 1024 model planetary systems, each containing N_pl = 4
planets and assuming N_obs = 256 observations of each system. We conclude that
modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's
equation and a goodness-of-fit statistic for orbital models when presented with
a large parameter space.
Comment: 19 pages, to appear in New Astronomy
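For reference, Kepler's equation M = E - e*sin(E) has no closed-form solution for the eccentric anomaly E and is typically solved iteratively. The sketch below shows a straightforward vectorized Newton iteration in Python; the paper's actual GPU kernel, stopping criteria, and precision strategy are not given in the abstract, so this is only an assumed illustration of the underlying computation.

```python
import numpy as np

def solve_kepler(M: np.ndarray, e: float, tol: float = 1e-12,
                 max_iter: int = 50) -> np.ndarray:
    """Solve Kepler's equation M = E - e*sin(E) for E, vectorized over M."""
    M = np.mod(M, 2.0 * np.pi)        # wrap mean anomalies into [0, 2*pi)
    E = M + e * np.sin(M)             # simple starting guess
    for _ in range(max_iter):
        f = E - e * np.sin(E) - M     # residual of Kepler's equation
        fp = 1.0 - e * np.cos(E)      # derivative; positive for e < 1
        dE = f / fp
        E = E - dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

# Example: eccentric anomalies on a grid of mean anomalies at e = 0.3.
E = solve_kepler(np.linspace(0.0, 2.0 * np.pi, 8), e=0.3)
```

The abstract's precision caveat applies upstream of this solve: computing the mean anomaly M = 2*pi*(t - t0)/P in single precision loses accuracy once t spans many orbital periods, which is why that step in particular called for double precision or compensated summation.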
Getting Feasible Variable Estimates From Infeasible Ones: MRF Local Polytope Study
This paper proposes a method for construction of approximate feasible primal
solutions from dual ones for large-scale optimization problems possessing
certain separability properties. Whereas infeasible primal estimates can
typically be produced from (sub-)gradients of the dual function, it is often
not easy to project them to the primal feasible set, since the projection
itself has a complexity comparable to the complexity of the initial problem. We
propose an alternative efficient method to obtain feasibility and show that its
properties influencing the convergence to the optimum are similar to the
properties of the Euclidean projection. We apply our method to the local
polytope relaxation of inference problems for Markov Random Fields and
demonstrate its superiority over existing methods.
Comment: 20 pages, 4 figures
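The abstract compares against Euclidean projection onto the feasible set. For intuition, the sketch below shows the classic O(n log n) Euclidean projection onto the probability simplex, the building block such a projection would use for the normalization constraints of the local polytope; this is standard background (Duchi et al., 2008), not the paper's proposed method.

```python
import numpy as np

def project_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]                     # sort descending
    css = np.cumsum(u) - 1.0                 # cumulative sums minus the target total
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)           # shift that makes the result sum to 1
    return np.maximum(v - theta, 0.0)

# Infeasible primal estimates from dual subgradients can violate the
# normalization constraints of the MRF local polytope; per-node projection
# like this restores them, but projecting onto the full local polytope
# (with its marginalization constraints) is itself expensive, which is the
# cost the paper's alternative method avoids.
```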
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
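As a concrete illustration of the linear mixing model underlying most of the surveyed methods, the sketch below unmixes a single pixel x ≈ Ma under the abundance nonnegativity constraint using SciPy's nonnegative least squares. It is a minimal baseline assuming known endmember signatures M and synthetic data; the survey's algorithms also estimate M and enforce the sum-to-one constraint properly, which this sketch only approximates by renormalizing.

```python
import numpy as np
from scipy.optimize import nnls

# Linear mixing model: each pixel spectrum x is a nonnegative combination
# of endmember signatures (the columns of M) plus noise.
rng = np.random.default_rng(0)
bands, endmembers = 50, 3
M = np.abs(rng.normal(size=(bands, endmembers)))   # synthetic endmember spectra
a_true = np.array([0.6, 0.3, 0.1])                 # true abundances (sum to 1)
x = M @ a_true + 0.01 * rng.normal(size=bands)     # observed mixed pixel

a_hat, residual = nnls(M, x)                       # abundances with a >= 0
a_hat /= a_hat.sum()                               # crude sum-to-one renormalization
print("estimated abundances:", np.round(a_hat, 3))
```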
Using Statistical Analysis to Improve Data Partitioning in Algorithms for Data Parallel Processing Implementation
In multiprocessor systems, data parallelism is the execution of the same task on data distributed across multiple processors. It involves splitting the data set into smaller data partitions or batches. The process of splitting the data among the different processors is called “data partitioning,” and it is an important efficiency factor in data parallel processing implementations. Data partitioning influences the workload on each processing unit and the network traffic between processes. Poor partition quality can lead to serious performance problems. This research presents a data partitioning method that can be used to improve the performance of data parallel implementations. The proposed method relies on an initial screening experiment that runs a portion of the data units. Regression is then used to build a prediction model of the processing time for each data unit. Using the estimated processing times, load balancing is achieved by a greedy algorithm that distributes the units in a parallel environment. Discrete event simulation is used as the application domain for this research. Comparisons between equal data partitioning and the proposed methodology indicate that time savings and even load balancing can be achieved.
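A minimal sketch of the proposed pipeline under assumed interfaces (the feature, regression model, and data below are placeholders, not the paper's): fit a runtime predictor on a screening sample, then assign data units greedily, longest predicted time first, to the currently least-loaded processor.

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)

# Screening experiment (assumed setup): run a sample of units and record a
# predictive feature (here, unit "size") and the measured processing time.
sizes = rng.uniform(1, 100, size=30)
times = 0.5 + 0.02 * sizes + rng.normal(0, 0.05, size=30)

# Simple linear regression as the prediction model (least-squares fit).
slope, intercept = np.polyfit(sizes, times, deg=1)
predict = lambda s: intercept + slope * s

# Greedy load balancing (longest-processing-time rule): sort units by
# predicted time, descending, and always place the next unit on the
# least-loaded processor, tracked with a min-heap.
all_sizes = rng.uniform(1, 100, size=200)
n_procs = 4
heap = [(0.0, p, []) for p in range(n_procs)]   # (load, proc id, unit list)
heapq.heapify(heap)
for unit, t in sorted(enumerate(predict(all_sizes)), key=lambda kv: -kv[1]):
    load, p, units = heapq.heappop(heap)
    units.append(unit)
    heapq.heappush(heap, (load + t, p, units))

for load, p, units in sorted(heap, key=lambda entry: entry[1]):
    print(f"processor {p}: {len(units)} units, predicted load {load:.1f}")
```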