Practical Benchmarking of Randomized Measurement Methods for Quantum Chemistry Hamiltonians
Many hybrid quantum-classical algorithms for ground-state energy
estimation in quantum chemistry involve estimating the expectation value
of a molecular Hamiltonian with respect to a quantum state through measurements
on a quantum device. To guide the selection of measurement methods designed for
this observable estimation problem, we propose a benchmark called CSHOREBench
(Common States and Hamiltonians for ObseRvable Estimation Benchmark) that
assesses the performance of these methods against a set of common molecular
Hamiltonians and common states encountered during the runtime of hybrid
quantum-classical algorithms. In CSHOREBench, we account for resource
utilization of a quantum computer through measurements of a prepared state, and
a classical computer through computational runtime spent in proposing
measurements and classical post-processing of acquired measurement outcomes. We
apply CSHOREBench considering a variety of measurement methods on Hamiltonians
of size up to 16 qubits. Our discussion is aided by using the framework of
decision diagrams which provides an efficient data structure for various
randomized methods and illustrate how to derandomize distributions on decision
diagrams. In numerical simulations, we find that the methods of decision
diagrams and derandomization are the most preferable. In experiments on IBM
quantum devices with small molecules, we observe that decision diagrams
reduce the number of measurements made by classical shadows by more than 80%
and those made by locally biased classical shadows by around 57%, and consistently
require fewer quantum measurements along with lower classical computational
runtime than derandomization. Furthermore, CSHOREBench is empirically efficient
to run when considering states from a random quantum ansatz of fixed depth.
Comment: 32 pages, 7 figures, with supplementary material of 5 pages, 3 figures
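Classical shadows, one of the randomized measurement methods benchmarked above, can be illustrated with a toy estimator. The sketch below is not the paper's CSHOREBench code: the function name `shadow_estimate_zz`, the fixed two-qubit |00> state, and the target observable Z(x)Z are all illustrative assumptions. It shows the core idea that each shot draws a random local Pauli basis per qubit and the single-shot estimator rescales matching outcomes by a factor of 3.

```python
import random

def shadow_estimate_zz(num_shots, rng):
    """Toy classical-shadows estimate of <Z (x) Z> on the state |00>.

    Each shot picks a uniformly random local Pauli basis per qubit.
    The single-shot estimator multiplies 3 * outcome for each qubit
    whose measured basis matches the target Pauli (Z here) and
    contributes 0 otherwise; its average converges to <Z (x) Z>.
    """
    total = 0.0
    for _ in range(num_shots):
        shot = 1.0
        for _qubit in range(2):
            basis = rng.choice("XYZ")
            # Measuring |0> in the X or Y basis gives +/-1 uniformly;
            # measuring it in the Z basis always gives +1.
            outcome = 1 if basis == "Z" else rng.choice((-1, 1))
            shot *= 3.0 * outcome if basis == "Z" else 0.0
        total += shot
    return total / num_shots

rng = random.Random(7)
est = shadow_estimate_zz(90000, rng)  # true value is 1.0 for |00>
```

The large per-shot variance of this estimator (most shots contribute 0) is exactly the cost that locally biased shadows, decision diagrams, and derandomization aim to reduce by concentrating the basis distribution on Paulis that appear in the Hamiltonian.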
High Performance Multiview Video Coding
Following the standardization of the latest video coding standard, High Efficiency Video Coding (HEVC), in 2013, the multiview extension of HEVC (MV-HEVC) was published in 2014 and brought significantly better compression performance, around 50%, for multiview and 3D videos compared to multiple independent single-view HEVC encodings. However, the extremely high computational complexity of MV-HEVC demands significant optimization of the encoder. To tackle this problem, this work investigates the possibilities of using modern parallel computing platforms and tools such as single-instruction-multiple-data (SIMD) instructions, multi-core CPUs, massively parallel GPUs, and computer clusters to significantly enhance MVC encoder performance. These computing tools have very different computing characteristics, and misuse of them may yield poor performance improvement and sometimes even performance reduction. To achieve the best possible encoding performance from modern computing tools, different levels of parallelism inside a typical MVC encoder are identified and analyzed. Novel optimization techniques at various levels of abstraction are proposed: non-aggregation massively parallel motion estimation (ME) and disparity estimation (DE) in the prediction unit (PU), fractional and bi-directional ME/DE acceleration through SIMD, quantization parameter (QP)-based early termination for the coding tree unit (CTU), optimized resource-scheduled wave-front parallel processing for CTUs, and workload-balanced, cluster-based multiple-view parallel encoding. The results show that the proposed parallel optimization techniques significantly improve execution time with insignificant loss of coding efficiency. This, in turn, demonstrates that modern parallel computing platforms, with appropriate platform-specific algorithm design, are valuable tools for improving the performance of computationally intensive applications.
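The wave-front parallel processing of CTUs mentioned above can be sketched as a dependency schedule. This is a minimal illustration, not the paper's resource-scheduled implementation: the function `wpp_schedule` and its step numbering are assumptions, modeling only the standard HEVC WPP dependency in which a CTU may start once its left neighbour and the CTU above and to the right are finished.

```python
def wpp_schedule(rows, cols):
    """Earliest start step per CTU under wave-front parallel processing.

    Each CTU waits for its left neighbour and for the CTU above and to
    the right (the HEVC WPP dependency pattern), so each row trails the
    row above by two CTU positions and rows can be encoded concurrently.
    """
    step = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            deps = []
            if c > 0:
                deps.append(step[r][c - 1])          # left neighbour done
            if r > 0 and c + 1 < cols:
                deps.append(step[r - 1][c + 1])      # above-right neighbour done
            step[r][c] = max(deps) + 1 if deps else 0
    return step

# CTUs sharing a step value lie on the same anti-diagonal wave and can
# be processed in parallel by separate threads.
schedule = wpp_schedule(3, 4)
```

For the interior of the frame the earliest step works out to 2*r + c, which is why WPP parallelism ramps up over the first rows and why the paper's resource scheduling matters: naive thread assignment leaves cores idle during the ramp-up and ramp-down of the wavefront.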