
    Securely Outsourcing Large Scale Eigen Value Problem to Public Cloud

    Cloud computing enables clients with limited computational power to economically outsource their large-scale computations to a public cloud with huge computational power. The cloud's massive storage, computational power, and software can reduce a client's computational overhead and storage limitations, but when computations are outsourced, the privacy of the client's confidential data must be maintained. We have designed a protocol for outsourcing the large-scale eigenvalue problem to a malicious cloud that provides input/output data security, result verifiability, and client efficiency. Because the direct computation of all eigenvectors is computationally expensive at large dimensionality, we use the power iteration method to find the largest eigenvalue and the corresponding eigenvector of a matrix. To protect privacy, transformations are applied to the input matrix to obtain an encrypted matrix, which is sent to the cloud; the result returned by the cloud is then decrypted to recover the correct solution of the eigenvalue problem. We have also proposed a result-verification mechanism for robust cheating detection, and we provide theoretical analysis and experimental results that demonstrate the high efficiency, correctness, security, and robust cheating resistance of the proposed protocol.
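    The abstract names the power iteration method; below is a minimal Python/NumPy sketch of that iteration, paired with a hypothetical disguise step for illustration: a similarity transform M = P A P^-1 with a secret invertible matrix P preserves the spectrum and lets the client map the cloud's eigenvector back. This is a generic stand-in under stated assumptions, not the paper's actual transformation or verification mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_iteration(M, tol=1e-12, max_iter=2000):
    """Dominant eigenvalue/eigenvector of M via power iteration."""
    x = rng.random(M.shape[0])
    x /= np.linalg.norm(x)
    eigval = 0.0
    for _ in range(max_iter):
        y = M @ x                        # one matrix-vector product per step
        new_eigval = x @ y               # Rayleigh-quotient estimate (x is unit norm)
        x = y / np.linalg.norm(y)
        if abs(new_eigval - eigval) < tol:
            break
        eigval = new_eigval
    return eigval, x

n = 4
A = rng.random((n, n))                   # client's private matrix (positive entries)
P = np.diag(rng.uniform(1.0, 2.0, n))    # secret invertible matrix, kept by client
M = P @ A @ np.linalg.inv(P)             # disguised matrix sent to the cloud

lam, u = power_iteration(M)              # cloud's work on the disguised matrix
v = np.linalg.inv(P) @ u                 # client recovers A's eigenvector
v /= np.linalg.norm(v)

print(lam, np.linalg.norm(A @ v - lam * v))  # residual ~0: A v = lam v
```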

    Predicting Expressibility of Parameterized Quantum Circuits using Graph Neural Network

    Parameterized Quantum Circuits (PQCs) are essential to quantum machine learning and optimization algorithms. The expressibility of a PQC, which measures its ability to represent a wide range of quantum states, is a critical factor in its efficacy for solving quantum problems. However, the existing technique for computing expressibility relies on statistical estimation through classical simulation, which requires many samples. In this work, we propose a novel method based on Graph Neural Networks (GNNs) for predicting the expressibility of PQCs. By leveraging a graph-based representation of PQCs, our GNN-based model captures intricate relationships between circuit parameters and the resulting expressibility. We train the GNN model on a comprehensive dataset of PQCs annotated with their expressibility values. Experimental evaluation on a dataset of four thousand random PQCs and on IBM Qiskit's hardware-efficient ansatz sets demonstrates the superior performance of our approach, achieving root mean square errors (RMSE) of 0.03 and 0.06, respectively.
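    The GNN itself cannot be reconstructed from the abstract, but the statistical baseline it replaces is standard: sample random parameter pairs, compute the fidelity between the resulting states, and measure the Kullback-Leibler divergence of the empirical fidelity histogram from the Haar-random one. Below is a minimal NumPy sketch for a hypothetical single-qubit RY-RZ ansatz; the circuit and all parameters are illustrative assumptions, not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

def pqc_state(theta):
    """Hypothetical single-qubit ansatz: RZ(theta[1]) RY(theta[0]) |0>."""
    return rz(theta[1]) @ ry(theta[0]) @ np.array([1, 0], dtype=complex)

def expressibility(n_pairs=5000, n_bins=75):
    """KL divergence of the ansatz's fidelity histogram from the Haar one."""
    fids = np.empty(n_pairs)
    for i in range(n_pairs):
        a = pqc_state(rng.uniform(0, 2 * np.pi, 2))
        b = pqc_state(rng.uniform(0, 2 * np.pi, 2))
        fids[i] = abs(np.vdot(a, b)) ** 2    # state fidelity |<a|b>|^2
    p, _ = np.histogram(fids, bins=n_bins, range=(0.0, 1.0))
    p = p / n_pairs                          # empirical fidelity distribution
    # Haar fidelity density for dimension d is (d-1)(1-F)^(d-2);
    # for a single qubit (d=2) it is uniform on [0, 1].
    q = np.full(n_bins, 1.0 / n_bins)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

print(expressibility())                      # lower value = more expressive ansatz
```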

    BB-ML: Basic Block Performance Prediction using Machine Learning Techniques

    Recent years have seen the adoption of Machine Learning (ML) techniques to predict the performance of large-scale applications, mostly at a coarse level. In contrast, we propose to use ML techniques for performance prediction at a much finer granularity: the Basic Block (BB) level. Basic blocks are single-entry, single-exit code blocks used by compilers to break a large program into manageable pieces for analysis. We extrapolate the basic block execution counts of GPU applications, predicting the counts for large input sizes from those of smaller input sizes. We train a Poisson Neural Network (PNN) model on random input values as well as the application's lowest input values to learn the relationship between inputs and basic block counts. Experimental results show that the model can accurately predict the basic block execution counts of 16 GPU benchmarks. We achieve an accuracy of 93.5% when extrapolating the basic block counts for large input sets after training on smaller input sets, and an accuracy of 97.7% when predicting basic block counts on random instances. In a case study, we apply the ML model to CUDA GPU benchmarks for performance prediction across a spectrum of applications, evaluating with a variety of metrics including global memory requests and the active cycles of tensor cores, ALU, and FMA units. Results demonstrate the model's capability to predict the performance of large datasets with average error rates of 0.85% and 0.17% for global and shared memory requests, respectively. Additionally, to assess the utilization of the main functional units in Ampere-architecture GPUs, we calculate the active cycles for the tensor core, ALU, FMA, and FP64 units, achieving average errors of 2.3% and 10.66% for the ALU and FMA units, while the maximum observed error across all tested applications and units reaches 18.5%.

    Comment: Accepted at the 29th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2023).
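    The PNN architecture is not specified in the abstract; the sketch below shows the core count-modeling idea in its simplest form, a Poisson regression with a log link fitted by Newton's method (IRLS), trained on synthetic stand-in profiling data and then used to extrapolate a basic block count to a larger input size. The data, the log-of-input-size feature, and the single-block setup are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for profiling data: a basic block whose execution count
# grows with the application's input size N (here, roughly count ~ 3*N).
N = rng.integers(10, 200, size=100).astype(float)
counts = rng.poisson(3.0 * N)

# Poisson regression with a log link, E[count] = exp(b + w*log N), fitted by
# Newton/IRLS on the Poisson log-likelihood.
X = np.column_stack([np.ones_like(N), np.log(N)])
beta = np.linalg.lstsq(X, np.log(counts + 1.0), rcond=None)[0]  # warm start
for _ in range(20):
    mu = np.exp(X @ beta)                  # current predicted mean counts
    H = X.T @ (X * mu[:, None])            # Fisher information matrix
    g = X.T @ (counts - mu)                # gradient of the log-likelihood
    beta += np.linalg.solve(H, g)          # Newton step

# Extrapolate to an input size far beyond the training range.
N_big = 1000.0
pred = np.exp(beta @ np.array([1.0, np.log(N_big)]))
print(round(pred))                         # expected count, close to 3 * 1000
```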