Deterministic algorithms for skewed matrix products
Recently, Pagh presented a randomized approximation algorithm for the
multiplication of real-valued matrices building upon work for detecting the
most frequent items in data streams. We continue this line of research and
present new {\em deterministic} matrix multiplication algorithms.
Motivated by applications in data mining, we first consider the case of
real-valued, nonnegative n-by-n input matrices A and B, and show how to
obtain a deterministic approximation of the weights of individual entries, as
well as the entrywise 1-norm, of the product AB. The algorithm is simple,
space efficient and runs in one pass over the input matrices. For a
user-defined parameter b, the algorithm returns an approximation of the
entries of AB within an additive error proportional to ||AB||_1/b, where
||.||_1 denotes the entrywise 1-norm of a matrix; its time and space bounds
depend on b and on sort(n), the time required to sort n real numbers in
linear space.
Building upon a result by Berinde et al. we show that for skewed matrix
products (a common situation in many real-life applications) the algorithm is
more efficient and achieves better approximation guarantees than previously
known randomized algorithms.
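The frequent-items connection can be made concrete with a minimal sketch (an illustration of the general idea, not the paper's exact algorithm). For nonnegative A and B, the product is a sum of outer products, AB = sum_k A[:,k] B[k,:], so the entries of AB receive a stream of nonnegative weighted updates to positions (i, j); a weighted Misra-Gries summary with b counters then underestimates each entry by at most S/(b+1), where S is the entrywise 1-norm of AB:

```python
# Illustrative sketch only: approximate the large entries of a nonnegative
# matrix product A*B with a weighted Misra-Gries summary of b counters.
# Standard guarantee: each estimate undershoots the true entry by at most
# S/(b+1), where S is the entrywise 1-norm of A*B.

def approx_product_heavy_entries(A, B, b):
    counters = {}  # (i, j) -> current counter value
    n = len(A)
    for k in range(n):               # stream the outer products A[:,k] * B[k,:]
        for i in range(n):
            for j in range(n):
                w = A[i][k] * B[k][j]
                if w == 0:
                    continue
                if (i, j) in counters:
                    counters[(i, j)] += w
                elif len(counters) < b:
                    counters[(i, j)] = w
                else:
                    # decrement everything by the smallest amount that
                    # frees a slot (weighted Misra-Gries step)
                    m = min(w, min(counters.values()))
                    counters = {key: c - m for key, c in counters.items() if c > m}
                    if w > m:
                        counters[(i, j)] = w - m
    return counters
```

On skewed products, where a few entries carry most of the total weight, the b counters concentrate on exactly those heavy entries, which is the intuition behind the improved guarantees for skewed matrices.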
When the input matrices are not restricted to nonnegative entries, we present
a new deterministic group testing algorithm detecting nonzero entries in the
matrix product with large absolute value. The algorithm is clearly outperformed
by randomized matrix multiplication algorithms, but as a byproduct we obtain
the first deterministic output-sensitive algorithm for computing matrix
products with a bounded number of nonzero entries.
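Group testing here means locating the few large-magnitude entries with aggregate tests rather than inspecting all n^2 positions individually. As a toy illustration of the principle (hypothetical code, assuming nonnegative values so that no cancellation occurs), binary splitting recovers all k nonzero positions of a hidden length-n vector using O(k log n) range-sum queries:

```python
# Toy illustration of group testing by binary splitting: find the
# positions of all nonzero entries of a nonnegative hidden vector while
# only ever querying sums over index ranges. With k nonzeroes this
# issues O(k log n) queries instead of reading all n positions.

def find_nonzeros(query_sum, lo, hi):
    """query_sum(lo, hi) returns sum(v[lo:hi]) for the hidden vector v."""
    if query_sum(lo, hi) == 0:
        return []                      # the whole group tests negative
    if hi - lo == 1:
        return [lo]                    # isolated a single nonzero position
    mid = (lo + hi) // 2
    return find_nonzeros(query_sum, lo, mid) + find_nonzeros(query_sum, mid, hi)
```

With signed entries a group can sum to zero even though it contains nonzeroes; handling such cancellation deterministically is the difficulty the abstract's group testing algorithm addresses.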
On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress
Motivated by studying the power of randomness, certifying algorithms and barriers for fine-grained reductions, we investigate the question whether the multiplication of two n x n matrices can be performed in near-optimal nondeterministic time O~(n^2). Since a classic algorithm due to Freivalds verifies correctness of matrix products probabilistically in time O(n^2), our question is a relaxation of the open problem of derandomizing Freivalds' algorithm.
We discuss consequences of a positive or negative resolution of this problem and provide potential avenues towards resolving it. Particularly, we show that sufficiently fast deterministic verifiers for 3SUM or univariate polynomial identity testing yield faster deterministic verifiers for matrix multiplication. Furthermore, we present the partial algorithmic progress that distinguishing whether an integer matrix product is correct or contains between 1 and n erroneous entries can be performed in time O~(n^2) - interestingly, the difficult case of deterministic matrix product verification is not a problem of "finding a needle in the haystack", but rather cancellation effects in the presence of many errors.
Our main technical contribution is a deterministic algorithm that corrects an integer matrix product containing at most t errors in time O~(sqrt{t} n^2 + t^2). To obtain this result, we show how to compute an integer matrix product with at most t nonzeroes in the same running time. This improves upon known deterministic output-sensitive integer matrix multiplication algorithms for t = Omega(n^{2/3}) nonzeroes, which is of independent interest.
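For reference, the verifier under discussion is Freivalds' classic algorithm: to check a claimed product C = AB, multiply both sides by a random 0/1 vector r and compare A(Br) with Cr. Each trial costs three matrix-vector products, i.e. O(n^2) time, and a wrong product passes a single trial with probability at most 1/2. A minimal sketch:

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistically verify A @ B == C in O(trials * n^2) time.
    A correct product always passes; an incorrect one survives each
    trial with probability at most 1/2, so 20 trials leave error <= 2^-20."""
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False               # certificate of error: A(Br) != Cr
    return True
```

Derandomizing this check, i.e. choosing the test vectors deterministically without losing the O~(n^2) running time, is exactly the open problem the abstract relaxes.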
Efficient Algorithms for Artificial Neural Networks and Explainable AI
Artificial neural networks have enabled remarkable progress in fields such as pattern recognition and computer vision. However, their increasing complexity presents a challenge for efficient computation. In this thesis, we first introduce a novel matrix multiplication method to reduce the complexity of artificial neural networks, and we demonstrate its suitability for compressing their fully connected layers. Our method outperforms other state-of-the-art methods when tested on standard publicly available datasets. The thesis then focuses on Explainable AI, which can be critical in fields like finance and medicine because it can explain decisions taken by sub-symbolic AI models that behave like black boxes, such as artificial neural networks and transformation-based learning approaches. We have also developed a new framework, Exmed, that facilitates the use of Explainable AI with tabular datasets and enables non-expert users to prepare data, train models, and apply Explainable AI techniques effectively. Additionally, we propose a new algorithm that identifies the overall influence of input features and minimises the perturbations that alter the decision taken by a given model. Overall, this thesis introduces comprehensive techniques to enhance the efficiency of fully connected layers in artificial neural networks and provides a new approach to explaining their decisions. These methods have significant practical applications in various fields, including portable medical devices.
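The abstract does not spell out the matrix multiplication method itself; as a generic, hypothetical illustration of why cheaper matrix products compress fully connected layers, note that a layer computing y = Wx with an n-by-m weight matrix costs n*m multiplications, whereas a rank-r factorization W = UV costs only (n + m)*r:

```python
# Hypothetical illustration (not the thesis's specific method): a fully
# connected layer computes y = W x at a cost of n*m multiplications.
# If W is (approximately) low rank, W = U V with U n-by-r and V r-by-m,
# the same layer costs only (n + m)*r multiplications via y = U (V x).

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def dense_layer(W, x):
    return matvec(W, x)                # full n*m cost

def factored_layer(U, V, x):
    return matvec(U, matvec(V, x))     # r-dimensional bottleneck
```

For r much smaller than min(n, m) the factored layer does far less work and stores far fewer weights, which is the standard motivation for low-rank compression of fully connected layers.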