An efficient and accurate algorithm for computing the matrix cosine based on New Hermite approximations
[EN] In this work we introduce new rational-polynomial Hermite matrix expansions, which allow us to obtain a new accurate and efficient method for computing the matrix cosine. This method is compared with other state-of-the-art methods for computing the matrix cosine, including a method based on Padé approximants, showing far superior efficiency and higher accuracy. The algorithm implemented on the basis of this method can also be executed on either one or two NVIDIA GPUs, which demonstrates its great computational capacity. (C) 2018 Elsevier B.V. All rights reserved. This work has been partially supported by Spanish Ministerio de Economía y Competitividad and European Regional Development Fund (ERDF) grants TIN2014-59294-P and TIN2017-89314-P.
Defez Candel, E.; Ibáñez González, JJ.; Peinado Pinilla, J.; Sastre, J.; Alonso-Jordá, P. (2019). An efficient and accurate algorithm for computing the matrix cosine based on new Hermite approximations. Journal of Computational and Applied Mathematics 348:1-13. https://doi.org/10.1016/j.cam.2018.08.047
Computing Matrix Trigonometric Functions with GPUs through Matlab
[EN] This paper presents an implementation of one of the most up-to-date algorithms proposed for computing the matrix trigonometric functions sine and cosine. The method is based on Taylor series approximations and makes intensive use of matrix multiplications. To accelerate matrix products, our application can use from one to four NVIDIA GPUs through the NVIDIA cuBLAS and cublasXt libraries. The application, implemented in C++, can be used from the MATLAB command line thanks to the MEX files provided. We experimentally assess our implementation on modern, very high-performance NVIDIA GPUs. This work has been supported by Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF) grants TIN2014-59294-P and TEC2015-67387-C4-1-R.
Alonso-Jordá, P.; Peinado Pinilla, J.; Ibáñez González, JJ.; Sastre, J.; Defez Candel, E. (2019). Computing Matrix Trigonometric Functions with GPUs through Matlab. The Journal of Supercomputing 75(3):1227-1240. https://doi.org/10.1007/s11227-018-2354-1
Two algorithms for computing the matrix cosine function
[EN] The computation of matrix trigonometric functions has received remarkable attention in
recent decades due to its usefulness in solving systems of second-order linear
differential equations. Several state-of-the-art algorithms have been provided recently for
computing these matrix functions. In this work, we present two efficient algorithms based
on Taylor series with forward and backward error analysis for computing the matrix cosine.
A MATLAB implementation of the algorithms is compared to state-of-the-art algorithms,
with excellent performance in both accuracy and cost. This work has been supported by Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF) grant TIN2014-59294-P.
Sastre, J.; Ibáñez González, JJ.; Alonso-Jordá, P.; Peinado Pinilla, J.; Defez Candel, E. (2017). Two algorithms for computing the matrix cosine function. Applied Mathematics and Computation 312:66-77. https://doi.org/10.1016/j.amc.2017.05.019
Efficient computation of the matrix cosine
Trigonometric matrix functions play a fundamental role in systems of second-order differential equations. This work presents an algorithm for computing the matrix cosine function based on Taylor series and the cosine double angle formula. It uses a forward absolute error analysis that provides sharper bounds than existing methods. The proposed algorithm has a lower cost than state-of-the-art algorithms based on Hermite matrix polynomial series and Padé approximants, with higher accuracy on the majority of test matrices. This work has been supported by Universitat Politècnica de València Grant PAID-06-011-2020.
Sastre, J.; Ibáñez González, JJ.; Ruiz Martínez, PA.; Defez Candel, E. (2013). Efficient computation of the matrix cosine. Applied Mathematics and Computation 219:7575-7585. https://doi.org/10.1016/j.amc.2013.01.043
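The Taylor-plus-double-angle scheme summarized above can be illustrated with a minimal NumPy sketch. This is our own simplified version: the degree m and scaling s are fixed by hand here, whereas the paper selects them via its forward error analysis.

```python
import numpy as np

def cosm_taylor(A, m, s):
    """Approximate cos(A): scale by 2^s, apply a degree-2m Taylor
    polynomial, then undo the scaling with the double angle formula."""
    n = A.shape[0]
    X = A / (2.0 ** s)
    X2 = X @ X
    # Taylor series of cos at the scaled matrix: sum (-1)^j X^(2j) / (2j)!
    C = np.eye(n)
    T = np.eye(n)
    for j in range(1, m + 1):
        T = T @ X2 * (-1.0 / ((2 * j - 1) * (2 * j)))
        C = C + T
    # Recovery phase: cos(2Y) = 2 cos(Y)^2 - I, applied s times.
    for _ in range(s):
        C = 2.0 * (C @ C) - np.eye(n)
    return C
```

For a symmetric test matrix the result can be checked against the eigendecomposition, since cos(A) = Q diag(cos(w)) Q^T when A = Q diag(w) Q^T.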
On the distribution of cosine similarity with application to biology
Cosine similarity is an established similarity metric for computing
associations on vectors, and it is commonly used to identify related samples
from biological perturbational data. The distribution of cosine similarity
changes with the covariance of the data, and this in turn affects the
statistical power to identify related signals. The relationship between the
mean and covariance of the distribution of the data and the distribution of
cosine similarity is poorly understood. In this work, we derive the asymptotic
moments of cosine similarity as a function of the data and identify the
criteria of the data covariance matrix that minimize the variance of cosine
similarity. We find that the variance of cosine similarity is minimized when
the eigenvalues of the covariance matrix are equal for centered data. One
immediate application of this work is characterizing the null distribution of
cosine similarity over a dataset with non-zero covariance structure.
Furthermore, this result can be used to optimize over a set of transformations
or representations on a dataset to maximize power, recall, or other
discriminative metrics, with direct application to noisy biological data. While
we consider the specific biological domain of perturbational data analysis, our
result has potential application for any use of cosine similarity or Pearson's
correlation on data with covariance structure. Comment: 30 pages, 4 figures
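The central claim, that the variance of cosine similarity over centered data shrinks as the covariance eigenvalues become equal, is easy to probe numerically. The sketch below is our own illustration (dimensions, sample sizes, and eigenvalues are arbitrary choices, not from the paper): it estimates the variance of cosine similarity between independent centered Gaussian pairs under an isotropic versus a strongly anisotropic covariance of the same trace.

```python
import numpy as np

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_variance(cov, n_pairs, rng):
    """Estimate Var[cos-sim(u, v)] for independent centered Gaussian
    pairs u, v drawn with the given covariance matrix."""
    d = cov.shape[0]
    L = np.linalg.cholesky(cov)
    sims = []
    for _ in range(n_pairs):
        u = L @ rng.standard_normal(d)
        v = L @ rng.standard_normal(d)
        sims.append(cosine_similarity(u, v))
    return float(np.var(sims))
```

With equal eigenvalues (identity covariance) the similarities concentrate near zero; with one dominant eigenvalue the pairs align along the dominant axis and the similarities spread toward plus or minus one, so the variance grows, consistent with the stated minimization criterion.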
A New Algorithm for Computing the Actions of Trigonometric and Hyperbolic Matrix Functions
A new algorithm is derived for computing the actions $f(tA)B$ and $f(t\sqrt{A})B$, where $f$ is the cosine, sinc, sine, hyperbolic cosine, hyperbolic sinc, or hyperbolic sine function, $A$ is an $n \times n$ matrix, and $B$ is $n \times n_0$ with $n_0 \ll n$. Here $\sqrt{A}$ denotes any matrix square root of $A$, and it is never required to be computed. The algorithm offers six independent output options given $A$, $B$, $t$, and a tolerance. For each option, actions of a pair of trigonometric or hyperbolic matrix functions are simultaneously computed. The algorithm scales the matrix down by a positive integer $s$, approximates $f(s^{-1}tA)B$ by a truncated Taylor series, and finally uses the recurrences of the Chebyshev polynomials of the first and second kind to recover $f(tA)B$. The selection of the scaling parameter and the degree of the Taylor polynomial is based on a forward error analysis and a sequence of the form $\|A^k\|^{1/k}$, in such a way that the overall computational cost of the algorithm is optimized. Shifting is used where applicable as a preprocessing step to reduce the scaling parameter. The algorithm works for any matrix $A$, and its computational cost is dominated by the formation of products of $A$ with $n \times n_0$ matrices, which can take advantage of level-3 BLAS implementations. Our numerical experiments show that the new algorithm behaves in a forward stable fashion and in most problems outperforms the existing algorithms in terms of CPU time, computational cost, and accuracy. Comment: 16 pages, 4 figures
When Hashes Met Wedges: A Distributed Algorithm for Finding High Similarity Vectors
Finding similar user pairs is a fundamental task in social networks, with
numerous applications in ranking and personalization tasks such as link
prediction and tie strength detection. A common manifestation of user
similarity is based upon network structure: each user is represented by a
vector that represents the user's network connections, where pairwise cosine
similarity among these vectors defines user similarity. The predominant task
for user similarity applications is to discover all similar pairs that have a
pairwise cosine similarity value larger than a given threshold $\tau$. In
contrast to previous work, where $\tau$ is assumed to be quite close to 1, we
focus on recommendation applications where $\tau$ is small but still
meaningful. The all-pairs cosine similarity problem is computationally
challenging on networks with billions of edges, and especially so for settings
with small $\tau$. To the best of our knowledge, there is no practical solution
for computing all user pairs above such a small threshold on large social
networks, even using the power of distributed algorithms.
Our work directly addresses this challenge by introducing a new algorithm,
WHIMP, that solves this problem efficiently in the MapReduce model. The key
insight in WHIMP is to combine the "wedge-sampling" approach of Cohen-Lewis for
approximate matrix multiplication with the SimHash random projection techniques
of Charikar. We provide a theoretical analysis of WHIMP, proving that it has
near optimal communication costs while maintaining computation cost comparable
with the state of the art. We also empirically demonstrate WHIMP's scalability
by computing all highly similar pairs on four massive data sets, and show that
it accurately finds high similarity pairs. In particular, we note that WHIMP
successfully processes the entire Twitter network, which has tens of billions
of edges.
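The SimHash half of this combination admits a compact sketch. The code below is our own illustration of Charikar's random hyperplane signatures (names and parameters are ours, and this is not WHIMP itself): each vector is hashed to the sign pattern of random projections, and because the probability that two signatures disagree on a bit equals angle(u, v) / pi, the cosine similarity can be estimated from the observed mismatch fraction.

```python
import numpy as np

def simhash_signatures(X, n_bits, rng):
    """One bit-signature per row of X: signs of n_bits random
    hyperplane projections (Charikar's SimHash)."""
    R = rng.standard_normal((X.shape[1], n_bits))
    return (X @ R) >= 0

def cosine_from_signatures(sig_u, sig_v):
    """Invert P[bits differ] = angle / pi to estimate cosine similarity."""
    mismatch = np.mean(sig_u != sig_v)
    return float(np.cos(np.pi * mismatch))
```

The estimate sharpens as n_bits grows; WHIMP's contribution is combining such sketches with wedge sampling so that candidate pairs are generated cheaply at small thresholds in a distributed setting.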