104 research outputs found

Orthogonal decompositions for computing the numerical rank of a matrix

Computing the numerical rank of a matrix arises in numerous applications in science and engineering. Three basic numerical approaches currently exist for this computation: the SVD, the URV decomposition, and rank-revealing QR decompositions (QRRR). In this work, several sequential algorithms based on these three approaches for computing the numerical matrix rank are analyzed experimentally. The comparative experimental study employs an in-house implementation for computing the URV decomposition and two new routines for computing the QRRR decomposition. In addition, routines from the LAPACK library are used for computing the SVD and the column-pivoted QR decomposition. The experimental results show that the QRRR decomposition is in practice as reliable as the more expensive SVD and URV decompositions. Moreover, the QRRR decompositions have the fundamental advantage of their low computational cost. Peer Reviewed
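As a rough illustration of the two kinds of estimates being compared, the sketch below computes a numerical rank from the SVD and from column-pivoted QR with NumPy/SciPy; these are toy routines under assumed tolerances, not the implementations evaluated in the paper.

```python
# Minimal sketch (not the paper's routines): numerical rank via the SVD and
# via QR with column pivoting, assuming NumPy/SciPy and a default tolerance.
import numpy as np
from scipy.linalg import qr

def rank_svd(A, tol=None):
    """Numerical rank as the number of singular values above a tolerance."""
    s = np.linalg.svd(A, compute_uv=False)
    if tol is None:
        tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]
    return int(np.sum(s > tol))

def rank_qrcp(A, tol=None):
    """Cheaper estimate: count the 'large' diagonal entries of R from a
    column-pivoted QR; less robust than the SVD but usually adequate."""
    _, R, _ = qr(A, mode='economic', pivoting=True)
    d = np.abs(np.diag(R))
    if tol is None:
        tol = max(A.shape) * np.finfo(A.dtype).eps * d[0]
    return int(np.sum(d > tol))

# Example: a 100 x 80 matrix of numerical rank 30.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30)) @ rng.standard_normal((30, 80))
print(rank_svd(A), rank_qrcp(A))   # both should report 30
```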

    Efficient Numerical Algorithms for Balanced Stochastic Truncation

We propose an efficient numerical algorithm for relative error model reduction based on balanced stochastic truncation. The method uses full-rank factors of the Gramians that are to be balanced against each other and exploits the fact that, for large-scale systems, these Gramians are often of low numerical rank. We use the easy-to-parallelize sign function method as the major computational tool in determining these full-rank factors and demonstrate the numerical performance of the suggested implementation of balanced stochastic truncation model reduction.
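The sign function method named as the major computational tool can be illustrated with the classical scaled Newton iteration; the sketch below is a dense, sequential toy version of that kernel in NumPy, not the paper's large-scale parallel implementation.

```python
# Minimal sketch of the matrix sign function via the scaled Newton iteration
# (illustrative only; the paper targets large-scale, parallel settings).
import numpy as np

def matrix_sign(A, tol=1e-12, max_iter=100):
    """Newton iteration Z <- (c*Z + inv(Z)/c) / 2 with determinant scaling.
    A must have no eigenvalues on the imaginary axis."""
    Z = A.astype(float)
    n = Z.shape[0]
    for _ in range(max_iter):
        Zinv = np.linalg.inv(Z)
        c = abs(np.linalg.det(Z)) ** (-1.0 / n)        # scaling factor
        Z_new = 0.5 * (c * Z + Zinv / c)
        if np.linalg.norm(Z_new - Z, 1) <= tol * np.linalg.norm(Z_new, 1):
            return Z_new
        Z = Z_new
    return Z

# sign(A) has eigenvalues +/-1 and the same invariant subspaces as A.
A = np.array([[-2.0, 1.0], [0.0, 3.0]])
print(matrix_sign(A))   # approx [[-1, 0.4], [0, 1]]
```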

    A kernel regression procedure in the 3D shape space with an application to online sales of children's wear

Shape regression is of key importance in many scientific fields. In this paper, we focus on the case where the shape of an object is represented by a configuration matrix of landmarks. It is well known that this shape space has a finite-dimensional Riemannian manifold structure (non-Euclidean) which makes it difficult to work with. Papers about regression on this space are scarce in the literature. The majority of them are restricted to the case of a single explanatory variable, usually time or age, and many of them work in the approximated tangent space. In this paper we adapt the general method for kernel regression analysis of manifold-valued data proposed by Davis et al. (2007) to the three-dimensional case of Kendall's shape space and generalize it to multiple explanatory variables. We also propose bootstrap confidence intervals for prediction. A simulation study is carried out to check the goodness of the procedure, and finally it is applied to a 3D database obtained from an anthropometric survey of the Spanish child population with a potential application to online sales of children's wear.
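The kernel regression of Davis et al. replaces an ordinary weighted average with a weighted Fréchet mean on the manifold. As a rough Euclidean stand-in, the sketch below shows only the kernel-weighting step for multiple covariates; all names, data, and the bandwidth are illustrative assumptions, not the paper's procedure on Kendall's shape space.

```python
# Minimal Euclidean sketch of Nadaraya-Watson kernel regression with a
# Gaussian kernel. The paper's method replaces the weighted average below by
# a weighted Frechet mean computed on Kendall's 3D shape space.
import numpy as np

def kernel_regression(X, Y, x0, bandwidth=1.0):
    """X: (n, p) covariates, Y: (n, d) responses (e.g. flattened landmark
    configurations), x0: (p,) query point. Returns the kernel-weighted mean."""
    d2 = np.sum((X - x0) ** 2, axis=1)           # squared distances to query
    w = np.exp(-0.5 * d2 / bandwidth ** 2)        # Gaussian kernel weights
    w /= w.sum()
    return w @ Y                                  # weighted (Euclidean) mean

# Toy usage: predict a flattened "shape" vector from two covariates.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(50, 2))
Y = np.column_stack([X[:, 0], X[:, 1], X.sum(1), X[:, 0] - X[:, 1]])
print(kernel_regression(X, Y, x0=np.array([5.0, 5.0]), bandwidth=2.0))
```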

Relationship between the method used to assess students' work and their level of learning

The aim of this work is to present two methods for assessing the work that students carry out outside the classroom and to compare the level of learning achieved with each of them. The first method is based on peer assessment, while the second combines self-assessment with an objective test. In both cases, the main objective is to provide fast feedback to the students. Comparing the students' grades leads to the conclusion that the use of objective tests improves the students' level of learning. The second method required the development of a software tool that grades the students' answers and, at the same time, detects possible problems in the wording of the objective tests.

    randUTV: A Blocked Randomized Algorithm for Computing a Rank-Revealing UTV Factorization

A randomized algorithm for efficiently computing a so-called UTV factorization is presented. Given a matrix A, the algorithm “randUTV” computes a factorization A = UTV*, where U and V have orthonormal columns and T is triangular (either upper or lower, whichever is preferred). The algorithm randUTV is developed primarily to be a fast and easily parallelized alternative to algorithms for computing the Singular Value Decomposition (SVD). randUTV provides accuracy very close to that of the SVD for problems such as low-rank approximation, solving ill-conditioned linear systems, and determining bases for various subspaces associated with the matrix. Moreover, randUTV produces highly accurate approximations to the singular values of A. Unlike the SVD, the proposed randomized algorithm builds a UTV factorization in an incremental, single-stage, and noniterative way, making it possible to halt the factorization process once a specified tolerance has been met. Numerical experiments comparing the accuracy and speed of randUTV to the SVD are presented. Further experiments demonstrate that, in comparison to column-pivoted QR, another factorization often used as a relatively economical alternative to the SVD, randUTV compares favorably in terms of speed while providing far higher accuracy.
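To illustrate the kind of factorization being computed, the sketch below builds an unblocked randomized UTV with NumPy: power iteration to find a right transform V, then a QR factorization to obtain U and the triangular T. The real randUTV is blocked, incremental, and far more efficient, so treat this structure as an illustrative assumption only.

```python
# Minimal, unblocked sketch in the spirit of a randomized UTV factorization
# (not the actual randUTV algorithm). Assumes NumPy only.
import numpy as np

def rand_utv_sketch(A, q=2):
    """Return U, T, V with A = U @ T @ V.T, U and V orthonormal, T upper
    triangular, and the leading columns of U, V roughly aligned with the
    dominant singular subspaces of A. q = number of power iterations."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    Y = A.T @ (A @ rng.standard_normal((n, n)))   # sample the row space
    for _ in range(q - 1):
        Y = A.T @ (A @ Y)                         # power iterations sharpen it
    V, _ = np.linalg.qr(Y)                        # orthonormal right transform
    U, T = np.linalg.qr(A @ V)                    # left transform + triangular T
    return U, T, V

A = np.random.default_rng(1).standard_normal((60, 40))
U, T, V = rand_utv_sketch(A)
print(np.allclose(A, U @ T @ V.T))                # True: exact factorization
print(np.abs(np.diag(T))[:5])                     # roughly tracks leading singular values
```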

    Computing rank-revealing factorizations of matrices stored out-of-core

This paper describes efficient algorithms for computing rank-revealing factorizations of matrices that are too large to fit in main memory (RAM) and must instead be stored on slow external memory devices such as disks (out-of-core or out-of-memory). Traditional algorithms for computing rank-revealing factorizations (such as the column-pivoted QR factorization and the singular value decomposition) are very communication intensive, as they require many vector-vector and matrix-vector operations, which become prohibitively expensive when the data is not in RAM. Randomization allows the methods to be reformulated so that large contiguous blocks of the matrix are processed in bulk. The paper describes two distinct methods. The first is a blocked version of column-pivoted Householder QR, organized as a “left-looking” method to minimize the number of expensive write operations. The second method employs a UTV factorization and is organized as an algorithm-by-blocks to overlap computations and I/O operations. As it incorporates power iterations, it is much better at revealing the numerical rank. Numerical experiments on several computers demonstrate that the new algorithms are almost as fast when processing data stored on slow memory devices as traditional algorithms are for data stored in RAM.
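To illustrate the "large contiguous blocks processed in bulk" idea, the toy sketch below streams row blocks of a matrix memory-mapped from disk to form a randomized sketch Y = A G. The file name, block size, and sketch size are assumptions; this is not the paper's left-looking QR or algorithm-by-blocks UTV.

```python
# Minimal sketch of out-of-core block processing: the matrix lives on disk
# and is read in large contiguous row blocks, casting the work as block
# matrix products (illustration only, not the paper's algorithms).
import numpy as np

def sketch_out_of_core(path, m, n, k, block_rows=1024):
    """A is stored row-major as float64 of shape (m, n) in a binary file.
    Returns Y = A @ G for a random n-by-k test matrix G, reading A by blocks."""
    A = np.memmap(path, dtype=np.float64, mode='r', shape=(m, n))
    G = np.random.default_rng(0).standard_normal((n, k))
    Y = np.empty((m, k))
    for i in range(0, m, block_rows):
        Y[i:i + block_rows] = A[i:i + block_rows] @ G   # one large read + GEMM
    return Y

# Toy usage: write a small matrix to disk, then sketch it block by block.
m, n, k = 5000, 200, 10
A_full = np.random.default_rng(1).standard_normal((m, n))
A_full.tofile('A.bin')
Y = sketch_out_of_core('A.bin', m, n, k)
print(np.allclose(Y, A_full @ np.random.default_rng(0).standard_normal((n, k))))
```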

    Algorithm 1033: Parallel Implementations for Computing the Minimum Distance of a Random Linear Code on Distributed-memory Architectures

This is the accepted version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Transactions on Mathematical Software, Volume 49, Issue 1, https://doi.org/10.1145/3573383

The minimum distance of a linear code is a key concept in information theory. Therefore, the time required to compute it is very important to many problems in this area. In this article, we introduce a family of implementations of the Brouwer–Zimmermann algorithm for distributed-memory architectures for computing the minimum distance of a random linear code over GF(2). Current commercial and public-domain software only works on either unicore architectures or shared-memory architectures, which are limited in the number of cores/processors employed in the computation. Our implementations focus on distributed-memory architectures and are thus able to employ hundreds or even thousands of cores in the computation of the minimum distance. Our experimental results show that our implementations are much faster, even by up to several orders of magnitude, than the implementations in widespread use today.

The authors would like to thank the University of Alicante for granting access to the ua cluster. They also want to thank Javier Navarrete for his assistance and support when working on this machine. The authors would also like to thank Robert A. van de Geijn from the University of Texas at Austin for granting access to the skx cluster. Quintana-Ortí was supported by the Spanish Ministry of Science, Innovation and Universities under Grant RTI2018-098156-B-C54, co-financed by FEDER funds. Hernando was supported by the Spanish Ministry of Science, Innovation and Universities under Grants PGC2018-096446-B-C21 and PGC2018-096446-B-C22, and by University Jaume I under Grant PB1-1B2018-10. Igual was supported by Grants PID2021-126576NB-I00 and RTI2018-B-I00, funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”, and by the Spanish CM (S2018/TCS-4423). This work has been supported by the Madrid Government (Comunidad de Madrid, Spain) under the Multiannual Agreement with Complutense University in the line Program to Stimulate Research for Young Doctors in the context of the V PRICIT (Regional Programme of Research and Technological Innovation), under project PR65-19/22445.
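For reference, the quantity being computed can be defined by the brute-force procedure below over GF(2). This exhaustive enumeration is not the Brouwer–Zimmermann algorithm from the article and is only feasible for very small codes; the generator matrix used is just a small illustrative example.

```python
# Minimal brute-force sketch: minimum distance of a binary linear code given
# by a k x n generator matrix over GF(2). Enumerates all 2^k - 1 nonzero
# codewords, so it is only usable for tiny k; the Brouwer-Zimmermann algorithm
# avoids this exhaustive enumeration.
import itertools
import numpy as np

def min_distance_bruteforce(G):
    """G: (k, n) generator matrix with entries in {0, 1}."""
    k, n = G.shape
    best = n
    for msg in itertools.product((0, 1), repeat=k):
        if not any(msg):
            continue                                 # skip the zero codeword
        codeword = (np.array(msg) @ G) % 2           # encode over GF(2)
        best = min(best, int(codeword.sum()))        # Hamming weight
    return best

# [7,4] Hamming code: minimum distance 3.
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
print(min_distance_bruteforce(G))   # 3
```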