
    Efficient Representation of Computational Meshes

    We present a simple yet general and efficient approach to the representation of computational meshes. Meshes are represented as sets of mesh entities of different topological dimensions and their incidence relations. We discuss a straightforward and efficient storage scheme for such mesh representations and efficient algorithms for computing arbitrary incidence relations from a given initial and minimal set of incidence relations. The general representation may harbor a wide range of computational meshes, and may also be specialized to provide simple user interfaces for particular meshes, including simplicial meshes in one, two and three space dimensions where the mesh entities correspond to vertices, edges, faces and cells. We elaborate on how the proposed concepts and data structures may be used for the assembly of variational forms in parallel over distributed finite element meshes. Benchmarks are presented to demonstrate efficiency in terms of CPU time and memory usage.
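
    As a rough illustration of the entity/incidence view described above (a hypothetical sketch, not the authors' implementation), the following Python snippet stores a triangle mesh as a cell-to-vertex incidence relation and computes the transposed vertex-to-cell relation, one example of deriving a new incidence relation from a minimal initial set.

    # Hypothetical sketch (not the paper's code): a mesh stored as incidence
    # relations d -> d', here cells (dim 2) to vertices (dim 0), plus the
    # transposed relation computed from it.
    from collections import defaultdict

    class Mesh:
        def __init__(self, cell_vertices):
            # incidence[(d, d2)][i] = indices of d2-entities incident to entity i of dimension d
            self.incidence = {(2, 0): cell_vertices}

        def compute_transpose(self, d, d2):
            """Compute the (d2, d) relation by inverting the stored (d, d2) relation."""
            inverse = defaultdict(list)
            for entity, neighbours in enumerate(self.incidence[(d, d2)]):
                for n in neighbours:
                    inverse[n].append(entity)
            self.incidence[(d2, d)] = [inverse[k] for k in sorted(inverse)]
            return self.incidence[(d2, d)]

    # Two triangles sharing an edge, given as vertex indices per cell
    mesh = Mesh([[0, 1, 2], [1, 2, 3]])
    print(mesh.compute_transpose(2, 0))  # vertex -> cells: [[0], [0, 1], [0, 1], [1]]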

    Parallel scientific computing with message-passing toolboxes

    Users of Scientific Computing Environments (SCE) always demand more computing power for their CPU-intensive SCE applications. Using the proposed toolboxes, users of the well-known Matlab® and Octave platforms in a computer cluster can parallelize their interpreted applications using the native multi-computer programming paradigm of message-passing, such as that provided by PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). For many SCE applications, a parallelization scheme can be found so that the resulting speedup is nearly linear in the number of computers used. The toolboxes are almost comprehensive interfaces to the corresponding libraries, they support all the compatible data types in the base SCE, and they have been designed with performance and maintainability in mind. In this paper, we summarize our previous work, its repercussion, and some results obtained by end-users. Focusing on our most recent MPI Toolbox for Octave, we briefly describe its main features and introduce a case study: the Mandelbrot set.
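
    The Mandelbrot case study lends itself to the coarse-grained, message-passing decomposition the abstract describes. The sketch below is a hypothetical Python/mpi4py analogue (not the Octave MPI Toolbox code itself): each rank computes a horizontal strip of the image and the strips are gathered on rank 0.

    # Hypothetical sketch, assuming mpi4py and NumPy are available.
    import numpy as np
    from mpi4py import MPI

    def mandelbrot_rows(rows, height, width, max_iter=100):
        """Iteration counts for the given image rows."""
        y = np.linspace(-1.5, 1.5, height)[rows, None]
        x = np.linspace(-2.0, 1.0, width)[None, :]
        c = x + 1j * y
        z = np.zeros_like(c)
        counts = np.zeros(c.shape, dtype=int)
        for _ in range(max_iter):
            mask = np.abs(z) <= 2.0
            z[mask] = z[mask] ** 2 + c[mask]
            counts[mask] += 1
        return counts

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    height, width = 600, 800

    # Static block decomposition of the image rows across ranks.
    rows = np.array_split(np.arange(height), size)[rank]
    local = mandelbrot_rows(rows, height, width)

    # Gather the strips on rank 0, where the full image is assembled.
    strips = comm.gather(local, root=0)
    if rank == 0:
        image = np.vstack(strips)
        print(image.shape)  # (600, 800)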

    High-throughput fuzzy clustering on heterogeneous architectures

    The Internet of Things (IoT) is pushing the next economic revolution, in which the main players are data and immediacy. IoT is increasingly producing large amounts of data that are now classified as "dark data" because most are created but never analyzed. The efficient analysis of this data deluge is becoming mandatory in order to transform it into meaningful information. Among the techniques available for this purpose, clustering techniques, which classify different patterns into groups, have proven to be very useful for obtaining knowledge from the data. However, clustering algorithms are computationally hard, especially when it comes to large data sets, and therefore they require the most powerful computing platforms on the market. In this paper, we investigate coarse- and fine-grain parallelization strategies on Intel and Nvidia architectures for the fuzzy minimals (FM) algorithm, a fuzzy clustering technique that has shown very good results in the literature. We provide an in-depth performance analysis of the FM algorithm's main bottlenecks, reporting a speed-up factor of up to 40x compared to the sequential counterpart version. This work was partially supported by the Fundación Séneca del Centro de Coordinación de la Investigación de la Región de Murcia under Project 20813/PI/18, and by the Spanish Ministry of Science, Innovation and Universities under grants TIN2016-78799-P (AEI/FEDER, UE), RTI2018-096384-B-I00, RTI2018-098156-B-C53 and RTC-2017-6389-5. Cebrian, JM.; Imbernón, B.; Soto, J.; García, JM.; Cecilia-Canales, JM. (2020). High-throughput fuzzy clustering on heterogeneous architectures. Future Generation Computer Systems, 106, 401-411. https://doi.org/10.1016/j.future.2020.01.022
Computers & Geosciences, 10(2-3), 191-203. doi:10.1016/0098-3004(84)90020-7Havens, T. C., Bezdek, J. C., Leckie, C., Hall, L. O., & Palaniswami, M. (2012). Fuzzy c-Means Algorithms for Very Large Data. IEEE Transactions on Fuzzy Systems, 20(6), 1130-1146. doi:10.1109/tfuzz.2012.2201485Flores-Sintas, A., Cadenas, J., & Martin, F. (1998). A local geometrical properties application to fuzzy clustering. Fuzzy Sets and Systems, 100(1-3), 245-256. doi:10.1016/s0165-0114(97)00038-9Soto, J., Flores-Sintas, A., & Palarea-Albaladejo, J. (2008). Improving probabilities in a fuzzy clustering partition. Fuzzy Sets and Systems, 159(4), 406-421. doi:10.1016/j.fss.2007.08.016Timón, I., Soto, J., Pérez-Sánchez, H., & Cecilia, J. M. (2016). Parallel implementation of fuzzy minimals clustering algorithm. Expert Systems with Applications, 48, 35-41. doi:10.1016/j.eswa.2015.11.011Flores-Sintas, A., M. Cadenas, J., & Martin, F. (2001). Detecting homogeneous groups in clustering using the Euclidean distance. Fuzzy Sets and Systems, 120(2), 213-225. doi:10.1016/s0165-0114(99)00110-4Wang, H., Potluri, S., Luo, M., Singh, A. K., Sur, S., & Panda, D. K. (2011). MVAPICH2-GPU: optimized GPU to GPU communication for InfiniBand clusters. Computer Science - Research and Development, 26(3-4), 257-266. doi:10.1007/s00450-011-0171-3Kaltofen, E., & Villard, G. (2005). On the complexity of computing determinants. computational complexity, 13(3-4), 91-130. doi:10.1007/s00037-004-0185-3Johnson, S. C. (1967). Hierarchical clustering schemes. Psychometrika, 32(3), 241-254. doi:10.1007/bf02289588Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O. P., Tiwari, A., … Lin, C.-T. (2017). A review of clustering techniques and developments. Neurocomputing, 267, 664-681. doi:10.1016/j.neucom.2017.06.053Woodley, A., Tang, L.-X., Geva, S., Nayak, R., & Chappell, T. (2019). Parallel K-Tree: A multicore, multinode solution to extreme clustering. Future Generation Computer Systems, 99, 333-345. doi:10.1016/j.future.2018.09.038Kwedlo, W., & Czochanski, P. J. (2019). A Hybrid MPI/OpenMP Parallelization of KK -Means Algorithms Accelerated Using the Triangle Inequality. IEEE Access, 7, 42280-42297. doi:10.1109/access.2019.2907885Li, Y., Zhao, K., Chu, X., & Liu, J. (2013). Speeding up k-Means algorithm by GPUs. Journal of Computer and System Sciences, 79(2), 216-229. doi:10.1016/j.jcss.2012.05.004Saveetha, V., & Sophia, S. (2018). Optimal Tabu K-Means Clustering Using Massively Parallel Architecture. Journal of Circuits, Systems and Computers, 27(13), 1850199. doi:10.1142/s0218126618501992Djenouri, Y., Djenouri, D., Belhadi, A., & Cano, A. (2019). Exploiting GPU and cluster parallelism in single scan frequent itemset mining. Information Sciences, 496, 363-377. doi:10.1016/j.ins.2018.07.020Krawczyk, B. (2016). GPU-Accelerated Extreme Learning Machines for Imbalanced Data Streams with Concept Drift. Procedia Computer Science, 80, 1692-1701. doi:10.1016/j.procs.2016.05.509Fang, Y., Chen, Q., & Xiong, N. (2019). A multi-factor monitoring fault tolerance model based on a GPU cluster for big data processing. Information Sciences, 496, 300-316. doi:10.1016/j.ins.2018.04.053Tanweer, S., & Rao, N. (2019). Novel Algorithm of CPU-GPU hybrid system for health care data classification. Journal of Drug Delivery and Therapeutics, 9(1-s), 355-357. doi:10.22270/jddt.v9i1-s.244

    Grid'5000: a large scale and highly reconfigurable grid experimental testbed

    Large scale distributed systems such as Grids are difficult to study from theoretical models and simulators only. Most Grids deployed at large scale are production platforms that are inappropriate research tools because of their limited reconfiguration, control and monitoring capabilities. In this paper, we present Grid’5000, a 5000 CPU nation-wide infrastructure for research in Grid computing. Grid’5000 is designed to provide a scientific tool for computer scientists similar to the large-scale instruments used by physicists, astronomers, and biologists. We describe the motivations, design considerations, architecture, control, and monitoring infrastructure of this experimental platform. We present configuration examples and performance results for the reconfiguration subsystem.

    Proyecto Docente e Investigador, Trabajo Original de Investigación y Presentación de la Defensa, preparado por Germán Moltó para concursar a la plaza de Catedrático de Universidad, concurso 082/22, plaza 6708, área de Ciencia de la Computación e Inteligencia Artificial

    This document contains the teaching and research project of the candidate Germán Moltó Martínez, submitted as a requirement for the competitive examination for access to positions in the University Teaching Bodies. Specifically, the document concerns the competition for position 6708, Full Professor (Catedrático de Universidad) in the area of Computer Science, in the Departamento de Sistemas Informáticos y Computación of the Universitat Politècnica de València. The position is attached to the Escola Tècnica Superior d'Enginyeria Informàtica and its profile comprises the courses "Infraestructuras de Cloud Público" (Public Cloud Infrastructures) and "Estructuras de Datos y Algoritmos" (Data Structures and Algorithms). The Academic, Teaching and Research Record is also included, as well as the presentation used during the defense. Germán Moltó Martínez (2022). Proyecto Docente e Investigador, Trabajo Original de Investigación y Presentación de la Defensa, preparado por Germán Moltó para concursar a la plaza de Catedrático de Universidad, concurso 082/22, plaza 6708, área de Ciencia de la Computación e Inteligencia Artificial. http://hdl.handle.net/10251/18903

    Computational General Relativistic Force-Free Electrodynamics: I. Multi-Coordinate Implementation and Testing

    General relativistic force-free electrodynamics is one possible plasma limit employed to analyze energetic outflows in which strong magnetic fields are dominant over all inertial phenomena. The amazing images of black hole shadows from the galactic center and the M87 galaxy provide a first direct glimpse into the physics of accretion flows in the most extreme environments of the universe. The efficient extraction of energy in the form of collimated outflows or jets from a rotating black hole is directly linked to the topology of the surrounding magnetic field. We aim to provide a tool to numerically model the dynamics of such fields in magnetospheres around compact objects, such as black holes and neutron stars. In doing so, we probe their role in the formation of high-energy phenomena such as magnetar flares and the highly variable teraelectronvolt emission of some active galactic nuclei. In this work, we present numerical strategies capable of modeling fully dynamical force-free magnetospheres of compact astrophysical objects. We provide implementation details and extensive testing of our implementation of general relativistic force-free electrodynamics in Cartesian and spherical coordinates using the infrastructure of the Einstein Toolkit. The employed hyperbolic/parabolic cleaning of numerical errors with full general relativistic compatibility allows for fast advection of numerical errors in dynamical spacetimes. Such fast advection of divergence errors significantly improves the stability of the general relativistic force-free electrodynamics modeling of black hole magnetospheres. Comment: 19 pages, 15 figures, submitted to A&
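
    For orientation, the hyperbolic/parabolic cleaning mentioned above follows the generalized Lagrange multiplier idea; in flat spacetime (a simplified sketch, not the paper's general relativistic formulation) the magnetic-field evolution is augmented with a scalar field \psi:

    \partial_t \vec{B} + \nabla \times \vec{E} + \nabla \psi = 0, \qquad
    \partial_t \psi + c_h^2 \, \nabla \cdot \vec{B} = -\frac{c_h^2}{c_p^2}\, \psi .

    Combining the two equations yields a telegraph-type equation for the constraint violation, so \nabla \cdot \vec{B} errors are advected away at the speed c_h while being damped on the parabolic timescale c_p^2 / c_h^2, which is the fast advection of divergence errors the abstract refers to.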