90 research outputs found
Automatic Probabilistic Program Verification through Random Variable Abstraction
The weakest pre-expectation calculus has proved to be a mature theory for
analyzing quantitative properties of probabilistic and nondeterministic programs.
We present an automatic method for proving quantitative linear properties on
any denumerable state space using iterative backwards fixed point calculation
in the general framework of abstract interpretation. In order to accomplish
this task we present the technique of random variable abstraction (RVA) and we
also postulate a sufficient condition to achieve exact fixed point computation
in the abstract domain. The feasibility of our approach is shown with two
examples, one obtaining the expected running time of a probabilistic program,
and the other the expected gain of a gambling strategy.
Our method works on general guarded probabilistic and nondeterministic
transition systems instead of plain pGCL programs, allowing us to easily model
a wide range of systems including distributed ones and unstructured programs.
We present the operational and weakest precondition semantics for these
programs and prove their equivalence.
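To make the flavour of the backwards fixed-point computation concrete, here is a minimal Python sketch of Kleene iteration of a weakest pre-expectation transformer for a fair gambler's-ruin program on a finite state space. The program, the state bound N, and the gain function are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Gambler's ruin: capital s in {0..N}; each round the gambler wins or
# loses 1 unit with probability p; 0 and N are absorbing. The expected
# gain f(s) is the least fixed point of the pre-expectation transformer:
#   f(s) = s                           if s in {0, N}
#   f(s) = p*f(s+1) + (1-p)*f(s-1)     otherwise
N, p = 10, 0.5
f = np.zeros(N + 1)
for _ in range(100_000):
    g = np.empty_like(f)
    g[0], g[N] = 0.0, float(N)
    g[1:N] = p * f[2:N + 1] + (1 - p) * f[0:N - 1]
    if np.max(np.abs(g - f)) < 1e-12:   # iterates have converged
        break
    f = g
print(f)  # for p = 0.5 the iteration converges to f(s) = s
```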
q-State Potts model metastability study using optimized GPU-based Monte Carlo algorithms
We implemented a GPU-based parallel code to perform Monte Carlo simulations
of the two-dimensional q-state Potts model. The algorithm is based on a
checkerboard update scheme and assigns independent random number generators to
each thread. The implementation makes it possible to simulate systems of up to
~10^9 spins with an average time per spin flip of 0.147 ns on the fastest GPU
card tested, representing a speedup of up to 155x compared with an optimized
serial code running on a high-end CPU. The possibility of performing high-speed
simulations at large enough system sizes allowed us to provide positive
numerical evidence for the existence of metastability in very large systems
based on Binder's criterion, namely, the presence or absence of specific heat
singularities at spinodal temperatures different from the transition
temperature.
Comment: 30 pages, 7 figures. Accepted in Computer Physics Communications.
code available at:
http://www.famaf.unc.edu.ar/grupos/GPGPU/Potts/CUDAPotts.htm
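To illustrate the checkerboard idea, the following is a minimal NumPy sketch of one Metropolis sweep of the 2D q-state Potts model, not the authors' CUDA code: each site's four neighbours lie on the opposite parity of the checkerboard, so all sites of one parity can be updated simultaneously.

```python
import numpy as np

def checkerboard_half_sweep(spins, q, beta, parity, rng):
    """Metropolis update of all sites with (i + j) % 2 == parity.

    Neighbours of these sites all have the opposite parity, so the
    whole sublattice can be updated at once; this is what maps to one
    GPU thread per site in the paper's scheme.
    """
    i, j = np.indices(spins.shape)
    mask = (i + j) % 2 == parity
    # Four nearest neighbours with periodic boundaries.
    nbrs = [np.roll(spins, s, axis=a) for s in (1, -1) for a in (0, 1)]
    proposal = rng.integers(0, q, size=spins.shape)
    # Potts energy E = -J * (number of satisfied bonds), with J = 1, so
    # dE = (bonds satisfied now) - (bonds satisfied by the proposal).
    sat_old = sum((spins == n).astype(int) for n in nbrs)
    sat_new = sum((proposal == n).astype(int) for n in nbrs)
    dE = sat_old - sat_new
    # Accept with probability min(1, exp(-beta * dE)).
    accept = mask & (rng.random(spins.shape) < np.exp(-beta * np.maximum(dE, 0)))
    spins[accept] = proposal[accept]

# Toy usage: a 64x64 lattice with q = 6, far smaller than the ~10^9
# spins reported in the paper.
rng = np.random.default_rng(0)
q, L = 6, 64
spins = rng.integers(0, q, size=(L, L))
for _ in range(100):
    for parity in (0, 1):
        checkerboard_half_sweep(spins, q, beta=1.25, parity=parity, rng=rng)
```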
Basal Dynamics and Internal Structure of Ice Sheets
The internal structure of ice sheets reflects the history of flow and deformation experienced by the ice mass. Flow and deformation are controlled by processes occurring within the ice mass and at its boundaries, including surface accumulation or ablation, ice rheology, basal topography, basal sliding, and basal melting or freezing. The internal structure and basal environment of ice sheets are studied with ice-penetrating radar. Recently, radar observations in Greenland and Antarctica have imaged large englacial structures rising from near the bed that deform the overlying stratigraphy into anticlines, synclines, and overturned folds. The mechanisms that may produce these structures include basal freeze-on, travelling slippery patches at the ice base, and rheological contrasts within the ice column.
In this thesis, I explore the setting and mechanisms that produce large basal stratigraphic structures inside ice sheets. First, I use radar data to map subglacial hydrologic networks that deliver meltwater uphill towards freeze-on structures in East Antarctica. Next, I use a thermomechanical flowline model to demonstrate that trains of alternating slippery and sticky patches can form underneath ice sheets and travel downstream over time. The disturbances to the ice flow field produced by these travelling patches produce stratigraphic folds resembling the observations. I then examine the overturned folds produced by a single travelling sticky patch using a kinematic flowline model. This model is used to interpret
stratigraphic measurements in terms of the dynamic properties of basal slip. Finally, I use a simple local one-dimensional model to estimate the thickness of basal freeze-on that can be produced based on the supply of available meltwater, the thermal boundary conditions, ice sheet geometry, and the ice flow regime.
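As a rough illustration of the kind of local one-dimensional estimate mentioned, a basal energy balance converts the net heat removed by conduction into a freeze-on rate. The flux values below are assumed, typical-magnitude numbers, and the sketch is ours rather than the thesis model.

```python
# Local basal energy balance (assumed formulation): freezing occurs when
# conduction into the ice column removes more heat than geothermal flux
# and basal friction supply. Freeze-on rate = net heat loss / (rho * L).
RHO_ICE = 917.0       # ice density, kg/m^3
L_FUSION = 3.34e5     # latent heat of fusion, J/kg
SECONDS_PER_YEAR = 3.156e7

q_geothermal = 0.055  # W/m^2, typical continental value (assumed)
q_friction = 0.010    # W/m^2, basal drag times sliding speed (assumed)
q_conduction = 0.080  # W/m^2, conductive loss into cold ice (assumed)

net_loss = q_conduction - q_geothermal - q_friction   # W/m^2
rate_m_per_s = net_loss / (RHO_ICE * L_FUSION)
print(f"freeze-on rate: {rate_m_per_s * SECONDS_PER_YEAR * 1e3:.2f} mm/yr")
```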
Bisimulations for non-deterministic labelled Markov processes
We extend the theory of labelled Markov processes to include internal non-determinism, which is a fundamental concept for the further development of a process theory with abstraction on non-deterministic continuous probabilistic systems. We define non-deterministic labelled Markov processes (NLMP) and provide three definitions of bisimulation: a bisimulation following a traditional characterisation; a state-based bisimulation tailored to our 'measurable' non-determinism; and an event-based bisimulation. We show the relations between them, including the fact that the largest state bisimulation is also an event bisimulation. We also introduce a variation of the Hennessy-Milner logic that characterises event bisimulation and is sound with respect to the other bisimulations for an arbitrary NLMP. This logic, however, is infinitary as it contains a denumerable conjunction. We then introduce a finitary sublogic that characterises all bisimulations for an image-finite NLMP whose underlying measure space is also analytic. Hence, in this setting, all the notions of bisimulation we consider turn out to be equal. Finally, we show that all these bisimulation notions are different in the general case. The counterexamples that separate them turn out to be non-probabilistic NLMPs.
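NLMPs live on continuous measurable spaces, but the flavour of the state-based bisimulation can be seen on a finite nondeterministic probabilistic transition system via standard partition refinement. The toy system and encoding below are our own illustration, not a construction from the paper.

```python
# trans[state][action] = list of distributions (dict: successor -> prob).
trans = {
    0: {'a': [{1: 0.5, 2: 0.5}]},
    1: {'b': [{1: 1.0}]},
    2: {'b': [{2: 1.0}]},
    3: {'a': [{1: 0.5, 2: 0.5}]},
}

def signature(s, block_of):
    """Behaviour of s up to the current partition: for each action, the
    set of successor distributions lifted to equivalence classes."""
    sig = []
    for act, dists in sorted(trans.get(s, {}).items()):
        lifted = set()
        for d in dists:
            agg = {}
            for t, p in d.items():
                agg[block_of[t]] = agg.get(block_of[t], 0.0) + p
            lifted.add(frozenset(agg.items()))
        sig.append((act, frozenset(lifted)))
    return tuple(sig)

# Refine the trivial partition until the signatures stabilise.
states = sorted(trans)
block_of = {s: 0 for s in states}
while True:
    seen, new = {}, {}
    for s in states:
        key = (block_of[s], signature(s, block_of))
        new[s] = seen.setdefault(key, len(seen))
    if new == block_of:
        break
    block_of = new

print(block_of)  # states 0 and 3 land in the same block; so do 1 and 2
```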
Boosting materials science simulations by high performance computing
Paper presented at the XXIII Congreso de Métodos Numéricos y sus Aplicaciones, La Plata, Argentina, November 7 to 10, 2017. Published in: Mecánica Computacional vol. XXXV, no. 10.
Technology development is often limited by knowledge of materials engineering and manufacturing processes. This scenario spans scales and disciplines, from aerospace engineering to MicroElectroMechanical Systems (MEMS) and NanoElectroMechanical Systems (NEMS). The mechanical response of materials is dictated by atomic/nanometric-scale processes that can be explored by molecular dynamics (MD) simulations. In this work we employ atomistic simulations to probe indentation as a prototypical deformation process, showing the advantage of High Performance Computing (HPC) implementations for speeding up research. Selecting the right HPC hardware for executing simulations usually involves testing different hardware architectures and software configurations. Currently there are several alternatives: using HPC cluster facilities shared among several researchers, as provided by universities or government institutions; owning a small cluster; acquiring a local workstation with a high-end microprocessor; or using accelerators such as Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), or Intel Many Integrated Cores (MIC). Given this broad set of alternatives, we run several benchmarks using various university HPC clusters, a former TOP500 cluster in a foreign computing center, two high-end workstations, and several accelerators. A number of different metrics are proposed to compare performance and aid in selecting the best hardware architecture according to the needs and budget of researchers. Amongst several results, we find that the Titan X Pascal GPU has a ~3x speedup over 64 AMD Opteron CPU cores.
https://cimec.org.ar/ojs/index.php/mc/article/view/5277
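As a sketch of the kind of metric-based comparison described, the snippet below derives speedup and a cost-normalized figure from benchmark timings. All numbers are invented, chosen only so that the GPU row reproduces the reported ~3x over 64 Opteron cores.

```python
# Hypothetical wall-clock times for one MD indentation workload.
results = {
    "64x AMD Opteron cores": {"time_s": 1800.0, "cost_usd": 9000},
    "Titan X Pascal GPU":    {"time_s":  600.0, "cost_usd": 1200},
    "High-end workstation":  {"time_s": 2400.0, "cost_usd": 3500},
}

baseline = results["64x AMD Opteron cores"]["time_s"]
for name, r in results.items():
    speedup = baseline / r["time_s"]
    per_kusd = speedup / (r["cost_usd"] / 1000.0)  # speedup per 1000 USD
    print(f"{name:22s} speedup {speedup:4.1f}x  speedup/k$ {per_kusd:5.2f}")
```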
Lizzie I, la computadora de La Voz
In 1983, a team led by engineer Juan Carlos Cammisa developed and manufactured an 8-bit, CP/M-compatible computer for the Córdoba newspaper La Voz del Interior. Four people took part in the development; manufacturing relied on suppliers of MicroSistemas, two of whose workers joined the team. The historical context, the local development of higher education, and Argentina's electronics industry made it possible to build a solid product that served both the newsroom and the ad-reception offices. Forty units were manufactured. The computerization of the newsroom raised fears among journalists. During 1984 the ad-reception offices increased the number of ads received sixfold. Local development was cut short by the massive arrival of PC/DOS machines at very low prices.
Enseñar a Programar y Programar para Aprender
In recent years, various governmental and private organizations, in many developed and developing countries, have questioned the content taught in computing-related subjects in compulsory schooling (primary and secondary), as well as the teaching strategies chosen to train future computing analysts, engineers, and graduates at universities. Reports produced in several countries by special commissions, with leading figures from computing and education, indicate that what is currently taught in schools in the name of computing amounts to program-user skills (Fundación Sadosky, 2013; Furber, 2012; Shackelford, 2006). Far from bringing students and future generations closer to an understanding of the digital world, the international commissions find that this kind of teaching contributes to false ideas about the discipline.
Una experiencia en GPU Computing entre FaMAF e INVAP.
We present the experience carried out between INVAP SE and FaMAF-UNC to
develop imaging software on general-purpose graphics processing units
(GPGPU).
In the context of an image acquisition system developed by INVAP for one of
its clients, it became necessary to include a software module capable of
automatically tracking points of interest identified in a video. A first
version of this module, running on conventional CPUs, was commissioned to a
research group at UTN; once the system was integrated, it became evident that
the module's performance had to be optimized in terms of execution time and
use of computational resources in order to achieve real-time processing
(30 frames per second). To meet this requirement, it was decided to migrate
the module to a GPU already present in the system, turning to the GPGPU
Computing research group at FaMAF, nationally recognized for its experience
with this technology. A joint working methodology was established through a
statement of work defining the scope, the deliverables, the execution
deadlines, and the points of contact on both sides. Broadly speaking, the
research team took charge of the design, implementation, and verification of
the module in the laboratory environment, while the INVAP team was
responsible for verification and validation in the real execution
environment. The interaction was characterized by fluid communication and by
rigor in the methodology for measuring the application's performance and for
its verification and validation. As a result of this experience, the
administrative channels of both institutions were exercised to create the
formal framework for the work through the corresponding agreements. In
summary, we present a case of successful application of the expertise of the
national scientific-technological system to solve an industry problem with
concrete impact on the development of a highly complex system.
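A minimal sketch of the real-time constraint driving the migration: each frame must be processed within the 30 fps budget of roughly 33 ms. The processing function here is a placeholder standing in for the GPU tracking module, not INVAP's or FaMAF's actual code.

```python
import time

FPS_TARGET = 30
FRAME_BUDGET_S = 1.0 / FPS_TARGET   # ~33.3 ms per frame

def track_points_of_interest(frame_id):
    """Placeholder for the GPU point-of-interest tracker."""
    time.sleep(0.005)  # pretend the kernel takes 5 ms

over_budget = 0
for frame_id in range(300):
    t0 = time.perf_counter()
    track_points_of_interest(frame_id)
    if time.perf_counter() - t0 > FRAME_BUDGET_S:
        over_budget += 1
print(f"frames over the 33 ms budget: {over_budget}/300")
```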