
    Towards efficient large scale epidemiological simulations in EpiGraph

    The work we present in this paper focuses on understanding the propagation of flu-like infectious outbreaks between geographically distant regions due to the movement of people outside their base location. Our approach incorporates geographic location and a transportation model into our existing region-based, closed-world EpiGraph simulator to model a more realistic movement of the virus between different geographic areas. This paper describes the MPI-based implementation of this simulator, including several optimization techniques such as a novel approach for mapping processes onto available processing elements based on the temporal distribution of process loads. We present an extensive evaluation of EpiGraph in terms of its ability to simulate large-scale scenarios, as well as from a performance perspective. We would like to acknowledge the assistance provided by David del Río Astorga and Alberto Martín Cajal. This work has been partially supported by the Spanish Ministry of Science under grant TIN2010-16497 (2010). Peer reviewed. Postprint (author's final draft).
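
    The abstract does not detail the mapping heuristic itself. Purely as an illustration of the general idea (the function names and load figures below are invented, not taken from EpiGraph), a greedy assignment that balances time-varying per-interval loads across processing elements might look like this:

```python
# Hypothetical sketch of load-aware process mapping (names and load figures
# are invented, not taken from EpiGraph): each process has a load profile per
# time interval, and processes are greedily assigned to the processing element
# (PE) whose worst-interval load increases the least.

def map_processes(load_profiles, num_pes):
    """load_profiles: {process_id: [load per time interval]}"""
    num_intervals = len(next(iter(load_profiles.values())))
    pe_load = [[0.0] * num_intervals for _ in range(num_pes)]
    mapping = {}
    # Place the heaviest processes first (by total load).
    for pid, profile in sorted(load_profiles.items(), key=lambda kv: -sum(kv[1])):
        # Choose the PE whose peak interval load grows the least.
        best = min(range(num_pes),
                   key=lambda pe: max(l + p for l, p in zip(pe_load[pe], profile)))
        mapping[pid] = best
        pe_load[best] = [l + p for l, p in zip(pe_load[best], profile)]
    return mapping

if __name__ == "__main__":
    profiles = {0: [4, 1, 1], 1: [1, 4, 1], 2: [1, 1, 4], 3: [2, 2, 2]}
    print(map_processes(profiles, num_pes=2))
```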

    Leveraging social networks for understanding the evolution of epidemics

    Background: To understand how infectious agents disseminate throughout a population it is essential to capture the social model in a realistic manner. This paper presents a novel approach to modeling the propagation of the influenza virus throughout a realistic interconnection network based on actual individual interactions, which we extract from online social networks. The advantage is that these networks can be extracted from existing sources which faithfully record interactions between people in their natural environment. We additionally allow modeling the characteristics of each individual as well as customizing their daily interaction patterns by making them time-dependent. Our purpose is to understand how the infection spreads depending on the structure of the contact network and the individuals who introduce the infection into the population. This would help public health authorities respond more efficiently to epidemics. Results: We implement a scalable, fully distributed simulator and validate the epidemic model by comparing the simulation results against the data in the 2004-2005 New York State Department of Health (NYSDOH) report, obtaining a similar temporal distribution for the number of infected individuals. We analyze the impact of different types of connection models on the virus propagation. Lastly, we analyze and compare the effects of adopting several different vaccination policies, some of them based on individual characteristics (such as age) while others target the super-connectors in the social model. Conclusions: This paper presents an approach to modeling the propagation of the influenza virus via a realistic social model based on actual individual interactions extracted from online social networks. We implemented a scalable, fully distributed simulator and analyzed both the dissemination of the infection and the effect of different vaccination policies on the progress of the epidemic. The epidemic values predicted by our simulator match real data from NYSDOH. Our results show that our simulator can be a useful tool for understanding the differences in the evolution of an epidemic within populations with different characteristics, and can provide guidance with regard to which, and how many, individuals should be vaccinated to slow down the virus propagation and reduce the number of infections.
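
    The record above does not give the epidemic model's equations. As a rough, illustrative sketch only (the contact graph, transmission probability, and infectious period below are assumed placeholders, not the paper's calibrated values), a discrete-time SIR-style spread over a contact network can be written as:

```python
import random

# Illustrative sketch only: discrete-time SIR-style spread over a contact
# network. The graph, rates, and durations are placeholders, not the
# simulator's actual model.
def simulate(contacts, initially_infected, p_transmit=0.05,
             days_infectious=5, days=60, seed=1):
    rng = random.Random(seed)
    state = {n: "S" for n in contacts}            # S, I, or R
    timer = {}
    for n in initially_infected:
        state[n], timer[n] = "I", days_infectious
    history = []
    for _ in range(days):
        newly = []
        for n, s in state.items():
            if s != "I":
                continue
            for neigh in contacts[n]:
                if state[neigh] == "S" and rng.random() < p_transmit:
                    newly.append(neigh)
        for n in newly:
            if state[n] == "S":
                state[n], timer[n] = "I", days_infectious
        for n in list(timer):                     # recover after the infectious period
            timer[n] -= 1
            if timer[n] == 0:
                state[n] = "R"
                del timer[n]
        history.append(sum(1 for s in state.values() if s == "I"))
    return history

if __name__ == "__main__":
    g = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
    print(simulate(g, initially_infected=[0]))    # daily count of infected nodes
```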

    Assessing population-sampling strategies for reducing the COVID-19 incidence

    As long as critical levels of vaccination have not been reached to ensure herd immunity, and new SARS-CoV-2 strains keep emerging, the only realistic way to reduce the infection speed in a population is to track infected individuals before they pass on the virus. Testing the population via sampling has shown good results in slowing the epidemic spread. Sampling can be implemented at different times during the epidemic and may be done either per individual or for combined groups of people at a time. The work we present here makes two main contributions. We first extend and refine our scalable agent-based COVID-19 simulator to incorporate an improved socio-demographic model which considers professions, as well as a more realistic population-mixing model based on per-country contact matrices. These extensions are necessary to develop and test various sampling strategies in a scenario including the 62 largest cities in Spain; this is our second contribution. As part of the evaluation, we also analyze the impact of different parameters, such as testing frequency, quarantine time, percentage of quarantine breakers, or group testing, on sampling efficacy. Our results show that the most effective strategies are pooling, rapid antigen test campaigns, and requiring negative testing for access to public areas. The effectiveness of all these strategies can be greatly increased by reducing the number of contacts of infected individuals. This work has been supported by the Carlos III Institute of Health under project grant 2020/00183/001, by project grant BCV-2021-1-0011 of the Spanish Supercomputing Network (RES), and by the European Union's Horizon 2020 JTI-EuroHPC research and innovation program under grant agreement No 956748. The role of all study sponsors was limited to financial support and did not imply participation of any kind in the study; in the collection, analysis, and interpretation of data; or in the writing of the manuscript.
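
    One of the strategies the abstract highlights is pooling (group testing). As a back-of-the-envelope illustration, and not the paper's simulation model, classic Dorfman pooling tests each pool of size k once and retests individuals only in positive pools, giving an expected 1/k + 1 - (1-p)^k tests per person at prevalence p:

```python
# Back-of-the-envelope sketch of Dorfman pooled testing (not the paper's
# simulator): pools of size k are tested once; only positive pools are
# retested individually. Prevalence p is an assumed input.
def expected_tests_per_person(p, k):
    return 1.0 / k + (1.0 - (1.0 - p) ** k)

if __name__ == "__main__":
    p = 0.02  # assumed prevalence
    for k in (2, 4, 8, 16):
        print(f"pool size {k:2d}: {expected_tests_per_person(p, k):.3f} tests/person")
```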

    New cross-layer techniques for multi-criteria scheduling in large-scale systems

    The global information technology (IT) ecosystem is in transition to a new generation of applications that require increasingly intensive data acquisition, processing, and storage. As a result of this shift towards data-intensive computing, there is a growing overlap between high performance computing (HPC) and Big Data techniques, since many HPC applications produce large volumes of data and Big Data workloads need HPC capabilities. The hypothesis of this PhD thesis is that the interoperability and convergence of HPC and Big Data systems are crucial for the future, and that unifying both paradigms is essential to address a broad spectrum of research domains. The main objective of this thesis is therefore to propose and develop a monitoring system that supports HPC and Big Data convergence by providing information about the behavior of applications in a system that executes both kinds of workloads, and by using that information to improve scalability and data locality and to enable adaptability on large-scale computers. To achieve this goal, the work focuses on the design of resource monitoring and discovery mechanisms that exploit parallelism at all levels. The result is a two-level monitoring framework (at both node and application level) that is scalable, imposes a low computational load, and can communicate with other modules through an API provided for this purpose. All collected data are disseminated to support global improvements throughout the system and thus avoid mismatches between layers; combined with the techniques applied for fault tolerance, this makes the system robust and highly available. In addition, the framework includes a task scheduler capable of launching applications, migrating them between nodes, and dynamically increasing or decreasing their number of processes, in cooperation with other modules integrated into LIMITLESS whose objective is to optimize the execution of a stack of applications according to multi-criteria policies. This scheduling mode is called coarse-grain scheduling based on monitoring. To further reduce monitoring overhead, optimizations have been applied at different levels to reduce communication between components while avoiding loss of information, using data filtering techniques, Machine Learning (ML) algorithms, and Neural Networks (NN). To improve the scheduling process and design new multi-criteria scheduling policies, the monitoring information has been combined with additional ML classification algorithms, using offline profiling, to identify applications and their execution phases. Thanks to this feature, LIMITLESS can detect which phase an application is executing and try to share computational resources with other, compatible applications (those whose performance does not degrade when they run at the same time).
This feature is called fine-grain scheduling, and it can reduce the makespan of the use cases while making efficient use of computational resources that other applications leave unused. This PhD dissertation has been partially supported by the Spanish Ministry of Science and Innovation under an FPI fellowship associated to a National Project with reference TIN2016-79637-P (from July 1, 2018 to October 10, 2021). Doctoral Program in Computer Science and Technology, Universidad Carlos III de Madrid. Committee: President: Félix García Carballeira; Secretary: Pedro Ángel Cuenca Castillo; Member: María Cristina V. Marinescu.
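
    As a hypothetical illustration of the fine-grain scheduling idea described above (the thresholds, metric names, and compatibility rule are assumptions, not LIMITLESS internals), phase classification from monitored metrics and a co-scheduling compatibility check might look like:

```python
# Hypothetical sketch of the fine-grain scheduling idea: classify the current
# phase of an application from monitored metrics and co-locate it only with
# applications in "compatible" phases. Thresholds, metric names, and the
# compatibility rule are assumptions, not LIMITLESS internals.

def classify_phase(sample):
    """sample: per-interval metrics normalised to 0..1, e.g. cpu, mem_bw, io."""
    if sample.get("io", 0.0) > 0.5:
        return "io-bound"
    if sample["mem_bw"] > 0.7:
        return "memory-bound"
    if sample["cpu"] > 0.7:
        return "compute-bound"
    return "mixed"

def compatible(phase_a, phase_b):
    # Assume interference is worst when both applications stress the same
    # shared resource (memory bandwidth or I/O); CPU-bound pairs can still
    # co-run if each gets its own cores.
    shared = {"memory-bound", "io-bound"}
    return not (phase_a == phase_b and phase_a in shared)

if __name__ == "__main__":
    a = classify_phase({"cpu": 0.9, "mem_bw": 0.1})
    b = classify_phase({"cpu": 0.2, "mem_bw": 0.8})
    print(a, b, "-> co-schedule:", compatible(a, b))
```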

    Optimization techniques for adaptability in MPI applications

    The first version of MPI (Message Passing Interface) was released in 1994. At that time, scientific applications for HPC (High Performance Computing) were characterized by a static execution environment. These applications usually had regular computation and communication patterns, operated on dense data structures accessed with good data locality, and ran on homogeneous computing platforms. For these reasons, MPI has become the de facto standard for developing scientific parallel applications for HPC during the last decades. In recent years, scientific applications have evolved in order to cope with several challenges posed by fields such as engineering, economics, and medicine, among others. These challenges include large amounts of data stored in irregular and sparse data structures with poor data locality to be processed in parallel (big data), algorithms with irregular computation and communication patterns, and heterogeneous computing platforms (grid, cloud, and heterogeneous clusters). On the other hand, over the last years MPI has introduced relevant improvements and new features in order to meet the requirements of dynamic execution environments. Some of them include asynchronous non-blocking communications, collective I/O routines, and the dynamic process management interface introduced in MPI 2.0. The dynamic process management interface allows the application to spawn new processes at runtime and enables communication with them. However, this feature has some technical limitations that make the implementation of malleable MPI applications still a challenge. This thesis proposes FLEX-MPI, a runtime system that extends the functionalities of the MPI standard library and provides optimization techniques for the adaptability of MPI applications to dynamic execution environments. These techniques can significantly improve the performance and scalability of scientific applications and the overall efficiency of the HPC system on which they run. Specifically, FLEX-MPI focuses on dynamic load balancing and performance-aware malleability for parallel applications. The main goal of the design and implementation of the adaptability techniques is to efficiently execute MPI applications on a wide range of HPC platforms, ranging from small- to large-scale systems. Dynamic load balancing allows FLEX-MPI to adapt the workload assignments at runtime to the performance of the computing elements that execute the parallel application. On the other hand, performance-aware malleability leverages the dynamic process management interface of MPI to change the number of processes of the application at runtime. This feature makes it possible to improve the performance of applications that exhibit irregular computation patterns or execute on computing systems with dynamic availability of resources. One of the main features of these techniques is that they require neither user intervention nor prior knowledge of the underlying hardware. We have validated and evaluated the performance of the adaptability techniques with three parallel MPI benchmarks in different execution environments with homogeneous and heterogeneous cluster configurations.
The results show that FLEX-MPI significantly improves the performance of applications when they run with the support of dynamic load balancing and malleability, along with a substantial enhancement of their scalability and an improvement of the overall system efficiency. Official Doctoral Program in Computer Science and Technology. Committee: President: Francisco Fernández Rivera; Secretary: Florín Daniel Isaila; Member: María Santos Pérez Hernández.
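
    The dynamic process management interface mentioned in the abstract can be exercised from Python through mpi4py. The sketch below is not FLEX-MPI code (the file layout, work items, and worker logic are invented for illustration); it only shows a parent job spawning extra workers at runtime, the primitive on which performance-aware malleability builds:

```python
# Illustrative sketch only (not FLEX-MPI): using MPI's dynamic process
# management from mpi4py to spawn extra worker processes at runtime.
# Run the parent with:  mpiexec -n 1 python spawn_demo.py
import sys
from mpi4py import MPI

def parent(extra_workers=2):
    # Spawn additional copies of this script acting as workers.
    inter = MPI.COMM_SELF.Spawn(sys.executable,
                                args=[__file__, "worker"],
                                maxprocs=extra_workers)
    chunks = [[i, i + 10] for i in range(extra_workers)]   # toy work items
    inter.scatter(chunks, root=MPI.ROOT)                   # send one chunk per worker
    results = inter.gather(None, root=MPI.ROOT)            # collect partial results
    print("partial sums from spawned workers:", results)
    inter.Disconnect()

def worker():
    inter = MPI.Comm.Get_parent()
    chunk = inter.scatter(None, root=0)
    inter.gather(sum(chunk), root=0)
    inter.Disconnect()

if __name__ == "__main__":
    worker() if len(sys.argv) > 1 and sys.argv[1] == "worker" else parent()
```

    A real malleable runtime would additionally have to redistribute application data to the new processes and shrink the process set again when resources are reclaimed; this sketch deliberately leaves that out.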

    Handling Information and its Propagation to Engineer Complex Embedded Systems

    In today's data-driven technology, it is easy to assume that information is at the tip of our fingers, ready to be exploited. Research methodologies and tools are often built on top of this assumption. However, this illusion of abundance often breaks when attempting to transfer existing techniques to industrial applications. For instance, research has produced various methodologies to optimize the resource usage of large complex systems, such as the avionics of the Airbus A380. These approaches require knowledge of certain metrics such as execution time, memory consumption, communication delays, etc. The design of these complex systems, however, employs a mix of expertise from different fields (likely with limited knowledge in software engineering), which might lead to incomplete or missing specifications. Moreover, the unavailability of relevant information makes it difficult to properly describe the system, predict its behavior, and improve its performance. We fall back on probabilistic models and machine learning techniques to address this lack of relevant information. Probability theory, especially, has great potential to describe partially observable systems. Our objective is to provide approaches and solutions to produce relevant information. This enables a proper description of complex systems to ease integration, and allows the use of existing optimization techniques. Our first step is to tackle one of the difficulties encountered during system integration: ensuring the proper timing behavior of critical systems. Due to technology scaling, and with the growing reliance on multi-core architectures, the overhead of software running on different cores and sharing memory space is no longer negligible. To that end, we extend the real-time system tool-kit with a static probabilistic timing analysis technique that accurately estimates the execution of software with an awareness of shared memory contention. The model is then incorporated into a simulator for scheduling multi-processor real-time systems.
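
    As a minimal sketch of the kind of computation a static probabilistic timing analysis performs (the distributions below are made up, and this is not the thesis' actual model), a task's standalone execution-time distribution can be convolved with an assumed shared-memory contention delay distribution to obtain exceedance probabilities:

```python
# Illustrative sketch (not the thesis' model): combine a task's standalone
# execution-time distribution with an assumed shared-memory contention delay
# distribution by convolution, then read off an exceedance probability.
# All distributions here are made-up discrete examples.
from collections import defaultdict

def convolve(dist_a, dist_b):
    """Distribution of the sum of two independent discrete random variables,
    each given as {value: probability}."""
    out = defaultdict(float)
    for va, pa in dist_a.items():
        for vb, pb in dist_b.items():
            out[va + vb] += pa * pb
    return dict(out)

def exceedance(dist, threshold):
    """P(total time > threshold)."""
    return sum(p for v, p in dist.items() if v > threshold)

if __name__ == "__main__":
    exec_time = {100: 0.6, 120: 0.3, 150: 0.1}    # cycles, running alone
    contention = {0: 0.5, 10: 0.3, 40: 0.2}       # added delay under memory sharing
    total = convolve(exec_time, contention)
    print("P(time > 160 cycles) =", round(exceedance(total, 160), 4))
```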

    The Datafied Society. Studying Culture through Data

    As more and more aspects of everyday life are turned into machine-readable data, researchers are provided with rich resources for researching society. The novel methods and innovative tools needed to work with this data not only require new knowledge and skills, but also raise issues concerning the practices of investigation and publication. This book critically reflects on the role of data in academia and society and challenges overly optimistic expectations that regard data practices as a means for understanding social reality. It introduces its readers to the practices and methods for data analysis and visualization, and raises questions not only about the politics of data tools, but also about the ethics of collecting, sifting through, and presenting data.

    Researchers' Assumptions and Mathematical Models: A Philosophical Study of Metabolic Systems Biology

    This thesis examines the philosophical implications of the assumptions made by researchers involved in the development of mathematical models of metabolism. It does this through an analysis of several detailed historical case studies of models from the 1960s to the present day, thus also contributing to the growing literature on the historiography of biochemical systems biology. The chapters focus on four main topics: the relationship between models and theory, temporal decomposition as a simplifying strategy for building models of complex metabolic systems, interactions between modellers and experimental biochemists, and the role of biochemical data. Four categories of assumptions are shown to play a significant role in these different aspects of model development: ontological assumptions, idealising assumptions, assumptions about data, and researchers' commitments. Building on this analysis, the thesis brings to light the importance of researchers' ontological and idealising assumptions about the temporal organisation, alongside the spatial organisation, of metabolic systems. It also offers an account of different forms of interaction between research groups (hostile interactions, closed collaboration, and open collaboration) on the basis of differences in the characteristics of researchers' commitments. Throughout the case studies, biological data play a powerful role in model development by virtue of the contents of available data sets, as well as researchers' perceptions of those data, which are in turn influenced by their ontological assumptions. The historical trajectories explored illustrate how the relationships between different facets of model building, and their associated philosophical abstractions, are often best understood as transient features within a highly dynamic research process, whose role depends on the specific stage of modelling in which they are enacted. This thesis provides an expanded perspective on the different types and roles of assumptions in the development of mathematical models of metabolism, which is firmly grounded in a historical analysis of scientific practice.