13,555 research outputs found

    Domain-specific implementation of high-order Discontinuous Galerkin methods in spherical geometry

    In recent years, domain-specific languages (DSLs) have achieved significant success in large-scale efforts to reimplement existing meteorological models in a performance-portable manner. The dynamical cores of these models are based on finite difference and finite volume schemes, and existing DSLs are generally limited to supporting only these numerical methods. Meanwhile, there have been numerous attempts to use high-order Discontinuous Galerkin (DG) methods for atmospheric dynamics, which are currently largely unsupported in mainstream DSLs. To link these developments, we present two domain-specific languages that extend the existing GridTools (GT) ecosystem to high-order DG discretization. The first is a C++-based DSL called G4GT, which, despite being no longer supported, gave us the impetus to implement extensions to the subsequent Python-based production DSL, GT4Py, to support the operations needed for DG solvers. As a proof of concept, the shallow water equations in spherical geometry are implemented in both DSLs, providing a blueprint for the application of domain-specific languages to the development of global atmospheric models. We believe this is the first GPU-capable DSL implementation of DG in spherical geometry. The results demonstrate that a DSL designed for finite difference/volume methods can be successfully extended to implement a DG solver while preserving the performance portability of the DSL. ISSN: 0010-4655; ISSN: 1879-294
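The extension described above hinges on a structural difference between the two discretizations. As a generic illustration (not GT4Py or G4GT code, whose APIs are not shown in the abstract), finite-difference stencils read neighbouring grid points, whereas DG operators apply small dense matrices within each element:

```python
# Generic sketch of the two access patterns a DSL must support:
# finite-difference stencils (neighbour access on a shared grid)
# versus DG operators (dense matrices applied element-locally).
import numpy as np

def fd_laplacian(u, dx):
    """Second-order centred finite-difference Laplacian at interior points."""
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2

def dg_apply(u_elem, D):
    """Apply an element-local operator D (e.g. a differentiation matrix)
    to nodal values stored per element, shape (n_elements, n_nodes)."""
    return u_elem @ D.T
```

For u = x² the stencil recovers the constant second derivative 2, while `dg_apply` touches no data outside each element, which is what makes DG kernels amenable to element-parallel execution on GPUs.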

    Detection of the significant impact of source clustering on higher-order statistics with DES Year 3 weak gravitational lensing data

    We measure the impact of source galaxy clustering on higher-order summary statistics of weak gravitational lensing data. By comparing simulated data with galaxies that either trace or do not trace the underlying density field, we show that this effect can exceed measurement uncertainties for common higher-order statistics under certain analysis choices. We evaluate the impact on different weak lensing observables, finding that third moments and wavelet phase harmonics are more affected than peak count statistics. Using Dark Energy Survey Year 3 data, we construct null tests for the source-clustering-free case, finding a p-value of p = 4 × 10⁻³ (2.6σ) using third-order map moments and p = 3 × 10⁻¹¹ (6.5σ) using wavelet phase harmonics. The impact of source clustering on cosmological inference can either be included in the model or minimized through ad hoc procedures (e.g. scale cuts). We verify that the procedures adopted in existing DES Y3 cosmological analyses were sufficient to render this effect negligible. Failing to account for source clustering can significantly impact cosmological inference from higher-order gravitational lensing statistics, e.g. higher-order N-point functions, wavelet-moment observables, and deep learning or field-level summary statistics of weak lensing maps.
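The quoted significances can be sanity-checked by converting each p-value into an equivalent number of Gaussian standard deviations. A minimal sketch using the one-tailed (survival-function) convention, which is an assumption since the abstract does not state the convention used:

```python
# Convert a p-value to an equivalent Gaussian significance (one-tailed).
from scipy.stats import norm

def p_to_sigma(p):
    """Number of standard deviations whose upper-tail probability is p."""
    return norm.isf(p)  # inverse survival function of the standard normal

print(p_to_sigma(4e-3))   # about 2.65 sigma, quoted as 2.6 sigma
print(p_to_sigma(3e-11))  # about 6.5 sigma
```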

    Recent development of fully kinetic particle-in-cell method and its application to fusion plasma instability study

    This paper reviews recent advances in the fully kinetic Particle-in-Cell (PIC) method and its application to fusion plasma instability studies. The strengths and limitations of both explicit and implicit PIC methods are described and compared. Additionally, the semi-implicit PIC method and the ECsim code used in our research are introduced. The application of PIC methods to fusion plasma instabilities is then examined, including a detailed account of recent progress in tokamak plasma simulation achieved through fully kinetic PIC simulations. Finally, prospective future developments and applications of PIC methods are discussed.

    Serverless Strategies and Tools in the Cloud Computing Continuum

    Thesis by compendium. In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity has led to new services that solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model. FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract the management of servers from the users, allowing them to focus their efforts solely on the development of applications.
    The problem with FaaS is that it focuses mainly on microservices and tends to have limitations regarding execution time and computing capabilities (e.g. lack of support for acceleration hardware such as GPUs). However, it has been demonstrated that the self-provisioning capability and high degree of parallelism of these services can be well suited to a broader range of applications. In addition, their inherent event-driven triggering makes functions perfectly suitable to be defined as steps in file-processing workflows (e.g. scientific computing workflows). Furthermore, the rise of smart and embedded devices (IoT), innovations in communication networks, and the need to reduce latency in challenging use cases have led to the concept of Edge computing, which consists of conducting the processing on devices close to the data sources to improve response times. The coupling of this paradigm with Cloud computing, involving architectures with devices at different levels depending on their proximity to the source and their compute capability, has been coined the Cloud Computing Continuum (or Computing Continuum). This PhD thesis therefore aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved.
    Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013

    Machine Learning Approach to Investigating the Relative Importance of Meteorological and Aerosol-Related Parameters in Determining Cloud Microphysical Properties

    Aerosol effects on cloud properties are notoriously difficult to disentangle from variations driven by meteorological factors. Here, a machine learning model is trained on reanalysis data and satellite retrievals to predict cloud microphysical properties, as a way to illustrate the relative importance of meteorology and aerosol for cloud properties. It is found that cloud droplet effective radius can be predicted with some skill from meteorological information alone, including estimated air mass origin and cloud top height. For ten geographical regions, the mean coefficient of determination is 0.41 and the normalised root-mean-square error is 24%. The machine learning model thereby performs better than both a reference linear regression model and a model predicting the climatological mean. A gradient boosting regression performs on par with a neural network regression model. Adding aerosol information as input improves the model's skill somewhat, but the difference is small, and the direction of the influence of changing aerosol burden on cloud droplet effective radius is not consistent across regions, and thereby not always consistent with what is expected from cloud brightening.
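The two skill scores quoted (coefficient of determination and normalised RMSE) can be illustrated with a small sketch. The synthetic features below are placeholders for the reanalysis predictors, and normalising the RMSE by the standard deviation of the target is an assumption, since the abstract does not state the normalisation used:

```python
# Illustrative only: fit a gradient boosting regressor on synthetic
# stand-ins for meteorological predictors and report R^2 and a
# normalised RMSE, the two metrics quoted in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                # placeholder predictors
w = np.array([1.0, 0.5, 0.0, -0.3, 0.2])      # arbitrary true weights
y = X @ w + rng.normal(scale=0.8, size=2000)  # placeholder target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

r2 = r2_score(y_te, pred)
nrmse = np.sqrt(mean_squared_error(y_te, pred)) / y_te.std()
print(f"R^2 = {r2:.2f}, NRMSE = {nrmse:.0%}")
```

A gradient boosting regressor is used here because the abstract reports it performing on par with the neural network; swapping in `sklearn.neural_network.MLPRegressor` would exercise the comparison the authors describe.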

    Robustness of point measurements of carbon dioxide concentration for the inference of ventilation rates in a wintertime classroom

    Indoor air quality in schools and classrooms is paramount for the health and well-being of pupils and staff. Carbon dioxide sensors offer a cost-effective way to assess and manage ventilation provision. However, often only a single point measurement is available, which might not be representative of the CO₂ distribution within the room. A relatively generic UK classroom in wintertime is simulated using Computational Fluid Dynamics. The natural ventilation is driven by buoyancy through high- and low-level openings in either an opposite-ended or a single-ended configuration, in which only the horizontal location of the high-level vent is modified. CO₂ is modelled as a passive scalar and is shown not to be 'well-mixed' within the space. Perhaps surprisingly, the single-ended configuration leads to more efficient ventilation, with a lower average CO₂ concentration. Measurements taken near the walls, often the location of CO₂ sensors, are compared with those made throughout the classroom and found to be more representative of the ventilation rate if made above the breathing zone. These findings are robust with respect to the ventilation flow rates and the flow patterns observed, which was tested by varying the effective vent areas and the ratio of the vent areas.

    Beyond the Global Brain Differences: Intraindividual Variability Differences in 1q21.1 Distal and 15q11.2 BP1-BP2 Deletion Carriers

    BACKGROUND: Carriers of the 1q21.1 distal and 15q11.2 BP1-BP2 copy number variants exhibit regional and global brain differences compared with noncarriers. However, interpreting regional differences is challenging if a global difference drives the regional brain differences. Intraindividual variability measures can be used to test for regional differences beyond global differences in brain structure.
    METHODS: Magnetic resonance imaging data were used to obtain regional brain values for 1q21.1 distal deletion (n = 30) and duplication (n = 27) and 15q11.2 BP1-BP2 deletion (n = 170) and duplication (n = 243) carriers and matched noncarriers (n = 2350). Regional intra-deviation scores, i.e., the standardized difference between an individual's regional difference and global difference, were used to test for regional differences that diverge from the global difference.
    RESULTS: For the 1q21.1 distal deletion carriers, cortical surface area for regions in the medial visual cortex, posterior cingulate, and temporal pole differed less, and regions in the prefrontal and superior temporal cortex differed more, than the global difference in cortical surface area. For the 15q11.2 BP1-BP2 deletion carriers, cortical thickness in regions in the medial visual cortex, auditory cortex, and temporal pole differed less, and the prefrontal and somatosensory cortex differed more, than the global difference in cortical thickness.
    CONCLUSIONS: We find evidence for regional effects beyond differences in global brain measures in 1q21.1 distal and 15q11.2 BP1-BP2 copy number variants. The results provide new insight into brain profiling of the 1q21.1 distal and 15q11.2 BP1-BP2 copy number variants, with the potential to increase understanding of the mechanisms involved in altered neurodevelopment.
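The regional intra-deviation score is described in the abstract only as the standardized difference between an individual's regional and global deviations. Below is a hedged numerical sketch of one plausible reading; the exact standardization used in the paper may differ, and this is an illustration rather than the authors' code:

```python
# Plausible reading of a regional intra-deviation score: z-score a
# subject's regional and global values against matched noncarriers,
# then take the difference. Illustration only, not the authors' code.
import numpy as np

def intra_deviation(regional, global_val, ref_regional, ref_global):
    """regional: per-region values for one subject, shape (n_regions,);
    global_val: the subject's global measure;
    ref_regional, ref_global: matched-noncarrier reference samples."""
    reg_z = (regional - ref_regional.mean(axis=0)) / ref_regional.std(axis=0)
    glob_z = (global_val - ref_global.mean()) / ref_global.std()
    # positive: the region deviates more than the global difference alone
    return reg_z - glob_z
```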

    Black hole binary mergers in dense star clusters: the importance of primordial binaries

    Dense stellar clusters are expected to provide the ideal conditions for binary black hole (BBH) formation, both through binary stellar evolution and through dynamical encounters. We use theoretical arguments as well as N-body simulations to make predictions for the evolution of BBHs formed through stellar evolution inside clusters from the cluster's birth (which we term primordial binaries), and for the sub-population of merging BBHs. We identify three key populations: (i) BBHs that form in the cluster and merge before experiencing any strong dynamical interaction; (ii) binaries that are ejected from the cluster after only one dynamical interaction; and (iii) BBHs that experience more than one strong interaction inside the cluster. We find that populations (i) and (ii) are the dominant source of all BBH mergers formed in clusters with escape velocity v_esc ≤ 30. At higher escape velocities, dynamics are predicted to play a major role both in the formation and in the subsequent evolution of BBHs. Finally, we argue that for sub-Solar-metallicity clusters with v_esc ≲ 100, the dominant form of interaction experienced by primordial BBHs (BBHs formed from primordial binaries) within the cluster is with other BBHs. The complexity of these binary–binary interactions will complicate the future evolution of the BBHs and influence the total number of mergers produced.