A summary of my twenty years of research according to Google Scholars

I am David Pardo, a researcher from Spain working mainly on numerical analysis applied to geophysics. I am 40 years old, and over a decade ago, I realized that my performance as a researcher was mainly evaluated based on a number called the "h-index". This single number simultaneously encodes information about both the number of publications and the citations they have received. However, different h-indices associated with my name appeared on different webpages. A quick search allowed me to find the most convenient (largest) h-index in my case. It corresponded to Google Scholars.

In this work, I naively analyze a few curious facts I found about my Google Scholars profile and, at the same time, this manuscript serves as an experiment to see whether it may serve to increase my Google Scholars h-index.
Mastering the scales: A survey on the benefits of multiscale computing software
Electronic supplementary material is available online at https://doi.org/10.6084/m9.figshare.c.4352660. © 2019 The Authors.

In the last few decades, multiscale modeling has emerged as one of the dominant modeling paradigms in many areas of science and engineering. Its rise to dominance is primarily driven by advancements in computing power and the need to model systems of increasing complexity. The multiscale modeling paradigm is now accompanied by a vibrant ecosystem of multiscale computing software (MCS), which promises to address many challenges in the development of multiscale applications. In this paper, we define the common steps in the multiscale application development process and investigate to what degree a set of 22 representative MCS tools enhance each development step. We observe several gaps in the features provided by MCS tools, especially for application deployment and the preparation and management of production runs. In addition, we find that many MCS tools are tailored to a particular multiscale computing pattern, even though they are otherwise application agnostic. We conclude that the gaps we identify are characteristic of a field that is still maturing, and that features enhancing the deployment and production steps of multiscale application development are desirable for the long-term success of MCS in its application fields.

Funding: European Union's Horizon 2020 Research and Innovation Programme under grant agreement nos. 800925 and 671564; 'Task-based load balancing and auto-tuning in particle simulations' project (TaLPas), grant no. 01IH16008B.
Swarm Reinforcement Learning For Adaptive Mesh Refinement
The Finite Element Method, an important technique in engineering, is aided by
Adaptive Mesh Refinement (AMR), which dynamically refines mesh regions to allow
for a favorable trade-off between computational speed and simulation accuracy.
Classical methods for AMR depend on task-specific heuristics or expensive error
estimators, hindering their use for complex simulations. Recent learned AMR
methods tackle these problems, but so far scale only to simple toy examples. We
formulate AMR as a novel Adaptive Swarm Markov Decision Process in which a mesh
is modeled as a system of simple collaborating agents that may split into
multiple new agents. This framework allows for a spatial reward formulation
that simplifies the credit assignment problem, which we combine with Message
Passing Networks to propagate information between neighboring mesh elements. We
experimentally validate the effectiveness of our approach, Adaptive Swarm Mesh
Refinement (ASMR), showing that it learns reliable, scalable, and efficient
refinement strategies on a set of challenging problems. Our approach
significantly speeds up computation, achieving up to 30-fold improvement
compared to uniform refinements in complex simulations. Additionally, we
outperform learned baselines and achieve a refinement quality that is on par
with a traditional error-based AMR strategy without expensive oracle
information about the error signal.

Comment: Version 1 of this paper is a preliminary workshop version that was accepted as a workshop paper at the ICLR 2023 Workshop on Physics for Machine Learning.
Swarm Reinforcement Learning For Adaptive Mesh Refinement
Adaptive Mesh Refinement (AMR) enhances the Finite Element Method, an important technique for simulating complex problems in engineering, by dynamically refining mesh regions, enabling a favorable trade-off between computational speed and simulation accuracy. Classical methods for AMR depend on heuristics or expensive error estimators, hindering their use for complex simulations. Recent learning-based AMR methods tackle these issues, but so far scale only to simple toy examples. We formulate AMR as a novel Adaptive Swarm Markov Decision Process in which a mesh is modeled as a system of simple collaborating agents that may split into multiple new agents. This framework allows for a spatial reward formulation that simplifies the credit assignment problem, which we combine with Message Passing Networks to propagate information between neighboring mesh elements. We experimentally validate our approach, Adaptive Swarm Mesh Refinement (ASMR), on challenging refinement tasks. Our approach learns reliable and efficient refinement strategies that can robustly generalize to different domains during inference. Additionally, it achieves a speedup of up to orders of magnitude compared to uniform refinements in more demanding simulations. We outperform learned baselines and heuristics, achieving a refinement quality that is on par with costly error-based oracle AMR strategies.
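The agent-based view described in these abstracts can be illustrated with a minimal sketch (this is not the authors' implementation: the 1-D interval mesh, the interpolation-error observation, and the fixed threshold standing in for the learned policy are all illustrative assumptions):

```python
import math

def local_error(f, a, b):
    # Midpoint deviation from the linear interpolant: a cheap
    # per-agent observation of how badly the element resolves f.
    mid = 0.5 * (a + b)
    return abs(f(mid) - 0.5 * (f(a) + f(b)))

def asmr_sketch(f, steps=12, threshold=1e-3):
    # Each interval is one "agent"; agents act independently and may
    # split into two child agents, as in the swarm MDP formulation.
    elements = [(0.0, 1.0)]
    for _ in range(steps):
        refined = []
        for a, b in elements:
            # A fixed error threshold stands in for the learned policy.
            if local_error(f, a, b) > threshold:
                m = 0.5 * (a + b)
                refined += [(a, m), (m, b)]  # agent splits into two
            else:
                refined.append((a, b))       # agent keeps its element
        elements = refined
    return elements

# A function with a sharp local feature near x = 0.3: the swarm
# concentrates elements there instead of refining uniformly.
mesh = asmr_sketch(lambda x: math.atan(100.0 * (x - 0.3)))
```

In the paper, the split/keep decision is taken by a learned policy with a spatial, per-agent reward; the threshold rule above merely mimics that interface.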
Easy-to-implement hp-adaptivity for non-elliptic goal-oriented problems
The FEM has become a foundational numerical technique in computational mechanics and civil engineering since its inception by Courant in 1943. Originating from the Ritz method and variational calculus, the FEM was primarily employed to derive solutions for vibrational systems. A distinctive strength of the FEM is its capability to represent mathematical models through the weak variational formulation of PDEs, facilitating computational feasibility even in intricate geometries. However, the pursuit of accuracy often imposes a significant computational burden.
In the FEM, adaptive methods have emerged to balance the accuracy of solutions with computational costs. The h-adaptive FEM designs more efficient meshes by reducing the mesh size h locally while keeping the polynomial order of approximation p fixed (usually at a low order). An alternative approach to the h-adaptive FEM is the p-adaptive FEM, which locally enriches the polynomial space while keeping the mesh size constant. By dynamically adapting both h and p, the hp-adaptive FEM achieves exponential convergence rates.
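For context, the exponential-convergence claim can be made precise with the standard a priori estimates from the hp-FEM literature (these are classical results, not stated in the abstract itself): with N degrees of freedom in two dimensions, fixed-order h-refinement of a smooth solution converges only algebraically, whereas hp-adaptivity attains exponential rates for piecewise-analytic solutions,

```latex
\| u - u_h \|_E \le C \, N^{-p/2}
\qquad \text{vs.} \qquad
\| u - u_{hp} \|_E \le C \, e^{-b \, N^{1/3}} ,
```

where the error is measured in the energy norm and C, b > 0 are constants independent of N.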
Adaptivity is crucial for obtaining accurate solutions. However, the traditional focus on global norms, such as the L2 or energy norm, might not always serve the requirements of specific applications. In engineering, controlling errors in specific domains related to a particular quantity of interest (QoI) is often more critical than focusing on the overall solution. That motivated the development of goal-oriented adaptive (GOA) strategies.
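The goal-oriented idea can be made concrete with a tiny dual-weighted-residual computation (an illustrative sketch on a 1-D finite-difference Poisson problem, not the dissertation's FEM machinery; the grid sizes and the point-value QoI are arbitrary choices). For a linear problem and a linear QoI, weighting the residual of a coarse solution by the adjoint solution reproduces the QoI error exactly, and the local contributions are the refinement indicators:

```python
import numpy as np

def poisson_matrix(n):
    # 1-D finite-difference Laplacian (-u'' = f) on n interior points of [0, 1]
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return A, h

n_c, n_f = 15, 31                        # nested coarse and fine grids
A_c, h_c = poisson_matrix(n_c)
A_f, h_f = poisson_matrix(n_f)
x_c = np.linspace(h_c, 1.0 - h_c, n_c)
x_f = np.linspace(h_f, 1.0 - h_f, n_f)
rhs = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)

u_c = np.linalg.solve(A_c, rhs(x_c))
u_f = np.linalg.solve(A_f, rhs(x_f))
# Prolongate the coarse solution to the fine grid (zero Dirichlet boundaries).
u_ch = np.interp(x_f, np.r_[0.0, x_c, 1.0], np.r_[0.0, u_c, 0.0])

# Linear QoI: the point value u(0.5); g is its gradient.
mid = n_f // 2                           # x_f[mid] == 0.5
g = np.zeros(n_f); g[mid] = 1.0
z = np.linalg.solve(A_f.T, g)            # adjoint (dual) solution

r = rhs(x_f) - A_f @ u_ch                # residual of the coarse solution
estimated = z @ r                        # dual-weighted residual estimate
actual = u_f[mid] - u_ch[mid]            # true QoI error w.r.t. fine solution
indicators = np.abs(z * r)               # local contributions guide refinement
```

The identity estimated == actual follows from z·r = z·A(u_f − u_ch) = g·(u_f − u_ch); for nonlinear problems or norms it becomes an estimate rather than an equality.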
In this dissertation, we develop automatic goal-oriented (GO) hp-adaptive algorithms tailored for non-elliptic problems. These algorithms stand out for their robustness and simplicity of implementation, attributes that make them especially suitable for industrial applications. A key advantage of our methodologies is that they do not require computing reference solutions on globally refined grids. Nevertheless, our approach is limited to anisotropic and isotropic refinements.
We conduct multiple tests to validate our algorithms. We probe the convergence behavior of our GO h- and p-adaptive algorithms using Helmholtz and convection-diffusion equations in one-dimensional scenarios. We test our GO hp-adaptive algorithms on Poisson, Helmholtz, and convection-diffusion equations in two dimensions. We use a Helmholtz-like scenario for three-dimensional cases to highlight the adaptability of our GO algorithms.
We also create efficient ways to build large databases ideal for training deep neural networks (DNNs) using MAGO FEM. As a result, we efficiently generate large databases, possibly containing hundreds of thousands of synthetic datasets or measurements.
Quantum computing for finance
Quantum computers are expected to surpass the computational capabilities of
classical computers and have a transformative impact on numerous industry
sectors. We present a comprehensive summary of the state of the art of quantum
computing for financial applications, with particular emphasis on stochastic
modeling, optimization, and machine learning. This Review is aimed at
physicists, so it outlines the classical techniques used by the financial
industry and discusses the potential advantages and limitations of quantum
techniques. Finally, we look at the challenges that physicists could help
tackle.
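As an example of the kind of classical baseline such a review compares quantum methods against (an illustrative sketch, not taken from the paper; the Black–Scholes parameters below are arbitrary): Monte Carlo pricing of a European call under geometric Brownian motion, checked against the closed-form price. Quantum amplitude estimation targets exactly this expectation, with a quadratically better error scaling in the number of samples.

```python
import math
import numpy as np

def bs_call_analytic(s0, k, r, sigma, t):
    # Black–Scholes closed form for a European call option
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

def bs_call_mc(s0, k, r, sigma, t, n_paths=200_000, seed=0):
    # Monte Carlo under risk-neutral geometric Brownian motion:
    # S_T = S_0 * exp((r - sigma^2/2) t + sigma sqrt(t) Z)
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
    return math.exp(-r * t) * np.maximum(st - k, 0.0).mean()
```

The Monte Carlo error decreases as O(1/sqrt(n_paths)), which is the scaling quantum amplitude estimation improves to O(1/n).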
hp-Adaptive simulation and inversion of magnetotelluric measurements
The magnetotelluric (MT) method is a passive exploration technique that aims at estimating the resistivity distribution of the Earth's subsurface, and therefore at providing an image of it. This process is divided into two different steps. The first one consists in recording the data. In a second step, recorded measurements are analyzed by employing numerical methods. This dissertation focuses on this second task. We provide a rigorous mathematical setting in the context of the Finite Element Method (FEM) that helps to understand the MT problem and its inversion process. In order to recover a map of the subsurface based on 2D MT measurements, we employ for the first time in MT a multi-goal-oriented self-adaptive hp-FEM. We accurately solve both the full formulation as well as a secondary field formulation where the primary field is given by the solution of a 1D layered media. To truncate the computational domain, we design a Perfectly Matched Layer (PML) that automatically adapts to high-contrast material properties that appear within the subsurface and on the air-ground interface. For the inversion process, we develop a first step of a Dimensionally Adaptive Method (DAM) by considering the dimension of the problem as a variable in the inversion. Additionally, this dissertation supplies a rigorous numerical analysis for the forward and inverse problems. Regarding the forward modeling, we perform a frequency sensitivity analysis, and we study the effect of the source, the convergence of the hp-adaptivity, and the effect of the PML in the computation of the electromagnetic fields and impedance. As far as the inversion is concerned, we study the impact of the selected variable for the inversion process, the different information that each mode provides, and the gains of the DAM approach.

Université de Pau et des Pays de l'Adour.
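The 1D layered-media primary field mentioned above can be computed with the classical layered-earth impedance recursion (a minimal sketch of this standard result; the function names and the test model are illustrative, and this is not the dissertation's hp-FEM code):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space [H/m]

def mt1d_impedance(freq, resistivities, thicknesses):
    # Surface impedance of a 1-D layered earth (classical recursion).
    # resistivities: one value per layer, last entry = basement half-space;
    # thicknesses: one value per layer above the basement.
    omega = 2.0 * np.pi * freq
    k = np.sqrt(1j * omega * MU0 / resistivities[-1])
    Z = 1j * omega * MU0 / k                  # intrinsic impedance of basement
    for rho, h in zip(reversed(resistivities[:-1]), reversed(thicknesses)):
        kj = np.sqrt(1j * omega * MU0 / rho)
        Zj = 1j * omega * MU0 / kj
        t = np.tanh(kj * h)
        Z = Zj * (Z + Zj * t) / (Zj + Z * t)  # recurse one layer upward
    return Z

def apparent_resistivity(freq, Z):
    # The quantity MT practitioners plot: rho_a = |Z|^2 / (omega * mu0).
    return abs(Z) ** 2 / (2.0 * np.pi * freq * MU0)
```

For a uniform half-space the recursion returns the true resistivity exactly; for layered models the apparent resistivity transitions from the shallow to the deep resistivity as the frequency decreases, which is the physical basis of the inversion discussed in the abstract.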