A semi-analytical solution for the lubrication force between two spheres approaching in viscoelastic fluids described by the Oldroyd-B model under small Deborah numbers
Viscoelastic fluids play a critical role in various engineering and biological applications, where their lubrication properties are strongly influenced by relaxation times ranging from microseconds to minutes. Although the lubrication mechanism for Newtonian fluids is well established, its extension to viscoelastic materials, particularly under squeezing flow conditions, remains less explored. This study presents a semi-analytical solution for the lubrication force between two spheres approaching in a Boger fluid under small Deborah numbers. Unlike previous works that assumed a Newtonian velocity field, we derive the velocity profile directly from the mass-momentum conservation and Oldroyd-B constitutive equations using lubrication theory and order-of-magnitude analysis techniques. Under steady-state conditions, viscoelasticity induces a marginal increase in the surface-to-surface normal force as a result of the increased pressure required to overcome the resistance originating from the first normal-stress difference. Transient analyses reveal that the normal lubrication force is bounded by two Newtonian plateaus and is non-symmetric as the spheres approach or separate. Our findings highlight the role of viscoelasticity in improving load capacity and provide new insights for modelling dense particle suspensions in Boger fluids, where short-range interactions dominate.
Optimizing Variational Physics-Informed Neural Networks Using Least Squares
Variational Physics-Informed Neural Networks often suffer from poor convergence when using stochastic gradient-descent-based optimizers. By introducing a least squares solver for the weights of the last layer of the neural network, we improve the convergence of the loss during training in most practical scenarios. This work analyzes the computational cost of the resulting hybrid least-squares/gradient-descent optimizer and explains how to implement it efficiently. In particular, we show that a traditional implementation based on backward-mode automatic differentiation leads to a prohibitively expensive algorithm. To remedy this, we propose using either forward-mode automatic differentiation or an ultraweak-type scheme that avoids the differentiation of trial functions in the discrete weak formulation. The proposed alternatives are up to one hundred times faster than the traditional one, recovering a computational cost per iteration similar to that of a conventional gradient-descent-based optimizer alone. To support our analysis, we derive computational estimates and conduct numerical experiments on one- and two-dimensional problems.
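The hybrid idea can be illustrated with a minimal NumPy sketch, not the paper's exact scheme: when the last layer of the network is linear, the discrete residuals are linear in its weights, so those weights can be recovered with one direct least-squares solve. Here frozen random tanh features and a strong-form collocation of a 1D Poisson problem stand in for the discrete weak formulation; all names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_features(x, W, b):
    """Frozen last hidden layer of a small tanh network."""
    return np.tanh(x[:, None] * W + b)  # shape (n_points, n_neurons)

# Model problem: -u'' = f on (0, 1) with u(0) = u(1) = 0,
# f = pi^2 sin(pi x), whose exact solution is u = sin(pi x).
n, m = 200, 40
x = np.linspace(0.0, 1.0, n)
W = rng.normal(scale=3.0, size=m)
b = rng.normal(scale=1.0, size=m)

h = x[1] - x[0]
phi = hidden_features(x, W, b)
# Second derivatives of the features via central differences (interior points).
d2phi = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2
f = np.pi**2 * np.sin(np.pi * x[1:-1])

# The residuals are linear in the last-layer weights c, so assemble
# A c ~ rhs (PDE rows plus the two boundary rows) and solve directly.
A = np.vstack([-d2phi, phi[[0, -1]]])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = phi @ c  # last-layer weights obtained without any gradient descent
print(np.max(np.abs(u - np.sin(np.pi * x))))
```

In the full hybrid optimizer, such a solve would alternate with gradient-descent updates of the hidden-layer parameters.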
Tensorial Implementation for Robust Variational Physics-Informed Neural Networks
Variational Physics-Informed Neural Networks (VPINN) train the parameters of neural networks (NN) to solve partial differential equations (PDEs). They perform unsupervised training based on the physical laws described by the weak-form residuals of the PDE over an underlying discretized variational setting, thus defining a loss function as a weighted sum of multiple definite integrals representing a testing scheme. However, this classical VPINN loss function is not robust. To overcome this, we employ Robust Variational Physics-Informed Neural Networks (RVPINN), which modify the original VPINN loss into a robust counterpart that produces both lower and upper bounds of the true error. The robust loss modifies the original VPINN loss by using the inverse of the Gram matrix computed with the inner product of the energy norm. The drawback of this robust loss is the computational cost of computing several integrals of residuals, one for each test function, multiplied by the inverse of the proper Gram matrix. In this work, we show how to efficiently generate the loss and train the RVPINN method on a GPGPU using a sequence of einsum tensor operations. As a result, we can solve our 2D model problem within 350 s on an A100 GPGPU card from Google Colab Pro. We advocate using RVPINN with proper tensor operations to solve PDEs efficiently and robustly. Our tensorial implementation yields an 18-fold speedup over a for-loop implementation on the A100 GPGPU card.
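As a toy illustration of this kind of tensor contraction, the sketch below evaluates the robust loss r^T G^{-1} r for stand-in residuals r and a synthetic symmetric positive definite Gram matrix G; the shapes, sizes, and the Cholesky shortcut are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_test = 16

# Stand-in residual vector (one entry per test function) and a synthetic
# symmetric positive definite Gram matrix.
res = rng.normal(size=n_test)
B = rng.normal(size=(n_test, n_test))
G = B @ B.T + n_test * np.eye(n_test)

# Robust loss r^T G^{-1} r via a Cholesky solve, avoiding an explicit inverse:
# with G = L L^T, the loss equals ||L^{-1} r||^2.
L = np.linalg.cholesky(G)
y = np.linalg.solve(L, res)
loss_chol = np.einsum("i,i->", y, y)

# Equivalent direct form written as a single tensor contraction.
loss_direct = np.einsum("i,ij,j->", res, np.linalg.inv(G), res)

print(abs(loss_chol - loss_direct))
```

In a full implementation, the residual integrals themselves would also be assembled with einsum over quadrature points, which is where the batching speedup on a GPGPU comes from.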
On tool wear optimized motion planning for 5-axis CNC machining of free-form surfaces using toroidal cutting tools
We propose a computational framework for motion planning for 5-axis CNC machining of free-form surfaces. Given a reference surface, a set of contact paths on it, and the shape of a toroidal cutting tool as input, the proposed algorithm designs tool motions that are by construction locally and globally collision-free, and offers a trade-off between approximation quality and tool wear using an optimization-based framework. The proposed algorithm first quickly constructs 2D time-tilt configuration spaces along each contact path, detecting regions that are collision-free. The configuration spaces are then merged into a single time-tilt configuration space to find a global tilt function that controls the overall motion of the tool. An initial collision-free tilt function in B-spline form is first estimated and then optimized to minimize the machining error while distributing the tool wear as uniformly as possible along the entire cutting edge of the tool and staying in the collision-free region. Our algorithm is validated on both synthetic free-form surfaces and industrial benchmarks, showing that one can considerably reduce the tool wear without degrading the machining accuracy.
AURRERA (Elkartek KK-2024/00024)
PID2023-146640NB-I00 (MICIU/AEI/10.13039/501100011033)
CEX2021-001142-S
RYC-2017-22649 funded by MICIU/AEI/10.13039/501100011033 and EI ESF "ESF Investing in your future"
Project R&D of Technologies for Advanced Digitization in the Pilsen Metropolitan Area (DigiTech), No. CZ.02.01.01/00/23_021/0008436, co-financed by the European Union
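As a small illustration of one ingredient above, the sketch below evaluates a tilt function represented as a uniform cubic B-spline, the form in which the optimizer searches for a collision-free, wear-balancing tilt. The knot layout, control values, and clamping are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def cubic_bspline(t, control, t0=0.0, t1=1.0):
    """Evaluate a uniform cubic B-spline with given control values at t."""
    n = len(control) - 3               # number of spans
    u = (t - t0) / (t1 - t0) * n
    k = min(int(u), n - 1)             # span index (clamped at the end)
    x = u - k
    # Uniform cubic B-spline basis on the local parameter x in [0, 1].
    b = np.array([(1 - x) ** 3,
                  3 * x ** 3 - 6 * x ** 2 + 4,
                  -3 * x ** 3 + 3 * x ** 2 + 3 * x + 1,
                  x ** 3]) / 6.0
    return float(np.dot(b, control[k:k + 4]))

# Hypothetical tilt-angle control values in degrees.
tilt_ctrl = np.array([5.0, 8.0, 12.0, 9.0, 6.0, 7.0])
ts = np.linspace(0.0, 1.0, 5)
print([round(cubic_bspline(t, tilt_ctrl), 2) for t in ts])
```

Because the basis is non-negative and sums to one, the evaluated tilt always stays inside the convex hull of the control values, which is convenient when the optimizer must keep the tilt inside a collision-free band.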
Joint modeling with beta-binomial distribution for patient-reported outcomes and survival data
This thesis addresses critical methodological gaps in the joint analysis of patient-reported outcomes (PROs) and survival data. PROs, as discrete bounded measures with inherent overdispersion, require specialized statistical treatment that conventional Gaussian-based joint models fail to provide. We develop novel methodological frameworks that properly account for PRO characteristics through beta-binomial distributions, overcoming limitations of existing approaches.
In this work, we propose, explore, and discuss various statistical approaches for joint modeling, from frequentist to Bayesian proposals. Our work highlights the advantages of joint models that integrate longitudinal and survival data while emphasizing the importance of choosing appropriate distributions for PRO data.
In particular, in this dissertation, we propose three joint models to analyze both the longitudinal PRO and the survival data:
a) A frequentist two-stage approach, providing initial practical solutions. In this proposal, the central innovation lies in a joint model based on a two-stage methodology that incorporates the beta-binomial distribution for the longitudinal submodel. This methodology avoids computational complexities while ensuring a distributional fit that respects the natural characteristics of PROs (discrete, bounded, and overdispersed).
b) A Bayesian one-stage joint model, offering improved estimation. In this proposal, the main objective was to retain the beta-binomial distributional features for PRO data while performing a joint specification approach. The Bayesian formulation of the problem allows us to avoid the computational complexities we found in frequentist approaches. Moreover, we used the parameters' posterior estimates to perform dynamic predictions of the survival probabilities, which are updated as more longitudinal PRO information becomes available.
c) A multivariate Bayesian framework, enabling simultaneous analysis of multiple PRO dimensions with survival outcomes. In this proposal, our primary objective was to address the multidimensional structure of questionnaires within the joint modeling framework. However, when dealing with multivariate approaches, it might be necessary to use regularization techniques to avoid possible multicollinearity. Therefore, we explored common regularization techniques in the literature within the joint modeling framework.
The proposed methods' performance is evaluated using simulation studies, and comparisons with common approaches in the literature are provided. Additionally, we applied these methods to analyze a study carried out with chronic obstructive pulmonary disease (COPD) patients, where the longitudinal trends of the collected PRO data and their relationship with patients' mortality are of interest.
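To make the distributional choice concrete, the sketch below evaluates the beta-binomial log-likelihood in a mean-dispersion parameterization; the parameter names and values are illustrative assumptions, not tied to the thesis's exact specification.

```python
import math

def log_beta(a, b):
    """log of the Beta function, via log-gamma for numerical stability."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_loglik(y, n, p, phi):
    """log P(Y = y) for Y ~ BetaBinomial(n, alpha, beta) with
    alpha = p/phi, beta = (1-p)/phi (mean p, overdispersion phi > 0)."""
    a, b = p / phi, (1.0 - p) / phi
    return (math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
            + log_beta(y + a, n - y + b) - log_beta(a, b))

# As phi -> 0 the model approaches a plain binomial; larger phi inflates the
# variance beyond n*p*(1-p), capturing the overdispersion typical of PROs.
probs = [math.exp(beta_binomial_loglik(y, 10, 0.3, 0.5)) for y in range(11)]
print(sum(probs))  # a valid pmf: probabilities sum to 1
```

A PRO score of y out of n questionnaire items plugs directly into this likelihood, which is what a Gaussian longitudinal submodel cannot represent for bounded discrete data.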
Teacher privileged distillation: How to deal with imperfect teachers?
The paradigm of learning using privileged information leverages privileged features, present at training time but not at prediction time, as additional training information. The privileged learning process is addressed from a knowledge distillation perspective: information from a teacher learned with regular and privileged features is transferred to a student that uses exclusively regular features. While most approaches assume a perfect teacher, in practice the teacher can commit mistakes. Taking this into account, we propose a novel privileged distillation framework with a double contribution. First, a function designed to imitate the teacher when it classifies correctly and to deviate from it in cases of misclassification. Second, an adaptation of the cross-entropy loss to appropriately penalize the instances where the student outperforms the teacher. Its effectiveness is empirically demonstrated on datasets with imperfect teachers, significantly enhancing the performance of state-of-the-art frameworks. Furthermore, necessary conditions for successful privileged learning are presented, along with a dataset categorization based on the information provided by the privileged features.
PID2022-137442NB-I00
PRE2021-09927
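A toy sketch of the kind of selective imitation loss described in the abstract above, not the paper's exact formulation: a distillation term that imitates the teacher only on instances it classifies correctly, blended with a standard cross-entropy on the true labels. All names and the blending weight are assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def selective_distillation_loss(student_logits, teacher_probs, labels, lam=0.5):
    """Imitate the teacher only where it predicts the true label;
    fall back on plain cross-entropy everywhere."""
    n = labels.shape[0]
    p_student = softmax(student_logits)
    ce = -np.log(p_student[np.arange(n), labels] + 1e-12)

    # Mask: instances the teacher classifies correctly.
    teacher_correct = (teacher_probs.argmax(axis=1) == labels)
    # KL(teacher || student), applied only on the masked instances.
    kl = np.sum(teacher_probs * (np.log(teacher_probs + 1e-12)
                                 - np.log(p_student + 1e-12)), axis=1)
    distill = np.where(teacher_correct, kl, 0.0)
    return np.mean((1.0 - lam) * ce + lam * distill)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
teacher = softmax(rng.normal(size=(4, 3)))
labels = np.array([0, 1, 2, 0])
loss = selective_distillation_loss(logits, teacher, labels)
print(loss)
```

The paper additionally reweights the cross-entropy where the student outperforms the teacher; here a plain cross-entropy term stands in for that adaptation.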
Evaluating the risk of mosquito-borne diseases in non-endemic regions: A dynamic modeling approach
Mosquito-borne diseases are spreading into temperate zones, raising concerns about local outbreaks driven by imported cases. Using stochastic methods, we developed a vector-host model to estimate the risk of import-driven autochthonous outbreaks in non-endemic regions. The model explores key factors such as imported cases and vector abundance. Our analysis shows that mosquito population abundance significantly affects the probability and timing of outbreaks. Even with moderate mosquito populations, isolated or clustered outbreaks can be triggered, highlighting the importance of monitoring vector abundance for effective public health planning and interventions.
This work is supported by the ARBOSKADI project for monitoring vector-borne diseases in the Basque Country, Euskadi. We wish to extend our acknowledgments to Jesús Ángel Ocio Armentia, Oscar Goñi Laguardia and Ana Ramírez de La Peciña Pérez, Dirección de Salud Pública, for their fruitful discussions, and to Madalen Oribe Amores, Unidad de Vigilancia Epidemiológica de Bizkaia, for her cooperation in providing the requested epidemiological data that was essential for carrying out this research.
M.A. and A.C. acknowledge the financial support of the Ministerio de Ciencia e Innovación (MICINN) of the Spanish Government through the Ramón y Cajal grants RYC2021-031380-I and RYC2021-033084-I, respectively. This research is also supported by the Basque Government through the "Mathematical Modeling Applied to Health" (BMTF) project and the BERC 2022–2025 program, and by the Spanish Ministry of Science, Innovation and Universities: BCAM Severo Ochoa accreditation CEX2021-001142-S/MICIN/AEI/10.13039/501100011033
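As a simplified illustration of import-driven outbreak risk, not the paper's vector-host model: a branching-process sketch in which each imported case produces Poisson(R0) secondary cases, and an "outbreak" means the chain passes a threshold before dying out. All parameter values are assumptions for demonstration.

```python
import math
import random

def outbreak_prob(r0, imports, threshold=50, trials=5000, seed=7):
    """Monte Carlo estimate of the probability that `imports` index cases
    spark a chain exceeding `threshold` total cases."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's algorithm, adequate for small lam.
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    hits = 0
    for _ in range(trials):
        cases, total = imports, imports
        while cases and total < threshold:
            cases = sum(poisson(r0) for _ in range(cases))
            total += cases
        hits += total >= threshold
    return hits / trials

# Higher vector abundance (larger effective R0) and more imported cases
# both raise the outbreak probability.
p1 = outbreak_prob(1.5, 1)
p5 = outbreak_prob(1.5, 5)
print(p1, p5)
```

In the full vector-host model, R0 itself depends on mosquito abundance, which is what links the outbreak probability to vector surveillance.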
Complex non-Markovian dynamics and the dual role of astrocytes in Alzheimer's disease development and propagation
Alzheimer's disease (AD) is one of the most common neurodegenerative disorders. Amyloid-beta (Aβ) and tau proteins are among the main contributors to AD progression. In AD, Aβ proteins clump together to form plaques and disrupt cell functions. On the other hand, abnormal chemical changes in the brain promote the formation of sticky tau tangles that block the neuron's transport system. Astrocytes generally maintain a healthy balance in the brain by clearing Aβ plaques (toxic Aβ). However, over-activated astrocytes release chemokines and cytokines in the presence of Aβ and react to proinflammatory cytokines, further increasing the production of Aβ. In this paper, we construct a mathematical model that captures this dual behaviour of astrocytes. Furthermore, we reveal that the disease progression depends not only on the current time instance but also on the disease's earlier status, the so-called "memory effect". We consider a fractional-order network mathematical model to capture the influence of this memory effect on AD progression. We have integrated brain connectome data into the model and studied the memory effect, the dual role of astrocytes, and the brain's neuronal damage. Based on the pathology, primary, secondary, and mixed tauopathy parameters are considered in the model. Due to mixed tauopathy, different nodes or regions in the brain connectome accumulate different toxic concentrations of Aβ and tau proteins. Finally, we explain how the memory effect can slow down the propagation of these toxic proteins in the brain, decreasing the rate of neuronal damage.
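The memory effect can be illustrated with a minimal fractional-order sketch, not the paper's network model: an implicit L1 discretization of the Caputo test equation D^α u = -u with u(0) = 1, where smaller α means heavier memory and slower long-time decay. All parameters are illustrative.

```python
import math
import numpy as np

def fractional_decay(alpha, T=5.0, n=400):
    """Implicit L1 scheme for the Caputo equation D^alpha u = -u, u(0) = 1."""
    h = T / n
    mu = math.gamma(2 - alpha) * h ** alpha
    j = np.arange(n + 1)
    # L1 history weights b_j = (j+1)^(1-alpha) - j^(1-alpha), with b_0 = 1.
    b = (j[1:] ** (1 - alpha)) - (j[:-1] ** (1 - alpha))
    u = np.empty(n + 1)
    u[0] = 1.0
    for k in range(1, n + 1):
        # History sum over all past increments: the "memory" term.
        hist = np.dot(b[1:k], u[k-1:0:-1] - u[k-2::-1]) if k > 1 else 0.0
        u[k] = (u[k-1] - hist) / (1.0 + mu)
    return u

u_half = fractional_decay(0.5)    # heavy memory
u_one = fractional_decay(0.99)    # nearly memoryless (close to exponential)
print(u_half[-1], u_one[-1])
```

The heavier-memory solution retains more of its initial value at long times, mirroring how the memory effect slows the propagation of toxic proteins in the model.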
Modeling COVID-19 dynamics in the Basque Country: characterizing population immunity profile from 2020 to 2022
Background: COVID-19, caused by SARS-CoV-2, has spread globally, presenting a significant public health challenge. Vaccination has played a critical role in reducing severe disease and deaths. However, the waning of immunity after vaccination and the emergence of immune-escape variants require the continuation of vaccination efforts, including booster doses, to maintain population immunity. This study models the dynamics of COVID-19 in the Basque Country, Spain, aiming to characterize the population's immunity profile and assess its impact on the severity of outbreaks from 2020 to 2022.
Methods: A SIR/DS model was developed to analyze the interplay of virus-specific and vaccine-induced immunity. The model includes three levels of immunity, with boosting effects from reinfection and/or vaccination. It was validated using empirical daily case data from the Basque Country. The model tracks shifts in immunity status and their effects on disease dynamics over time.
Results: The COVID-19 epidemic in the Basque Country progressed through three distinct phases, each shaped by dynamic interactions between virus transmission, public health interventions, and vaccination efforts. The initial phase was marked by a rapid surge in cases, followed by a decline due to strict public health measures, with a seroprevalence of 1.3%. In the intermediate phase, multiple smaller outbreaks emerged as restrictions were relaxed and new variants, such as Alpha and Delta, appeared. During this period, reinfection rates reached 20%, and seroprevalence increased to 32%. The final phase, dominated by the Omicron variant, saw a significant rise in cases driven by waning immunity and the variant's high transmissibility. Notably, 34% of infections during this phase occurred in the naive population, with seroprevalence peaking at 43%. Across all phases, the infection of naive and unvaccinated individuals contributed significantly to the severity of outbreaks, emphasizing the critical role of vaccination in mitigating disease impact.
Conclusion: The findings underscore the importance of continuous monitoring and adaptive public health strategies to mitigate the evolving epidemiological and immunological landscape of COVID-19. Dynamic interactions between immunity levels, reinfections, and vaccinations are critical in shaping outbreak severity and guiding evidence-based interventions.
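As a much-simplified illustration of shifting immunity status, not the SIR/DS model itself: an SIRS sketch in which recovered individuals wane back to susceptibility at rate omega, the mechanism that lets population immunity erode between waves. All parameter values are assumptions for demonstration.

```python
def sirs_step(s, i, r, beta, gamma, omega, dt):
    """One explicit Euler step of an SIRS model with waning immunity."""
    new_inf = beta * s * i * dt   # susceptible -> infected
    new_rec = gamma * i * dt      # infected -> recovered
    waned = omega * r * dt        # recovered -> susceptible (waning)
    return s - new_inf + waned, i + new_inf - new_rec, r + new_rec - waned

# Illustrative parameters: transmission beta, recovery gamma, waning omega.
s, i, r = 0.99, 0.01, 0.0
beta, gamma, omega = 0.3, 0.1, 0.01
for _ in range(2000):             # dt = 0.5, i.e. 1000 time units
    s, i, r = sirs_step(s, i, r, beta, gamma, omega, 0.5)
print(s + i + r)  # population fractions remain conserved
```

With omega = 0 this reduces to a plain SIR model; a positive omega sustains endemic circulation, which is the qualitative effect of waning immunity in the full multi-level model.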
C*-Algebraic Formulation of Quantum Mechanics
This manuscript is based on a series of lectures given in the XX Jacques-Louis Lions Spanish-French School on Numerical Simulations in Physics & Engineering. While quantum mechanics is widely presented within the Hilbert space formalism in undergraduate courses, in these notes we give an introduction to its algebraic formulation. It is based on the theory of C^{∗}-algebras, which is briefly outlined here. This viewpoint is more general than the usual formulation of quantum mechanics because of the non-uniqueness of irreducible representations of infinite-dimensional C^{∗}-algebras. This is related to the Rosenberg theorem and Naimark's problem.
W. de Siqueira Pedra has been supported by CNPq (309723/2020-5). J.-B. Bru is supported by the Basque Government through the grant IT1615-22 and the BERC 2022-2025 program, by the COST Action CA18232 financed by the European Cooperation in Science and Technology (COST), and by the grant PID2020-112948GB-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe".
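For readers new to the algebraic viewpoint, two standard facts underpin the formulation sketched in these notes: the C*-identity, which rigidly links the norm and the algebraic structure, and the GNS construction, which recovers Hilbert-space representations from states.

```latex
% A C^*-algebra is a complex Banach *-algebra (A, \|\cdot\|) whose norm
% satisfies the C^*-identity:
\[
  \|a^{*}a\| = \|a\|^{2} \qquad \text{for all } a \in A .
\]
% Given a state \omega (a positive, normalized linear functional on A),
% the GNS construction yields a Hilbert-space representation
% (\mathcal{H}_{\omega}, \pi_{\omega}, \Omega_{\omega}) such that
\[
  \omega(a) = \langle \Omega_{\omega},\, \pi_{\omega}(a)\, \Omega_{\omega} \rangle
  \qquad \text{for all } a \in A .
\]
```

Different states can produce unitarily inequivalent representations in infinite dimensions, which is the non-uniqueness phenomenon the notes refer to.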