HAL-Polytechnique
ADAPT: Multimodal Learning for Detecting Physiological Changes under Missing Modalities
Multimodality has recently gained attention in the medical domain, where imaging or video modalities may be integrated with biomedical signals or health records. Yet, two challenges remain: balancing the contributions of modalities, especially in cases with a limited amount of data available, and tackling missing modalities. To address both issues, in this paper, we introduce the AnchoreD multimodAl Physiological Transformer (ADAPT), a multimodal, scalable framework with two key components: (i) aligning all modalities in the space of the strongest, richest modality (called anchor) to learn a joint embedding space, and (ii) a Masked Multimodal Transformer, leveraging both inter- and intra-modality correlations while handling missing modalities. We focus on detecting physiological changes in two real-life scenarios: stress in individuals induced by specific triggers and fighter pilots' loss of consciousness induced by g-forces. We validate the generalizability of ADAPT through extensive experiments on two datasets for these tasks, where we set the new state of the art while demonstrating its robustness across various modality scenarios and its high potential for real-life applications. Our code is available at https://github.com/jumdc/ADAPT.git
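A minimal sketch of the two components named in the abstract, projecting each modality into a shared anchor space and masking missing modalities in a transformer, assuming PyTorch; the module, dimensions, and pooling choice are illustrative and are not taken from the repository linked above:

```python
# Hedged sketch, not the authors' code: each modality is projected into the
# anchor embedding space, then a transformer attends over modality tokens
# while ignoring missing ones via the key-padding mask.
import torch
import torch.nn as nn

class AnchoredFusion(nn.Module):
    def __init__(self, modality_dims, anchor_dim=128, n_heads=4, n_layers=2):
        super().__init__()
        # One linear projector per modality into the anchor space.
        self.projectors = nn.ModuleList(
            [nn.Linear(d, anchor_dim) for d in modality_dims]
        )
        layer = nn.TransformerEncoderLayer(
            d_model=anchor_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(anchor_dim, 1)  # physiological-change score

    def forward(self, features, present):
        # features: list of (batch, dim_m) tensors, one per modality.
        # present: (batch, n_modalities) bool, at least one True per row.
        tokens = torch.stack(
            [proj(x) for proj, x in zip(self.projectors, features)], dim=1
        )
        # True entries of src_key_padding_mask are excluded from attention,
        # which is how missing modalities are skipped here.
        z = self.encoder(tokens, src_key_padding_mask=~present)
        pooled = (z * present.unsqueeze(-1)).sum(1) / present.sum(1, keepdim=True)
        return self.head(pooled)
```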
Augmented Quantization: a General Approach to Mixture Models
The investigation of mixture models is key to understanding and visualizing the distribution of multivariate data. Most mixture-model approaches are based on likelihoods and are not suited to distributions with finite support or without a well-defined density function. This study proposes the Augmented Quantization method, a reformulation of the classical quantization problem that uses the p-Wasserstein distance. This metric can be computed in very general distribution spaces, in particular with varying supports. The clustering interpretation of quantization is revisited in this more general framework. The performance of Augmented Quantization is first demonstrated on analytical toy problems. It is then applied to a practical case study involving river flooding, in which mixtures of Dirac and Uniform distributions are built in the input space, enabling the identification of the most influential variables.
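As a minimal illustration of why a Wasserstein metric suits such mixtures (this is not the Augmented Quantization algorithm itself), the 1-Wasserstein distance can be evaluated directly on samples from a Dirac and a Uniform component, where no common density exists; the component parameters below are made up:

```python
# Illustration only: W1 distances computed straight from samples, with no
# likelihood or density, as needed for Dirac and Uniform mixture components.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
data = np.concatenate([
    np.full(500, 0.3),             # Dirac-like component at 0.3
    rng.uniform(0.6, 1.0, 500),    # Uniform component on [0.6, 1]
])

# Candidate representatives, each encoded by a sample rather than a density.
candidates = {
    "Dirac(0.3)": np.full(1000, 0.3),
    "Uniform(0.6, 1)": rng.uniform(0.6, 1.0, 1000),
}
for name, rep in candidates.items():
    print(name, "-> W1 to data:", wasserstein_distance(data, rep))
```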
CO2/CH4 Glow Discharge Plasma. Part II: Study of Plasma Catalysis Interaction Mechanisms on CeO2
A fundamental study of CO2/CH4 plasma is performed in a glow discharge at a few Torr. Experimental and numerical results are compared to identify the main reaction pathways. OES-based techniques and FTIR (Fourier transform infrared) spectroscopy are used to determine molecule densities and the gas temperature. Several conditions of pressure, initial mixture, and residence time are measured. The main dissociation products are found to be CO and H2. The LoKI simulation tool was used to build a kinetic scheme simplified to limit the uncertainties on rate coefficients, yet sufficient to reproduce the experimental data. To this aim, only molecules containing at most one carbon atom are considered, based on the experimental observations. Obtaining a good match between the experimental data and the simulation requires the inclusion of reactions involving the excited state O(1D). The key role of the CH3 radical is also emphasized. The good agreement obtained between experiment and simulation makes it possible to draw the main reaction pathways of low-pressure CO2/CH4 plasmas, and in particular to identify the main back-reaction mechanisms for CO2. The roles of CH2O and H2O in the gas phase are also discussed in depth, as they appear to play an important role on the catalytic surface studied in Part II of this study.
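A toy 0-D version of such a simplified kinetic scheme, restricted to one-carbon species as the abstract describes; the rate coefficients, electron density, and initial densities below are order-of-magnitude placeholders, not the LoKI values used in the paper:

```python
# Toy 0-D kinetics sketch; all numbers are placeholders.
from scipy.integrate import solve_ivp

K_CO2 = 1e-16   # m^3/s, e + CO2 -> e + CO + O    (placeholder)
K_CH4 = 2e-16   # m^3/s, e + CH4 -> e + CH3 + H   (placeholder)
N_E = 1e16      # m^-3, electron density assumed constant

def rhs(t, y):
    co2, ch4, co, h2 = y
    return [
        -K_CO2 * N_E * co2,        # CO2 loss by electron impact
        -K_CH4 * N_E * ch4,        # CH4 loss by electron impact
        K_CO2 * N_E * co2,         # CO production
        0.5 * K_CH4 * N_E * ch4,   # H2: two H atoms lumped into one H2
    ]

y0 = [1.6e22, 1.6e22, 0.0, 0.0]    # ~1 Torr equimolar CO2/CH4 (m^-3)
sol = solve_ivp(rhs, (0.0, 1.0), y0, method="LSODA", rtol=1e-8)
print("densities after 1 s [CO2, CH4, CO, H2]:", sol.y[:, -1])
```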
Multiplicity of electron- and photon-seeded electromagnetic showers at multi-petawatt laser facilities
Electromagnetic showers developing from the collision of an ultra-intense laser pulse with a beam of high-energy electrons or photons are investigated under conditions relevant to future experiments on multi-petawatt laser facilities. A semi-analytical model is derived that predicts the shower multiplicity, i.e. the number of pairs produced per incident seed particle (electron or gamma photon). The model is benchmarked against particle-in-cell simulations and shown to be accurate over a wide range of seed particle energies (100 MeV to 40 GeV), laser relativistic field strengths, and quantum parameters (ranging from 1 to 40). It is shown that, for experiments expected in the next decade, only the first generations of pairs contribute to the shower, while multiplicities larger than unity are predicted. Guidelines for forthcoming experiments are discussed considering laser facilities such as Apollon and ELI Beamlines. The difference between electron and photon seeding and the influence of the laser pulse duration are investigated.
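A toy generational branching model can illustrate the multiplicity bookkeeping (pairs per incident seed) for a photon-seeded shower; the per-generation conversion and emission probabilities below are invented effective numbers, not derived from the paper's QED rates:

```python
# Toy branching bookkeeping for a photon-seeded shower; effective
# per-generation probabilities are invented, not QED-derived.
def multiplicity(p_pair=0.6, p_emit=0.5, generations=3, seed_photons=1.0):
    photons, leptons, pairs = seed_photons, 0.0, 0.0
    for _ in range(generations):
        new_pairs = p_pair * photons     # photons converting into e-/e+
        pairs += new_pairs
        leptons += 2.0 * new_pairs       # each pair adds two leptons
        photons = p_emit * leptons       # hard photons for next generation
    return pairs / seed_photons          # pairs per incident seed

for g in range(1, 5):
    print(g, "generations ->", round(multiplicity(generations=g), 3))
```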
Proven Runtime Guarantees for How the MOEA/D Computes the Pareto Front from the Subproblem Solutions
The decomposition-based multi-objective evolutionary algorithm (MOEA/D) does not directly optimize a given multi-objective function f, but instead optimizes N + 1 single-objective subproblems of f in a co-evolutionary manner. It maintains an archive of all non-dominated solutions found and outputs it as an approximation to the Pareto front. Once the MOEA/D has found all optima of the subproblems (the g-optima), it may still miss Pareto optima of f. The algorithm is then tasked with finding the remaining Pareto optima directly by mutating the g-optima. In this work, we analyze for the first time how the MOEA/D with only standard mutation operators computes the whole Pareto front of the OneMinMax benchmark when the g-optima are a strict subset of the Pareto front. For standard bit mutation, we prove an expected runtime of O(nN log n + n^(n/(2N)) N log n) function evaluations. Especially for the second, more interesting phase, when the algorithm starts with all g-optima, we prove an expected runtime of Ω(n^((1/2)(n/N+1)) √N 2^(-n/N)). This runtime is super-polynomial if N = o(n), since this leaves large gaps between the g-optima, which require costly mutations to cover. For power-law mutation with exponent β ∈ (1, 2), we prove an expected runtime of O(nN log n + n^β log n) function evaluations. The O(n^β log n) term stems from the second phase of starting with all g-optima, and it is independent of the number of subproblems N. This leads to a huge speedup compared to the lower bound for standard bit mutation. In general, our overall bound for power-law mutation suggests that the MOEA/D performs best for N = O(n^(β-1)), resulting in an O(n^β log n) bound. In contrast to standard bit mutation, smaller values of N are better for power-law mutation, as it easily creates missing solutions.
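The two mutation operators compared above can be sketched as follows for bit strings of length n; the helpers below are a generic illustration, not the paper's implementation:

```python
# Generic illustration of the two operators, not the paper's code.
import random

def standard_bit_mutation(x, rng=random):
    # Flip each bit independently with probability 1/n.
    n = len(x)
    return [b ^ (rng.random() < 1.0 / n) for b in x]

def power_law_mutation(x, beta=1.5, rng=random):
    # Draw a strength k with P(k) ~ k^(-beta) on {1..n}, then flip exactly
    # k distinct bits: the heavy tail occasionally produces the large jumps
    # needed to cover wide gaps between g-optima.
    n = len(x)
    weights = [k ** (-beta) for k in range(1, n + 1)]
    k = rng.choices(range(1, n + 1), weights=weights)[0]
    y = list(x)
    for i in rng.sample(range(n), k):
        y[i] ^= 1
    return y

x = [0] * 32
print("flips:", sum(power_law_mutation(x)))
```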
Deciphering the impact of future individual Antarctic freshwater sources on the Southern Ocean properties and ice shelf basal melting
The Antarctic ice sheet is losing mass, primarily through ice shelf basal melting and the subsequent acceleration of glaciers. The substantial freshwater fluxes resulting from ice shelf and iceberg melting affect the Southern Ocean and beyond. As emphasized by some studies, they slow down the decline of Antarctic sea ice and hinder mixing between surface water and Circumpolar Deep Waters, further intensifying ice shelf basal melting. In this context, most studies so far have neglected the impact of surface meltwater runoff, but recent CMIP6 projections using the SSP5-8.5 scenario challenge this view, suggesting runoff values in 2100 similar to current basal melt rates. This prompts a reassessment of the future impact of surface meltwater on the ocean. We use the ocean and sea-ice model NEMO-SI3, resolving the sub-shelf cavities of Antarctica and including an interactive iceberg module. We perform thorough sensitivity experiments to disentangle the effects of changes in the atmospheric forcing, increased ice shelf basal melting, surface freshwater runoff, and iceberg calving flux by 2100 in a high-end scenario. Contrary to expectations, the atmosphere alone does not substantially warm ice shelf cavities compared to present temperatures. However, the introduction of additional freshwater sources amplifies warming, leading to escalated melt rates and establishing a positive feedback. The magnitude of this effect correlates with the quantity of released freshwater, with the most substantial impact originating from ice shelf basal melting. Moreover, larger surface freshwater runoff and iceberg calving flux contribute to further cavity warming, resulting in a noteworthy 10% increase in ice shelf basal melt rates. We also describe a potential tipping point for cold ice shelves, such as Filchner-Ronne, before the year 2100.
Global ocean ventilation: a comparison between a general circulation model and data-constrained inverse models
Ocean ventilation, or the transfer of tracers from the surface boundary layer into the ocean interior, is a critical process in the climate system. Here, we assess steady-state ventilation patterns and rates in three models of ocean transports: a 1° global configuration of the Nucleus for European Modelling of the Ocean (NEMO), version 2 of the Ocean Circulation Inverse Model (OCIM), and the Total Matrix Intercomparison (TMI). We release artificial dyes in six surface regions of each model and compare equilibrium dye distributions as well as ideal age distributions. We find good qualitative agreement in large-scale dye distributions across the three models. However, the distributions indicate that TMI is more diffusive than OCIM, itself more diffusive than NEMO. NEMO simulates a sharp separation between bottom and intermediate water ventilation zones in the Southern Ocean, leading to a weaker influence of the latter zone on the abyssal ocean. A shallow bias of North Atlantic ventilation in NEMO contributes to a stronger presence of the North Atlantic dye in the mid-depth Southern Ocean and Pacific. This isopycnal communication between the North Atlantic surface and the mid-depth Pacific is very slow, however, and NEMO simulates a maximum age in the North Pacific about 900 years higher than in the data-constrained models. Possible causes of this age bias are interrogated with NEMO sensitivity experiments. Implementing an observation-based 3D map of isopycnal diffusivity increases the maximum age, due to weaker isopycnal diffusion at depth. We suggest that tracer upwelling in the subarctic Pacific is underestimated in NEMO and is a key missing piece in the representation of global ocean ventilation in general circulation models.
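The ideal-age diagnostic used in such comparisons reduces to a linear solve once a steady transport operator is available; the mass-conserving 4-box operator below is an invented stand-in for the OCIM/TMI matrices:

```python
# Invented 4-box transport operator (1/yr); box 0 is the surface and the
# columns sum to zero so that mass is conserved.
import numpy as np

A = np.array([
    [-0.30,  0.10,  0.10,  0.10],
    [ 0.10, -0.20,  0.05,  0.05],
    [ 0.10,  0.05, -0.20,  0.05],
    [ 0.10,  0.05,  0.05, -0.20],
])

# Steady ideal age: A @ age + 1 = 0 in the interior, age = 0 at the surface,
# so only the interior sub-matrix enters the solve.
interior = [1, 2, 3]
age = np.zeros(4)
age[interior] = np.linalg.solve(A[np.ix_(interior, interior)], -np.ones(3))
print("ideal age (yr):", age)   # 10 yr in each interior box for this toy A
```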
Interactive modeling of evolving and emergent bio-inspired 3D shapes
Due to manufacturing constraints, Computer-Aided Design has primarily focused on combinations of mathematical functions and simple parametric forms. The landscape changed with the advent of 3D printing, which allows for high shape complexity: the cost of additive manufacturing is now dominated by part size and material used rather than by complexity, paving the way for a reevaluation of 3D modeling practices, including interactive conception and increased complexity. Inspired by the self-organizing principles observed in living organisms, the field of morphogenesis presents an intriguing alternative for 3D modeling. Unlike traditional CAD systems relying on explicit user-defined parameters, morphogenetic models leverage dynamic processes that exhibit emergence, evolution, adaptation to the environment, or self-healing. The general purpose of this Ph.D. is to explore and develop new approaches to 3D modeling based on highly detailed evolutionary shapes inspired by morphogenesis. The thesis begins with an in-depth exploration of bio-inspired 3D modeling, encompassing various methodologies, challenges, and options for incorporating bio-inspired concepts into 3D modeling practices. Subsequent chapters delve into specific morphogenesis models. The first part adapts a biologically inspired model, Physarum polycephalum, to computer graphics for designing organic-like microstructures; it offers a comprehensive methodological development, analyzes model parameters, and discusses potential applications in fields such as additive manufacturing, design, and biology. The second part investigates a novel approach using reaction-diffusion models to grow lattice-like and membrane-like structures within arbitrary shapes. The methodology is based on anisotropic reaction-diffusion systems and diffusion tensor fields; the generated structures exhibit notable mechanical properties, validated through nonlinear analysis, and the approach scales to large volumes while allowing real-time user interaction. Finally, the third part explores the application of deep learning techniques to learn the rules of morphogenesis processes, specifically reaction-diffusion. It begins by illustrating the richness offered by reaction-diffusion systems before delving into the training of cellular automata and reaction-diffusion rules to learn system parameters, resulting in robust and "life-like" behaviors.
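A minimal Gray-Scott reaction-diffusion step illustrates the class of systems grown inside shapes in the second part; this sketch is isotropic, whereas the thesis relies on anisotropic diffusion tensor fields:

```python
# Isotropic Gray-Scott step with periodic boundaries; the parameters are a
# standard pattern-forming choice, not tuned to the thesis' structures.
import numpy as np

def laplacian(z):
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)

def step(u, v, du=0.16, dv=0.08, feed=0.035, kill=0.065, dt=1.0):
    uvv = u * v * v
    u = u + dt * (du * laplacian(u) - uvv + feed * (1.0 - u))
    v = v + dt * (dv * laplacian(v) + uvv - (feed + kill) * v)
    return u, v

rng = np.random.default_rng(1)
u = np.ones((128, 128)) + 0.02 * rng.random((128, 128))
v = np.zeros((128, 128))
v[60:68, 60:68] = 0.5                 # local seed from which patterns grow
for _ in range(5000):
    u, v = step(u, v)
print("pattern contrast:", float(v.max() - v.min()))
```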
Bayesian Calibration in a multi-output transposition context
Bayesian calibration is an effective approach for ensuring that numerical simulations accurately reflect the behavior of physical systems. However, because numerical models are never perfect, a discrepancy known as model error exists between the model outputs and the observed data and must be quantified. Conventional methods cannot be implemented in transposition situations, such as when a model has multiple outputs but only one is experimentally observed. To account for the model error in this context, we propose augmenting the calibration process by introducing additional input numerical parameters through a hierarchical Bayesian model, which includes hyperparameters for the prior distribution of the calibration variables. Importance sampling estimators are used to avoid increasing computational costs. Performance metrics are introduced to assess the proposed probabilistic model and the accuracy of its predictions. The method is applied to a computer code with three outputs that models the Taylor cylinder impact test. Each output is treated in turn as the observed variable, yielding three different transposition situations. The proposed method is compared with other approaches that embed model error, to demonstrate the significance of the hierarchical formulation.
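The role of the importance sampling estimators can be sketched as follows: draws obtained under one prior are reweighted to evaluate expectations under a different hyperparameter setting, avoiding a re-run of the expensive simulator; the Gaussian choices and target below are illustrative, not the paper's model:

```python
# Illustrative reweighting: draws under N(0,1) reused for expectations under
# a different prior, with a self-normalized importance sampling estimator.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=20_000)   # samples under the base prior

def reweighted_mean(f, mu, sigma):
    # Weights: target prior density over sampling prior density.
    w = norm.pdf(theta, loc=mu, scale=sigma) / norm.pdf(theta, loc=0.0, scale=1.0)
    return np.sum(w * f(theta)) / np.sum(w)

# E[theta^2] under N(0.5, 0.8^2) is 0.5^2 + 0.8^2 = 0.89; the estimate
# should land close to that without drawing new samples.
print(reweighted_mean(lambda t: t ** 2, 0.5, 0.8))
```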