HAL-CentraleSupelec
65949 research outputs found
Fast and Reliable Human Exposure Assessment Around High Power Systems Using Surrogate Modelling
Due to the high level of magnetic stray field around high-power electromagnetic systems, human exposure needs to be properly assessed to check compliance with international standards and guidelines. Such analyses are usually made in two steps: first, a map of the magnetic field in the surrounding area is computed; then, a human model is used to compute the induced dosimetric quantities. Unfortunately, simulating such high-power systems carries a high computational cost, compounded by the complexity of 3D human models. This paper therefore shows how stochastic tools can be combined with numerical solvers to build accurate predictors of human exposure at low computational cost for various high-power systems. These surrogate models can be used to accurately analyze the sensitivity of the exposure problem to various input parameters at low computational cost. A dosimetric methodology for assessing the safety of a human body around an inductive power transfer system for automotive applications, using an adaptive metamodelling algorithm coupled with a voxelized 3D human model, has been developed. The analysis has been successfully extended to a system where human exposure assessment is crucially needed: medium-frequency direct-current welding guns, treating the case of human exposure to a pulsed magnetic field. This methodology reduces the computation time by more than 99.9% compared to a classical analysis for both exposure problems.
INDEX TERMS: Human exposure, stochastic methods, numerical dosimetry, metamodel, wireless power transfer, spot welding
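The surrogate-modelling workflow this abstract describes (run the expensive solver on a few well-chosen inputs, fit a cheap predictor, then query the predictor densely) can be sketched in miniature. Everything below is illustrative: `expensive_simulator` stands in for a costly 3D dosimetric computation, and the plain polynomial surrogate is a placeholder for the adaptive metamodel the paper actually uses.

```python
import numpy as np

def expensive_simulator(x):
    # Stand-in for a costly 3D dosimetric computation: maps an input
    # parameter (e.g. a coil current) to an exposure metric.
    return np.sin(x) + 0.5 * x**2

# Step 1: run the expensive model on a small design of experiments.
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_simulator(x_train)

# Step 2: fit a cheap surrogate (here, a degree-5 polynomial;
# the paper's adaptive metamodel would go here instead).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# Step 3: query the surrogate densely at negligible cost,
# e.g. for sensitivity analysis over the input parameter.
x_dense = np.linspace(0.0, 2.0, 1000)
y_pred = surrogate(x_dense)

# Accuracy check against the true model at held-out points.
x_test = np.linspace(0.1, 1.9, 50)
err = np.max(np.abs(surrogate(x_test) - expensive_simulator(x_test)))
```

The trade-off is exactly the one the abstract quantifies: after eight expensive runs, the thousand dense evaluations cost essentially nothing.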
Ascending Stepped Cryptanalytic Time-Memory Trade-Off
The concept of time-memory trade-off was introduced in 1980 by Martin Hellman to conduct brute-force attacks against DES. The method consists of an intensive precomputation phase whose results are stored in tables and subsequently used to significantly reduce the time required by the brute-force attack. An important improvement was the introduction in 2003 of rainbow tables by Philippe Oechslin. However, the process of precomputing rainbow tables is rather inefficient, primarily due to the high rate of computed values that are eventually discarded. Avoine, Carpent, and Leblanc-Albarel introduced in 2023 the descending stepped rainbow tables, whose idea is to recycle chains during the precomputation phase. In this paper, a new variant called ascending stepped rainbow tables is introduced. Formulas to predict attack time, precomputation time, memory requirements, and coverage are provided. Through theoretical results and implementation, the analysis demonstrates that this new variant offers significant improvements over both descending stepped rainbow tables and vanilla rainbow tables for high coverage. Specifically, for the typical 99.5% coverage, the precomputation time of ascending stepped rainbow tables is up to 30% shorter than for descending stepped tables and up to 45% shorter than for vanilla rainbow tables, while also reducing the attack time by up to 15% and 11%, respectively.
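The underlying time-memory trade-off can be illustrated with a toy rainbow table over a tiny search space. This sketch shows vanilla rainbow tables only (column-dependent reduction functions, endpoint-only storage, chain reconstruction at attack time); the stepped variants the paper introduces modify the precomputation phase and are not reproduced here. All names and parameter values are illustrative.

```python
import hashlib

SPACE = 10_000      # toy password space: the integers 0..9999
CHAIN_LEN = 50      # chain length t
N_CHAINS = 2_000    # number of precomputed chains

def h(x):
    # Toy one-way function: hash an integer "password".
    return hashlib.sha256(str(x).encode()).hexdigest()

def reduce_fn(digest, col):
    # Column-dependent reduction (the rainbow-table trick): maps a
    # digest back into the password space, differently in each column.
    return (int(digest[:8], 16) + col) % SPACE

def build_table():
    # Precomputation: walk each chain, store only endpoint -> start points.
    table = {}
    for start in range(N_CHAINS):
        x = start
        for col in range(CHAIN_LEN):
            x = reduce_fn(h(x), col)
        table.setdefault(x, []).append(start)
    return table

def attack(table, target_digest):
    # Online phase: guess the column where the target could sit, walk
    # to the endpoint, then rebuild matching chains from their starts.
    for col in range(CHAIN_LEN - 1, -1, -1):
        x = reduce_fn(target_digest, col)
        for c in range(col + 1, CHAIN_LEN):
            x = reduce_fn(h(x), c)
        for start in table.get(x, []):
            y = start
            for c in range(CHAIN_LEN):
                if h(y) == target_digest:
                    return y
                y = reduce_fn(h(y), c)
    return None  # target not covered by the table

table = build_table()
# Demo: recover a password known to lie on chain 0 (column 3).
p = 0
for c in range(3):
    p = reduce_fn(h(p), c)
recovered = attack(table, h(p))
```

The memory holds only 2,000 endpoint/start pairs instead of 10,000 hashes, at the price of rehashing a few chain segments per lookup; the inefficiency the paper targets is the discarded work hidden inside `build_table`.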
Corpus enrichment via a generative approach and its impact on named entity recognition models
Industrial applications of Named Entity Recognition (NER) are usually confronted with imbalanced corpora, which can harm the performance of trained models when dealing with unknown data. In this paper we develop two generation-based data enrichment approaches to improve the entity distribution. We compare the impact of the enriched corpora on NER models, using both non-contextual and contextual embeddings, and a biLSTM-CRF as entity classifier. The approach is evaluated on a contract renewal detection task. The results show that the proposed enrichment significantly improves the model's effectiveness on unknown data without degrading performance on the original test set.
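The enrichment idea described here (synthesize extra sentences for under-represented entity classes until the distribution is balanced) can be sketched as follows. The toy corpus, BIO tags, and template-based `generate` function are invented stand-ins for the paper's generative approach.

```python
import random
from collections import Counter

# Toy NER corpus in (token, BIO-tag) form; ORG is under-represented.
corpus = [
    [("Alice", "B-PER"), ("works", "O"), ("in", "O"), ("Paris", "B-LOC")],
    [("Bob", "B-PER"), ("visited", "O"), ("Lyon", "B-LOC")],
    [("Acme", "B-ORG"), ("hired", "O"), ("Carol", "B-PER")],
]

def entity_counts(sents):
    # Count entity mentions per class, ignoring the O tag.
    return Counter(tag.split("-")[1] for s in sents for _, tag in s if tag != "O")

def generate(entity, surface):
    # Template-based generation: drop a known entity surface form into
    # a synthetic carrier sentence (a crude stand-in for a generative model).
    return [(surface, f"B-{entity}"), ("announced", "O"), ("results", "O")]

def enrich(sents, target):
    # Generate sentences for minority classes until every class
    # reaches `target` mentions.
    counts = entity_counts(sents)
    surfaces = {e: [tok for s in sents for tok, tag in s if tag == f"B-{e}"]
                for e in counts}
    out = list(sents)
    for e, c in counts.items():
        while c < target:
            out.append(generate(e, random.choice(surfaces[e])))
            c += 1
    return out

enriched = enrich(corpus, target=2)
```

Here only ORG (one mention) falls below the target, so a single synthetic sentence is added; the enriched corpus then feeds the embedding + biLSTM-CRF pipeline unchanged.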
Controlled nano-roughening of the GaN surface by post-growth thermal annealing
GaN-based semiconductor structures are widely used in photonic, electronic, and optoelectronic devices. However, due to the lack of lattice-matched substrates for GaN, several schemes related to material growth and device structure, including surface roughening, have been proposed to improve the efficiency of GaN-based devices. In this work, we developed an experimental approach to achieve controllable roughening of the GaN epilayer surface by post-growth thermal annealing. The degree of surface roughness, i.e., nanometric modification, is controlled by the number of annealing cycles. The samples were annealed at 1200 °C under N2 ambient, a condition under which the first stage of GaN decomposition occurs. This annealing process led to the formation of Ga-rich GaN nanoparticles at the film surface with no significant change in the dislocation density. Furthermore, the treatment greatly decreases the degree of compressive stress, resulting in a considerable improvement in the optical properties of the GaN samples.
Designing Visualizations for Enhancing Carbon Numeracy
This position statement discusses the challenges of designing visualizations to enhance the carbon numeracy of the general public. Carbon numeracy refers to an individual's quantitative awareness of their CO2 emissions, which can vary widely, from grams to tons, across different activities. Effective visualizations must accurately represent these ranges and facilitate quantitative comparisons. By leveraging insights from both visualization research and cognitive psychology on numerical perception and the representation of large numbers, we propose two novel design solutions to address these challenges. We aim to foster discussion on improving public carbon numeracy, ultimately aiding in mitigating climate change.
Video question answering with limited supervision
Video content has significantly increased in volume and diversity in the digital era, and this expansion has highlighted the need for advanced video understanding technologies. Driven by this need, this thesis explores the semantic understanding of videos, leveraging multiple perceptual modes similar to human cognitive processes and efficient learning with limited supervision similar to human learning capabilities. The thesis focuses specifically on video question answering as one of the main video understanding tasks. Our first contribution addresses long-range video question answering, which requires an understanding of extended video content. While recent approaches rely on human-generated external sources, we process the raw data to generate video summaries. Our next contribution explores zero-shot and few-shot video question answering, aiming at efficient learning from limited data. We leverage the knowledge of existing large-scale models by addressing the challenges of adapting pre-trained models to limited data. We demonstrate that these contributions significantly enhance the capabilities of multimodal video question-answering systems, particularly where human-annotated labeled data is limited or unavailable.
Aging of flame-retardant polymer blends
Polymer materials are an integral part of our daily life thanks to their large range of properties and their low cost. The amounts produced continue to increase, and so does the environmental impact of these materials. Prioritizing biopolymers helps limit this environmental impact, but to this end some of their properties need to be improved. This is the case for fire properties, which remain a barrier to their industrial development. PLA, the most widely used biopolymer, and PHB, an interesting candidate for a broad range of applications, are both concerned by this issue. The purpose of this thesis is to study the parameters that can influence the fire properties of the PLA/PHB blend (composition, processing parameters, etc.), to identify efficient flame-retardant systems for this blend, and to evaluate the impact of aging on the fire properties. To this end, the influence of the ratio between the two polymers on the physico-chemical, thermal, and flammability properties was studied. This study identified the PLA/PHB matrix offering the best thermal stability and the lowest PHB flammability. The study of the processing parameters allowed these parameters to be optimized to obtain the best material properties. For the first time, a study on the flame retardancy of a PLA/PHB matrix was carried out; it identified the criteria a flame retardant (FR) must meet to be effective for this blend. Moreover, efficient bio-based flame retardants were identified. Finally, the evolution of the fire properties during aging was studied, with the aim of observing the evolution of the material and the FR when the samples are immersed in water.
Microalgae bio-reactive façade: System thermal–biological optimization
This article numerically explores the biotechnological performance of a microalgae biofaçade. The model computes the system's thermal behavior using a radiative-convective approach accounting for the location on Earth and actual weather data. In a coupled manner, it simulates the behavior of the microalgae culture, i.e. light-driven growth and acclimation of the cell pigment content. In addition, it features refinements such as wavelength-dependent biomass optical properties and thermally modulated biological rates. Using this model, operation strategies and design possibilities were evaluated with actual weather data for a biofaçade module deployed in Marseille in 2023. The investigations revealed that a semi-batch mode of operation, while simplistic, is the most efficient way to operate a biofaçade if biological production alone is considered (about 18.0 ± 0.9 kg per year, 2.44 ± 0.12 g/L output concentration). However, if the biofaçade is intended as office glazing, a turbidostat mode of operation should be preferred for aesthetic and visual-comfort reasons (about 19.1 ± 1.1 kg per year, 0.64 ± 0.07 g/L output concentration). System optimization also confirmed the experimental observation that the system can be prone to overheating. While overheating can be mitigated by increasing the reservoir thickness, this strategy is detrimental to the average output concentration. Finally, location-specific optimization revealed that a standard biofaçade module could be deployed across France, and system performances are derived for the whole country thanks to weather forecast agency data.
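The coupled thermal-biological simulation this abstract describes can be caricatured with a toy explicit time loop: a thermal balance updates the culture temperature, which in turn modulates a light-limited growth rate. All parameter values and rate laws below are invented for illustration and bear no relation to the article's calibrated model.

```python
import math

def simulate(hours, light, t_air=293.0):
    # Toy coupled stepping: one-hour explicit Euler steps.
    temp = t_air       # culture temperature (K)
    biomass = 0.1      # biomass concentration (g/L)
    dt = 3600.0        # step length (s)
    for h in range(hours):
        irr = light(h)  # incident irradiance (W/m2)
        # Thermal balance: absorbed solar gain minus convective loss
        # to air, over an invented areal heat capacity (J/(m2 K)).
        temp += dt * (0.4 * irr - 8.0 * (temp - t_air)) / 4.18e5
        # Growth: light-limited hourly rate, damped away from an
        # invented thermal optimum of 298 K.
        mu = 0.05 * irr / (irr + 100.0) * math.exp(-((temp - 298.0) / 8.0) ** 2)
        biomass += (dt / 3600.0) * mu * biomass
    return temp, biomass

# Three days with an idealized day/night irradiance cycle.
temp, biomass = simulate(
    72, light=lambda h: 400.0 * max(0.0, math.sin(math.pi * (h % 24) / 12.0))
)
```

Even this caricature reproduces the qualitative couplings the article exploits: midday irradiance pushes the culture above air temperature, and sustained overheating throttles growth, which is why reservoir thickness (thermal inertia) trades off against output concentration.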