École de Technologie Supérieure

Espace ÉTS
    1,762 research outputs found

    Study of the dynamic response of an existing structure accounting for dynamic soil-structure interaction in the context of Eastern Canada

    This work quantifies the impact of accounting for dynamic soil-structure interaction (DSSI) in the seismic evaluation of a typical Québec institutional building founded on a postglacial soil deposit, in the seismic context of Eastern Canada. To this end, comparative analyses of the response of a structure under seismic loading were carried out using seismic analysis approaches of increasing complexity: the pseudo-dynamic method, nonlinear time-history analysis without DSSI, nonlinear time-history analysis with DSSI via the substructure method, and finally nonlinear time-history analysis of a coupled soil-structure model. The structural system studied comes from a typical 1970s institutional building whose seismic force resisting system consists of four three-bay, three-storey reinforced concrete frames. The analysis focused on one of the building frames studied individually. The frame is 12.2 m high and 20 m wide. The soil deposit considered is a site class E location within the basin of the former Champlain Sea, near the Breckenridge Creek valley (QC). It is a deep clay deposit that has been the subject of an extensive geotechnical investigation campaign by the Geological Survey of Canada (Crow et al., 2017). Numerical modelling of the structure and of the soil deposit is performed with the OpenSees software. Two-dimensional models are developed to study the dynamic response of the structure without DSSI, the kinematic interaction, the dynamic response of the structure with DSSI using the substructure method, and the dynamic response of the structure in a global soil-structure model. Soil nonlinearity is modelled with an advanced nonlinear constitutive law capable of representing isotropic hardening and the reduction of the shear modulus with shear strain. Floor accelerations, relative (interstorey) displacements and column forces are recorded for each modelling approach and serve as the basis of comparison for evaluating the impact of DSSI on the structural response. For the case studied, the results show that the substructure method can be applied effectively to situations where the anticipated seismicity is low to moderate, that is, situations where the nonlinearity of the systems (soil and structure) remains marginal. For strong earthquakes, where system nonlinearity is significant, the direct method is preferable, provided that the following are simulated: (a) the nonlinearity of the structure, to capture strength loss and stiffness changes during dynamic loading; (b) the nonlinearity of the soil, through an adequate modulus reduction curve that correctly reproduces its hysteretic behaviour; and (c) the effect of the weight of the structure on the effective confining stresses in the soil. The results also show that, for the situations considered in this work, accounting for DSSI generally leads to a reduction of forces and floor accelerations (by up to 30%), but increases relative displacements (by 12 to 30% depending on the storey). These results are consistent with those reported in the literature. The results also show that foundation soil displacements cause a significant modification of the structural response that is not captured by the substructure method.
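
    The substructure approach mentioned above lends itself to a compact illustration. The following is a minimal sketch, assuming OpenSeesPy, purely illustrative spring/damper values and a hypothetical ground-motion file; it is not the thesis model, which uses an advanced nonlinear soil law and a fully coupled soil-structure mesh.

```python
# Minimal sketch (not the thesis model): a substructure-style lumped soil spring/dashpot
# attached under a column base in OpenSeesPy. Stiffness, damping and the motion file
# 'motion.txt' are illustrative assumptions.
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 3)

# Ground node (fixed) and column-base node, connected by a zero-length soil element
ops.node(1, 0.0, 0.0); ops.fix(1, 1, 1, 1)
ops.node(2, 0.0, 0.0)

k_h, c_h = 2.0e8, 1.5e6                       # assumed horizontal impedance (N/m, N.s/m)
ops.uniaxialMaterial('Elastic', 1, k_h)       # spring part of the impedance
ops.uniaxialMaterial('Viscous', 2, c_h, 1.0)  # dashpot part (linear, alpha = 1)
ops.element('zeroLength', 1, 1, 2, '-mat', 1, 2, '-dir', 1, 1)

# The superstructure (columns, beams, masses) would be built upward from node 2,
# and the input motion applied as a uniform excitation, e.g.:
ops.timeSeries('Path', 1, '-filePath', 'motion.txt', '-dt', 0.01, '-factor', 9.81)
ops.pattern('UniformExcitation', 1, 1, '-accel', 1)
```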

    Design and development of intelligent actuator control methodologies for morphing wing in wind tunnel

    In order to protect the environment by reducing aviation carbon emissions and making airline operations more fuel efficient, various international collaborations have been established between academia and the aeronautical industry. Following the successful research and development efforts of the CRIAQ 7.1 project, the CRIAQ MDO 505 project was launched with the goal of maximizing the potential of electric aircraft. In MDO 505, novel morphing wing actuators based on brushless DC (BLDC) motors are used. These actuators are placed chord-wise on two actuation lines. The demonstrator wing included ribs, spars and a flexible skin made of glass fiber. The 2D and 3D models of the wing were developed in XFOIL and Fluent. These wing models can be programmed to morph the wing at various flight conditions, defined by combinations of Mach number, angle of attack and Reynolds number, by allowing the computation of various optimized airfoils. The wing was tested in the wind tunnel at the IAR-NRC in Ottawa. In this thesis, the actuators are fitted with LVDT sensors to measure linear displacement, and the flexible skin is instrumented with pressure sensors to sense the location of the laminar-to-turbulent transition point. The thesis presents both linear and nonlinear modelling of the novel morphing actuator, along with classical and modern Artificial Intelligence (AI) techniques for the design of its control system. Actuator control design and validation in the wind tunnel are presented through three journal articles. The first article presents the controller design and wind tunnel testing of the novel morphing actuator for the wing tip of a real aircraft wing. The new morphing actuators consist of BLDC motors coupled with a gear system that converts rotational motion into linear motion. Mathematical modelling is carried out to obtain a transfer function based on differential equations. It was concluded that a combined position, speed and current control of the actuator was needed to control the morphing wing. This controller is designed using the Internal Model Control (IMC) method applied to the linear model of the actuator. Finally, bench testing of the actuator is carried out, followed by wind tunnel testing. Infrared thermography and Kulite sensor data revealed that, on average over all flight cases, the laminar-to-turbulent transition point was delayed toward the trailing edge of the wing. The second article presents the application of Particle Swarm Optimization (PSO) to the control design of the novel morphing actuator. The PSO algorithm has recently gained recognition within the family of evolutionary algorithms for solving non-convex problems. Although it does not guarantee convergence, the desired results were obtained by running it several times and varying the initialization conditions. Following the successful computation of the controller design, the PSO-based controller was validated through bench testing. Finally, wind tunnel testing was performed with the designed controller, and infrared and Kulite sensor measurements revealed the expected extension of laminar flow over the morphing wing. The third and final article presents the design of a fuzzy logic controller. The BLDC motor is coupled with a gear that converts the rotary motion into linear motion, which is used to push and pull the flexible morphing skin.
    The BLDC motor itself and its interaction with the gear and the morphing skin, which is exposed to aerodynamic loads, make it a complex nonlinear system. It was therefore decided to design a fuzzy controller capable of controlling the actuator appropriately. Three fuzzy controllers were designed, one each for the current, speed and position control of the morphing actuator. Simulation results revealed that the designed controller can successfully control the actuator. Finally, the designed controller was tested in the wind tunnel; the results obtained through the wind tunnel test were compared with, and further validated against, the infrared and Kulite sensor measurements, which revealed an improvement in the delay of the transition point location over the morphed wing.
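
    Since the second article tunes the actuator controller with Particle Swarm Optimization, a generic PSO gain-tuning loop may help fix ideas. This is a hedged sketch: the cost function uses a toy second-order plant and an IAE criterion, both assumptions made for illustration, not the thesis' actuator model or objective.

```python
# Minimal PSO sketch for tuning three controller gains against a user-supplied cost.
# Bounds, hyper-parameters and the toy plant below are illustrative assumptions.
import numpy as np

def pso(cost, bounds, n_particles=20, iters=40, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))   # candidate gain sets
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

def iae_cost(gains, dt=1e-3, t_end=2.0):
    """Integral of absolute error for a PID step response on a toy 2nd-order plant."""
    Kp, Ki, Kd = gains
    y = yd = integ = e_prev = 0.0
    iae = 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                      # unit step reference
        integ += e * dt
        u = Kp * e + Ki * integ + Kd * (e - e_prev) / dt
        e_prev = e
        ydd = u - 2.0 * 0.1 * yd - y     # toy plant: y'' + 0.2 y' + y = u
        yd += ydd * dt
        y += yd * dt
        iae += abs(e) * dt
    return iae

best_gains, best_cost = pso(iae_cost, bounds=[(0, 50), (0, 50), (0, 5)])
print(best_gains, best_cost)
```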

    Application of a deconvolution algorithm based on convolutional neural networks to ultrasonic non-destructive testing

    Compared with other non-destructive testing methods, such as radiography, ultrasound can be criticized for its lack of resolution and/or penetration inside the volume of a part. The goal of this study is to propose a processing method for ultrasonic A-scan signals that imitates deconvolution but relies on machine learning techniques. Deconvolution gives very good results in theory but loses much of its effectiveness in practice. The objective is twofold: to allow inspection at a lower frequency, so that the signal is less attenuated, while preserving an equivalent or even better resolution. The machine learning algorithm had to be trained with a minimum of experimental cases, which is why simulations were used. A convolutional neural network architecture was therefore developed. Supervised training was performed with finite element simulations generated with Pogo, a software package that runs its computations on graphics cards. These simulations were processed to make them more realistic. Experimental measurements were carried out to test the resolution of the algorithm. Reflectors separated by half a wavelength at 2.25 MHz in aluminum could be distinguished. Experimental noise, amplified to 20 and then 5 dB, was added; the maximum resolution decreases, while the number of false detections increases with noise. Two examples of the use of the neural network developed here are presented. First, the axial resolution of an image produced by the total focusing method (TFM) was appreciably improved by pre-processing the data with the CNN: two interfaces 0.96 mm apart in aluminum are very easily distinguishable, whereas they are not on a classical TFM image. An attempt to identify a reflector by drawing the contour of an interface is also presented.
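
    A minimal sketch of the kind of 1-D convolutional network that can map an A-scan to a sparse reflectivity trace is given below, assuming PyTorch; the layer count, channel width and kernel size are illustrative and do not reproduce the architecture developed in the thesis.

```python
# Minimal sketch: a generic 1-D CNN trained on simulated (A-scan, reflectivity) pairs.
# Sizes are illustrative assumptions, not the thesis architecture.
import torch
import torch.nn as nn

class AScanDeconvCNN(nn.Module):
    def __init__(self, channels=32, kernel=15):
        super().__init__()
        pad = kernel // 2                      # keep the trace length unchanged
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel, padding=pad),
        )

    def forward(self, x):                      # x: (batch, 1, n_samples)
        return self.net(x)

model = AScanDeconvCNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(a_scan, reflectivity):
    """One supervised step on a batch of simulated pairs, both (batch, 1, N) tensors."""
    optim.zero_grad()
    loss = loss_fn(model(a_scan), reflectivity)
    loss.backward()
    optim.step()
    return loss.item()
```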

    Hygrothermal modelling of the building envelope with hemp-based material

    Hempcrete construction is an old technique that is experiencing renewed interest in France today thanks to its energy performance over the entire life cycle of a building. This strength makes hempcrete a promising material for sustainable construction. It is a relatively recent renewable and ecological building material in Canada and remains unused in West Africa, more specifically in Burkina Faso. Hempcrete is part of the current trend toward bio-sourced materials; it has a favourable ecological footprint and contributes to reducing GHG emissions. Integrating this concrete into Canadian and Burkinabè construction is therefore valuable from a sustainable development perspective. This work seeks an alternative to traditional construction through the integration of new materials with a reduced ecological footprint, in order to improve the hygrothermal performance of the building envelope; hence our interest in studying the performance of hempcrete as a material and, above all, its integration and hygrothermal behaviour within the wall assembly studied. Simulation of the walls with the WUFI Pro 6.2 software for the city of Montréal showed that hempcrete provides excellent heat and moisture regulation: no moisture-related degradation was detected for water infiltration rates below 5 m³/(m²·h). However, for a wind-driven rain rate above 5% throughout the year, the exterior side of the wall cannot withstand the exposure, whether the wall is insulated with glass wool or built with hempcrete. For the city of Dori, the WUFI Pro 6.2 simulations of the same walls showed that hempcrete provides excellent heat regulation regardless of the fraction of incident solar radiation reaching its exterior surface and of the outdoor temperature. Indeed, the interior surface temperatures of the hempcrete walls range from 21.6 °C to 24.5 °C, compared with interior temperatures between 21 °C and 27 °C for compressed earth brick walls and between 19 °C and 29 °C for cement block walls. In conclusion, hempcrete appears to be an excellent material to integrate into Canadian and Burkinabè construction while complying with building codes.

    Learning from imbalanced data in face re-identification using ensembles of classifiers

    Face re-identification is a video surveillance application in which video-to-video face recognition systems are designed using faces of individuals captured from video sequences, and seek to recognize them when they appear in archived or live videos captured over a network of video cameras. Video-based face recognition applications encounter challenges due to variations in capture conditions such as pose and illumination. Two further challenges in this application are: 1) the imbalanced data distribution between the face captures of the individuals to be re-identified and those of other individuals, and 2) the varying degree of imbalance during operations with respect to the design data. Learning from imbalanced data is challenging in general, in part because most two-class classification systems are intended for balanced data, so their performance is biased towards classifying the majority (negative, or non-target) class (face images/frames captured from individuals not to be re-identified) more accurately than the minority (positive, or target) class (face images/frames captured from the individual to be re-identified). Several techniques have been proposed in the literature to learn from imbalanced data. They either use data-level techniques to rebalance the data for training classifiers (by under-sampling the majority class, up-sampling the minority class, or both) or use algorithm-level methods to guide the learning process (with or without cost-sensitive approaches) so that the bias towards the majority class is neutralized. Ensemble techniques such as Bagging and Boosting have been shown to use these methods efficiently to address imbalance. However, these techniques face several issues: (1) some informative samples may be neglected by random under-sampling, and adding synthetic positive samples through up-sampling increases training complexity; (2) cost factors must be known in advance or estimated; (3) classification systems are often optimized and compared using performance measures (such as accuracy) that are unsuitable for imbalance problems; and (4) most learning algorithms are designed and tested on a fixed imbalance level, which may differ from operational scenarios. The objective of this thesis is to design specialized classifier ensembles that address imbalance in the face re-identification application while avoiding the issues listed above. In addition, achieving an efficient classifier ensemble requires a learning algorithm that designs and combines component classifiers with a suitable diversity-accuracy trade-off. To reach this objective, four major contributions are made and presented in three chapters, summarized in the following. In Chapter 3, a new application-based sampling method is proposed to group samples for under-sampling in order to improve the diversity-accuracy trade-off between classifiers of the ensemble. The proposed sampling method takes advantage of the fact that, in face re-identification applications, facial regions of the same person appearing in a camera field of view can be regrouped based on the trajectories found by a face tracker. A partitional Bagging ensemble method is proposed that accounts for possible variations in the imbalance level of the operational data by combining classifiers trained on different imbalance levels.
    In this method, all samples are used for training classifiers and information loss is therefore avoided. In Chapter 4, a new ensemble learning algorithm called Progressive Boosting (PBoost) is proposed that progressively inserts uncorrelated groups of samples into a Boosting procedure to avoid losing information while generating a diverse pool of classifiers. From one iteration to the next, the PBoost algorithm accumulates these uncorrelated groups of samples into a set that grows gradually in size and imbalance. This algorithm is more sophisticated than the one proposed in Chapter 3 because, instead of training the base classifiers on this set, the base classifiers are trained on balanced subsets sampled from it and validated on the whole set. The base classifiers are therefore more accurate while their robustness to imbalance is not jeopardized. In addition, sample selection is based on the weights assigned to the samples, which reflect their importance. The computational complexity of PBoost is also lower than that of Boosting ensemble techniques proposed in the literature for learning from imbalanced data, because not all of the base classifiers are validated on all negative samples. A new loss factor is also proposed for use in PBoost to avoid biasing performance towards the negative class. Using this loss factor, the weight updates of samples and the contribution of classifiers to the final predictions are set according to the ability of the classifiers to recognize both classes. In comparing the performance of the classifier systems of Chapters 3 and 4, an evaluation space is needed that compares classifiers in terms of a suitable performance metric over all of their decision thresholds, different imbalance levels of the test data, and different preferences between classes. The F-measure is often used to evaluate two-class classifiers on imbalanced data, yet no global evaluation space was available in the literature for this measure. Therefore, in Chapter 5, a new global evaluation space for the F-measure is proposed that is analogous to the cost curves for expected cost. In this space, a classifier is represented as a curve that shows its performance over all of its decision thresholds and a range of possible imbalance levels, for the desired preference of true positive rate to precision. These properties are missing from the ROC and precision-recall spaces. This space also makes it possible to empirically improve the performance of specialized ensemble learning methods for imbalance under a given operating condition. Through validation, the base classifiers are combined using a modified version of the iterative Boolean combination algorithm, in which the selection criterion is the F-measure instead of the AUC and the combination is carried out for each operating condition. The proposed approaches were validated and compared using synthetic data and videos from the Faces In Action and COX datasets that emulate face re-identification applications. Results show that the proposed techniques outperform state-of-the-art techniques over different levels of imbalance and overlap between classes.
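
    To make the data-level idea concrete, the following is a minimal under-sampling Bagging sketch in Python/scikit-learn: each base classifier is trained on a balanced subset and their scores are averaged. It illustrates the general principle only; it is not the partitional Bagging or PBoost algorithms proposed in Chapters 3 and 4.

```python
# Minimal sketch: plain under-sampling Bagging for a two-class imbalance problem.
# The base learner and ensemble size are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class UnderSamplingBagging:
    def __init__(self, n_estimators=10, seed=0):
        self.n_estimators = n_estimators
        self.rng = np.random.default_rng(seed)
        self.models = []

    def fit(self, X, y):                       # y: 1 = target (minority), 0 = non-target
        pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
        for _ in range(self.n_estimators):
            sub_neg = self.rng.choice(neg, size=len(pos), replace=False)
            idx = np.concatenate([pos, sub_neg])     # balanced training subset
            clf = DecisionTreeClassifier(max_depth=5).fit(X[idx], y[idx])
            self.models.append(clf)
        return self

    def predict_proba(self, X):                # average the base classifiers' scores
        return np.mean([m.predict_proba(X)[:, 1] for m in self.models], axis=0)
```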

    Optimization of furnace residence time and ingots positioning during the heat treatment process of large size forged ingots

    High-strength large-size forgings, which are widely used in the energy and transportation industries (e.g., turbine shafts, landing gears), acquire their key mechanical properties (e.g., hardness) through a sequence of heat treatment processes called Quench and Temper (Q&T). The heating process (tempering) that takes place inside gas-fired furnaces has a direct impact on the final properties of the product because several major microstructural changes occur at this step. Therefore, material properties are usually optimized by controlling the tempering process parameters, such as time and temperature. A non-uniform temperature distribution around the parts, resulting from thermal interactions inside the furnace or from the loading pattern, may lead to property variations from one end of a part to the other, changes in microstructure, or even cracking. On the other hand, optimizing the residence time of large products inside the heat treatment furnace can minimize energy consumption and avoid undesirable microstructural changes. At present, however, industrial production relies mainly on available empirical correlations, which are costly and not always reliable. Accurate time-dependent temperature prediction of large-size forgings within gas-fired heat treatment furnaces requires a comprehensive quantitative examination of the heating process and an in-depth understanding of the complex conjugate thermal interactions inside the furnace. Limitations of analytical studies, together with the complexity and cost of experimentation, have made numerical simulations such as computational fluid dynamics (CFD) effective methods in this field. However, among the few studies on gas-fired furnaces, mainly small-scale furnaces or furnaces with shorter operation times were considered (using simplifications such as steady-state calculations) because of the complexity of the phenomena and the large calculation times. Consequently, there are very few studies on improving the loading patterns of large-size steel parts inside gas-fired furnaces and on optimizing their residence time. Moreover, the limitations and strengths of different numerical approaches for calculating thermal interactions in the turbulent reactive flow of large-size gas-fired batch-type furnaces have been addressed by few researchers. In this regard, the main objective of the present thesis is to provide a comprehensive quantitative analysis of transient heating and an understanding of thermal interactions inside the furnace so as to optimize the residence time and temperature uniformity of large-size products during the heat treatment process. To attain this objective, the following milestones are pursued. The first part of this study provides a comprehensive unsteady analysis of the heating characteristics of large-size forgings in a gas-fired heat treatment furnace, based on experimentally measured temperatures and CFD simulations. A three-dimensional CFD model of the gas-fired furnace, including the heat treating chamber and high-momentum natural gas burners, was generated. The interactions between heat and fluid flow, consisting of turbulence, combustion and radiation, were considered simultaneously using the k-ε, EDM and DO models, respectively. The applicability of the S2S radiation model for quantifying the effect of the participating medium and the radiation view factor in radiation heat transfer was also assessed.
    Temperature measurements at several locations of an instrumented large-size forged block and within the heating chamber of the furnace were performed for experimental analysis of the heating process and validation of the CFD model. Good agreement, with a maximum deviation of about 7%, was obtained between the numerical predictions and the experimental measurements. The results showed that, despite the temperature uniformity of the unloaded furnace, each surface of the product experienced a different heating rate after loading (single loading), resulting in temperature differences of up to 200 K. Analysis of the results also revealed the reliability of the S2S model and highlighted the importance of the radiation view factor for optimization purposes in this application. The findings were correlated with the geometry of the furnace, the formation of vortical structures and the fluid flow circulation around the workpiece. The experimental data and CFD model predictions can be directly employed for optimization of the heat treatment process of large-size steel components. The second part of this study aims to determine the effect of the loading pattern (in multiple-loading configurations) on the temperature distribution of large-size forgings during heat treatment within a gas-fired furnace, in order to attain better temperature uniformity and consequently more homogeneous mechanical properties. This part also focuses on improving the residence time of large-size forged ingots within a tempering furnace by proposing a novel hybrid methodology combining CFD simulations with a series of experimental measurements performed with a high-resolution dilatometer. Transient 3D CFD simulations, validated by experimental temperature measurements, were employed to assess the impact of loading patterns and skids on the temperature uniformity and residence time of heavy forgings within the furnace. A comprehensive transient analysis of the forgings' heating characteristics (including an analysis of the heat transfer modes) for four different loading patterns allowed the impact of the skids and of their dimensions on temperature uniformity and product residence time to be quantified. Results showed that temperature non-uniformities of up to 331 K persist for a non-optimal conventional loading pattern. The positive influence of skids and spacers was confirmed and quantified using the developed approach. It was possible to reduce the identified non-uniformities by up to 32% by changing the loading pattern inside the heat treatment furnace. The hybrid approach made it possible to determine an optimum residence time for large-size slabs, an improvement of almost 15.5% compared with the conventional non-optimized configuration. This approach was validated and can be directly applied to the optimization of different heat treatment cycles of large-size forgings. The third part of the study addresses the details of the numerical simulation of the heat treatment process of large-size forgings within real-scale gas-fired furnaces. Specifically, the assessment of a chemical-equilibrium non-premixed combustion model for accurate temperature prediction of heavy forgings, as well as the performance of six different RANS-based turbulence models for predicting the turbulent phenomena, are discussed in this context.
    In this regard, thermal interactions at different locations of the forged block, as well as in critical regions such as the burner area and the stagnation and wake regions, were analyzed using a one-third periodic 3D model of the furnace and validated by experimental measurements. Results showed that the one-third periodic model with chemical-equilibrium non-premixed combustion is reliable for the thermal analysis of the heat treatment process, with a maximum deviation of about 3% with respect to the experimental measurements. It was also revealed that the choice of turbulence model has a significant effect on the prediction of combustion and heat transfer around the block. The ε/k ratio predicted by the different turbulence models showed a significant relation to the turbulent combustion (such as the burner flame length) and to the block temperature predictions around the stagnation region. The standard and realizable k-ε models, due to an unrealistic over-prediction of the turbulence kinetic energy (under-prediction of the ε/k ratio), resulted in a shorter flame length and an under-prediction of the temperature of the forged block around the stagnation region, whereas the SST k-ω model gave reasonable predictions in this region. The RSM model was found to be the most reliable turbulence model when compared with the experimental measurements. Meanwhile, the realizable k-ε model, apart from some under-prediction in the stagnation region and of the flame length, could effectively predict the overall temperature of the heavy forgings with reasonable accuracy with respect to the experimental data and the RSM predictions.
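
    As a rough order-of-magnitude companion to the transient heating analysis described above, a lumped-capacitance estimate of the time a block needs to approach the furnace temperature under combined convection and radiation can be written in a few lines. All property values below are illustrative assumptions; the thesis relies on full 3-D CFD, not on this simplification.

```python
# Minimal sketch: lumped-capacitance soak-time estimate under convection + radiation.
# dT/dt = [h*(Tf - T) + eps*sigma*(Tf^4 - T^4)] * A / (rho*cp*V); values are illustrative.
def residence_time(T0=300.0, T_furnace=900.0, T_target=880.0,
                   h=25.0, eps=0.8, rho=7850.0, cp=600.0,
                   volume=1.0, area=6.0, dt=1.0):
    sigma = 5.670e-8                      # Stefan-Boltzmann constant, W/(m^2 K^4)
    T, t = T0, 0.0
    while T < T_target:
        q = h * (T_furnace - T) + eps * sigma * (T_furnace**4 - T**4)   # W/m^2
        T += q * area / (rho * cp * volume) * dt                        # explicit Euler
        t += dt
    return t / 3600.0                     # hours

print(f"Estimated soak time: {residence_time():.1f} h")
```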

    Study of a dissimilar weld overlay of E2209 steel deposited on S41500 steel, before and after hammer peening

    This thesis presents an experimental study of the metallurgical and mechanical compatibility of a dissimilar weld overlay, with the aim of developing new repair methods for hydraulic turbine runners damaged by fatigue. The dissimilar overlay studied consists of E2209 duplex stainless steel deposited on S41500 low-carbon martensitic stainless steel. The second objective of this experimental study concerns the effect of hammer peening this dissimilar overlay on the tensile residual stresses generated during the weld overlay operations. The study is motivated by the need to repair fatigue damage without resorting to post-weld tempering heat treatment. Indeed, UNS S41500 low-carbon martensitic stainless steel and its cast version CA6NM, used in the manufacture of many hydraulic turbines, require a post-weld tempering heat treatment to soften the newly formed martensite. Since repairs must be carried out in the turbine pit to limit power-generation downtime, performing a post-weld heat treatment (PWHT) is difficult, so the use of a matching (homogeneous) filler alloy for repairs is ruled out. Different plates simulating the repair of a crack were overlaid while varying the welding energy or the chemical composition of the shielding gas, in order to study the effect of these two variables on the microstructure and mechanical properties of the overlay. From these plates, tensile tests, impact tests and microhardness profiles were carried out to characterize the mechanical behaviour. The contour method was used to evaluate the longitudinal residual stresses in the overlaid region; it also made it possible to evaluate the effect of hammer-peening finishing on the residual stress level. Metallographic samples were taken from the different parameter combinations to characterize the microstructure of the fusion zone (FZ), the fusion boundary (FB) and the heat-affected zone (HAZ) of the base metal. The proportions of austenite and ferrite in the duplex alloy were evaluated by image analysis and by electron backscatter diffraction (EBSD). Optical microscopy confirmed the ferritic solidification of the duplex alloy and the solid-state transformation of ferrite into austenite with different morphologies during cooling. The presence of nitrides within the ferritic matrix was also observed. A scanning electron microscope (SEM) was used to look for intermetallic compounds in the fusion zone. Precipitates were indeed observed at the austenite-ferrite interfaces, but analysis of their chemical composition by energy-dispersive X-ray spectroscopy (EDS) did not confirm that they were σ phase.

    Cellular automaton development for the study of the neighborhood effect within polycrystal stress fields

    The objective of this Ph.D. project was to develop an analytical model able to predict the heterogeneous micromechanical fields within polycrystals at a very low computational cost, in order to evaluate a material's fatigue life probability. Many analytical models already exist for this purpose, but they have disadvantages: either they are not efficient enough to rapidly generate a large database and perform a statistical analysis, or the impacts of certain heterogeneities on the stress fields, such as the neighborhood effect, are neglected. The mechanisms underlying the neighborhood effect, that is, the variation of a grain's stress due to its immediate surroundings, are rarely reported or misunderstood. A finite element analysis of this question was carried out for randomly oriented, single-phase polycrystals subjected to elastic loading. The study revealed that a grain's stress level depends as much on the crystallographic orientation of the grain as on the neighborhood effect. Approximations were drawn from this analysis, leading to the development of an analytical model, the cellular automaton. The model applies to regular polycrystalline structures with spherical grains, and its development was conducted in two steps: first in elasticity, then in elasto-plasticity. In elasticity, the model showed excellent predictions of the micromechanical fields in comparison with the finite element predictions. The model was then used to evaluate the worst grain-neighborhood configurations and their probability of occurrence. It was shown, in the case of the iron crystal, that certain neighborhood configurations can double a grain's stress level. In elasto-plasticity, the model underestimates the plasticity of the grains in comparison with the finite element predictions. Nonetheless, the model proved its capacity to identify the worst grain-neighborhood configurations leading to significant localized plasticity. It was shown that the elastic behavior of the grains determines the location and the level of plasticity within polycrystals in the high cycle fatigue regime: the grains undergoing the highest resolved shear stress in elasticity are those that plastify the most in this regime. A statistical study of the neighborhood effect was conducted to evaluate the probability distribution of the true yield stress (the stress level applied to the material at which the first sign of plasticity appears in a grain). The study revealed, in the case of 316L steel, a significant difference between the true elastic limits at 99% and 1% probability, which could be one of the causes of the fatigue life scatter often observed experimentally in the high cycle fatigue regime. Further studies on the effect of a free surface and of grain morphology were carried out. They showed that a free surface spreads the grain stress level distributions even further, while the neighborhood-effect approximations used in the developed model were unaffected by the free surface. Grain morphology was also shown to have a significant impact on the stress fields: for a high morphology (aspect) ratio, the stress variations induced by grain morphology are as important as those induced by the neighborhood effect.
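
    Because the study links localized plasticity to the resolved shear stress that grains experience in elasticity, a short worked example of that computation may be useful. The sketch below, under assumed orientation and loading values, evaluates the resolved shear stress on a single FCC slip system; it is not the cellular automaton model itself.

```python
# Minimal sketch: resolved shear stress tau = s . sigma . n on one slip system of a grain.
# The rotation, stress level and slip system are illustrative assumptions.
import numpy as np

def resolved_shear_stress(sigma, n, s):
    """sigma: 3x3 stress tensor; n: slip-plane normal; s: slip direction (both unnormalized)."""
    n = n / np.linalg.norm(n)
    s = s / np.linalg.norm(s)
    return s @ sigma @ n

# Uniaxial tension of 100 MPa along z, rotated into the grain (crystal) frame by an
# illustrative orientation R of the grain.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
sigma_sample = np.diag([0.0, 0.0, 100.0e6])          # Pa
sigma_crystal = R @ sigma_sample @ R.T

# One FCC slip system: plane (111), direction [1 -1 0]
tau = resolved_shear_stress(sigma_crystal, np.array([1.0, 1.0, 1.0]),
                            np.array([1.0, -1.0, 0.0]))
print(f"Resolved shear stress: {tau / 1e6:.1f} MPa")
```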

    Automatic extraction of geometric cues for tool grasping in virtual ergonomics

    DELMIA is a Dassault Systèmes brand specializing in the simulation of industrial processes. This module notably allows work tasks to be modelled in 3D-simulated manufacturing environments in order to analyze their ergonomics. However, the virtual manikin must currently be manipulated manually by expert users. To democratize access to virtual ergonomics, Dassault Systèmes launched a program aimed at automatically positioning the manikin within the virtual mock-up using a new positioning engine called the "Smart Posturing Engine (SPE)". The automatic placement of the hands on tools is one of the challenges of this project. The general objective of this thesis is to propose a method for automatically extracting grasping cues, which serve as a guide for grasping tools, from their three-dimensional geometric models. The method is based on the natural affordance of tools commonly found in a manufacturing environment; the empirical method presented in this study therefore addresses ordinary one-handed tools. The method assumes that the family (mallets, pliers, etc.) of the tool to be analyzed is known beforehand, which makes it possible to presume the affordance of the geometry being analyzed. The proposed method involves several steps. First, the 3D geometry of the tool is swept to extract a series of cross-sections. Properties are then extracted for each section so as to reconstruct a simplified study model. Based on the variations of these properties, the tool is successively segmented into portions, segments and regions. Grasping cues are finally extracted from the identified regions, including the head of the tool, which provides a task-related working direction, as well as the handle and the trigger, if any. These grasping cues are then transmitted to the SPE in order to generate task-oriented grasps. The proposed solution was tested on some fifty one-handed tools belonging to the mallet, screwdriver, pliers, in-line power screwdriver and pistol-grip power screwdriver families. The 3D models of the tools were retrieved from the Dassault Systèmes "Part Supply" online catalogue. The proposed method should be readily transferable to other tool families.
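
    The sweeping step can be illustrated with a short sketch. Assuming the trimesh library and a hypothetical 'tool.stl' file, the code below extracts cross-sections along one axis, records their areas, and flags large area jumps as candidate region boundaries; the actual segmentation rules of the thesis are more elaborate.

```python
# Minimal sketch: sweep cutting planes along z, record per-section area, flag area jumps.
# 'tool.stl', the sweep axis and the jump threshold are illustrative assumptions.
import numpy as np
import trimesh

mesh = trimesh.load('tool.stl')            # assumed single-body tool mesh
z_min, z_max = mesh.bounds[:, 2]
profile = []

for z in np.linspace(z_min, z_max, 100):
    section = mesh.section(plane_origin=[0, 0, z], plane_normal=[0, 0, 1])
    if section is None:                    # plane does not intersect the tool here
        continue
    planar, _ = section.to_planar()
    profile.append((z, planar.area))

# Large relative jumps in section area hint at boundaries between regions
# (e.g. handle vs. head) along the sweep.
areas = np.array([a for _, a in profile])
jumps = np.where(np.abs(np.diff(areas)) > 0.5 * areas[:-1])[0]
print("candidate region boundaries at sweep indices:", jumps)
```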

    Optimizing total cost of ownership (TCO) for 5G multi-tenant mobile backhaul (MBH) optical transport networks

    Legacy network elements are reaching end-of-life and packet-based transport networks are not efficiently optimized. In particular, the high-density cell architecture of future 5G networks will face major technical and financial challenges due to the avalanche of traffic volume and the massive growth in connected devices. Rising density and ever-increasing traffic demand within future 5G Heterogeneous Networks (HetNets) will result in huge deployment, expansion and operating costs for upcoming Mobile BackHaul (MBH) networks, while revenue generation remains flat. Thus, the goal of this dissertation is to provide an efficient physical network planning mechanism and an optimized resource engineering tool in order to reduce the Total Cost of Ownership (TCO) and increase the generated revenues. This will help Service Providers (SPs) and Mobile Network Operators (MNOs) improve their network scalability and maintain positive Project Profit Margins (PPM). To meet this goal, three key issues must be addressed in our framework, summarized as follows: i) how to design, and migrate to, a scalable and reliable MBH network at optimal cost; ii) how to control the deployment and activation of network resources in such an MBH, based on the required traffic demand, in an efficient and cost-effective way; and iii) how to enhance resource sharing in such a network and maximize the profit margins efficiently. As part of our contributions to address the first issue and to plan the MBH with reduced network TCO and improved scalability, we propose a comprehensive migration plan towards an End-to-End Integrated Optical Packet Network (E2-IOPN) for SP optical transport networks. We review various practical challenges faced by a real SP during the transformation towards E2-IOPN, as well as the implementation of an as-built plan and a high-level design (HLD) for migrating towards lower cost-per-bit GPON, MPLS-TP, OTN and next-generation DWDM technologies. We then propose a longer-term strategy, based on an SDN and NFV approach, that offers rapid end-to-end service provisioning with cost-efficient centralized network control. We define CapEx and OpEx cost models and conduct a comparative cost study that shows the benefit and financial impact of introducing new low-cost packet-based technologies to carry traffic from legacy and new services. To address the second issue, we first introduce an algorithm based on a stochastic geometry model (Voronoi tessellation) to define MBH zones within a geographical area more precisely and to calculate the required traffic demand and related MBH infrastructure more accurately. In order to optimize the deployment and activation of network resources in the MBH in an efficient and cost-effective way, we propose a novel method called BackHauling-as-a-Service (BHaaS) for network planning and TCO analysis, based on the required traffic demand and a "you pay only for what you use" approach. Furthermore, we enhance BHaaS performance by introducing a more service-aware method called Traffic-Profile-as-a-Service (TPaaS) to further drive down costs based on yearly activated traffic profiles. Results show that BHaaS and TPaaS can improve the project benefit by 22% compared with a traditional TCO model. Finally, we introduce new cost (CapEx and OpEx) models for a 5G multi-tenant Virtualized MBH (V-MBH) as part of our contribution to address the third issue.
    In fact, in order to enhance resource sharing and maximize network profits, we derive a novel pay-as-you-grow optimization model for the V-MBH called Virtual-Backhaul-as-a-Service (VBaaS). VBaaS can serve as a planning tool to optimize the Project Profit Margin (PPM) while considering the TCO and the yearly generated Return on Investment (ROI). We formulate an MNO Pricing Game (MPG) for TCO optimization to calculate the optimal Pareto-equilibrium pricing strategy for the offered Tenant Service Instances (TSI). We then compare CapEx, OpEx, TCO, ROI and PPM for a specific use case, known in the industry as the CORD project, using a Traditional MBH (T-MBH) versus a Virtualized MBH (V-MBH), as well as randomized versus Pareto-equilibrium pricing strategies. The results of our framework offer SPs and MNOs a more precise estimation of traffic demand, optimized infrastructure planning and yearly resource deployment, and an optimized TCO analysis (CapEx and OpEx) with an enhanced pricing strategy and generated ROI. Numerical results show more than a threefold increase in network profitability using our proposed solutions compared with Traditional MBH (T-MBH) methods.
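
    The zone-definition step based on Voronoi tessellation can be sketched briefly. Assuming hypothetical aggregation-site and user coordinates, the code below partitions the area with scipy.spatial and counts sample points per zone as a stand-in for traffic demand; the dissertation's demand and TCO models are, of course, far richer.

```python
# Minimal sketch: Voronoi partition of a service area into MBH zones and a toy per-zone
# demand estimate. Site/user coordinates and the demand proxy are illustrative assumptions.
import numpy as np
from scipy.spatial import Voronoi, cKDTree

rng = np.random.default_rng(0)
sites = rng.uniform(0, 10, size=(12, 2))     # hypothetical aggregation-site locations (km)
users = rng.uniform(0, 10, size=(5000, 2))   # hypothetical user/traffic sample points

vor = Voronoi(sites)                          # zone geometry (regions, vertices) if needed

# Nearest-site assignment is equivalent to Voronoi cell membership
_, zone = cKDTree(sites).query(users)
demand_per_zone = np.bincount(zone, minlength=len(sites))

for i, d in enumerate(demand_per_zone):
    print(f"zone {i}: {d} sample points -> dimension backhaul capacity accordingly")
```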

    1,762 full texts · 1,762 metadata records. Updated in the last 30 days.
    Espace ÉTS is based in Canada.