11 research outputs found

    Reproducible radiomics through automated machine learning validated on twelve clinical applications

    Radiomics uses quantitative medical imaging features to predict clinical outcomes. Currently, in a new clinical application, finding the optimal radiomics method out of the wide range of available options has to be done manually through a heuristic trial-and-error process. In this study we propose a framework for automatically optimizing the construction of radiomics workflows per application. To this end, we formulate radiomics as a modular workflow and include a large collection of common algorithms for each component. To optimize the workflow per application, we employ automated machine learning using a random search and ensembling. We evaluate our method in twelve different clinical applications, resulting in the following areas under the curve: 1) liposarcoma (0.83); 2) desmoid-type fibromatosis (0.82); 3) primary liver tumors (0.80); 4) gastrointestinal stromal tumors (0.77); 5) colorectal liver metastases (0.61); 6) melanoma metastases (0.45); 7) hepatocellular carcinoma (0.75); 8) mesenteric fibrosis (0.80); 9) prostate cancer (0.72); 10) glioma (0.71); 11) Alzheimer’s disease (0.87); and 12) head and neck cancer (0.84). We show that our framework has a competitive performance compared to human experts, outperforms a radiomics baseline, and performs similar or superior to Bayesian optimization and more advanced ensemble approaches. Concluding, our method fully automatically optimizes the construction of radiomics workflows, thereby streamlining the search for radiomics biomarkers in new applications. To facilitate reproducibility and future research, we publicly release six datasets, the software implementation of our framework, and the code to reproduce this study.
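
    The optimization strategy described above, randomly sampling complete workflow configurations and ranking them so the best can be ensembled, can be sketched in a few lines. This is an illustrative toy, not the authors' released framework: the search space, the `toy_score` stand-in for cross-validated AUC, and all names are hypothetical.

```python
import random

# Hypothetical search space: each workflow component has several options.
SEARCH_SPACE = {
    "scaling": ["none", "z-score", "robust"],
    "feature_selection": ["none", "variance", "univariate"],
    "classifier": ["svm", "random_forest", "logistic"],
}

def sample_workflow(rng):
    """Draw one random workflow configuration from the search space."""
    return {component: rng.choice(options) for component, options in SEARCH_SPACE.items()}

def random_search(score_fn, n_iter=50, seed=0):
    """Evaluate n_iter random configurations and rank them by score."""
    rng = random.Random(seed)
    scored = [(score_fn(cfg), cfg) for cfg in (sample_workflow(rng) for _ in range(n_iter))]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored  # ranked list: an ensemble can average the top-k workflows

# Toy scoring function standing in for a cross-validated AUC estimate.
def toy_score(cfg):
    return 0.7 + 0.1 * (cfg["classifier"] == "random_forest") + 0.05 * (cfg["scaling"] == "z-score")

results = random_search(toy_score, n_iter=20)
best_score, best_cfg = results[0]
```

    In the real setting, `score_fn` would train and cross-validate the assembled workflow on the application's data, and the top-ranked workflows would be combined into an ensemble.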

    Deep Learning Based Medical Image Analysis with Limited Data

    Deep learning methods have shown great effectiveness in the area of computer vision. However, when solving problems in medical imaging, deep learning’s power is constrained by the limited data available. We present a series of novel methodologies for solving medical image analysis problems when only limited computed tomography (CT) scans are available. Our deep learning based method employs different strategies, including Generative Adversarial Networks, two-stage training, infusion of expert knowledge, voting, and conversion to other spaces, to address the dataset limitation of current medical imaging problems, specifically cancer detection and diagnosis; it shows very good performance and outperforms the state-of-the-art results in the literature. With self-learned features, deep learning based techniques have started to be applied to biomedical imaging problems, and various structures have been designed. In spite of their simplicity and anticipated good performance, deep learning based techniques cannot perform to their best extent due to the limited size of datasets for medical imaging problems. On the other side, traditional hand-engineered feature based methods have been studied over the past decades, and many useful features have been found by this research for the task of detecting and diagnosing pulmonary nodules on CT scans, but these methods are usually performed through a series of complicated procedures with manual, empirical parameter adjustments. Our method significantly reduces the complications of the traditional procedures for pulmonary nodule detection, while retaining and even outperforming the state-of-the-art accuracy. Besides, we contribute a method to convert low-dose CT images to full-dose CT so as to adapt current models to newly emerged low-dose CT data.
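
    One of the strategies mentioned, voting, is straightforward to sketch: several independently trained models each predict a label per case, and the majority wins. A minimal illustration with made-up binary nodule labels (all names and data are hypothetical):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model predictions by majority vote.
    predictions: list of lists, one inner list of 0/1 labels per model."""
    combined = []
    for case_preds in zip(*predictions):
        votes = Counter(case_preds)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Toy 0/1 predictions from three hypothetical models over four cases.
model_a = [1, 0, 1, 1]
model_b = [1, 1, 0, 1]
model_c = [0, 0, 1, 1]
print(majority_vote([model_a, model_b, model_c]))  # -> [1, 0, 1, 1]
```

    With an odd number of models, ties cannot occur; real systems often weight the votes by each model's validation performance.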

    Complexity Reduction in Image-Based Breast Cancer Care

    The diversity of malignancies of the breast requires personalized diagnostic and therapeutic decision making in a complex situation. This thesis contributes in three clinical areas: (1) For clinical diagnostic image evaluation, computer-aided detection and diagnosis of mass and non-mass lesions in breast MRI is developed. 4D texture features characterize mass lesions. For non-mass lesions, a combined detection/characterisation method utilizes the bilateral symmetry of the breast's contrast agent uptake. (2) To improve clinical workflows, a breast MRI reading paradigm is proposed, exemplified by a breast MRI reading workstation prototype. Instead of mouse and keyboard, it is operated using multi-touch gestures. The concept is extended to mammography screening, introducing efficient navigation aids. (3) Contributions to finite element modeling of breast tissue deformations tackle two clinical problems: surgery planning and the prediction of breast deformation in an MRI biopsy device.
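
    The bilateral-symmetry cue for non-mass lesions can be conveyed with a toy subtraction map: each pixel of a simplified 2D "uptake" image is compared with its horizontally mirrored counterpart, so unilateral enhancement stands out. This is a schematic sketch only; the thesis method operates on registered breast MRI volumes, not raw mirroring.

```python
def bilateral_asymmetry(frame):
    """Absolute difference between each pixel and its horizontally mirrored
    counterpart; asymmetric contrast uptake yields large values."""
    h, w = len(frame), len(frame[0])
    return [[abs(frame[y][x] - frame[y][w - 1 - x]) for x in range(w)] for y in range(h)]

# Toy uptake map: uniform background with one unilateral enhancement at (1, 1).
uptake = [
    [1, 1, 1, 1],
    [1, 7, 1, 1],
]
asym = bilateral_asymmetry(uptake)
```

    Symmetric regions map to zero, while the enhanced pixel and its mirror position both light up in the asymmetry map.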

    Computer vision for sequential non-invasive microscopy imaging cytometry with applications in embryology

    Many in vitro cytometric methods require the sample to be destroyed in the process. Using image analysis of non-invasive microscopy techniques, it is possible to monitor samples undisturbed in their natural environment, providing new insights into cell development, morphology, and health. As the effect on the sample is minimized, imaging can be sustained for long, uninterrupted periods of time, making it possible to study temporal events as well as individual cells over time. These methods are applicable in a number of fields, and are of particular importance in embryological studies, where no sample interference is acceptable. Using long-term image capture and digital image cytometry of growing embryos, it is possible to perform morphokinetic screening, automated analysis, and annotation using proper software tools. Based on the literature, one such framework is suggested and the required methods are developed and evaluated. Results are shown in tracking embryos, embryo cell segmentation, analysis of internal cell structures, and profiling of cell growth and activity. Two related extensions of the framework, into three-dimensional embryo analysis and adherent cell monitoring, are described.
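
    A basic building block of such image cytometry, thresholding a frame and counting connected foreground regions, can be sketched with a flood fill. Real embryo segmentation pipelines are far more robust, so this is purely illustrative.

```python
from collections import deque

def label_components(image, threshold):
    """Threshold a grayscale frame and label 4-connected foreground regions,
    a minimal stand-in for counting cells in a microscopy frame."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and labels[y][x] == 0:
                count += 1
                labels[y][x] = count
                queue = deque([(y, x)])
                while queue:  # breadth-first flood fill of one region
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and image[ny][nx] >= threshold and labels[ny][nx] == 0:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return count, labels

# Toy frame with two bright blobs on a dark background.
frame = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 8, 8],
]
n, _ = label_components(frame, threshold=5)
```

    Tracking over a time-lapse sequence would then associate the labeled regions between consecutive frames.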

    Segmentation and Characterization of Small Retinal Vessels in Fundus Images Using the Tensor Voting Approach

    As an easily accessible site for the direct observation of the circulation system, the human retina can offer a unique insight into disease development or outcome. Retinal vessels are representative of the general condition of the whole systemic circulation, and thus can act as a "window" to the status of the vascular network in the whole body. Each complication on the retina can have an adverse impact on the patient’s sight. In this direction, the relevance of small vessels is very high, as they are among the first anatomical structures affected as diseases progress. Moreover, changes in the small vessels’ state, appearance, morphology, functionality, or even growth indicate the severity of the diseases. This thesis focuses on retinal lesions due to diabetes, a serious metabolic disease affecting millions of people around the world. This disorder disturbs the natural blood glucose levels, causing various pathophysiological changes in different systems across the human body. Diabetic retinopathy is the medical term that describes the condition when the fundus and the retinal vessels are affected by diabetes. As in other diseases, small vessels play a crucial role in the onset, the development, and the outcome of the retinopathy. More importantly, at the latest stage, the growth of new small vessels, or neovascularizations, constitutes a significant risk factor for blindness. Therefore, there is a need to detect all the changes that occur in the small retinal vessels with the aim of characterizing the vessels as healthy or abnormal.
The characterization, in turn, can facilitate the detection of a specific retinopathy locally, like the sight-threatening proliferative diabetic retinopathy. Segmentation techniques can automatically isolate important anatomical structures like the vessels, and provide this information to the physician to assist in the final decision. In comprehensive systems for the automation of DR detection, the role of small vessels is significant, as missing them early in a CAD pipeline might lead to an increase in the false positive rate of red lesions in subsequent steps. So far, efforts have been concentrated mostly on the accurate localization of medium-range vessels. In contrast, the existing models are weak in the case of the small vessels. The generalization required to adapt an existing model does not allow the approaches to be flexible yet robust enough to compensate for the increased variability in appearance as well as the interference with the background. The current template models (matched filtering, line detection, and morphological processing) assume a general shape for the vessels that is not enough to approximate the narrow, curved characteristics of the small vessels. Additionally, due to the weak contrast in the small vessel regions, current segmentation and tracking methods produce fragmented or discontinued results. Alternatively, small vessel segmentation can be accomplished at the expense of background noise magnification, in the case of thresholding or image derivative methods. Furthermore, the proposed deformable models are not able to propagate a contour to the full extent of the vasculature in order to enclose all the small vessels. The deformable model external forces are ineffective to compensate for the low contrast, the low width, the high variability in small vessel appearance, as well as the discontinuities.
Internal forces, also, are not able to impose a global shape constraint on the contour that could approximate the variability in the appearance of the vasculature in different categories of vessels. Finally, machine learning approaches require the training of a classifier on a labelled set. Such sets are difficult to obtain, especially in the case of the smallest vessels. In the case of unsupervised methods, the user has to predefine the number of clusters and perform an effective initialization of the cluster centers in order to converge to the global minimum. This dissertation expands previous research work and provides a new segmentation method for the smallest retinal vessels. Multi-scale line detection (MSLD) is a recent method that demonstrates good segmentation performance on retinal images, while tensor voting is a method first proposed for reconnecting pixels. For the first time, we combined line detection with the tensor voting framework. The application of line detectors has proved an effective way to segment medium-sized vessels. Additionally, perceptual organization approaches like tensor voting demonstrate increased robustness by combining information coming from the neighborhood in a hierarchical way. Tensor voting is closer than standard models to the way human perception functions. As we show, it is a more powerful tool to segment small vessels than the existing methods. This specific combination allows us to overcome the apparent fragmentation challenge of the template methods at the smallest vessels. Moreover, we thresholded the line detection response adaptively to compensate for non-uniform images. We also combined the two individual methods in a multi-scale scheme in order to reconnect vessels at variable distances.
Finally, we reconstructed the vessels from their extracted centerlines based on pixel painting, as complete geometric information is required to be able to utilize the segmentation in a CAD system. The segmentation was validated on a high-resolution fundus image database that includes diabetic retinopathy images of varying stages, using standard discrepancy as well as perceptual-based measures. When only the smallest vessels are considered, the improvement in the sensitivity rate for the database against the standard multi-scale line detection method is 6.47%. For the perceptual-based measure, the improvement is 7.8% against the basic method. The second objective of the thesis was to implement a method for the characterization of isolated retinal areas as healthy or abnormal cases. Some of the original images from which these patches are extracted contain neovascularizations. Investigation of image features for the characterization of vessels as healthy or abnormal constitutes an essential step in the direction of developing a CAD system for the automation of DR screening. Given that the amount of data will significantly increase under CAD systems, the focus on this category of vessels can facilitate the referral of sight-threatening cases to early treatment. In addition to the challenges that small healthy vessels pose, neovessels demonstrate an even higher degree of complexity, as they form networks of convolved, twisted, looped thin vessels. The existing work is limited to the use of first-order characteristics extracted from the small segmented vessels, which limits the study of patterns. Our contribution is in using the tensor voting framework to isolate the retinal vascular junctions and in turn using those junctions as points of interest. Second, we exploited second-order statistics computed on the junction spatial distribution to characterize the vessels as healthy or neovascularizations.
In fact, the second-order spatial statistics extracted from the junction distribution are combined with widely used features to improve the characterization sensitivity by 9.09% over the state of the art. The developed method proved effective for the segmentation of the retinal vessels. Higher-order tensors, along with the implementation of tensor voting via steerable filtering, could be employed to further reduce the execution time and resolve the challenges at vascular junctions. Moreover, the characterization could be advanced to the detection of proliferative retinopathy by extending the supervised learning to include non-proliferative diabetic retinopathy cases or other pathologies. Ultimately, the incorporation of the methods into CAD systems could facilitate screening for the effective reduction of the vision-threatening diabetic retinopathy rates, or the early detection of other ocular pathologies.
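
    The line-detection half of the combination can be illustrated in miniature: at each pixel, compare the mean intensity along the best-oriented line segment with the mean of the surrounding window; vessel pixels give a positive response. This toy uses four discrete orientations and a single scale, whereas MSLD aggregates many line lengths, and the tensor voting stage that reconnects fragments is omitted entirely.

```python
def line_detector(image, length=3):
    """Single-scale line detector: response = best oriented line mean minus
    local window mean. A toy version of the basic line detection step."""
    h, w = len(image), len(image[0])
    half = length // 2
    # Discrete orientations: horizontal, vertical, and the two diagonals.
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    response = [[0.0] * w for _ in range(h)]
    for y in range(half, h - half):
        for x in range(half, w - half):
            window = [image[y + dy][x + dx]
                      for dy in range(-half, half + 1)
                      for dx in range(-half, half + 1)]
            win_mean = sum(window) / len(window)
            best = max(
                sum(image[y + k * dy][x + k * dx] for k in range(-half, half + 1)) / length
                for dy, dx in directions
            )
            response[y][x] = best - win_mean
    return response

# A faint horizontal "vessel" on a dark background.
img = [
    [0, 0, 0, 0, 0],
    [5, 5, 5, 5, 5],
    [0, 0, 0, 0, 0],
]
resp = line_detector(img)
```

    The response on the vessel row exceeds the background, which is why thresholding this map (adaptively, in the thesis) yields a vessel mask.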

    A perceptual learning model to discover the hierarchical latent structure of image collections

    Biology has been an unparalleled source of inspiration for the work of researchers in several scientific and engineering fields including computer vision. The starting point of this thesis is the neurophysiological properties of the human early visual system, in particular, the cortical mechanism that mediates learning by exploiting information about stimuli repetition. Repetition has long been considered a fundamental correlate of skill acquisition and memory formation in biological as well as computational learning models. However, recent studies have shown that biological neural networks have different ways of exploiting repetition in forming memory maps. The thesis focuses on a perceptual learning mechanism called repetition suppression, which exploits the temporal distribution of neural activations to drive an efficient neural allocation for a set of stimuli. This explores the neurophysiological hypothesis that repetition suppression serves as an unsupervised perceptual learning mechanism that can drive efficient memory formation by reducing the overall size of stimuli representation while strengthening the responses of the most selective neurons. This interpretation of repetition is different from its traditional role in computational learning models, mainly to induce convergence and reach training stability, without using this information to provide focus for the neural representations of the data. The first part of the thesis introduces a novel computational model with repetition suppression, which forms an unsupervised competitive system termed CoRe, for Competitive Repetition-suppression learning. The model is applied to general problems in the fields of computational intelligence and machine learning. Particular emphasis is placed on validating the model as an effective tool for the unsupervised exploration of bio-medical data.
In particular, it is shown that the repetition suppression mechanism efficiently addresses the issues of automatically estimating the number of clusters within the data, as well as filtering noise and irrelevant input components in highly dimensional data, e.g. gene expression levels from DNA microarrays. The CoRe model produces relevance estimates for each covariate, which is useful, for instance, to discover the best discriminating bio-markers. The description of the model includes a theoretical analysis using Huber’s robust statistics to show that the model is robust to outliers and noise in the data. The convergence properties of the model are also studied. It is shown that, besides its biological underpinning, the CoRe model has useful properties in terms of asymptotic behavior. By exploiting a kernel-based formulation for the CoRe learning error, a theoretically sound motivation is provided for the model’s ability to avoid local minima of its loss function. To do this, a necessary and sufficient condition for global error minimization in vector quantization is generalized by extending it to distance metrics in generic Hilbert spaces. This leads to the derivation of a family of kernel-based algorithms that address the local minima issue of unsupervised vector quantization in a principled way. The experimental results show that the algorithm can achieve a consistent performance gain compared with state-of-the-art learning vector quantizers, while retaining a lower computational complexity (linear with respect to the dataset size). Bridging the gap between the low-level representation of the visual content and the underlying high-level semantics is a major research issue of current interest. The second part of the thesis focuses on this problem by introducing a hierarchical and multi-resolution approach to visual content understanding.
On a spatial level, CoRe learning is used to pool together the local visual patches by organizing them into perceptually meaningful intermediate structures. On the semantic level, it provides an extension of the probabilistic Latent Semantic Analysis (pLSA) model that allows discovery and organization of the visual topics into a hierarchy of aspects. The proposed hierarchical pLSA model is shown to effectively address the unsupervised discovery of relevant visual classes from pictorial collections, at the same time learning to segment the image regions containing the discovered classes. Furthermore, by drawing on a recent pLSA-based image annotation system, the hierarchical pLSA model is extended to process and represent multi-modal collections comprising textual and visual data. The results of the experimental evaluation show that the proposed model learns to attach textual labels (available only at the level of the whole image) to the discovered image regions, while increasing the precision/recall performance with respect to a flat pLSA annotation model.
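
    The competitive core that CoRe builds on can be sketched as plain winner-take-all vector quantization: the prototype closest to each sample moves toward it. The repetition-suppression term that CoRe adds, penalizing units whose stimuli repeat least so that superfluous prototypes are pruned, is omitted here, so this is only a baseline illustration, not the CoRe algorithm itself.

```python
def competitive_learning(data, n_units=2, lr=0.5, epochs=10):
    """Minimal winner-take-all vector quantization: for each sample, the
    closest prototype (the "winner") moves a step toward it."""
    # Seed prototypes with the first n_units samples.
    prototypes = [list(data[i]) for i in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # Find the winning (closest) prototype by squared distance.
            winner = min(prototypes, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, x)))
            # Move the winner toward the sample.
            for i in range(len(winner)):
                winner[i] += lr * (x[i] - winner[i])
    return prototypes

# Two well-separated clusters on the line; prototypes should split them.
data = [(0.0,), (0.2,), (0.1,), (5.0,), (5.2,), (5.1,)]
protos = competitive_learning(data)
```

    CoRe's contribution over this baseline is that the number of surviving units, and hence the number of clusters, is determined by the data rather than fixed in advance.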

    Efficient strategies for material characterization and calibration of mechanical models for the virtual design of sheet metals

    The mechanical design of sheet metal forming parts tends to be more virtual, reducing delays and manufacturing costs. Reliable numerical simulations can also lead to optimized metallic parts using accurately calibrated advanced constitutive models. Thus, the aim of this thesis is to improve the representation of the mechanical behavior of the material in the numerical model, by developing efficient and accurate methodologies to calibrate advanced constitutive models. A recent trend in material characterization is the use of a limited number of heterogeneous mechanical tests, which provide more valuable data than classical quasi-homogeneous tests. Yet, the design of the most suitable tests is still an open question. To that end, an overview of heterogeneous mechanical tests for metallic sheets is provided. However, no standards exist for such tests, so specific metrics to analyze the achieved mechanical states are suggested and applied to four tests. Results show that the use of various metrics provides a good basis to qualitatively and quantitatively evaluate heterogeneous mechanical tests. Due to the development of full-field measurement techniques, it is possible to use heterogeneous mechanical tests to characterize the behavior of materials. However, no analytical solution exists between the measured fields and the material parameters. Inverse methodologies are required to calibrate constitutive models, using an optimization algorithm to find the best material parameters. Most applications tend to use a gradient-based algorithm without exploring other possibilities. The performance of gradient-based and gradient-free algorithms in the calibration of a thermoelastoviscoplastic model is discussed in terms of efficiency and robustness of the optimization process. Often, plane stress conditions are assumed in the calibration of constitutive models. Nevertheless, it is still unclear whether these are acceptable when dealing with large deformations.
To further understand these limitations, the calibration of constitutive models is compared using the virtual fields method implemented in both 2D and 3D frameworks. However, the 3D framework requires volumetric information on the kinematic fields, which is experimentally difficult to obtain. To address this constraint, an existing volume reconstruction method, named internal mesh generation, is improved to account for strain gradients through the thickness. The uncertainty of the method is quantified through virtual experiments and synthetic images. Overall, the impact of this thesis relates to (i) the importance of establishing standard metrics for the selection and design of heterogeneous mechanical tests, and (ii) advancing the calibration of advanced constitutive models from a 2D to a 3D framework.

The mechanical design of parts produced by sheet-metal forming is becoming increasingly virtual, reducing lead times and production costs. Reliable numerical simulations can also lead to optimized parts through accurately calibrated advanced constitutive models. The aim of this thesis is therefore to improve the representation of the material's mechanical behavior in the numerical model by developing efficient and accurate methodologies for the calibration of advanced constitutive models. A recent trend in material characterization is the use of a limited number of heterogeneous mechanical tests, which provide more valuable data than classical quasi-homogeneous tests. However, the design of the most suitable tests is still an open question. This work details heterogeneous mechanical tests for sheet metals. As no standards yet exist for such tests, specific metrics for analyzing the mechanical states are suggested and applied to four tests. The results show that the use of several metrics provides a good basis for evaluating heterogeneous mechanical tests. Owing to the development of full-field measurement techniques, heterogeneous mechanical tests can be used to characterize material behavior. However, no analytical solution exists relating the measured fields to the material parameters. Inverse methodologies are required to calibrate constitutive models, using an optimization algorithm to find the best material parameters. Most applications tend to use a gradient-based algorithm without exploring other possibilities. The performance of several algorithms in the calibration of a thermoelastoviscoplastic model is discussed in terms of the efficiency and robustness of the optimization process. Plane-stress conditions are frequently assumed in the calibration of constitutive models, an assumption that is questionable at large deformations. The calibration of constitutive models is compared using the virtual fields method implemented in 2D and 3D. However, the 3D implementation requires volumetric information on the kinematic fields, which is experimentally difficult to obtain. An existing volume reconstruction method is improved to account for strain gradients through the thickness. The uncertainty of the method is quantified through virtual experiments and synthetic images. Overall, the impact of this thesis relates to (i) the importance of establishing metrics for the selection and design of heterogeneous mechanical tests, and (ii) advancing the calibration of advanced constitutive models from 2D to 3D.

Programa Doutoral em Engenharia MecĂąnica
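The virtual fields method mentioned in this abstract identifies material parameters by enforcing the principle of virtual work. In its standard quasi-static form, assuming negligible body forces (a textbook statement, not a formula specific to this thesis):

```latex
\int_{V} \boldsymbol{\sigma}(\boldsymbol{\theta}) : \boldsymbol{\varepsilon}^{*} \,\mathrm{d}V
= \int_{\partial V} \mathbf{T} \cdot \mathbf{u}^{*} \,\mathrm{d}S
```

Here $\boldsymbol{\sigma}(\boldsymbol{\theta})$ is the stress computed from the constitutive model with parameters $\boldsymbol{\theta}$, $\boldsymbol{\varepsilon}^{*}$ and $\mathbf{u}^{*}$ are a virtual strain and displacement field, and $\mathbf{T}$ the applied tractions. Under plane stress the volume integral reduces to a thickness-weighted area integral over surface measurements; once through-thickness gradients matter, the full 3D form requires volumetric kinematic fields, which is the constraint the reconstruction method above addresses.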
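The inverse identification of material parameters discussed in this abstract can be sketched as a least-squares fit. The following minimal example assumes a simple Swift hardening law fitted to synthetic uniaxial data, in place of the thesis's thermoelastoviscoplastic model and full-field measurements, to illustrate gradient-based identification:

```python
# Illustrative inverse calibration: fit Swift hardening-law parameters
# (K, eps0, n) to "measured" stress-strain data by nonlinear least squares.
# The Swift law, parameter values, and noise level are assumptions for
# this sketch, not taken from the thesis.
import numpy as np
from scipy.optimize import least_squares

def swift(params, eps):
    """Swift hardening law: sigma = K * (eps0 + eps)**n."""
    K, eps0, n = params
    return K * (eps0 + eps) ** n

# Synthetic "experiment": response of known true parameters plus noise
rng = np.random.default_rng(0)
eps = np.linspace(0.0, 0.2, 50)
true_params = (500.0, 0.01, 0.2)            # K [MPa], eps0 [-], n [-]
sigma_meas = swift(true_params, eps) + rng.normal(0.0, 1.0, eps.size)

# Gradient-based identification (trust-region reflective algorithm)
res = least_squares(
    lambda p: swift(p, eps) - sigma_meas,   # residual between model and data
    x0=(300.0, 0.05, 0.5),                  # deliberately poor initial guess
    bounds=([1.0, 1e-4, 0.01], [2000.0, 0.5, 1.0]),
)
K, eps0, n = res.x
```

In a virtual-fields or finite-element-model-updating setting, the residual would instead be built from the measured kinematic fields, but the optimization loop has the same shape.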