11 research outputs found
Reproducible radiomics through automated machine learning validated on twelve clinical applications
Radiomics uses quantitative medical imaging features to predict clinical outcomes. Currently, in a new clinical application, finding the optimal radiomics method out of the wide range of available options has to be done manually through a heuristic trial-and-error process. In this study we propose a framework for automatically optimizing the construction of radiomics workflows per application. To this end, we formulate radiomics as a modular workflow and include a large collection of common algorithms for each component. To optimize the workflow per application, we employ automated machine learning using a random search and ensembling. We evaluate our method in twelve different clinical applications, resulting in the following areas under the curve: 1) liposarcoma (0.83); 2) desmoid-type fibromatosis (0.82); 3) primary liver tumors (0.80); 4) gastrointestinal stromal tumors (0.77); 5) colorectal liver metastases (0.61); 6) melanoma metastases (0.45); 7) hepatocellular carcinoma (0.75); 8) mesenteric fibrosis (0.80); 9) prostate cancer (0.72); 10) glioma (0.71); 11) Alzheimer's disease (0.87); and 12) head and neck cancer (0.84). We show that our framework has a competitive performance compared to human experts, outperforms a radiomics baseline, and performs similarly to or better than Bayesian optimization and more advanced ensemble approaches. Concluding, our method fully automatically optimizes the construction of radiomics workflows, thereby streamlining the search for radiomics biomarkers in new applications. To facilitate reproducibility and future research, we publicly release six datasets, the software implementation of our framework, and the code to reproduce this study.
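The optimization strategy the abstract describes (random search over modular workflow configurations, followed by ensembling of the best candidates) can be sketched in a few lines. The component names, option lists, and scoring function below are illustrative assumptions, not the released framework:

```python
import random

# Illustrative search space: each radiomics workflow component has several
# interchangeable options (assumed names, not the framework's actual lists).
SEARCH_SPACE = {
    "feature_selection": ["variance", "lasso", "none"],
    "resampling": ["none", "oversampling"],
    "classifier": ["logistic", "random_forest", "svm"],
}

def sample_workflow(rng):
    """Draw one random workflow configuration: one option per component."""
    return {comp: rng.choice(opts) for comp, opts in SEARCH_SPACE.items()}

def evaluate(workflow):
    """Stand-in for the cross-validated AUC of a trained workflow.
    A fixed lookup table replaces real training and validation."""
    toy_auc = {
        "feature_selection": {"variance": 0.70, "lasso": 0.74, "none": 0.68},
        "resampling": {"none": 0.03, "oversampling": 0.05},
        "classifier": {"logistic": 0.01, "random_forest": 0.05, "svm": 0.03},
    }
    return sum(toy_auc[comp][opt] for comp, opt in workflow.items())

def random_search_ensemble(n_iterations=50, ensemble_size=5, seed=0):
    """Random search over configurations; the top-k candidates form the ensemble."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n_iterations):
        workflow = sample_workflow(rng)
        scored.append((evaluate(workflow), workflow))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [workflow for _, workflow in scored[:ensemble_size]]

ensemble = random_search_ensemble()
```

In the actual framework, `evaluate` would train and cross-validate a full radiomics workflow, and the ensemble members' predicted probabilities would be combined at inference time.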
Deep Learning Based Medical Image Analysis with Limited Data
Deep learning methods have shown great success in computer vision. However, when solving medical imaging problems, their power is confined by the limited data available. We present a series of novel methodologies for solving medical image analysis problems with a limited number of computed tomography (CT) scans available. Our deep learning based method employs different strategies, including generative adversarial networks, two-stage training, infusing expert knowledge, voting, and conversion to other spaces, to address the dataset limitations of current medical imaging problems, specifically cancer detection and diagnosis, and shows very good performance, outperforming state-of-the-art results in the literature. With self-learned features, deep learning based techniques have started to be applied to biomedical imaging problems, and various architectures have been designed. In spite of their simplicity and anticipated good performance,
deep learning based techniques cannot perform to their best extent due to the limited size of medical imaging datasets. On the other hand, traditional hand-engineered feature based methods have been studied over the past decades, and this research has identified many useful features for detecting and diagnosing pulmonary nodules on CT scans; however, these methods are usually performed through a series of complicated procedures with manual, empirical parameter adjustments. Our method significantly reduces the complications of the traditional procedures for pulmonary nodule detection, while retaining and even outperforming state-of-the-art accuracy. Besides, we contribute a method to convert low-dose CT images to full-dose CT so as to adapt current models to the newly emerged low-dose CT data.
Complexity Reduction in Image-Based Breast Cancer Care
The diversity of malignancies of the breast requires personalized diagnostic and therapeutic decision making in a complex situation. This thesis contributes in three clinical areas: (1) For clinical diagnostic image evaluation, computer-aided detection and diagnosis of mass and non-mass lesions in breast MRI is developed. 4D texture features characterize mass lesions. For non-mass lesions, a combined detection/characterization method utilizes the bilateral symmetry of the breast's contrast agent uptake. (2) To improve clinical workflows, a breast MRI reading paradigm is proposed, exemplified by a breast MRI reading workstation prototype. Instead of mouse and keyboard, it is operated using multi-touch gestures. The concept is extended to mammography screening, introducing efficient navigation aids. (3) Contributions to finite element modeling of breast tissue deformations tackle two clinical problems: surgery planning and the prediction of breast deformation in an MRI biopsy device.
Computer vision for sequential non-invasive microscopy imaging cytometry with applications in embryology
Many in vitro cytometric methods require the sample to be destroyed in the process.
Using image analysis of non-invasive microscopy techniques, it is possible to monitor
samples undisturbed in their natural environment, providing new insights into cell development,
morphology and health. As the effect on the sample is minimized, imaging
can be sustained for long uninterrupted periods of time, making it possible to study
temporal events as well as individual cells over time. These methods are applicable in
a number of fields, and are of particular importance in embryological studies, where no
sample interference is acceptable.
Using long-term image capture and digital image cytometry of growing embryos, it is
possible to perform morphokinetic screening, automated analysis, and annotation using
proper software tools. Based on the literature, one such framework is suggested and the
required methods are developed and evaluated. Results are shown in tracking embryos,
embryo cell segmentation, analysis of internal cell structures, and profiling of cell growth
and activity. Two related extensions of the framework, into three-dimensional embryo
analysis and adherent cell monitoring, are described.
Segmentation and Characterization of Small Retinal Vessels in Fundus Images Using the Tensor Voting Approach
ABSTRACT
As an easily accessible site for the direct observation of the circulatory system, the human retina can offer a unique insight into disease development and outcome. Retinal vessels are representative of the general condition of the whole systemic circulation, and thus can act as a "window" to the status of the vascular network in the whole body. Each complication on the retina can have an adverse impact on the patient's sight. In this respect, small vessels' relevance is very high, as they are among the first anatomical structures affected as diseases progress. Moreover, changes in the small vessels' state, appearance, morphology, functionality, or even growth indicate the severity of the diseases.
This thesis focuses on the retinal lesions due to diabetes, a serious metabolic disease affecting millions of people around the world. This disorder disturbs the natural blood glucose levels, causing various pathophysiological changes in different systems across the human body. Diabetic retinopathy is the medical term that describes the condition in which the fundus and the retinal vessels are affected by diabetes. As in other diseases, small vessels play a crucial role in the onset, the development, and the outcome of the retinopathy. More importantly, at the latest stage, the growth of new small vessels, or neovascularizations, constitutes a significant risk factor for blindness. Therefore, there is a need to detect all the changes that occur in the small retinal vessels, with the aim of characterizing the vessels as healthy or abnormal. The characterization, in turn, can facilitate the local detection of a specific retinopathy, like the sight-threatening proliferative diabetic retinopathy.
Segmentation techniques can automatically isolate important anatomical structures like the vessels, and provide this information to the physician to assist in the final decision. In comprehensive systems for the automation of DR detection, the small vessels' role is significant, as missing them early in a CAD pipeline might lead to an increase in the false positive rate of red lesions in subsequent steps. So far, efforts have been concentrated mostly on the accurate localization of medium-range vessels. In contrast, the existing models are weak in the case of the small vessels. The generalization required to adapt an existing model does not allow the approaches to be flexible yet robust enough to compensate for the increased variability in appearance as well as the interference with the background. So far, the current template models (matched filtering, line detection, and morphological processing) assume a general shape for the vessels that is not enough to approximate the narrow, curved characteristics of the small vessels. Additionally, due to the weak contrast in the small vessel regions, the current segmentation and tracking methods produce fragmented or discontinuous results. Alternatively, small vessel segmentation can be accomplished at the expense of background noise magnification, as when thresholding or image derivative methods are used. Furthermore, the proposed deformable models are not able to propagate a contour to the full extent of the vasculature in order to enclose all the small vessels. The deformable model external forces are ineffective in compensating for the low contrast, the low width, the high variability in small vessel appearance, and the discontinuities. Internal forces, likewise, are not able to impose a global shape constraint on the contour that could approximate the variability in the appearance of the vasculature in different categories of vessels. Finally, machine learning approaches require the training of a classifier on a labelled set. Such sets are difficult to obtain, especially in the case of the smallest vessels. In the case of unsupervised methods, the user has to predefine the number of clusters and perform an effective initialization of the cluster centers in order to converge to the global minimum.
This dissertation expands on previous research work and provides a new segmentation method for the smallest retinal vessels. Multi-scale line detection (MSLD) is a recent method that demonstrates good segmentation performance in retinal images, while tensor voting is a method first proposed for reconnecting pixels. For the first time, we combine line detection with the tensor voting framework. The application of line detectors has proved an effective way to segment medium-sized vessels. Additionally, perceptual organization approaches like tensor voting demonstrate increased robustness by combining information coming from the neighborhood in a hierarchical way. Tensor voting is closer than standard models to the way human perception functions. As we show, it is a more powerful tool to segment small vessels than the existing methods. This specific combination allows us to overcome the apparent fragmentation challenge of the template methods at the smallest vessels. Moreover, we threshold the line detection response adaptively to compensate for non-uniform images. We also combine the two individual methods in a multi-scale scheme in order to reconnect vessels at variable distances. Finally, we reconstruct the vessels from their extracted centerlines based on pixel painting, as complete geometric information is required to be able to utilize the segmentation in a CAD system.
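The line-detection side of this pipeline can be caricatured in a few lines: a line response at several scales, then an adaptive threshold. The toy image, the two orientations, the window sizes, and the mean-based threshold below are illustrative assumptions (real MSLD uses many orientations and a different thresholding scheme), and the tensor-voting reconnection step is omitted:

```python
def line_response(img, y, x, length, w=5):
    """Strongest (line mean - window mean) over horizontal/vertical lines
    of the given length centred at (y, x). Real MSLD uses ~12 orientations."""
    h, wid = len(img), len(img[0])
    half_w, half_l = w // 2, length // 2
    window = [img[j][i]
              for j in range(max(0, y - half_w), min(h, y + half_w + 1))
              for i in range(max(0, x - half_w), min(wid, x + half_w + 1))]
    win_mean = sum(window) / len(window)
    best = 0.0
    for dy, dx in ((0, 1), (1, 0)):  # horizontal, then vertical
        pts = [img[y + k * dy][x + k * dx]
               for k in range(-half_l, half_l + 1)
               if 0 <= y + k * dy < h and 0 <= x + k * dx < wid]
        best = max(best, sum(pts) / len(pts) - win_mean)
    return best

def msld_segment(img, scales=(3, 5)):
    """Average the line response over scales, then threshold adaptively at
    the image-wide mean response (a stand-in for the thesis's scheme)."""
    h, wid = len(img), len(img[0])
    resp = [[sum(line_response(img, y, x, s) for s in scales) / len(scales)
             for x in range(wid)] for y in range(h)]
    flat = [v for row in resp for v in row]
    thr = sum(flat) / len(flat)
    return [[1 if v > thr else 0 for v in row] for row in resp]

# Toy image: one bright horizontal "vessel" on a dark background.
image = [[0.0] * 9 for _ in range(9)]
for x in range(9):
    image[4][x] = 1.0
mask = msld_segment(image)
```

On this toy input, the mask recovers exactly the bright row; the thesis's contribution is what happens after this step, when tensor voting reconnects the fragments that such detectors leave on real small vessels.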
The segmentation was validated on a high-resolution fundus image database that includes diabetic retinopathy images of varying stages, using standard discrepancy as well as perceptual-based measures. When only the smallest vessels are considered, the improvement in the sensitivity rate for the database over the standard multi-scale line detection method is 6.47%. For the perceptual-based measure, the improvement is 7.8% over the basic method.
The second objective of the thesis was to implement a method for the characterization of isolated retinal areas as healthy or abnormal cases. Some of the original images, from which these patches are extracted, contain neovascularizations. The investigation of image features for characterizing vessels as healthy or abnormal constitutes an essential step toward developing a CAD system for the automation of DR screening. Given that the amount of data will significantly increase under CAD systems, the focus on this category of vessels can facilitate the referral of sight-threatening cases to early treatment. In addition to the challenges that small healthy vessels pose, neovessels demonstrate an even higher degree of complexity, as they form networks of convolved, twisted, looped thin vessels. The existing work is limited to the use of first-order characteristics extracted from the small segmented vessels, which limits the study of patterns. Our contribution is in using the tensor voting framework to isolate the retinal vascular junctions and, in turn, using those junctions as points of interest. Second, we exploited second-order statistics computed on the junction spatial distribution to characterize the vessels as healthy or neovascularizations. In fact, the second-order spatial statistics extracted from the junction distribution are combined with widely used features to improve the characterization sensitivity by 9.09% over the state of the art.
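A common second-order spatial statistic for point patterns such as junction locations is Ripley's K function; the simplified, edge-correction-free variant below is only an illustration of the idea (the thesis does not specify this exact estimator), showing how clustered junctions, as in neovascular tangles, score higher than spread-out ones:

```python
import math

def ripley_k(points, radius, area):
    """Naive Ripley's K estimate: pairs of points within `radius`,
    scaled by area / n^2. No edge correction (illustration only)."""
    n = len(points)
    pair_count = sum(
        1
        for i in range(n) for j in range(n)
        if i != j and math.dist(points[i], points[j]) <= radius
    )
    return area * pair_count / (n * n)

# Toy junction patterns in a unit-area region:
clustered = [(0.1, 0.1), (0.12, 0.11), (0.11, 0.13), (0.13, 0.1)]   # neovascular-like
spread = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.9, 0.9)]           # healthy-like
k_clustered = ripley_k(clustered, 0.1, 1.0)
k_spread = ripley_k(spread, 0.1, 1.0)
```

A classifier can then use such K values (over several radii), alongside the widely used first-order features, as the abstract describes.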
The developed method proved effective for the segmentation of the retinal vessels. Higher-order tensors, along with the implementation of tensor voting via steerable filtering, could be employed to further reduce the execution time and resolve the remaining challenges at vascular junctions. Moreover, the characterization could be advanced to the detection of proliferative retinopathy by extending the supervised learning to include non-proliferative diabetic retinopathy cases or other pathologies. Ultimately, the incorporation of the methods into CAD systems could facilitate screening for the effective reduction of vision-threatening diabetic retinopathy rates, or the early detection of other ocular pathologies.
A perceptual learning model to discover the hierarchical latent structure of image collections
Biology has been an unparalleled source of inspiration for the work of researchers in several scientific and engineering fields, including computer vision. The starting point of this thesis is the neurophysiological properties of the human early visual system, in particular the cortical mechanism that mediates learning by exploiting information about stimulus repetition. Repetition has long been considered a fundamental correlate of skill acquisition and memory formation in biological as well as computational learning models. However, recent studies have shown that biological neural networks have different ways of exploiting repetition in forming memory maps. The thesis focuses on a perceptual learning mechanism called repetition suppression, which exploits the temporal distribution of neural activations to drive an efficient neural allocation for a set of stimuli. It explores the neurophysiological hypothesis that repetition suppression serves as an unsupervised perceptual learning mechanism that can drive efficient memory formation by reducing the overall size of the stimuli representation while strengthening the responses of the most selective neurons. This interpretation of repetition differs from its traditional role in computational learning models, where it mainly serves to induce convergence and reach training stability, without being used to provide focus for the neural representations of the data.
The first part of the thesis introduces a novel computational model with repetition suppression, which forms an unsupervised competitive system termed CoRe, for Competitive Repetition-suppression learning. The model is applied to general problems in the fields of computational intelligence and machine learning. Particular emphasis is placed on validating the model as an effective tool for the unsupervised exploration of biomedical data. In particular, it is shown that the repetition suppression mechanism efficiently addresses the issues of automatically estimating the number of clusters within the data, as well as filtering noise and irrelevant input components in high-dimensional data, e.g. gene expression levels from DNA microarrays. The CoRe model produces relevance estimates for each covariate, which is useful, for instance, to discover the best discriminating biomarkers. The description of the model includes a theoretical analysis using Huber's robust statistics to show that the model is robust to outliers and noise in the data. The convergence properties of the model are also studied. It is shown that, besides its biological underpinning, the CoRe model has useful properties in terms of asymptotic behavior. By exploiting a kernel-based formulation of the CoRe learning error, a theoretically sound motivation is provided for the model's ability to avoid local minima of its loss function. To do this, a necessary and sufficient condition for global error minimization in vector quantization is generalized by extending it to distance metrics in generic Hilbert spaces. This leads to the derivation of a family of kernel-based algorithms that address the local minima issue of unsupervised vector quantization in a principled way. The experimental results show that the algorithm can achieve a consistent performance gain compared with state-of-the-art learning vector quantizers, while retaining a lower computational complexity (linear with respect to the dataset size).
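CoRe's actual update rule is not reproduced in the abstract. As a rough illustration of the competitive vector quantization setting it builds on, the sketch below implements plain winner-take-all learning with a frequency-sensitive damping term loosely inspired by repetition suppression; the damping scheme is an assumption for illustration, not the thesis's rule:

```python
import math
import random

def competitive_vq(data, n_units=2, epochs=20, lr=0.3, seed=1):
    """Winner-take-all vector quantization. Win counts damp each unit's
    learning rate (a frequency-sensitive stand-in for repetition
    suppression: frequently activated units respond less strongly)."""
    rng = random.Random(seed)
    units = [list(rng.choice(data)) for _ in range(n_units)]
    wins = [1] * n_units
    for _ in range(epochs):
        for x in data:
            # Nearest unit wins; biasing distance by win count keeps
            # all units in use (a "conscience" mechanism).
            k = min(range(n_units),
                    key=lambda u: wins[u] * math.dist(units[u], x))
            step = lr / wins[k]  # suppression: frequent winners move less
            units[k] = [c + step * (xi - c) for c, xi in zip(units[k], x)]
            wins[k] += 1
    return units

# Two well-separated toy clusters; each unit should specialize on one.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (1.0, 1.0), (0.9, 1.0), (1.0, 0.9)]
codebook = competitive_vq(data)
```

Each pass over the dataset costs O(n) distance evaluations per unit, consistent with the linear complexity the abstract claims for the family of algorithms.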
Bridging the gap between the low-level representation of the visual content and the underlying high-level semantics is a major research issue of current interest. The second part of the thesis focuses on this problem by introducing a hierarchical and multi-resolution approach to visual content understanding. On the spatial level, CoRe learning is used to pool together the local visual patches by organizing them into perceptually meaningful intermediate structures. On the semantic level, it provides an extension of the probabilistic Latent Semantic Analysis (pLSA) model that allows discovery and organization of the visual topics into a hierarchy of aspects. The proposed hierarchical pLSA model is shown to effectively address the unsupervised discovery of relevant visual classes from pictorial collections, at the same time learning to segment the image regions containing the discovered classes. Furthermore, by drawing on a recent pLSA-based image annotation system, the hierarchical pLSA model is extended to process and represent multi-modal collections comprising textual and visual data. The results of the experimental evaluation show that the proposed model learns to attach textual labels (available only at the level of the whole image) to the discovered image regions, while increasing the precision/recall performance with respect to the flat pLSA annotation model.
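For reference, the flat pLSA model that the hierarchical extension builds on can be fitted with a short EM loop over a document-word (or image-"visual word") count matrix. The sketch below is standard pLSA, not the thesis's hierarchical variant; the toy corpus and initialization are illustrative:

```python
import random

def plsa(counts, n_topics=2, iters=50, seed=0):
    """Flat pLSA via EM on a document-word count matrix.
    Returns P(z|d) per document and P(w|z) per topic."""
    rng = random.Random(seed)
    n_docs, n_words = len(counts), len(counts[0])

    def rand_dist(n):
        v = [rng.random() + 1e-3 for _ in range(n)]
        s = sum(v)
        return [x / s for x in v]

    p_z_d = [rand_dist(n_topics) for _ in range(n_docs)]   # P(z|d)
    p_w_z = [rand_dist(n_words) for _ in range(n_topics)]  # P(w|z)
    for _ in range(iters):
        new_zd = [[1e-12] * n_topics for _ in range(n_docs)]
        new_wz = [[1e-12] * n_words for _ in range(n_topics)]
        for d in range(n_docs):
            for w in range(n_words):
                if counts[d][w] == 0:
                    continue
                # E-step: posterior P(z | d, w) up to normalization.
                post = [p_z_d[d][z] * p_w_z[z][w] for z in range(n_topics)]
                s = sum(post) or 1.0
                # M-step accumulation, weighted by the count n(d, w).
                for z in range(n_topics):
                    r = counts[d][w] * post[z] / s
                    new_zd[d][z] += r
                    new_wz[z][w] += r
        p_z_d = [[v / sum(row) for v in row] for row in new_zd]
        p_w_z = [[v / sum(row) for v in row] for row in new_wz]
    return p_z_d, p_w_z

# Toy corpus: documents 0-1 use words 0-1, documents 2-3 use words 2-3.
counts = [[4, 3, 0, 0], [3, 4, 0, 0], [0, 0, 4, 3], [0, 0, 3, 4]]
p_z_d, p_w_z = plsa(counts)
```

On this separable toy corpus, EM assigns the two document groups to different dominant topics; the hierarchical model additionally organizes such topics into a tree of aspects.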
Stratégies efficaces en caractérisation des matériaux et calibration de modÚles mécaniques pour la conception virtuelle des tÎles métalliques
The mechanical design of sheet metal forming parts is becoming increasingly virtual,
reducing delays and manufacturing costs. Reliable numerical simulations can
also lead to optimized metallic parts using accurately calibrated advanced
constitutive models. Thus, the aim of this thesis is to improve the representation
of the mechanical behavior of the material in the numerical model, by
developing efficient and accurate methodologies to calibrate advanced constitutive
models.
A recent trend in material characterization is the use of a limited number of
heterogeneous mechanical tests, which provide more valuable data than classical
quasi-homogeneous tests. Yet, the design of the most suitable tests is
still an open question. To that end, an overview of heterogeneous mechanical
tests for metallic sheets is provided. However, no standards exist for such
tests, so specific metrics to analyze the achieved mechanical states are suggested
and applied to four tests. Results show that the use of various metrics
provides a good basis to qualitatively and quantitatively evaluate heterogeneous
mechanical tests.
Due to the development of full-field measurement techniques, it is possible
to use heterogeneous mechanical tests to characterize the behavior of materials.
However, no analytical solution exists relating the measured fields
to the material parameters. Inverse methodologies are required to calibrate
constitutive models using an optimization algorithm to find the best material
parameters. Most applications tend to use a gradient-based algorithm without
exploring other possibilities. The performance of gradient-based and gradient-free
algorithms in the calibration of a thermoelastoviscoplastic model is discussed
in terms of efficiency and robustness of the optimization process.
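The inverse calibration loop described above can be sketched on a one-parameter toy problem: fitting the Hollomon hardening law sigma = K * eps**n to synthetic stress-strain data, once with finite-difference gradient descent and once with a gradient-free random search. The law, the data, and the optimizer settings are illustrative assumptions, not the thesis's thermoelastoviscoplastic model:

```python
import random

EPS = [0.02 * i for i in range(1, 11)]       # plastic strain levels
TRUE_K, N = 500.0, 0.2                       # hardening modulus and exponent
STRESS = [TRUE_K * e ** N for e in EPS]      # synthetic "measured" stresses

def sse(k):
    """Cost: sum of squared stress residuals for candidate parameter k."""
    return sum((k * e ** N - s) ** 2 for e, s in zip(EPS, STRESS))

def calibrate_gradient(k=300.0, lr=0.1, iters=200, h=1e-6):
    """Gradient-based: steepest descent with a central finite difference."""
    for _ in range(iters):
        k -= lr * (sse(k + h) - sse(k - h)) / (2 * h)
    return k

def calibrate_random(lo=100.0, hi=900.0, iters=4000, seed=0):
    """Gradient-free: pure random search within parameter bounds."""
    rng = random.Random(seed)
    return min((rng.uniform(lo, hi) for _ in range(iters)), key=sse)
```

Both recover K close to 500 here, but the gradient-free search needs far more cost evaluations; on full finite-element simulations, that trade-off between efficiency and robustness is exactly what the thesis examines.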
Often, plane stress conditions are assumed in the calibration of constitutive
models. Nevertheless, it is still unclear whether these are acceptable when
dealing with large deformations. To further understand these limitations, the
calibration of constitutive models is compared using the virtual fields method
implemented in 2D and 3D frameworks. However, the 3D framework requires
volumetric information of the kinematic fields, which is experimentally difficult
to obtain. To address this constraint, an already existing volume reconstruction
method, named internal mesh generation, is further improved to take into
account strain gradients in the thickness. The uncertainty of the method is
quantified through virtual experiments and synthetic images.
Overall, the impact of this thesis is related to (i) the importance of establishing
standard metrics in the selection and design of heterogeneous mechanical
tests, and (ii) enhancing the calibration of advanced constitutive models from
a 2D to a 3D framework.