Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
To support the design of MPDSMs under fracture conditions, several new design tools were developed and refined: a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large qualitative and quantitative multi-scale datasets on the fracture behavior of FDM-processed materials, new experimental equipment, and a fast and simple g-code generator built on commercially-available software. The resulting design method and rules were experimentally validated through a series of case studies, involving both the design and the physical testing of the designs, at the end of the dissertation. Finally, a simple design guide was distilled from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
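The g-code generator is described only at a high level. As an illustrative sketch of what such a tool does (every function name and parameter here is a hypothetical stand-in, not the author's implementation), a minimal raster-layer generator for one FDM layer might look like:

```python
def raster_layer_gcode(width, height, bead_width, feed_rate=1800, z=0.2):
    """Generate G-code for one rectangular raster layer.

    Hypothetical sketch: a real slicer also computes extrusion
    volume, perimeters, and travel moves, all omitted here.
    """
    lines = [f"G1 Z{z:.2f} F{feed_rate}"]          # move to layer height
    n_traces = int(height / bead_width) + 1
    for i in range(n_traces):
        y = i * bead_width
        # Alternate direction each trace (boustrophedon path)
        x_start, x_end = (0.0, width) if i % 2 == 0 else (width, 0.0)
        lines.append(f"G1 X{x_start:.2f} Y{y:.2f} F{feed_rate}")
        lines.append(f"G1 X{x_end:.2f} Y{y:.2f} F{feed_rate}")
    return lines

layer = raster_layer_gcode(width=20.0, height=10.0, bead_width=0.5)
```

Designing the element layout then amounts to choosing the traces (direction, spacing, ordering) each layer emits, which is exactly the design freedom the MPDSM concept exploits.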
Sensitivity analysis for ReaxFF reparameterization using the Hilbert-Schmidt independence criterion
We apply a global sensitivity method, the Hilbert-Schmidt independence
criterion (HSIC), to the reparameterization of a Zn/S/H ReaxFF force field to
identify the most appropriate parameters for reparameterization. Parameter
selection remains a challenge in this context as high dimensional optimizations
are prone to overfitting and take a long time, but selecting too few parameters
leads to poor quality force fields. We show that the HSIC correctly and quickly
identifies the most sensitive parameters, and that optimizations done using a
small number of sensitive parameters outperform those done using a higher
dimensional reasonable-user parameter selection. Optimizations using only
sensitive parameters: 1) converge faster, 2) have loss values comparable to
those found with the naive selection, 3) have similar accuracy in validation
tests, and 4) do not suffer from problems of overfitting. We demonstrate that
an HSIC global sensitivity is a cheap optimization pre-processing step that has
both qualitative and quantitative benefits which can substantially simplify and
speed up ReaxFF reparameterizations.
Comment: author accepted manuscript
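As a rough illustration of this kind of pre-processing step (a minimal sketch using Gaussian kernels and the biased empirical HSIC estimator; the data and bandwidths are invented, not the paper's setup), HSIC between sampled parameter values and the resulting loss can be computed as:

```python
import numpy as np

def gaussian_gram(x, sigma):
    """Gram matrix of a Gaussian (RBF) kernel on 1-D samples."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma_x=1.0, sigma_y=1.0):
    """Biased empirical HSIC estimate between two 1-D samples."""
    n = len(x)
    K = gaussian_gram(np.asarray(x, float), sigma_x)
    L = gaussian_gram(np.asarray(y, float), sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
p = rng.normal(size=200)                          # sampled parameter values
loss_dep = p ** 2 + 0.1 * rng.normal(size=200)    # loss depends on p
loss_ind = rng.normal(size=200)                   # loss independent of p
```

A parameter whose variation drives the loss (here `loss_dep`) yields a much larger HSIC score than one whose variation is irrelevant, which is the ranking used to pick parameters for reparameterization.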
Reinforcement Learning-based User-centric Handover Decision-making in 5G Vehicular Networks
The advancement of 5G technologies and vehicular networks opens a new paradigm for Intelligent Transportation Systems (ITS), enabling safety and infotainment services in urban and highway scenarios. Connected vehicles are vital for massive data sharing and for supporting such services, so a stable connection is essential to transmit data across the network successfully. 5G introduces more bandwidth, stability, and reliability, but its shorter communication range leads to more frequent handovers and connection drops. Shifting from a base station-centric view to a user-centric view helps cope with the smaller communication range and ultra-density of 5G networks. In this thesis, we propose a series of strategies to improve connection stability through efficient handover decision-making. First, a modified probabilistic approach, M-FiVH, reduces 5G handovers and enhances network stability. Next, an adaptive learning approach employs Connectivity-oriented SARSA Reinforcement Learning (CO-SRL) for user-centric Virtual Cell (VC) management, enabling efficient handover (HO) decisions. Finally, a user-centric Factor-distinct SARSA Reinforcement Learning (FD-SRL) approach combines a time-series-oriented LSTM with adaptive SRL for VC and HO management, considering both historical and real-time data. The direction of vehicular movement, high mobility, network load, uncertain road traffic, and the signal strength from cellular transmission towers all vary over time and cannot always be predicted. Our proposed approaches maintain stable connections by selecting appropriately sized VCs and managing HOs so as to reduce their number. Realistic simulations showed that M-FiVH, CO-SRL, and FD-SRL each reduced the number of HOs and the average cumulative HO time.
We provide an analysis and comparison of several approaches and demonstrate that our proposed approaches perform better in terms of network connectivity.
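The SARSA family of algorithms underlying CO-SRL and FD-SRL can be illustrated on a toy handover problem (a minimal tabular sketch with invented states, rewards, and dynamics, not the thesis's actual VC/HO formulation):

```python
import random

def sarsa_handover(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=1):
    """Tabular SARSA on a toy handover problem.

    States: signal-quality levels 0 (poor) .. 3 (good).
    Actions: 0 = stay on the current cell, 1 = hand over.
    Handing over restores good signal but is wasteful when
    the current signal is already good.
    """
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(4) for a in range(2)}

    def step(s, a):
        if a == 1:                       # handover: fresh cell, good signal
            return 3, (1.0 if s <= 1 else -1.0)
        s2 = max(s - 1, 0) if rng.random() < 0.5 else s  # signal may degrade
        return s2, (1.0 if s >= 2 else -1.0)

    def policy(s):
        if rng.random() < eps:           # epsilon-greedy exploration
            return rng.randrange(2)
        return max((0, 1), key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s, a = rng.randrange(4), 0
        for _ in range(20):
            s2, r = step(s, a)
            a2 = policy(s2)              # on-policy (SARSA) update
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
            s, a = s2, a2
    return Q

Q = sarsa_handover()
```

After training, the learned values favor handing over when the signal is poor and staying when it is good, which is the qualitative behavior the thesis's HO managers aim for at much larger scale.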
Pretrained Embeddings for E-commerce Machine Learning: When it Fails and Why?
The use of pretrained embeddings has become widespread in modern e-commerce
machine learning (ML) systems. In practice, however, we have encountered
several key issues when using pretrained embeddings in a real-world production
system, many of which cannot be fully explained by current knowledge.
Unfortunately, we find that there is a lack of a thorough understanding of how
pretrained embeddings work, especially their intrinsic properties and
interactions with downstream tasks. Consequently, it becomes challenging to
make interactive and scalable decisions regarding the use of pretrained
embeddings in practice.
Our investigation leads to two significant discoveries about using pretrained
embeddings in e-commerce applications. Firstly, we find that the design of the
pretraining and downstream models, particularly how they encode and decode
information via embedding vectors, can have a profound impact. Secondly, we
establish a principled perspective of pretrained embeddings via the lens of
kernel analysis, which can be used to evaluate their predictability,
interactively and scalably. These findings help to address the practical
challenges we faced and offer valuable guidance for successful adoption of
pretrained embeddings in real-world production. Our conclusions are backed by
solid theoretical reasoning, benchmark experiments, and online testing.
Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics
Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts.
In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvement on extant methods. However, the runtime of the algorithm grows exponentially in the number of categories, and hence its use is limited.
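A direct, if naive, version of such an exact test (a sketch assuming Pearson's chi-square as the statistic; it also illustrates the exponential blow-up in the number of categories noted above) enumerates all count vectors:

```python
from math import factorial

def compositions(n, k):
    """Yield every count vector of length k summing to n."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def multinomial_pmf(counts, probs):
    """Probability of an exact count vector under a multinomial."""
    coef = factorial(sum(counts))
    for c in counts:
        coef //= factorial(c)
    p = float(coef)
    for c, q in zip(counts, probs):
        p *= q ** c
    return p

def chisq(counts, probs):
    """Pearson's chi-square statistic against expected counts n*q."""
    n = sum(counts)
    return sum((c - n * q) ** 2 / (n * q) for c, q in zip(counts, probs))

def exact_pvalue(observed, probs):
    """Exact p-value: total probability of every outcome whose test
    statistic is at least as extreme as the observed one."""
    t_obs = chisq(observed, probs)
    return sum(
        multinomial_pmf(c, probs)
        for c in compositions(sum(observed), len(observed))
        if chisq(c, probs) >= t_obs - 1e-12
    )
```

The number of compositions grows combinatorially with the category count, which is precisely why the exact approach is limited to few categories.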
In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration, and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R^2 of least squares regression.
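The pool-adjacent-violators step can be sketched as follows (a minimal implementation of the non-decreasing least-squares fit; the reliability-diagram usage shown is an illustrative assumption, not the thesis's exact estimator):

```python
def pav(y):
    """Pool-adjacent-violators: least-squares isotonic (non-decreasing)
    fit to the sequence y, returned at the same length."""
    blocks = []                       # each block holds [sum, count]
    for v in y:
        blocks.append([v, 1])
        # merge while the last block's mean drops below the previous one's
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

# Reliability-diagram flavor: sort outcomes by forecast value, then
# isotonic-fit the outcomes to get recalibrated event frequencies.
forecasts = [0.1, 0.4, 0.35, 0.8]
outcomes  = [0,   1,   0,    1]
order = sorted(range(len(forecasts)), key=forecasts.__getitem__)
calibrated = pav([outcomes[i] for i in order])
```

The pooled means are by construction non-decreasing in the forecast, which is the monotonicity that makes the resulting reliability diagrams stable.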
In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.
Deep Transfer Learning Applications in Intrusion Detection Systems: A Comprehensive Review
Globally, the external Internet is increasingly being connected to
contemporary industrial control systems. As a result, there is an immediate
need to protect these networks from a range of threats. The key infrastructure
of industrial activity can be protected from harm by using an intrusion
detection system (IDS), a preventive mechanism that recognizes new kinds of
dangerous threats and hostile activities. The most recent artificial
intelligence (AI) techniques used to create IDS in many kinds of industrial
control networks are examined in this study, with a particular emphasis on
IDS-based deep transfer learning (DTL). The latter can be seen as a type of
information fusion that merges and/or adapts knowledge from multiple domains to
enhance the performance of a target task, particularly when labeled data in the
target domain is scarce. Publications issued after 2015 were taken into
account and divided into three categories: DTL-only and IDS-only publications
inform the introduction and background, while DTL-based IDS papers form the
core of this review.
Reading this paper will give researchers a better grasp of the current state
of DTL approaches used in IDS across many different types of networks. Other
useful information is also covered, such as the datasets used, the type of DTL
employed, the pre-trained network, the IDS techniques, the evaluation metrics
(including accuracy/F-score and false alarm rate (FAR)), and the improvement
gained. The algorithms and methods used in several studies, which clearly
illustrate the principles of each DTL-based IDS subcategory, are also
presented to the reader.
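The common DTL-for-IDS pattern surveyed here, reusing a feature extractor trained on a data-rich source network and retraining only a small classifier head on scarce target-domain labels, can be sketched as follows (all weights and data are synthetic stand-ins, not any particular paper's model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a feature extractor pretrained on a large source
# network: a frozen projection whose weights are never updated.
W_frozen = rng.normal(size=(8, 4))

def features(x):
    return np.tanh(x @ W_frozen)          # frozen pretrained layers

def train_head(X, y, lr=0.5, steps=500):
    """Retrain only a logistic-regression head on scarce target labels."""
    F = features(X)
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
        g = p - y                          # gradient of the log-loss
        w -= lr * (F.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# Toy target-domain traffic whose labels are separable in feature space,
# so the retrained head alone suffices to classify it.
X = rng.normal(size=(200, 8))
hidden = rng.normal(size=4)
y = (features(X) @ hidden > 0).astype(float)
w, b = train_head(X, y)
acc = np.mean((features(X) @ w + b > 0) == (y == 1))
```

Freezing the extractor is the simplest DTL variant; the surveyed papers also cover fine-tuning some or all pretrained layers when more target data is available.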
Fair Assortment Planning
Many online platforms, ranging from online retail stores to social media
platforms, employ algorithms to optimize their offered assortment of items
(e.g., products and contents). These algorithms tend to prioritize the
platforms' short-term goals by solely featuring items with the highest
popularity or revenue. However, this practice can then lead to undesirable
outcomes for the rest of the items, making them leave the platform, and in turn
hurting the platform's long-term goals. Motivated by that, we introduce and
study a fair assortment planning problem, which requires any two items with
similar quality/merits to be offered similar outcomes. We show that the problem
can be formulated as a linear program (LP), called (FAIR), that optimizes over
the distribution of all feasible assortments. To find a near-optimal solution
to (FAIR), we propose a framework based on the Ellipsoid method, which requires
a polynomial-time separation oracle to the dual of the LP. We show that finding
an optimal separation oracle to the dual problem is an NP-complete problem, and
hence we propose a series of approximate separation oracles, which then result
in an approximation algorithm and a PTAS for the original Problem (FAIR). The
approximate separation oracles are designed by (i) showing the separation
oracle to the dual of the LP is equivalent to solving an infinite series of
parameterized knapsack problems, and (ii) taking advantage of the structure of
the parameterized knapsack problems. Finally, we conduct a case study using the
MovieLens dataset, which demonstrates the efficacy of our algorithms and
further sheds light on the price of fairness.
Comment: 86 pages, 7 figures
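The parameterized knapsack problems in step (i) ultimately rest on the classic 0/1 knapsack, which is solvable by dynamic programming when weights are integers (a generic sketch of that building block, not the paper's parameterized variant):

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming over capacities.
    Returns the maximum total value achievable within capacity,
    assuming non-negative integer weights."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities in reverse so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

A separation oracle built on such subproblems checks, for a candidate dual solution, whether some assortment violates a dual constraint; the structure of the parameterized instances is what makes the approximate oracles efficient.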
Study of myocardial reverse remodeling through proteomic analysis of the myocardium and pericardial fluid
Valve replacement remains as the standard therapeutic option for aortic
stenosis patients, aiming at abolishing pressure overload and triggering
myocardial reverse remodeling. However, despite the instant hemodynamic
benefit, not all patients show complete regression of myocardial hypertrophy,
being at higher risk for adverse outcomes, such as heart failure. The current
comprehension of the biological mechanisms underlying an incomplete reverse
remodeling is far from complete. Furthermore, definitive prognostic tools and
ancillary therapies to improve the outcome of the patients undergoing valve
replacement are missing. To help bridge these gaps, a combined myocardial
(phospho)proteomics and pericardial fluid proteomics approach was followed,
taking advantage of human biopsies and pericardial fluid collected during
surgery and whose origin anticipated a wealth of molecular information
contained therein.
From over 1800 and 750 proteins identified, respectively, in the myocardium
and in the pericardial fluid of aortic stenosis patients, a total of 90 dysregulated
proteins were detected. Gene annotation and pathway enrichment analyses,
together with discriminant analysis, are compatible with a scenario of increased
pro-hypertrophic gene expression and protein synthesis, defective ubiquitin-proteasome system activity, proclivity to cell death (potentially fed by
complement activity and other extrinsic factors, such as death receptor
activators), acute-phase response, immune system activation and fibrosis.
Specific validation of some targets through immunoblot techniques and
correlation with clinical data pointed to complement C3 β chain, Muscle Ring
Finger protein 1 (MuRF1) and the dual-specificity Tyr-phosphorylation
regulated kinase 1A (DYRK1A) as potential markers of an incomplete
response. In addition, kinase prediction from phosphoproteome data suggests
that the modulation of casein kinase 2, the family of IκB kinases, glycogen
synthase kinase 3 and DYRK1A may help improve the outcome of patients
undergoing valve replacement. In particular, functional studies with DYRK1A+/-
cardiomyocytes show that this kinase may be an important target for treating
cardiac dysfunction, given that the mutant cells presented a different response
to stretch and a reduced ability to develop force (active tension).
This study opens many avenues in post-aortic valve replacement reverse
remodeling research. In the future, gain-of-function and/or loss-of-function
studies with isolated cardiomyocytes or with animal models of aortic banding-debanding will help disclose the efficacy of targeting the surrogate therapeutic
targets. Besides, clinical studies in larger cohorts will bring definitive proof of
complement C3, MuRF1 and DYRK1A prognostic value.
Image classification over unknown and anomalous domains
A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting.
Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each.
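The idea of using differing sample statistics for each visual mode can be sketched as follows (a simplified stand-in that assumes domain labels are available, unlike the latent-domain setting the thesis also studies):

```python
import numpy as np

def domain_norm(x, domains, eps=1e-5):
    """Normalize each sample with the statistics of its own domain
    instead of shared batch statistics."""
    out = np.empty_like(x, dtype=float)
    for d in np.unique(domains):
        mask = domains == d
        mu = x[mask].mean(axis=0)
        var = x[mask].var(axis=0)
        out[mask] = (x[mask] - mu) / np.sqrt(var + eps)
    return out

rng = np.random.default_rng(0)
photos   = rng.normal(0.0, 1.0, size=(64, 3))   # one visual mode
cartoons = rng.normal(5.0, 0.2, size=(64, 3))   # very different statistics
x = np.vstack([photos, cartoons])
d = np.array([0] * 64 + [1] * 64)
z = domain_norm(x, d)
```

With shared batch statistics, the cartoon features would dominate the batch mean and both modes would end up poorly standardized; per-domain statistics keep each mode centered and unit-scaled.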
While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so.
In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and anomaly detection in particular. While recent ideas have focused on developing self-supervised solutions for the one-class setting, in this thesis new methods based on transfer learning are formulated. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems that have recently been proposed in the anomaly detection literature, in particular challenging semantic detection tasks.
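A transfer-based perspective on one-class anomaly detection can be as simple as scoring test samples by their distance to the normal training set in a pretrained feature space (an illustrative sketch; `feat` here is a trivial stand-in for a frozen deep feature extractor):

```python
import numpy as np

def knn_anomaly_scores(train_feats, test_feats, k=3):
    """Anomaly score = mean distance to the k nearest one-class
    training samples in (pretrained) feature space."""
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=2)
    d.sort(axis=1)
    return d[:, :k].mean(axis=1)

rng = np.random.default_rng(0)
feat = lambda x: np.tanh(x)          # stand-in for a frozen extractor
normal_train = rng.normal(0, 1, size=(100, 8))   # one-class training data
normal_test  = rng.normal(0, 1, size=(20, 8))
anomalies    = rng.normal(4, 1, size=(20, 8))    # unseen anomalous mode
s_norm = knn_anomaly_scores(feat(normal_train), feat(normal_test))
s_anom = knn_anomaly_scores(feat(normal_train), feat(anomalies))
```

Samples from the anomalous mode land far from the normal training set in feature space and receive higher scores, without any anomaly ever being seen during training.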