Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are neither experts in advanced solid mechanics nor in process-tailored materials was developed from the results of this project.
Sensitivity analysis for ReaxFF reparameterization using the Hilbert-Schmidt independence criterion
We apply a global sensitivity method, the Hilbert-Schmidt independence
criterion (HSIC), to the reparameterization of a Zn/S/H ReaxFF force field to
identify the most appropriate parameters for reparameterization. Parameter
selection remains a challenge in this context as high dimensional optimizations
are prone to overfitting and take a long time, but selecting too few parameters
leads to poor quality force fields. We show that the HSIC correctly and quickly
identifies the most sensitive parameters, and that optimizations done using a
small number of sensitive parameters outperform those done using a higher
dimensional reasonable-user parameter selection. Optimizations using only
sensitive parameters: 1) converge faster, 2) have loss values comparable to
those found with the naive selection, 3) have similar accuracy in validation
tests, and 4) do not suffer from problems of overfitting. We demonstrate that
an HSIC global sensitivity analysis is a cheap pre-processing step for
optimization, with both qualitative and quantitative benefits that can
substantially simplify and speed up ReaxFF reparameterizations.
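As a rough illustration of the kind of pre-processing step described above (not the authors' code), the sketch below scores each parameter of a sampled design against the training loss using an empirical HSIC estimate and ranks parameters by that score; the sample sizes, kernel choice, and toy loss are placeholder assumptions.

    import numpy as np

    def rbf_kernel(x, gamma=None):
        # Gaussian kernel matrix for a 1-D sample; median-heuristic bandwidth.
        d2 = (x[:, None] - x[None, :]) ** 2
        if gamma is None:
            pos = d2[d2 > 0]
            gamma = 1.0 / np.median(pos) if pos.size else 1.0
        return np.exp(-gamma * d2)

    def hsic(x, y):
        # Biased empirical HSIC estimate between two 1-D samples.
        n = len(x)
        K, L, H = rbf_kernel(x), rbf_kernel(y), np.eye(n) - np.ones((n, n)) / n
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

    # Hypothetical usage: rank force-field parameters by sensitivity to the loss.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(200, 10))                     # sampled parameter sets
    loss = (X[:, 0] - 0.3) ** 2 + 0.1 * X[:, 3] + 0.01 * rng.normal(size=200)
    scores = [hsic(X[:, j], loss) for j in range(X.shape[1])]
    ranking = np.argsort(scores)[::-1]                  # most sensitive first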
Bayesian networks for disease diagnosis: What are they, who has used them and how?
A Bayesian network (BN) is a probabilistic graph based on Bayes' theorem,
used to show dependencies or cause-and-effect relationships between variables.
BNs are widely applied in diagnostic processes since they allow medical
knowledge to be incorporated into the model while expressing uncertainty in
terms of probability. This systematic review presents the state of the art in
the applications of BNs in medicine in general and in the diagnosis and
prognosis of diseases in particular. Indexed articles from the last 40 years
were included. The studies generally used the typical measures of diagnostic
and prognostic accuracy: sensitivity, specificity, accuracy, precision, and the
area under the ROC curve. Overall, we found that disease diagnosis and
prognosis based on BNs can be successfully used to model complex medical
problems that require reasoning under conditions of uncertainty.
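In the simplest diagnostic case, a two-node network Disease -> Test reduces to Bayes' theorem. The sketch below works that case with made-up prevalence, sensitivity, and specificity values; these numbers are placeholders, not figures taken from the review.

    # Posterior probability of disease given a positive test result.
    def posterior_disease_given_positive(prevalence, sensitivity, specificity):
        # P(D | T+) = P(T+ | D) P(D) / [P(T+ | D) P(D) + P(T+ | ~D) P(~D)]
        p_pos_given_d = sensitivity
        p_pos_given_not_d = 1.0 - specificity
        num = p_pos_given_d * prevalence
        den = num + p_pos_given_not_d * (1.0 - prevalence)
        return num / den

    # Hypothetical numbers: a rare disease and a fairly accurate test.
    print(posterior_disease_given_positive(prevalence=0.01,
                                           sensitivity=0.95,
                                           specificity=0.90))  # ~0.088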
Likelihood Asymptotics in Nonregular Settings: A Review with Emphasis on the Likelihood Ratio
This paper reviews the most common situations where one or more regularity
conditions which underlie classical likelihood-based parametric inference fail.
We identify three main classes of problems: boundary problems, indeterminate
parameter problems -- which include non-identifiable parameters and singular
information matrices -- and change-point problems. The review focuses on the
large-sample properties of the likelihood ratio statistic. We emphasize
analytical solutions and acknowledge software implementations where available.
We furthermore summarize the tools available to derive the key results.
Other approaches to hypothesis testing and connections to
estimation are listed in the annotated bibliography of the Supplementary
Material.
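One standard boundary case of the kind the review covers, testing a scalar parameter constrained to be non-negative (for example a variance component) at the value zero, illustrates how the usual chi-squared limit fails; under the null, the likelihood ratio statistic converges to an equal mixture, as sketched below.

    W = 2\{\ell(\hat\theta) - \ell(\theta_0)\}
      \;\xrightarrow{d}\; \tfrac{1}{2}\chi^2_0 + \tfrac{1}{2}\chi^2_1,
    \qquad
    \Pr(W \le w) = \tfrac{1}{2} + \tfrac{1}{2}\Pr(\chi^2_1 \le w)

Here chi-squared with zero degrees of freedom denotes a point mass at zero, so using the naive one-degree-of-freedom critical value makes the test conservative in this setting.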
Qluster: An easy-to-implement generic workflow for robust clustering of health data
Exploring health data with clustering algorithms makes it possible to better describe the populations of interest by identifying the sub-profiles that compose them, thereby reinforcing medical knowledge, whether about a disease or a targeted real-life population. Nevertheless, in contrast to conventional biostatistical methods, for which numerous guidelines exist, the standardization of data science approaches in clinical research remains little discussed. This results in significant variability in how data science projects are executed, in terms of the algorithms used as well as the reliability and credibility of the designed approach. Favoring a parsimonious and judicious choice of algorithms and implementations at each stage, this article proposes Qluster, a practical workflow for performing clustering tasks. The workflow balances (1) genericity of application (e.g., usable on small or large datasets, on continuous, categorical, or mixed variables, and on high-dimensional data or not), (2) ease of implementation (few packages, few algorithms, few parameters), and (3) robustness (e.g., use of proven algorithms and robust packages, evaluation of cluster stability, and management of noise and multicollinearity). The workflow can be easily automated and routinely applied to a wide range of clustering projects. It can be useful both to data scientists with little experience in the field, by making data clustering easier and more robust, and to more experienced data scientists looking for a straightforward and reliable solution for routine preliminary data mining. A synthesis of the literature on data clustering and the scientific rationale supporting the proposed workflow are also provided. Finally, a detailed application of the workflow to a concrete use case is presented, along with a practical discussion for data scientists. An implementation on the Dataiku platform is available upon request to the authors.
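As a minimal sketch of one robustness ingredient named in the abstract, the evaluation of cluster stability, one might bootstrap the dataset and compare the resulting partitions against a reference clustering. This is not the authors' Qluster implementation; the data, number of clusters, and resampling settings below are arbitrary placeholders.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score
    from sklearn.preprocessing import StandardScaler

    def bootstrap_stability(X, k, n_boot=50, seed=0):
        # Mean Adjusted Rand Index between a reference partition and
        # partitions fitted on bootstrap resamples of the data.
        rng = np.random.default_rng(seed)
        X = StandardScaler().fit_transform(X)
        ref = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        scores = []
        for _ in range(n_boot):
            idx = rng.choice(len(X), size=len(X), replace=True)
            labels = KMeans(n_clusters=k, n_init=10).fit(X[idx]).predict(X)
            scores.append(adjusted_rand_score(ref, labels))
        return float(np.mean(scores))   # close to 1.0 => stable partition

    X = np.random.default_rng(1).normal(size=(300, 5))   # placeholder data
    print(bootstrap_stability(X, k=3))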
Towards a unified eco-evolutionary framework for fisheries management: Coupling advances in next-generation sequencing with species distribution modelling
High-throughput sequencing technologies and the subsequent large-scale
genomic datasets have flourished across fields of fundamental biological
sciences. The introduction of genomic resources into fisheries
management has been proposed from multiple angles, ranging from a more
accurate re-definition of the geographical limits of stocks and their
connectivity, to the identification of fine-scale stock structure linked
to locally adapted subpopulations, or even integration with
individual-based biophysical models to explore life-history strategies.
While these clearly enhance our perception of patterns at the spatial
scale, temporal depth (and consequently forecasting ability) may be
compromised as an analytical trade-off. Here, we present a framework to
reinforce our understanding of stock dynamics by also adding a temporal
point of view. We propose to integrate genomic information into temporal
projections of species distributions computed by Species Distribution
Models (SDMs). SDMs can project the current and future distribution
ranges of a given species from relevant environmental predictors. These
projections serve as tools to inform about range expansions and
contractions of fish stocks and to suggest either suitable locations or
local extirpations that may arise in the future. However, SDMs assume
that the whole population responds homogeneously to the range of
environmental conditions. Here, we conceptualize a framework that
combines a conventional Bayesian joint-SDM approach with the
incorporation of genomic data. We propose that introducing genomic
information at the basis of a joint-SDM will allow exploration of the
range of suitable habitats where stocks could thrive in the future as a
function of their current evolutionary potential.
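As a minimal sketch of the basic SDM ingredient described above, modelling occurrence probability from environmental predictors and projecting it onto a future scenario, one might write the following. This is not the authors' Bayesian joint-SDM, and all data, shifts, and predictor names are placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Columns: e.g. sea-surface temperature, salinity, depth (hypothetical predictors)
    env_now = rng.normal(size=(500, 3))
    occurrence = (env_now[:, 0] + 0.5 * env_now[:, 1]
                  + rng.normal(scale=0.5, size=500) > 0).astype(int)

    # Fit occurrence probability as a function of current environmental conditions.
    sdm = LogisticRegression().fit(env_now, occurrence)

    # "Future" scenario: e.g. warmer conditions shift the first predictor.
    env_future = env_now + np.array([1.0, 0.0, 0.0])
    suitability_future = sdm.predict_proba(env_future)[:, 1]  # projected suitability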
Scientific Yearbook of the Escola Superior de Tecnologia da Saúde de Lisboa - 2021
It is with great pleasure that we present the most recent edition (the 11th) of the Scientific Yearbook of the Escola Superior de Tecnologia da Saúde de Lisboa. As a higher education institution, we are committed to promoting and encouraging scientific research in all the areas of knowledge covered by our mission. This publication aims to disseminate all the scientific output produced by the teaching staff, researchers, students, and non-teaching staff of ESTeSL during 2021. The Yearbook thus reflects the hard and dedicated work of our community, which has committed itself to producing high-quality scientific content shared with society in the form of books, book chapters, articles published in national and international journals, abstracts of oral communications and posters, as well as the results of first- and second-cycle degree work. The content of this publication therefore covers a wide variety of topics, from fundamental themes to studies of practical application in specific health contexts, reflecting the plurality and diversity of areas that define ESTeSL and make it unique. We believe that scientific research is a fundamental axis for the development of society, which is why we encourage our students to become involved in research activities and evidence-based practice from the beginning of their studies at ESTeSL. This publication, the largest ever, is an example of the success of those efforts, and we are very proud to share the results and findings of our researchers with the scientific community and the general public. We hope this Yearbook inspires and motivates other students, health professionals, teachers, and other collaborators to continue exploring new ideas and contributing to the advancement of science and technology within the body of knowledge of the areas that make up ESTeSL. We thank everyone involved in producing this Yearbook and wish you an inspiring and enjoyable read.
A latent Dirichlet allocation method-based nowcasting approach for prediction of the silver price
Silver is a metal that offers significant value to both investors and companies. The purpose of this study is to estimate the price of silver. The estimation incorporates the frequency of Google Trends searches for words that affect the silver price, with the aim of obtaining a more accurate estimate. First, using the Latent Dirichlet Allocation (LDA) method, the keywords to be analyzed in Google Trends were collected from various articles on the Internet. Combining Google Trends data with the information obtained by LDA to predict the price of silver is the new approach taken in this study. No study has been found in the literature that adopts this approach to estimate the price of silver. The estimation was carried out with Random Forest Regression, Gaussian Process Regression, Support Vector Machine, Regression Trees, and Artificial Neural Network methods. In addition, ARIMA, one of the traditional methods widely used in time series analysis, was used to benchmark the accuracy of the methodology. The best MSE, 0.000227131 ± 0.0000235205, was obtained by the Regression Trees method. This score indicates that estimating the price of silver from Google Trends data for keywords selected with the LDA method is a valid technique.
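The two-stage pipeline described above might look roughly like the sketch below: LDA over collected articles to surface candidate keywords, then a regressor mapping Google Trends frequencies for those keywords to the silver price. The example texts, trend matrix, and prices are synthetic placeholders, not the study's data or exact settings.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.ensemble import RandomForestRegressor

    # Stage 1: LDA over a (tiny, made-up) article corpus to surface keywords.
    docs = ["silver demand rises with industrial production",
            "inflation and gold prices push precious metals higher",
            "mining supply constraints affect silver futures"]
    vec = CountVectorizer(stop_words="english")
    X_docs = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_docs)

    terms = np.array(vec.get_feature_names_out())
    keywords = {terms[i] for topic in lda.components_ for i in topic.argsort()[-5:]}

    # Stage 2: regress the silver price on hypothetical weekly Google Trends
    # frequencies for the selected keywords (rows = weeks, columns = keywords).
    rng = np.random.default_rng(0)
    trends = rng.uniform(0, 100, size=(104, len(keywords)))
    price = 20 + 0.02 * trends[:, 0] + rng.normal(scale=0.5, size=104)

    model = RandomForestRegressor(random_state=0).fit(trends[:-12], price[:-12])
    mse = np.mean((model.predict(trends[-12:]) - price[-12:]) ** 2)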
Evolutionary Computation in Action: Feature Selection for Deep Embedding Spaces of Gigapixel Pathology Images
One of the main obstacles of adopting digital pathology is the challenge of
efficient processing of hyperdimensional digitized biopsy samples, called whole
slide images (WSIs). Exploiting deep learning and introducing compact WSI
representations are urgently needed to accelerate image analysis and facilitate
the visualization and interpretability of pathology results in a post-pandemic
world. In this paper, we introduce a new evolutionary approach for WSI
representation based on large-scale multi-objective optimization (LSMOP) of
deep embeddings. We start with patch-based sampling to feed KimiaNet, a
histopathology-specialized deep network, and to extract a multitude of feature
vectors. Coarse multi-objective feature selection uses the reduced search space
strategy guided by the classification accuracy and the number of features. In
the second stage, the frequent features histogram (FFH), a novel WSI
representation, is constructed by multiple runs of coarse LSMOP. Fine
evolutionary feature selection is then applied to find a compact (short-length)
feature vector based on the FFH and contributes to a more robust deep-learning
approach to digital pathology supported by the stochastic power of evolutionary
algorithms. We validate the proposed schemes using The Cancer Genome Atlas
(TCGA) images in terms of WSI representation, classification accuracy, and
feature quality. Furthermore, a novel decision space for multicriteria decision
making in the LSMOP field is introduced. Finally, a patch-level visualization
approach is proposed to increase the interpretability of deep features. The
proposed evolutionary algorithm finds a very compact feature vector to
represent a WSI (almost 14,000 times smaller than the original feature vectors)
with 8% higher accuracy compared to the codes provided by the state-of-the-art
methods.
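As a toy illustration of the core idea, evolving a binary feature mask scored on classification accuracy and feature count, the sketch below uses synthetic data and a scalarized fitness in place of the paper's KimiaNet embeddings and large-scale multi-objective optimizer; it is not the authors' LSMOP/FFH pipeline.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=300, n_features=60, n_informative=8,
                               random_state=0)
    rng = np.random.default_rng(0)

    def fitness(mask):
        # Reward cross-validated accuracy, penalize the number of selected features.
        if mask.sum() == 0:
            return -1.0
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              X[:, mask.astype(bool)], y, cv=3).mean()
        return acc - 0.002 * mask.sum()

    pop = rng.integers(0, 2, size=(20, X.shape[1]))      # random binary masks
    for _ in range(15):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-5:]]           # keep the 5 best masks
        children = parents[rng.integers(0, 5, size=15)].copy()
        flips = rng.random(children.shape) < 0.05
        children[flips] ^= 1                             # bit-flip mutation
        pop = np.vstack([parents, children])

    scores = np.array([fitness(m) for m in pop])
    best = pop[scores.argmax()].astype(bool)
    print(best.sum(), "features kept, CV accuracy",
          round(cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, best], y, cv=3).mean(), 3))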