Simulation of metal powder packing behaviour in laser-based powder bed fusion
Laser-based powder bed fusion (L-PBF) is an additive manufacturing method in which metal powder is fused into solid parts, layer by layer. L-PBF shows high promise for the manufacture of functional tungsten parts, but developing tungsten powder feedstock for L-PBF processing is demanding and expensive. Computer simulation is therefore explored as a possible tool for tungsten powder feedstock development at EOS Finland Oy, in collaboration with whom this thesis was carried out.
The aim of this thesis was to develop a simulation model of the recoating process of an EOS M 290 L-PBF system, as well as a validation method for the simulation. The validated simulation model can be used to evaluate the applicability of the chosen simulation software (FLOW-3D DEM) to powder material development, and possibly to serve as a platform for future work with tungsten powder. To reduce complexity and uncertainty, the irregular tungsten powder was not yet simulated; a well-characterized, spherical EOS IN718 powder feedstock was used instead.
The validation experiment is based on building a low, enclosed wall using the M 290 L-PBF system. Recoated powder is trapped inside as the enclosure is being built, making it possible to remove the sampled powder from a known volume. This enables measuring the powder packing density (PD) of the powder bed. The experiment was repeated five times and some sources of error were also quantified. Average PD was found to be 52 % with a standard deviation of 0.2 %.
The simulation was modelled after the IN718 powder and corresponding process used in the M 290 system. Material-related input values were found by dynamic image analysis, pycnometry, rheometry, and from literature. PD was measured with six different methods, and the method considered as most analogous to the practical validation experiment yielded a PD of 52 %. Various particle behavior phenomena were also observed and analyzed.
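The packing-density measurement behind these numbers reduces to a simple mass-over-volume calculation; the following is a minimal sketch, assuming the sampled powder mass, the enclosure volume, and the alloy's skeletal density (as given by pycnometry) are known. All numeric values below are hypothetical, chosen only to land near the reported 52 %:

```python
def packing_density(powder_mass_g, cavity_volume_cm3, skeletal_density_g_cm3):
    """Fraction of the cavity volume occupied by solid metal.

    The trapped powder's solid volume (mass / skeletal density) is divided
    by the known enclosure volume measured in the validation experiment.
    """
    solid_volume_cm3 = powder_mass_g / skeletal_density_g_cm3
    return solid_volume_cm3 / cavity_volume_cm3

# Hypothetical numbers: 3.42 g of IN718 powder (skeletal density
# ~8.19 g/cm^3) recovered from a 0.80 cm^3 enclosure.
pd = packing_density(3.42, 0.80, 8.19)  # ~0.52, i.e. roughly 52 %
```

The skeletal density here would come from the pycnometry measurement mentioned above; the cavity volume is fixed by the geometry of the built enclosure.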
Many of the powder bed characterization methods found in the literature were not applicable to L-PBF processing or were not representative of the simulated conditions. Many simulation studies were also found to use no validation, or to use a validation method not based on the investigated phenomena. The validation method developed in this thesis accurately represents the simulated conditions and was found to produce reliable and repeatable results. The simulation model was parametrized with values acquired from practical experiments or from the literature and closely matched the validation experiment; it can therefore be considered a faithful representation of the powder recoating process of an EOS M 290. The model can be used as a platform for future development of tungsten powder simulation.
Virtual Stiffness: A Novel Biomechanical Approach to Estimate Limb Stiffness of a Multi-Muscle and Multi-Joint System
In recent years, different groups have developed algorithms to control the stiffness of a robotic device through the electromyographic activity collected from a human operator. However, the approaches proposed so far require an initial calibration, rely on a complex subject-specific muscle model, or consider the activity of only a few pairs of antagonist muscles. This study describes and tests an approach based on a biomechanical model to estimate the limb stiffness of a multi-joint, multi-muscle system from muscle activations. The "virtual stiffness" method approximates the generated stiffness as the stiffness due to the component of the muscle-activation vector that does not generate any endpoint force. This component is calculated by projecting the vector of muscle activations, estimated from the electromyographic signals, onto the null space of the linear mapping of muscle activations onto the endpoint force. The proposed method was tested using an upper-limb model made of two joints and six Hill-type muscles and data collected during an isometric force-generation task performed with the upper limb. The null-space projection of the muscle-activation vector approximated the major axis of the stiffness ellipse or ellipsoid. The model provides a good approximation of the voluntary stiffening performed by participants and could be directly implemented in wearable myoelectrically controlled devices that estimate, in real time, the endpoint forces or endpoint movement from the mapping between muscle activation and force, without any additional calibration.
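The null-space projection at the core of the method can be sketched with elementary linear algebra. This is a generic implementation of the projection step, not the authors' code; the 2 × 6 force map `A` and activation vector `a` below are hypothetical placeholders for a two-joint, six-muscle model:

```python
def matmul(X, Y):
    """Naive matrix product (matrices as lists of rows)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def inv2(M):
    """Inverse of a 2 x 2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def null_space_component(A, a):
    """Component of activation vector a that produces no endpoint force.

    Uses the right pseudo-inverse A^T (A A^T)^-1 of the 2 x n force map A:
    the force-producing part of a is pinv(A) @ A @ a, and the remainder
    is the co-contraction component driving the 'virtual stiffness'.
    """
    pinv = matmul(transpose(A), inv2(matmul(A, transpose(A))))
    force_part = matmul(matmul(pinv, A), [[x] for x in a])
    return [a[i] - force_part[i][0] for i in range(len(a))]

# Hypothetical 2-joint, 6-muscle force map and activation estimate:
A = [[1, 0, -1, 0, 0.5, -0.5],
     [0, 1, 0, -1, 0.5, 0.5]]
a = [0.6, 0.4, 0.6, 0.4, 0.2, 0.2]
a_null = null_space_component(A, a)  # maps to (numerically) zero endpoint force
```

By construction `A @ a_null` is zero, so this component co-contracts the muscles, changing limb stiffness without altering the endpoint force.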
Explainable Disparity Compensation for Efficient Fair Ranking
Ranking functions that are used in decision systems often produce disparate
results for different populations because of bias in the underlying data.
Addressing, and compensating for, these disparate outcomes is a critical
problem for fair decision-making. Recent compensatory measures have mostly
focused on opaque transformations of the ranking functions to satisfy fairness
guarantees or on the use of quotas or set-asides to guarantee a minimum number
of positive outcomes to members of underrepresented groups. In this paper we
propose easily explainable data-driven compensatory measures for ranking
functions. Our measures rely on the generation of bonus points given to members
of underrepresented groups to address disparity in the ranking function. The
bonus points can be set in advance, and can be combined, allowing for
considering the intersections of representations and giving better transparency
to stakeholders. We propose efficient sampling-based algorithms to calculate
the number of bonus points to minimize disparity. We validate our algorithms
using real-world school admissions and recidivism datasets, and compare our
results with those of existing fair ranking algorithms.
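A minimal standard-library sketch of the sampling idea follows. The disparity measure (the gap in top-k selection rates between the group that receives the bonus and everyone else), the uniform candidate sampling, and all names are assumptions for illustration, not the paper's actual algorithm:

```python
import random

def disparity(scores, group, bonus, k):
    """Gap in top-k selection rates between group 1 (receives bonus) and group 0."""
    adj = [s + bonus * g for s, g in zip(scores, group)]
    top = set(sorted(range(len(adj)), key=adj.__getitem__, reverse=True)[:k])
    rates = []
    for g in (0, 1):
        members = [i for i, gi in enumerate(group) if gi == g]
        rates.append(sum(i in top for i in members) / len(members))
    return abs(rates[0] - rates[1])

def sample_bonus(scores, group, k, trials=200, max_bonus=1.0, seed=0):
    """Sample candidate bonus values and keep the one minimizing disparity."""
    rng = random.Random(seed)
    candidates = [0.0] + [rng.uniform(0.0, max_bonus) for _ in range(trials)]
    return min(candidates, key=lambda b: disparity(scores, group, b, k))
```

Because the chosen bonus is a single, fixed number added to each member of the underrepresented group, the resulting compensation is directly explainable to stakeholders, in contrast to an opaque transformation of the ranking function.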
Using machine learning to predict pathogenicity of genomic variants throughout the human genome
More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many possible ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. It is necessary to investigate all of these processes in order to evaluate which variant may be causal for the deleterious phenotype. Variant effect scores are a great help in this regard. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity.
Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation and ultimately deployment of a selected model via genome-wide scoring of genomic variants.
The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that is scoring SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep-neural network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data that is based on variants selected by allele frequency.
In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
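To make the train-then-score steps concrete, here is a toy stand-in: a tiny hand-rolled logistic regression over two illustrative annotation features (conservation, allele frequency). CADD's real model, features, and training data are far richer; everything below is an assumption for illustration only:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=200):
    """SGD training of a logistic regression: the 'model training' step."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, xi)))))
            g = p - yi  # gradient of the log loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def score(w, b, x):
    """The 'genome-wide scoring' step: pathogenicity score in [0, 1]."""
    return 1 / (1 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, x)))))

# Toy annotated variants: [conservation, allele frequency], label 1 = proxy-pathogenic.
X = [[0.9, 0.01], [0.8, 0.02], [0.1, 0.30], [0.2, 0.40]] * 10
y = [1, 1, 0, 0] * 10
w, b = train_logreg(X, y)
```

In the actual workflow, the trained model would then be applied to every possible SNV and InDel in the genome to produce precomputed score tracks.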
Novel 129Xe Magnetic Resonance Imaging and Spectroscopy Measurements of Pulmonary Gas-Exchange
Gas-exchange is the primary function of the lungs and involves removing carbon dioxide from the body and exchanging it within the alveoli for inhaled oxygen. Several different pulmonary, cardiac and cardiovascular abnormalities have negative effects on pulmonary gas-exchange. Unfortunately, clinical tests do not always pinpoint the problem; sensitive and specific measurements are needed to probe the individual components participating in gas-exchange for a better understanding of pathophysiology, disease progression and response to therapy.
In vivo Xenon-129 gas-exchange magnetic resonance imaging (129Xe gas-exchange MRI) has the potential to overcome these challenges. When participants inhale hyperpolarized 129Xe gas, it has different MR spectral properties as a gas, as it diffuses through the alveolar membrane and as it binds to red-blood-cells. 129Xe MR spectroscopy and imaging provides a way to tease out the different anatomic components of gas-exchange simultaneously and provides spatial information about where abnormalities may occur.
In this thesis, I developed and applied 129Xe MR spectroscopy and imaging to measure gas-exchange in the lungs alongside other clinical and imaging measurements. I measured 129Xe gas-exchange in asymptomatic congenital heart disease and in prospective, controlled studies of long-COVID. I also developed mathematical tools to model 129Xe MR signals during acquisition and reconstruction. The insights gained from my work underscore the potential of 129Xe gas-exchange MRI biomarkers for a better understanding of cardiopulmonary disease. My work also provides a way to generate a deeper imaging and physiologic understanding of gas-exchange in vivo in healthy participants and patients with chronic lung and heart disease.
RAPID: Enabling Fast Online Policy Learning in Dynamic Public Cloud Environments
Resource sharing between multiple workloads has become a prominent practice
among cloud service providers, motivated by demand for improved resource
utilization and reduced cost of ownership. Effective resource sharing, however,
remains an open challenge due to the adverse effects that resource contention
can have on high-priority, user-facing workloads with strict Quality of Service
(QoS) requirements. Although recent approaches have demonstrated promising
results, those works remain largely impractical in public cloud environments
since workloads are not known in advance and may only run for a brief period,
thus prohibiting offline learning and significantly hindering online learning.
In this paper, we propose RAPID, a novel framework for fast, fully-online
resource allocation policy learning in highly dynamic operating environments.
RAPID leverages lightweight QoS predictions, enabled by
domain-knowledge-inspired techniques for sample efficiency and bias reduction,
to decouple control from conventional feedback sources and guide policy
learning at a rate orders of magnitude faster than prior work. Evaluation on a
real-world server platform with representative cloud workloads confirms that
RAPID can learn stable resource allocation policies in minutes, as compared
with hours in prior state-of-the-art, while improving QoS by 9.0x and
increasing best-effort workload performance by 19-43%.
Quantifying and Explaining Machine Learning Uncertainty in Predictive Process Monitoring: An Operations Research Perspective
This paper introduces a comprehensive, multi-stage machine learning
methodology that effectively integrates information systems and artificial
intelligence to enhance decision-making processes within the domain of
operations research. The proposed framework adeptly addresses common
limitations of existing solutions, such as the neglect of data-driven
estimation for vital production parameters, exclusive generation of point
forecasts without considering model uncertainty, and the lack of explanations
regarding the sources of such uncertainty. Our approach employs Quantile
Regression Forests for generating interval predictions, alongside both local
and global variants of SHapley Additive Explanations for the examined
predictive process monitoring problem. The practical applicability of the
proposed methodology is substantiated through a real-world production planning
case study, emphasizing the potential of prescriptive analytics in refining
decision-making procedures. This paper accentuates the imperative of addressing
these challenges to fully harness the extensive and rich data resources
accessible for well-informed decision-making.
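As a standard-library illustration of interval rather than point prediction, the sketch below substitutes a nearest-neighbour empirical-quantile estimator for Quantile Regression Forests (the real method aggregates per-leaf response distributions across a forest) and omits the SHAP explanation step entirely; the synthetic data and all names are assumptions:

```python
import random

def interval_prediction(x_train, y_train, x, k=25, lo=0.1, hi=0.9):
    """(lo, hi) empirical quantiles of the targets of the k nearest neighbours.

    A crude stand-in for Quantile Regression Forests: instead of a single
    point forecast, return an interval reflecting the local spread of the
    data, i.e. the model uncertainty around the prediction.
    """
    nearest = sorted(range(len(x_train)), key=lambda i: abs(x_train[i] - x))[:k]
    ys = sorted(y_train[i] for i in nearest)
    pick = lambda q: ys[min(len(ys) - 1, int(q * len(ys)))]
    return pick(lo), pick(hi)

# Synthetic 'production parameter' data: y = 2x plus noise.
rng = random.Random(0)
x_train = [rng.random() for _ in range(200)]
y_train = [2 * xi + rng.gauss(0, 0.1) for xi in x_train]
low, high = interval_prediction(x_train, y_train, 0.5)  # interval around ~1.0
```

In a Quantile Regression Forest, the training targets falling in the same leaves as the query point play the role that the neighbour set plays here.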
Antenna Arrangement in UWB Helmet Brain Applicators for Deep Microwave Hyperthermia
Deep microwave hyperthermia applicators are typically designed as narrow-band conformal antenna arrays with equally spaced elements, arranged in one or more rings. This solution, while adequate for most body regions, might be sub-optimal for brain treatments. The introduction of ultra-wide-band semi-spherical applicators, with elements arranged around the head and not necessarily aligned, has the potential to enhance the selective thermal dose delivery in this challenging anatomical region. However, the additional degrees of freedom in this design make the problem non-trivial. We address this by treating the antenna arrangement as a global SAR-based optimization process aiming at maximizing target coverage and hot-spot suppression in a given patient. To enable the quick evaluation of a certain arrangement, we propose a novel E-field interpolation technique which calculates the field generated by an antenna at any location around the scalp from a limited number of initial simulations. We evaluate the approximation error against full array simulations. We demonstrate the design technique in the optimization of a helmet applicator for the treatment of a medulloblastoma in a paediatric patient. The optimized applicator achieves 0.3 °C higher T90 than a conventional ring applicator with the same number of elements.
Deciphering multiple sclerosis disability with deep learning attention maps on clinical MRI
Deep learning; Disability; Structural MRI

The application of convolutional neural networks (CNNs) to MRI data has emerged as a promising approach to achieving unprecedented levels of accuracy when predicting the course of neurological conditions, including multiple sclerosis, by means of extracting image features not detectable through conventional methods. Additionally, the study of CNN-derived attention maps, which indicate the most relevant anatomical features for CNN-based decisions, has the potential to uncover key disease mechanisms leading to disability accumulation.
From a cohort of patients prospectively followed up after a first demyelinating attack, we selected those with T1-weighted and T2-FLAIR brain MRI sequences available for image analysis and a clinical assessment performed within the following six months (N = 319). Patients were divided into two groups according to Expanded Disability Status Scale (EDSS) score: ≥3.0 and <3.0. A 3D-CNN model predicted the class using whole-brain MRI scans as input. A comparison with a logistic regression (LR) model using volumetric measurements as explanatory variables and a validation of the CNN model on an independent dataset with similar characteristics (N = 440) were also performed. The layer-wise relevance propagation method was used to obtain individual attention maps.
The CNN model achieved a mean accuracy of 79% and proved to be superior to the equivalent LR-model (77%). Additionally, the model was successfully validated in the independent external cohort without any re-training (accuracy = 71%). Attention-map analyses revealed the predominant role of frontotemporal cortex and cerebellum for CNN decisions, suggesting that the mechanisms leading to disability accrual exceed the mere presence of brain lesions or atrophy and probably involve how damage is distributed in the central nervous system.

MS PATHS is funded by Biogen. This study has been possible thanks to a Junior Leader La Caixa Fellowship awarded to C. Tur (fellowship code LCF/BQ/PI20/11760008) by "la Caixa" Foundation (ID 100010434). The salaries of C. Tur and Ll. Coll are covered by this award.