Proceedings of SIRM 2023 - The 15th European Conference on Rotordynamics
It was our great honor and pleasure to host the SIRM Conference in Darmstadt for the third time, after 2003 and 2011. Rotordynamics covers a huge variety of applications and challenges, all of which are in the scope of this conference. The conference was opened with a keynote lecture by Rainer Nordmann, one of the three founders of SIRM “Schwingungen in rotierenden Maschinen”. In total, 53 papers passed our strict review process and were presented, which impressively shows that rotordynamics is as relevant as ever. These contributions cover a very wide spectrum of session topics: fluid bearings and seals; air foil bearings; magnetic bearings; rotor blade interaction; rotor fluid interactions; unbalance and balancing; vibrations in turbomachines; vibration control; instability; electrical machines; monitoring, identification and diagnosis; advanced numerical tools and nonlinearities; as well as general rotordynamics. The international character of the conference has been significantly enhanced by the Scientific Board since the 14th SIRM, resulting, on the one hand, in an expanded Scientific Committee that now consists of 31 members from 13 European countries and, on the other hand, in the new name “European Conference on Rotordynamics”. This new international profile was also
emphasized by the participants of the 15th SIRM, who came from 17 countries on three continents. The conference featured a lively discussion and dialogue between industry and academia: roughly one third of the papers were presented by industry and two thirds by academia, an excellent basis for the bidirectional transfer that we call xchange at the Technical University of Darmstadt. We also want to give our special thanks to the eleven industry sponsors for their great support of the conference. On behalf of the Darmstadt Local Committee, I welcome you to read the papers of the 15th SIRM, which give further insight into the topics and presentations.
Development of Quantitative Bone SPECT Analysis Methods for Metastatic Bone Disease
Prostate cancer is one of the most prevalent cancers in males in the United States, and bone is a common site of metastasis in metastatic prostate cancer. However, bone metastases are often considered “unmeasurable” using standard anatomic imaging and the RECIST 1.1 criteria. As a result, response to therapy is often suboptimally evaluated by visual interpretation of planar bone scintigraphy, with response criteria based on the presence or absence of new lesions. With the commercial availability of quantitative single-photon emission computed tomography (SPECT) methods, it is now feasible to establish quantitative metrics of therapy response in skeletal metastases. Quantitative bone SPECT (QBSPECT) may estimate bone lesion uptake, volume, and the number of lesions more accurately than planar imaging. However, the accuracy of activity quantification in QBSPECT relies heavily on the precision with which bone metastases and bone structures are delineated. In this research, we aim to develop automated image segmentation methods for fast and accurate delineation of bone and bone metastases in QBSPECT. To begin, we developed registration methods to generate a dataset of realistic and anatomically varying computerized phantoms for use in QBSPECT simulations. Using these simulations, we developed supervised, computer-automated segmentation methods to minimize intra- and inter-observer variation in delineating bone metastases. This project provides accurate segmentation techniques for QBSPECT and paves the way for the development of QBSPECT methods for assessing the therapy response of bone metastases.
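The thesis's segmentation methods are not reproduced here; as a hedged toy sketch of how delineation accuracy is typically quantified in this setting, the following generates a simple synthetic phantom with one circular "lesion", segments it by naive thresholding, and scores the result with the Dice overlap coefficient (phantom size, noise level, and threshold are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 2-D phantom: background plus a circular "lesion" with higher uptake
size = 64
yy, xx = np.mgrid[:size, :size]
truth = ((yy - 32) ** 2 + (xx - 32) ** 2) < 8 ** 2   # ground-truth lesion mask
image = 1.0 * truth + 0.2 * rng.standard_normal((size, size))

seg = image > 0.5                                    # naive threshold segmentation

# Dice coefficient: standard overlap metric for delineation accuracy
dice = 2.0 * np.logical_and(seg, truth).sum() / (seg.sum() + truth.sum())
print("Dice:", round(dice, 3))
```

A learned segmentation method would replace the threshold; the Dice score is how one would compare it, per lesion, against a reference delineation.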
Multi-Fidelity Bayesian Optimization for Efficient Materials Design
Materials design is a process of identifying compositions and structures to achieve
desirable properties. Usually, costly experiments or simulations are required to evaluate
the objective function for a design solution. Therefore, one of the major challenges is how
to reduce the cost associated with sampling and evaluating the objective. Bayesian
optimization is a global optimization method that can increase sampling
efficiency with the guidance of a surrogate of the objective. In this work, a new
acquisition function, called consequential improvement, is proposed for simultaneous
selection of the solution and fidelity level of sampling. With the new acquisition function,
the subsequent iteration is considered for potential selections at low-fidelity levels, because
evaluations at the highest fidelity level are usually required to provide reliable objective
values. To reduce the number of samples required to train the surrogate for molecular
design, a new recursive hierarchical similarity metric is proposed. The new similarity
metric quantifies the differences between molecules at multiple levels of hierarchy
simultaneously based on the connections between multiscale descriptions of the structures.
The new methodologies are demonstrated with simulation-based design of materials and
structures based on fully atomistic and coarse-grained molecular dynamics simulations,
and finite-element analysis. The new similarity metric is demonstrated in the design of
tactile sensors and biodegradable oligomers. The multi-fidelity Bayesian optimization
method is also illustrated with the multiscale design of a piezoelectric transducer by
concurrently optimizing the atomic composition of the aluminum titanium nitride ceramic
and the device’s porous microstructure at the micrometer scale.
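The consequential-improvement acquisition and fidelity selection described above are specific to this work and are not reproduced here. As a hedged illustration of the single-fidelity Bayesian-optimization loop the abstract builds on, here is a minimal numpy sketch with a Gaussian-process surrogate and the standard expected-improvement acquisition (kernel length scale, test objective, and budget are arbitrary assumptions):

```python
import numpy as np
from math import erf, sqrt, pi

def rbf_kernel(A, B, ls=0.3):
    # squared-exponential kernel on 1-D inputs
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # standard GP regression equations (zero prior mean, unit signal variance)
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.ones(len(Xs)) - np.sum(Ks * (Kinv @ Ks), axis=0)
    return mu, np.sqrt(var.clip(min=1e-12))

def expected_improvement(mu, sigma, best):
    # EI for minimization: expected improvement over the incumbent best
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (best - mu) * Phi + sigma * phi

def objective(x):
    # cheap stand-in for an expensive simulation
    return np.sin(3.0 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, 3)          # small initial design
y = objective(X)
grid = np.linspace(0.0, 2.0, 200)
for _ in range(10):                   # BO loop: refit surrogate, maximize EI
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))
print("best x, best f:", X[np.argmin(y)], y.min())
```

A multi-fidelity variant would additionally choose, at each iteration, which fidelity level to evaluate, trading evaluation cost against information gained; that is the role of the acquisition function proposed in the thesis.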
Ti-6Al-4V β Phase Selective Dissolution: In Vitro Mechanism and Prediction
Retrieval studies document Ti-6Al-4V β phase dissolution within total hip replacement systems. A gap persists in our mechanistic understanding, and existing standards fail to reproduce this damage. This thesis aims to (1) elucidate the Ti-6Al-4V selective dissolution mechanism as a function of solution chemistry, electrode potential, and temperature; (2) investigate the effects of adverse electrochemical conditions on additively manufactured (AM) titanium alloys; and (3) apply machine learning to predict the Ti-6Al-4V dissolution state. We hypothesized that (1) cathodic activation and inflammatory species (H2O2) would degrade the Ti-6Al-4V oxide, promoting dissolution; (2) AM Ti-6Al-4V selective dissolution would occur; and (3) near-field electrochemical impedance spectra (nEIS) would distinguish between dissolved and polished Ti-6Al-4V, allowing for deep neural network prediction. First, we show a combined effect of cathodic activation and inflammatory species, degrading the oxide film’s polarization resistance (Rp) by a factor of 10⁵ (p < 0.001) and inducing selective dissolution. Next, we establish a potential range (−0.3 V to −1 V) in which inflammatory species, cathodic activation, and increasing solution temperatures (24 °C to 55 °C) synergistically affect the oxide film. Then, we evaluate the effect of solution temperature on the dissolution rate, documenting a logarithmic dependence. In our second aim, we show decreased AM Ti-6Al-4V Rp compared with AM Ti-29Nb-21Zr in H2O2; AM Ti-6Al-4V oxide degradation preceded pit nucleation in the β phase. Finally, in our third aim, we identified gaps in the application of artificial intelligence to metallic biomaterial corrosion. With nEIS spectra as input, a deep neural network predicted the surface dissolution state with 96% accuracy. Together, these results support the inclusion of inflammatory species and cathodic activation in pre-clinical testing of titanium devices and biomaterials.
A Practical Box Spline Compendium
Box splines provide smooth spline spaces as shifts of a single generating
function on a lattice and so generalize tensor-product splines. Their elegant
theory is laid out in classical papers and a summarizing book. This compendium
aims to succinctly but exhaustively survey symmetric low-degree box splines
with special focus on two and three variables. Tables contrast the lattices,
supports, analytic and reconstruction properties, and list available
implementations and code.
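The tables of the compendium are not reproduced here, but the defining construction — repeated directional averaging of a lattice indicator — has a familiar univariate special case: with direction matrix Ξ = [1 1 1] the box spline is the quadratic cardinal B-spline. A small numerical sketch of that special case (grid spacing chosen arbitrarily):

```python
import numpy as np

h = 0.001                        # grid spacing for the numerical convolutions
t = np.arange(0.0, 1.0, h)
box = np.ones_like(t)            # M_[1]: indicator of [0, 1), the first direction
for _ in range(2):               # two further unit directions -> degree 2, support [0, 3]
    box = np.convolve(box, np.ones_like(t)) * h   # directional average

print("integral:", box.sum() * h)   # each averaging step preserves unit integral
print("peak:", box.max())           # the quadratic B-spline peaks at 3/4
```

In two and three variables the directions are lattice vectors rather than copies of 1, which is what produces the non-tensor-product supports and symmetries the compendium tabulates.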
Statistical learning of random probability measures
The study of random probability measures is a lively research topic that has
attracted interest from different fields in recent years. In this thesis, we consider
random probability measures in the context of Bayesian nonparametrics,
where the law of a random probability measure is used as prior distribution,
and in the context of distributional data analysis, where
the goal is to perform inference given a sample from the law of a random probability measure.
The contributions contained in this thesis can be subdivided according to three
different topics: (i) the use of almost surely discrete repulsive random measures
(i.e., whose support points are well separated) for Bayesian model-based
clustering, (ii) the proposal of new laws for collections of random probability
measures for Bayesian density estimation of partially
exchangeable data subdivided into different groups, and (iii) the study
of principal component analysis and regression models for probability distributions
seen as elements of the 2-Wasserstein space. Specifically, for point
(i) above, we propose an efficient Markov chain Monte Carlo algorithm for
posterior inference, which sidesteps the need for the split-merge reversible-jump
moves typically associated with poor performance; we propose a model for
clustering high-dimensional data by introducing a novel class of anisotropic
determinantal point processes; and we study the distributional properties of
repulsive measures, shedding light on important theoretical results that enable
more principled prior elicitation and more efficient posterior simulation
algorithms. For point (ii) above, we consider several models suitable for clustering
homogeneous populations, inducing spatial dependence across groups of
data, and extracting the characteristic traits common to all data groups, and
we propose a novel vector autoregressive model to study the growth
curves of Singaporean children. Finally, for point (iii), we propose a novel class of
projected statistical methods for distributional data analysis for measures
on the real line and on the unit circle.
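None of the repulsive or dependent constructions studied in the thesis are reproduced here. As a hedged background sketch of the kind of almost surely discrete random probability measure involved, the following draws a truncated stick-breaking realization of a Dirichlet process (concentration parameter, base measure, and truncation level are arbitrary choices):

```python
import numpy as np

def dp_stick_breaking(alpha, n_atoms, rng):
    # stick-breaking: w_k = beta_k * prod_{j<k} (1 - beta_j), beta_k ~ Beta(1, alpha)
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining
    atoms = rng.standard_normal(n_atoms)   # i.i.d. draws from the N(0,1) base measure
    return atoms, weights

rng = np.random.default_rng(1)
atoms, w = dp_stick_breaking(alpha=2.0, n_atoms=500, rng=rng)
print("total mass of truncation:", w.sum())   # close to 1 for a long truncation
```

In such a model the atoms are i.i.d. from the base measure, so support points can fall arbitrarily close together; the repulsive measures of point (i) replace this with point processes whose support points are well separated.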
Generative Model based Training of Deep Neural Networks for Event Detection in Microscopy Data
Several imaging techniques employed in the life sciences heavily rely on machine learning methods
to make sense of the data that they produce. These include calcium imaging and multi-electrode
recordings of neural activity, single molecule localization microscopy, spatially-resolved transcriptomics and particle tracking, among others. All of them only produce indirect readouts of the
spatiotemporal events they aim to record. The objective when analysing data from these methods
is the identification of patterns that indicate the location of the sought-after events, e.g. spikes in
neural recordings or fluorescent particles in microscopy data.
Existing approaches for this task invert a forward model, i.e. a mathematical description of the
process that generates the observed patterns for a given set of underlying events, using established
methods like MCMC or variational inference. Perhaps surprisingly, for a long time deep learning
saw little use in this domain, even though it became the dominant approach in the field of pattern
recognition over the previous decade. The principal reason is that, in the absence of the labeled
data needed for supervised optimization, it remains unclear how neural networks can be trained to
solve these tasks. To unlock the potential of deep learning, this thesis proposes different methods for
training neural networks using forward models and without relying on labeled data. The thesis
rests on two publications:
In the first publication we introduce an algorithm for spike extraction from calcium imaging
time traces. Building on the variational autoencoder framework, we simultaneously train a neural
network that performs spike inference and optimize the parameters of the forward model. This
approach combines several advantages that were previously incongruous: it is fast at test-time,
can be applied to different non-linear forward models and produces samples from the posterior
distribution over spike trains.
The second publication deals with the localization of fluorescent particles in single molecule
localization microscopy. We show that an accurate forward model can be used to generate simulations that act as a surrogate for labeled training data. Careful design of the output representation
and loss function results in a method with outstanding precision across experimental designs and
imaging conditions.
Overall, this thesis highlights how neural networks can be applied for precise, fast and flexible
model inversion on this class of problems, and how this opens up new avenues to achieve
performance beyond what was previously possible.
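The surrogate-training idea of the second publication can be sketched in a toy setting: a forward model generates labeled (trace, events) pairs, and a detector is trained purely on those simulations. Everything below — the exponential transient model, the window size, and a logistic detector standing in for a deep network — is an illustrative assumption, not the thesis's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(spikes, tau=5.0, noise=0.2):
    # toy generative model: exponential calcium-like transient plus Gaussian noise
    kernel = np.exp(-np.arange(30) / tau)
    clean = np.convolve(spikes, kernel)[: len(spikes)]
    return clean + noise * rng.standard_normal(len(spikes))

def simulate(n_traces, T=64, rate=0.05):
    # simulated dataset acts as a surrogate for (unavailable) labeled data
    X, Y = [], []
    for _ in range(n_traces):
        s = (rng.random(T) < rate).astype(float)
        X.append(forward_model(s))
        Y.append(s)
    return np.array(X), np.array(Y)

def windows(X, Y, k=4):
    # per-timestep detection: trace window -> was there an event at its center?
    W, lab = [], []
    for trace, s in zip(X, Y):
        for t in range(k, len(trace) - k):
            W.append(trace[t - k : t + k + 1])
            lab.append(s[t])
    return np.array(W), np.array(lab)

Xw, yw = windows(*simulate(300))
w = np.zeros(Xw.shape[1])
b = 0.0
for _ in range(300):                       # logistic regression by gradient descent
    p = 1.0 / (1.0 + np.exp(-(Xw @ w + b)))
    g = p - yw
    w -= 0.1 * (Xw.T @ g) / len(g)
    b -= 0.1 * g.mean()
p = 1.0 / (1.0 + np.exp(-(Xw @ w + b)))
acc = ((p > 0.5) == yw).mean()
print("training accuracy:", round(acc, 3))
```

The key property this sketch shares with the thesis is that the labels come for free from the forward model; accuracy on real data then hinges on how faithfully that model matches the true image-formation process.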
Metallurgical Process Simulation and Optimization
Metallurgy involves the art and science of extracting metals from their ores and modifying the metals for use. Over thousands of years of development, many interdisciplinary technologies have been introduced into this traditional and large-scale industry. In modern metallurgical practice, modelling and simulation are widely used to provide solutions in the areas of design, control, optimization, and visualization, and are becoming increasingly significant in the progress of digital transformation and intelligent metallurgy. This Special Issue (SI), entitled “Metallurgical Process Simulation and Optimization”, has been organized as a platform to present recent advances in the modelling and optimization of metallurgical processes, covering electric/oxygen steel-making, secondary metallurgy, (continuous) casting, and processing. Eighteen articles are included that address various aspects of the topic.