Proceedings of SIRM 2023 - The 15th European Conference on Rotordynamics
It was our great honor and pleasure to host the SIRM Conference in Darmstadt for the third time, after 2003 and 2011. Rotordynamics covers a huge variety of applications and challenges, all of which are within the scope of this conference. The conference was opened with a keynote lecture given by Rainer Nordmann, one of the three founders of SIRM “Schwingungen in rotierenden Maschinen”. In total, 53 papers passed our strict review process and were presented. This impressively shows that rotordynamics is as relevant as ever. These contributions cover a very wide spectrum of session topics: fluid bearings and seals; air foil bearings; magnetic bearings; rotor blade interaction; rotor fluid interactions; unbalance and balancing; vibrations in turbomachines; vibration control; instability; electrical machines; monitoring, identification and diagnosis; advanced numerical tools and nonlinearities; as well as general rotordynamics. The international character of the conference has been significantly enhanced by the Scientific Board since the 14th SIRM, resulting on the one hand in an expanded Scientific Committee, which meanwhile consists of 31 members from 13 different European countries, and on the other hand in the new name “European Conference on Rotordynamics”. This new international profile has also been emphasized by the participants of the 15th SIRM, who came from 17 countries across three continents. We experienced a vital discussion and dialogue between industry and academia at the conference, where roughly one third of the papers were presented by industry and two thirds by academia, an excellent basis for the bidirectional transfer we call xchange at Technical University of Darmstadt. At this point we also want to give our special thanks to the eleven industry sponsors for their great support of the conference. On behalf of the Darmstadt Local Committee, I welcome you to read the papers of the 15th SIRM, which will give you further insight into the topics and presentations.
Discrete functional inequalities on lattice graphs
In this thesis, we study problems at the interface of analysis and discrete mathematics. We discuss analogues of well-known Hardy-type inequalities and rearrangement inequalities on the lattice graphs Z^d, with a particular focus on the behaviour of sharp constants and optimizers.
In the first half of the thesis, we analyse Hardy inequalities on Z^d, first for d=1 and then for d >= 3. We prove a sharp weighted Hardy inequality on the integers with power weights of the form n^\alpha. This is done via two different methods, namely the 'super-solution' method and the 'Fourier' method. We also use the Fourier method to prove a weighted Hardy-type inequality for higher-order operators. After discussing the one-dimensional case, we study the Hardy inequality in higher dimensions (d >= 3). In particular, we compute the asymptotic behaviour of the sharp constant in the discrete Hardy inequality as d \rightarrow \infty. This is done by converting the inequality into a continuous Hardy-type inequality on a torus for functions having zero average. These continuous inequalities are new and interesting in themselves.
In the second half, we focus our attention on analogues of rearrangement inequalities on lattice graphs. We begin by analysing the situation in dimension one. We define various notions of rearrangement and prove the corresponding Polya-Szego inequality. These inequalities are also applied to prove some weighted Hardy inequalities on the integers. Finally, we study rearrangement (Polya-Szego) inequalities on general graphs, with a particular focus on the lattice graphs Z^d for d >= 2. We develop a framework to study these inequalities, from which we derive concrete results in dimension two. In particular, these results develop connections between the Polya-Szego inequality and various isoperimetric inequalities on graphs.
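For orientation, the unweighted one-dimensional special case (\alpha = 0) of the Hardy inequalities discussed above is the classical discrete Hardy inequality; the weighted and higher-dimensional results of the thesis sharpen and extend statements of this shape:

```latex
% Classical discrete Hardy inequality: for all finitely supported
% u : \mathbb{N}_0 \to \mathbb{R} with u(0) = 0,
\[
  \sum_{n=1}^{\infty} |u(n) - u(n-1)|^{2}
  \;\ge\; \frac{1}{4} \sum_{n=1}^{\infty} \frac{|u(n)|^{2}}{n^{2}},
\]
% where the constant 1/4 is sharp but not attained.
```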
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions of each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged in the years since the appearance of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, the generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
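As a concrete illustration of the PCR rules recurring throughout the volume, here is a minimal sketch (ours, not from the book, and restricted to a frame of exclusive singleton hypotheses) of the two-source PCR5 combination: conjunctive masses are kept, and each pairwise conflict is redistributed proportionally back to the two elements involved.

```python
def pcr5_two_sources(m1, m2):
    """Two-source PCR5 over exclusive singletons (illustrative sketch).

    m1, m2: dicts mapping hypothesis -> mass (each summing to 1).
    Returns the combined mass function as a dict.
    """
    # Conjunctive consensus on each singleton.
    out = {x: m1.get(x, 0.0) * m2.get(x, 0.0) for x in set(m1) | set(m2)}
    # Redistribute each conflicting product m1(x)*m2(y), x != y,
    # proportionally to the masses of x and y.
    for x in m1:
        for y in m2:
            if x != y:
                a, b = m1[x], m2[y]
                if a + b > 0:
                    out[x] += a * a * b / (a + b)
                    out[y] += b * b * a / (a + b)
    return out

m = pcr5_two_sources({'A': 0.6, 'B': 0.4}, {'A': 0.7, 'B': 0.3})
print(sorted((k, round(v, 4)) for k, v in m.items()))
# -> [('A', 0.7182), ('B', 0.2818)]
```

Note that the redistribution of each conflicting pair sums back to the conflicting product itself, so the combined masses still sum to 1.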
Growth of Bilinear Maps
We study a problem that is algebraic in nature but has certain applications in graph theory. It can be seen as a generalization of the joint spectral radius.
Given a bilinear map $f$ and a vector $v$, both with nonnegative coefficients and entries, among the exponentially many ways to combine $n$ instances of $v$ using $n-1$ applications of $f$, we are interested in the largest possible entry of a resulting vector. Let $g(n)$ denote this value; the asymptotic behaviour of $g(n)$ is investigated through the growth rate $\lambda = \limsup_{n\to\infty} g(n)^{1/n}$.
It is known that checking $\lambda \le 1$ is undecidable, as a consequence of the corresponding fact for the joint spectral radius. However, efficient algorithms are available to compute it exactly in certain cases, or to approximate it to any precision in general. Furthermore, when the vector $v$ is positive, there exist constants $c, C > 0$ and some $r$ so that
\[
c\, n^{-r}\lambda^n \le g(n) \le C\, n^{r}\lambda^n.
\]
It means $\lambda$ is actually a limit, i.e. $g(n)^{1/n} \to \lambda$ as $n \to \infty$. However, checking if this is the case in general is also undecidable. Some types of patterns for optimal combinations are proposed and studied as well, with some connections to the finiteness property of a set of matrices.
The techniques used for our problem apply equally well to the joint spectral radius, and they produce some stronger results by even simpler arguments. For example, if $\|\Sigma^n\|$ denotes the largest possible entry in a product of $n$ matrices drawn from a finite set $\Sigma$ of nonnegative matrices whose joint spectral radius is denoted by $\rho(\Sigma)$, then there exist constants $c, C > 0$ and some $r$ so that
\[
c\, n^{-r}\rho(\Sigma)^n \le \|\Sigma^n\| \le C\, n^{r}\rho(\Sigma)^n.
\]
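The matrix-product quantity in the last display can be estimated directly from its definition. The following brute-force sketch (our illustration, not from the paper; the matrix set and word length are arbitrary choices) maximizes a submultiplicative norm over all products of a given length, which yields an upper bound that converges to the joint spectral radius as the length grows:

```python
import itertools

def mat_mul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inf_norm(A):
    """Maximum absolute row sum: a submultiplicative matrix norm."""
    return max(sum(abs(x) for x in row) for row in A)

def jsr_estimate(sigma, n):
    """max over all length-n products P of matrices in sigma of
    ||P||^(1/n); an upper bound on the joint spectral radius rho(sigma)
    that tends to rho(sigma) as n -> infinity."""
    best = 0.0
    for word in itertools.product(sigma, repeat=n):
        P = word[0]
        for A in word[1:]:
            P = mat_mul(P, A)
        best = max(best, inf_norm(P))
    return best ** (1.0 / n)

# A single diagonal matrix: the estimate equals its spectral radius.
print(round(jsr_estimate([[[2, 0], [0, 1]]], 6), 6))  # -> 2.0
```

The cost is exponential in the word length, matching the abstract's point that exact computation is possible only in special cases while approximation schemes exist in general.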
Computational Approaches to Drug Profiling and Drug-Protein Interactions
Despite substantial increases in R&D spending within the pharmaceutical industry, de novo drug design has become a time-consuming endeavour. High attrition rates have led to a long period of stagnation in drug approvals. Due to the extreme costs associated with introducing a drug to the market, locating and understanding the reasons for clinical failure is key to future productivity. As part of this PhD, three main contributions were made in this respect. First, the web platform LigNFam enables users to interactively explore similarity relationships between 'drug-like' molecules and the proteins they bind. Secondly, two deep-learning-based binding-site comparison tools were developed, competing with the state of the art over benchmark datasets. The models have the ability to predict off-target interactions and potential candidates for target-based drug repurposing. Finally, the open-source ScaffoldGraph software was presented for the analysis of hierarchical scaffold relationships; it has already been used in multiple projects, including integration into a virtual screening pipeline to increase the tractability of ultra-large screening experiments. Together with existing tools, the contributions made will aid in the understanding of drug-protein relationships, particularly in the fields of off-target prediction and drug repurposing, helping to design better drugs faster.
Efficient parameterized algorithms on structured graphs
In classical complexity theory, the worst-case running time of an algorithm is typically stated solely as a function of the input size. In parameterized complexity, the goal is to refine the analysis of the running time of an algorithm by additionally considering a parameter that measures some kind of structure in the input. A parameterized algorithm then utilizes the structure described by the parameter and achieves a running time that is faster than the best general (unparameterized) algorithm for instances of low parameter value.
In the first part of this thesis, we carry this line of research forward and investigate the influence of several parameters on the running times of well-known tractable problems. Several of the presented algorithms are adaptive algorithms, meaning that they match the running time of the best unparameterized algorithm for worst-case parameter values. Thus, an adaptive parameterized algorithm is asymptotically never worse than the best unparameterized algorithm, while it outperforms the best general algorithm already for slightly non-trivial parameter values.
As illustrated in the first part of this thesis, for many problems there exist efficient parameterized algorithms regarding multiple parameters, each describing a different kind of structure. In the second part of this thesis, we explore how to combine such homogeneous structures into more general, heterogeneous structures. Using algebraic expressions, we define new combined graph classes of heterogeneous structure in a clean and robust way, and we showcase this for the heterogeneous merge of the parameters tree-depth and modular-width by presenting parameterized algorithms on such heterogeneous graph classes, with running times that match the homogeneous cases throughout.
Unsolved Problems in Spectral Graph Theory
Spectral graph theory is a captivating area of graph theory that employs the eigenvalues and eigenvectors of matrices associated with graphs to study them. In this paper, we present a collection of topics in spectral graph theory, covering a range of open problems and conjectures. Our focus is primarily on the adjacency matrix of graphs, and for each topic we provide a brief historical overview.
Comment: v3, 30 pages, 1 figure; includes comments from Clive Elphick, Xiaofeng Gu, William Linz, and Dragan Stevanović, respectively. Thanks! This paper will be published in Operations Research Transactions.
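As a concrete illustration of the central object here (ours, not from the paper), the adjacency spectrum of a small graph can be computed directly; the 4-cycle C4, for instance, has eigenvalues 2, 0, 0, -2, an instance of the general fact that the cycle C_n has eigenvalues 2*cos(2*pi*k/n).

```python
import numpy as np

def adjacency_spectrum(edges, n):
    """Eigenvalues (ascending) of the adjacency matrix of a simple
    undirected graph on vertices 0..n-1."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0  # symmetric 0/1 adjacency matrix
    # eigvalsh is the right tool: the matrix is real symmetric,
    # so all eigenvalues are real.
    return np.sort(np.linalg.eigvalsh(A))

# The 4-cycle C4 = 0-1-2-3-0.
spec = adjacency_spectrum([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
print(spec)  # eigenvalues -2, 0, 0, 2 up to floating-point noise
```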
Occlusion-Ordered Semantic Instance Segmentation
Conventional semantic 'instance' segmentation methods offer a segmentation mask for each object instance in an image along with its semantic class label. These methods excel in distinguishing instances, whether they belong to the same class or different classes, providing valuable information about the scene. However, these methods lack the ability to provide depth-related information and are thus unable to capture the 3D geometry of the scene.
One option to derive 3D information about a scene is monocular depth estimation. It predicts the absolute distance from the camera to each pixel in an image. However, monocular depth estimation has limitations. It lacks semantic information about object classes. Furthermore, it is not precise enough to reliably detect instances or establish depth order for known instances.
Even a coarse 3D geometry, such as the relative depth or occlusion order of objects is useful to obtain rich 3D-informed scene analysis. Based on this, we address occlusion-ordered semantic instance segmentation (OOSIS), which augments standard semantic instance segmentation by incorporating a coarse 3D geometry of the scene. By leveraging occlusion as a strong depth cue, OOSIS estimates a partial relative depth ordering of instances based on their occlusion relations. OOSIS produces two outputs: instance masks and their classes, as well as the occlusion ordering of those predicted instances.
Existing works pre-date deep learning and rely on simple visual cues such as the y-coordinate of objects for occlusion ordering. This thesis introduces two deep learning-based approaches for OOSIS. The first approach, following a top-down strategy, determines the pairwise occlusion order between instances obtained by a standard instance segmentation method. However, this approach does not guarantee global occlusion ordering consistency and can produce undesired cyclic orderings. Our second approach is bottom-up. It simultaneously derives instances and their occlusion order by grouping pixels into instances and assigning occlusion order labels. This approach ensures a globally consistent occlusion ordering. As part of this approach, we develop a novel deep model that predicts the boundaries where occlusion occurs, together with the orientation of occlusion at each boundary, indicating which side of it occludes the other. The output of this model is utilized to obtain instances and their corresponding ordering via our proposed discrete optimization formulation.
To assess the performance of OOSIS methods, we introduce a novel evaluation metric capable of simultaneously evaluating instance segmentation and occlusion ordering. In addition, we utilize standard metrics for evaluating the quality of instance masks. We also evaluate occlusion ordering consistency and oriented occlusion boundaries. We conduct evaluations on the KINS and COCOA datasets.
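The global-consistency requirement on pairwise occlusion orders admits a simple formal check (an illustrative sketch, not the thesis code): a set of pairwise "a occludes b" decisions is globally consistent, i.e. admits a partial depth ordering, exactly when the directed graph they induce is acyclic.

```python
from collections import defaultdict, deque

def is_consistent(pairs):
    """Check whether pairwise occlusion decisions (a occludes b) admit a
    global depth order, i.e. the induced directed graph is acyclic.
    Uses Kahn's topological sort."""
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for a, b in pairs:
        succ[a].append(b)
        indeg[b] += 1
        nodes.update((a, b))
    queue = deque(v for v in nodes if indeg[v] == 0)
    seen = 0
    while queue:
        v = queue.popleft()
        seen += 1
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(nodes)  # every node ordered <=> no cycle

print(is_consistent([("car", "person"), ("person", "bike")]))  # True
print(is_consistent([("a", "b"), ("b", "c"), ("c", "a")]))     # False
```

A top-down method that predicts each pair independently can fail this test, which is exactly the failure mode the bottom-up approach avoids by construction.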
ACARORUM CATALOGUS IX. Acariformes, Acaridida, Schizoglyphoidea (Schizoglyphidae), Histiostomatoidea (Histiostomatidae, Guanolichidae), Canestrinioidea (Canestriniidae, Chetochelacaridae, Lophonotacaridae, Heterocoptidae), Hemisarcoptoidea (Chaetodactylidae, Hyadesiidae, Algophagidae, Hemisarcoptidae, Carpoglyphidae, Winterschmidtiidae)
The 9th volume of the series Acarorum Catalogus contains lists of mites of 13 families, 225 genera and 1268 species of the superfamilies Schizoglyphoidea, Histiostomatoidea, Canestrinioidea and Hemisarcoptoidea. Most of these mites live on insects or other animals (as parasites, phoretics or commensals); some inhabit rotten plant material, dung or fungi. Mites of the families Chetochelacaridae and Lophonotacaridae are specialised to live with millipedes (Diplopoda). The peculiar aquatic or intertidal mites of the families Hyadesiidae and Algophagidae are also included.