The Mathematical Analysis of a Micro Scale Model for Lithium-Ion Batteries
In this dissertation we consider a quasi-linear elliptic-parabolic system of PDEs arising in the modeling of Li-ion batteries. We establish local existence of weak solutions by exploiting the properties of the elliptic subproblem. In three or fewer spatial dimensions we obtain uniqueness. As auxiliary results, we discuss two finite element discretizations of the elliptic subproblem and present some numerical results.
Towards the genuinely ghost-free massive vector Horndeski theory
We put forward an extension to already existing Lagrangian constraint algorithms, which is readily applicable to (almost all) first-order classical field theories. Our algorithm is optimized to obtain the explicit constraints and thus count the number of propagating degrees of freedom in said theories. This is the main result of the thesis. We employ both the renowned Dirac-Bergmann procedure and our own formalism to obtain the constraint structure of the H(P)-formulation of non-linear electrodynamics and two-dimensional Palatini gravity. Both approaches yield the same results. We observe that our proposed method is an algebraically simpler and conceptually clearer way to calculate the number of physical modes. The relevance and usefulness of our novel Lagrangian iterative procedure are twofold. On the one hand, it simplifies the determination of the constraint structure of these theories. This is particularly pertinent for effective theories of multiple interacting fields of different spins, whose analysis is in general cumbersome and which are prone to the presence of additional unphysical modes (ghosts). On the other hand, it constitutes an essential first step towards establishing a Lagrangian building principle for genuinely ghost-free theories. Indeed, given a first-order Lagrangian, our method yields its associated constraint structure. It is then possible to reverse the logic and find out the conditions a Lagrangian must satisfy in order to possess a certain constraint structure. This natural follow-up is work in progress.
Differentiation of evolutionary stages in fog life cycles based on microphysical properties: implications for the operation of novel cloud radar profilers
Improved knowledge of the spatiotemporal distribution of fog and low stratus (FLS) is of great value with regard to traffic safety and air quality control. Not only the horizontal visibility in fog but also the dispersion of harmful pollutants in boundary-layer clouds depends on the prevailing small droplets. Since the drop size distribution (DSD) of both phenomena varies spatially with the vertical extent of these clouds and temporally from formation to dissipation, nowcasting and forecasting of FLS face difficult challenges. Present models require theoretical assumptions on vertical microphysical profiles and their evolution during the fog life cycle, since real-time data on these cannot yet be provided. According to COST actions 720 and 722, novel ground-based microwave FMCW cloud radar profilers meet the instrumental requirements for deriving microphysical properties such as liquid water content (LWC) from radar reflectivity (Z); but no retrievals have been implemented so far. Since the derivation of vertical LWC profiles from Z requires detailed information on the prevailing DSD, the evolution of the latter as a function of the fog life cycle has to be considered. An accurate classification of fog evolutionary stages, accompanied by phase-specific DSDs, is a necessary condition for proper use of the microwave radar profiler; otherwise, the derivation of vertical LWC profiles from Z would be subject to excessively large inaccuracies.
Hence, the major aim of the thesis was to investigate the temporal dynamics of fog microphysics, with emphasis on the DSD over the whole life cycle.
This intention was based on the hypothesis that consecutive evolutionary stages within the fog life cycle can be separated temporally on the basis of fog microphysics such as the DSD, at the ground as well as in vertical profiles. Novel findings of the current thesis are:
1. It is possible to derive vertical LWC profiles in FLS directly from the radar reflectivity of a novel 94 GHz FMCW cloud radar profiler, since a direct but non-linear relationship between Z and LWC could be confirmed, provided that further information on the prevailing drop size distribution is assumed.
2. Fog occurrences can be separated into three consecutive phases during the life cycle by means of an innovative statistical approach that relies on measured microphysical fog properties or horizontal visibility at the ground.
3. According to balloon-borne measurements of vertical LWC profiles, it is legitimate to extrapolate FLS life cycle phases from ground-based measurements of microphysical properties and horizontal visibility to their whole vertical extension.
The results of the thesis have manifold benefits for climate research and operational FLS applications. The identification of cloud geometrical thickness, and thus the distinction between fog and low stratus by means of optical satellite retrievals, can thereby be made more reliable. The introduced approach for classifying evolutionary stages during the fog life cycle based on microphysical properties is a valuable step towards a method for deriving vertical LWC profiles from novel FMCW microwave cloud radar profilers, which are notably suitable for exploring the microphysical properties of FLS with high temporal resolution. The resulting findings about the dynamics of microphysical properties during FLS could be used to improve the theoretical assumptions on LWC profiles implemented in satellite-based approaches for fog detection. This optimization could in turn permit operational and continuous monitoring of LWC profiles in FLS thanks to their high spatiotemporal resolution.
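The non-linear Z-LWC relationship in finding 1 is commonly modeled as a power law whose coefficients depend on the assumed drop size distribution. A minimal sketch of deriving a vertical LWC profile from reflectivity under such an assumed relationship; the coefficient values and the power-law form itself are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# Hypothetical power-law coefficients for Z = A * LWC**B; in practice A and B
# depend on the assumed drop size distribution and would have to be fitted
# separately for each fog life-cycle phase (values here are purely illustrative).
A, B = 0.012, 1.9  # Z in mm^6 m^-3, LWC in g m^-3

def lwc_from_reflectivity(z_dbz):
    """Invert the assumed Z = A * LWC**B power law.

    z_dbz: radar reflectivity in dBZ (scalar or array).
    Returns liquid water content in g m^-3.
    """
    z_linear = 10.0 ** (np.asarray(z_dbz) / 10.0)  # dBZ -> linear mm^6 m^-3
    return (z_linear / A) ** (1.0 / B)

# A vertical profile of reflectivities then yields an LWC profile directly:
profile_dbz = np.array([-35.0, -30.0, -28.0])
lwc_profile = lwc_from_reflectivity(profile_dbz)
```

Because the inversion is applied gate by gate, any phase-dependent change of the DSD simply swaps the (A, B) pair used for that profile.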
Perspective shift across modalities
Languages offer various ways to present what someone said, thought, imagined, felt, and so on from their perspective. The prototypical example of a perspective-shifting device is direct quotation. In this review we define perspective shift in terms of indexical shift: a direct quotation like "Selena said, 'Oh, I don't know.'" involves perspective shift because the first-person indexical "I" refers to Selena, not to the actual speaker. We then discuss a variety of noncanonical modality-specific perspective-shifting devices: role shift in signed language, quotatives in spoken language, free indirect discourse in written language, and point-of-view shift in visual language. We show that these devices permit complex mixed forms of perspective shift, which may involve nonlinguistic gestural as well as visual components.
Attachment working models as unconscious structures: An experimental test
Internal working models of attachment (IWMs) are presumed to be largely unconscious representations of childhood attachment experiences. Several instruments have been developed to assess IWMs; some of them are based on self-report and others on narrative interview techniques. This study investigated the capacity of a self-report measure, the Inventory of Parent and Peer Attachment (IPPA; Armsden & Greenberg, 1987), and of a narrative interview method, the Adult Attachment Interview (AAI; George, Kaplan, & Main, 1985), to measure unconscious attachment models. We compared scores on the two attachment instruments to response latencies in an attachment priming task. It was shown that attachment organisation assessed by the AAI correlates with priming effects, whereas the IPPA scales were inversely or not related to priming. The results are interpreted as support for the assumption that the AAI assesses, to a certain degree, unconscious working models of attachment.
Tenfold your photons -- a physically-sound approach to filtering-based variance reduction of Monte-Carlo-simulated dose distributions
X-ray dose is of growing interest in the interventional suite. With dose generally being difficult to monitor reliably, fast computational methods are desirable. A major drawback of the gold standard based on Monte Carlo (MC) methods is its computational complexity. Besides common variance reduction techniques, filter approaches are often applied to achieve conclusive results in a fraction of the time. Inspired by these methods, we propose a novel approach. We down-sample the target volume based on the fraction of mass, simulate the imaging situation, and then revert the down-sampling. To this end, the dose is weighted by the mass energy absorption, up-sampled, and distributed using a guided filter. Eventually, the weighting is inverted, resulting in accurate high-resolution dose distributions. The approach has the potential to considerably speed up MC simulations, since fewer photons and boundary checks are necessary. First experiments substantiate these assumptions. We achieve a median accuracy of 96.7 % and 97.4 % of the dose estimation with the proposed method and a down-sampling factor of 8 and 4, respectively. While maintaining a high accuracy, the proposed method provides a tenfold speed-up. The overall findings suggest that the proposed method has the potential to enable further efficiency gains.
Comment: 6 pages, 3 figures, Bildverarbeitung für die Medizin 202
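The down-sample/simulate/revert scheme described above can be sketched in one dimension. In this sketch a 3-tap moving average stands in for the guided filter of the paper, the weighting order is simplified, and all array values are illustrative; it is a conceptual outline under those assumptions, not the authors' implementation:

```python
import numpy as np

def downsample_by_mass(density, factor):
    """Merge groups of `factor` voxels into coarse voxels by summing their mass."""
    return density.reshape(-1, factor).sum(axis=1)

def revert_downsampling(coarse_dose, mu_en, factor):
    """Revert a coarse MC dose estimate to full resolution (illustrative only).

    Loosely following the abstract: up-sample the dose, weight it by the mass
    energy absorption coefficient mu_en, distribute it with a smoothing
    filter, and finally invert the weighting.
    """
    fine = np.repeat(coarse_dose / factor, factor)         # up-sample by repetition
    weighted = fine * mu_en                                # apply weighting
    kernel = np.ones(3) / 3.0                              # stand-in for guided filter
    smoothed = np.convolve(weighted, kernel, mode="same")  # distribute dose
    return smoothed / mu_en                                # invert weighting

# Toy example: two alternating materials, down-sampled by a factor of 2.
rho = np.array([1.0, 1.0, 0.3, 0.3, 1.0, 1.0, 0.3, 0.3])
coarse_rho = downsample_by_mass(rho, 2)
```

Simulating on `coarse_rho` needs fewer photons and fewer boundary checks, which is where the speed-up in the abstract comes from; the revert step then restores the original resolution.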
Optimal construction of k-nearest neighbor graphs for identifying noisy clusters
We study clustering algorithms based on neighborhood graphs on a random
sample of data points. The question we ask is how such a graph should be
constructed in order to obtain optimal clustering results. Which type of
neighborhood graph should one choose, mutual k-nearest neighbor or symmetric
k-nearest neighbor? What is the optimal parameter k? In our setting, clusters
are defined as connected components of the t-level set of the underlying
probability distribution. Clusters are said to be identified in the
neighborhood graph if connected components in the graph correspond to the true
underlying clusters. Using techniques from random geometric graph theory, we
prove bounds on the probability that clusters are identified successfully, both
in a noise-free and in a noisy setting. Those bounds lead to several
conclusions. First, k has to be chosen surprisingly high (rather of the order n
than of the order log n) to maximize the probability of cluster identification.
Secondly, the major difference between the mutual and the symmetric k-nearest
neighbor graph occurs when one attempts to detect the most significant cluster
only.
Comment: 31 pages, 2 figures
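The two graph constructions compared in the abstract can be sketched directly: the mutual k-NN graph keeps an edge (i, j) only if i and j are each among the other's k nearest neighbors, while the symmetric variant keeps it if either direction holds; clusters are then read off as connected components. A small self-contained sketch (brute-force distances, union-find components), not the paper's analysis code:

```python
import numpy as np

def knn_graph(points, k, mutual=True):
    """Build a mutual or symmetric k-nearest-neighbor graph as a set of edges."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                    # no self-neighbors
    nbrs = [set(np.argsort(d[i])[:k]) for i in range(n)]
    edges = set()
    for i in range(n):
        for j in nbrs[i]:
            if mutual:
                if i in nbrs[j]:                   # both directions required
                    edges.add(frozenset((i, j)))
            else:                                  # symmetric: either direction
                edges.add(frozenset((i, j)))
    return edges

def connected_components(n, edges):
    """Union-find over n vertices; identified clusters are the components."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]          # path halving
            x = parent[x]
        return x
    for e in edges:
        a, b = tuple(e)
        parent[find(a)] = find(b)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())
```

On two well-separated point groups, both constructions identify the groups as components; the behavioral difference the paper analyzes shows up when one tries to isolate only the most significant cluster amid noise.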
PYRO-NN: Python Reconstruction Operators in Neural Networks
Purpose: Recently, several attempts were made to transfer deep learning to medical image reconstruction. An increasing number of publications follow the concept of embedding the CT reconstruction as a known operator into a neural network. However, most of the approaches presented lack an efficient CT reconstruction framework fully integrated into deep learning environments. As a result, many approaches are forced to use workarounds for mathematically unambiguously solvable problems.
Methods: PYRO-NN is a generalized framework to embed known operators into the prevalent deep learning framework Tensorflow. The current status includes state-of-the-art parallel-, fan- and cone-beam projectors and back-projectors accelerated with CUDA, provided as Tensorflow layers. On top, the framework provides a high-level Python API to conduct FBP and iterative reconstruction experiments with data from real CT systems.
Results: The framework provides all necessary algorithms and tools to design end-to-end neural network pipelines with integrated CT reconstruction algorithms. The high-level Python API allows a simple use of the layers as known from Tensorflow. To demonstrate the capabilities of the layers, the framework comes with three baseline experiments showing a cone-beam short-scan FDK reconstruction, a CT reconstruction filter learning setup, and a TV-regularized iterative reconstruction. All algorithms and tools are referenced to a scientific publication and are compared to existing non-deep-learning reconstruction frameworks. The framework is available as open-source software at \url{https://github.com/csyben/PYRO-NN}.
Conclusions: PYRO-NN comes with the prevalent deep learning framework Tensorflow and allows setting up end-to-end trainable neural networks in the medical image reconstruction context. We believe that the framework will be a step towards reproducible research.
Comment: V1: Submitted to Medical Physics, 11 pages, 7 figures
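The known-operator idea underlying PYRO-NN can be illustrated without the library itself: a fixed linear operator is embedded as a layer whose backward pass is its adjoint, so gradients through it are exact rather than learned. The numpy sketch below uses a small random matrix as a stand-in for a CT projector and is not the PYRO-NN API; it verifies the adjoint-as-gradient property numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "known operator": a fixed matrix standing in for a CT forward projector
# (PYRO-NN instead wraps CUDA-accelerated projectors as Tensorflow layers).
A = rng.standard_normal((6, 4))

def forward(x):
    return A @ x          # forward pass of the known-operator layer

def backward(residual):
    return A.T @ residual  # backward pass: the adjoint (a back-projector)

# Gradient check for L(x) = 0.5 * ||A x - y||^2, whose gradient is A^T (A x - y).
x = rng.standard_normal(4)
y = rng.standard_normal(6)
grad = backward(forward(x) - y)

eps = 1e-6
num = np.array([
    (0.5 * np.sum((forward(x + eps * e) - y) ** 2)
     - 0.5 * np.sum((forward(x - eps * e) - y) ** 2)) / (2 * eps)
    for e in np.eye(4)
])
```

Because the backward pass is the mathematically exact adjoint, no gradient has to be learned for the reconstruction step, which is the point of embedding known operators into a network.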
Risk perception or self perception
In cognitive entrepreneurship research, one main question is: do entrepreneurs think differently than others in various ways? Especially in the area of risk perception, cognition is treated as information processing. Later streams of cognitive science have developed from a view in which cognition is seen as information processing to one in which cognition is mainly seen as effective action, where experiences play an important role. We use risk perception as an indicator for information processing and self perception as an indicator for past experience. We found that past experience explains starting a real venture, whereas risk information processing explains starting a case-study venture.