Waiting Nets: State Classes and Taxonomy
In time Petri nets (TPNs), time and control are tightly connected: time
measurement for a transition starts only when all resources needed to fire it
are available. Further, upper bounds on the duration of enabledness can force
transitions to fire (this is called urgency). For many systems, one wants to
decouple control and time, i.e. start measuring time as soon as a part of the
preset of a transition is filled, and fire it after some delay and
when all needed resources are available. This paper considers an extension of
TPN called waiting nets that dissociates time measurement and control. Their
semantics allows time measurement to start with incomplete presets, and can
ignore urgency when the upper bounds of intervals are reached but not all
resources needed to fire are yet available. Firing of a transition is then
allowed as soon as the missing resources become available. It is known that extending bounded
TPNs with stopwatches leads to undecidability. Our extension is weaker, and we
show how to compute a finite state class graph for bounded waiting nets,
yielding decidability of reachability and coverability. We then compare
expressiveness of waiting nets with that of other models w.r.t. timed language
equivalence, and show that they are strictly more expressive than TPNs.
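Not the paper's formal semantics, but a minimal Python sketch of the distinction; the marking encoding and the waiting_places split are hypothetical:

```python
# Illustrative sketch (not the paper's formal semantics): the key
# difference is WHEN the clock of a transition t with static firing
# interval [a, b] runs, given the marking of t's preset places.

def clock_runs_tpn(marking, preset):
    # TPN: time measurement starts only once ALL preset places are marked.
    return all(marking.get(p, 0) > 0 for p in preset)

def clock_runs_waiting_net(marking, preset, waiting_places):
    # Waiting net (hypothetical encoding): time measurement may already
    # start when the "waiting" part of the preset is marked, even if the
    # remaining resources are still missing.
    return all(marking.get(p, 0) > 0 for p in waiting_places)

def can_fire(marking, preset, clock, a, b):
    # In both models firing requires the full preset. In a waiting net,
    # urgency is relaxed: the upper bound b is not enforced here, because
    # if the clock passed b while resources were missing, t may fire as
    # soon as the missing tokens arrive.
    return all(marking.get(p, 0) > 0 for p in preset) and clock >= a
```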
Notes on the type classification of von Neumann algebras
These notes provide an explanation of the type classification of von Neumann
algebras, which has made many appearances in recent work on entanglement in
quantum field theory and quantum gravity. The goal is to bridge a gap in the
literature between resources that are too technical for the non-expert reader,
and resources that seek to explain the broad intuition of the theory without
giving precise definitions. Reading these notes will provide you with: (i) an
argument for why "factors" are the fundamental von Neumann algebras that one
needs to study; (ii) an intuitive explanation of the type classification of
factors in terms of renormalization schemes that turn unnormalizable positive
operators into "effective density matrices;" (iii) a mathematical explanation
of the different types of renormalization schemes in terms of the allowed
traces on a factor; (iv) an intuitive characterization of type I and II factors
in terms of their "standard forms;" and (v) a list of some interesting
connections between type classification and modular theory, including the
argument for why type III factors are believed to be the relevant ones in
quantum field theory. None of the material is new, but the pedagogy is
different from other sources I have read; it is most similar in spirit to the
recent work on gravity and the crossed product by Chandrasekaran, Longo,
Penington, and Witten.
Comment: 38 pages plus 16 pages in appendices; the introduction includes a reading
guide for which the minimal read is about 28 pages.
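As background for point (iii), the standard way to read off the type from the possible values of a trace on projections (standard facts, not specific to these notes):

```latex
% Range of a (suitably normalized) trace on the projections of a factor:
\mathrm{I}_n:\ \{0, 1, \dots, n\}, \qquad
\mathrm{I}_\infty:\ \{0, 1, 2, \dots\} \cup \{\infty\}, \qquad
\mathrm{II}_1:\ [0, 1],
\qquad
\mathrm{II}_\infty:\ [0, \infty], \qquad
\mathrm{III}:\ \{0, \infty\}\ \text{(no nontrivial trace exists)}.
```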
Tail asymptotics and precise large deviations for some Poisson cluster processes
We study the tail asymptotics of two functionals (the maximum and the sum of
the marks) of a generic cluster in two sub-models of the marked Poisson cluster
process, namely the renewal Poisson cluster process and the Hawkes process.
Under the hypothesis that the governing components of the processes are
regularly varying, we extend results notably due to [18] and [5], relying on
Karamata's Tauberian Theorem to do so. We use these asymptotics to derive
precise large deviation results in the fashion of [30] for the above-mentioned
processes.
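For background, the standard notions the paper relies on (one common formulation; not the paper's statements):

```latex
% A function f is regularly varying at infinity with index \rho if
\lim_{t \to \infty} \frac{f(tx)}{f(t)} = x^{\rho}, \qquad x > 0.
% Karamata's Tauberian theorem (Feller's form): for a non-decreasing,
% right-continuous U on [0,\infty) with Laplace--Stieltjes transform
% \widehat{U}(s) = \int_0^\infty e^{-sx}\, dU(x), \rho \ge 0, and L
% slowly varying,
U(x) \sim \frac{x^{\rho} L(x)}{\Gamma(1+\rho)} \quad (x \to \infty)
\iff
\widehat{U}(s) \sim s^{-\rho} L(1/s) \quad (s \downarrow 0).
```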
Fair Grading Algorithms for Randomized Exams
This paper studies grading algorithms for randomized exams. In a randomized
exam, each student is asked a small number of random questions from a large
question bank. The predominant grading rule is simple averaging, i.e.,
calculating grades by averaging scores on the questions each student is asked,
which is fair ex-ante, over the randomized questions, but not fair ex-post, on
the realized questions. The fair grading problem is to estimate the average
grade of each student on the full question bank. The maximum-likelihood
estimator for the Bradley-Terry-Luce model on the bipartite student-question
graph is shown to be consistent with high probability when the number of
questions asked of each student is at least the cubed logarithm of the number
of students. In an empirical study on exam data and in simulations, our
algorithm based on the maximum-likelihood estimator significantly outperforms
simple averaging in prediction accuracy and ex-post fairness, even with small
class and exam sizes.
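A minimal sketch of such an estimator, assuming binary scores and a logistic link; the data layout and helper names are hypothetical, not the paper's code:

```python
# Sketch of a Bradley-Terry-Luce-style MLE on the bipartite
# student-question graph: P(correct) = sigmoid(theta_student - beta_question).
# Hypothetical data layout: parallel arrays of student ids, question ids,
# and 0/1 scores for each asked (student, question) pair.
import numpy as np
from scipy.optimize import minimize

def fit_btl(students, questions, score, n_students, n_questions, reg=1e-3):
    def nll(params):
        theta = params[:n_students]          # student abilities
        beta = params[n_students:]           # question difficulties
        z = theta[students] - beta[questions]
        ll = score * z - np.logaddexp(0.0, z)  # stable Bernoulli log-lik
        return -ll.sum() + reg * (params ** 2).sum()  # ridge for identifiability

    res = minimize(nll, np.zeros(n_students + n_questions), method="L-BFGS-B")
    theta, beta = res.x[:n_students], res.x[n_students:]
    # fair grade estimate: average predicted P(correct) over the full bank
    probs = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
    return theta, beta, probs.mean(axis=1)
```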
Soliton Gas: Theory, Numerics and Experiments
The concept of soliton gas was introduced in 1971 by V. Zakharov as an
infinite collection of weakly interacting solitons in the framework of the
Korteweg-de Vries (KdV) equation. In this theoretical construction of a dilute
soliton gas, solitons with random parameters are almost non-overlapping. More
recently, the concept has been extended to dense gases in which solitons
strongly and continuously interact. The notion of soliton gas is inherently
associated with integrable wave systems described by nonlinear partial
differential equations like the KdV equation or the one-dimensional nonlinear
Schr\"odinger equation that can be solved using the inverse scattering
transform. Over the last few years, the field of soliton gases has received
rapidly growing interest from both the theoretical and experimental points of
view. In particular, it has been realized that the soliton gas dynamics
underlies some fundamental nonlinear wave phenomena such as spontaneous
modulation instability and the formation of rogue waves. The recently
discovered deep connections of soliton gas theory with generalized
hydrodynamics have broadened the field and opened new fundamental questions
related to the soliton gas statistics and thermodynamics. We review the main
recent theoretical and experimental results in the field of soliton gas. The
key conceptual tools of the field, such as the inverse scattering transform,
the thermodynamic limit of finite-gap potentials and the Generalized Gibbs
Ensembles are introduced and various open questions and future challenges are
discussed.
Comment: 35 pages, 8 figures.
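For reference, the two integrable equations mentioned above in common normalizations (conventions vary across the literature), together with the one-soliton solution of KdV:

```latex
% KdV equation and its one-soliton solution of speed c > 0:
u_t + 6\,u\,u_x + u_{xxx} = 0,
\qquad
u(x,t) = \frac{c}{2}\,\mathrm{sech}^2\!\Big(\frac{\sqrt{c}}{2}\,(x - c t - x_0)\Big).
% Focusing one-dimensional nonlinear Schr\"odinger equation:
i\,\psi_t + \tfrac{1}{2}\,\psi_{xx} + |\psi|^2 \psi = 0.
```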
Chiral active fluids: Odd viscosity, active turbulence, and directed flows of hydrodynamic microrotors
While the number of publications on rotating active matter has rapidly increased in recent years, studies on purely hydrodynamically interacting rotors on the microscale are still rare, especially from the perspective of particle-based hydrodynamic simulations. The work presented here aims to fill this gap. By means of high-performance computer simulations, performed in a highly parallelised fashion on graphics processing units, the dynamics of ensembles of up to 70,000 rotating colloids, immersed in an explicit mesoscopic solvent consisting of up to 30 million fluid particles, are investigated. Some of the results presented in this thesis have been worked out in collaboration with experimentalists, such that the theoretical considerations developed here are supported by experiments, and vice versa. The studied system, modelled to capture the essential physics of the experimentally realisable system, consists of rotating magnetic colloidal particles, i.e., (micro-)rotors, rotating in sync with an externally applied magnetic field, where the rotors interact solely via hydrodynamic and steric interactions. Overall, the agreement between simulations and experiments is very good, showing that hydrodynamic interactions play a key role in this and related systems.
While even an isolated rotating colloid is driven out of equilibrium, only collections of two or more rotors have experimentally been shown to be able to convert the rotational energy input into translational dynamics in an orbitally rotating fashion. The rotating colloids inject circular flows into the fluid, such that detailed balance is broken, and it is not a priori known whether equilibrium properties of colloids can be extended to isolated rotating colloids. A joint theoretical and experimental analysis of isolated rotors, pairs, and small groups of hydrodynamically interacting rotors is given in chapter 2. While the translational dynamics of isolated rotors effectively resembles that of non-rotating colloids, the orbital rotation of pairs of rotors can be described with leading-order hydrodynamics, and a two-dimensional analogue of Faxén's law is derived.
In chapter 3, a homogeneously distributed ensemble of rotors (bulk) is studied as a realisation of a chiral active fluid, and it is explicitly shown computationally and experimentally that it carries odd viscosity. The mutual orbital translation of rotors and an increase of the effective solvent viscosity with rotor density lead to a non-monotonic behaviour of the average translational velocity. Meanwhile, the rotor suspension has a finite osmotic compressibility resulting from the long-ranged nature of hydrodynamic interactions, such that rotational and odd stresses are transmitted through the solvent also at small and intermediate rotor densities. Consequently, the density inhomogeneities predicted for chiral active fluids with odd viscosity can be found, and they allow for an explicit measurement of odd viscosity in simulations and experiments. At intermediate densities, the collective dynamics shows the emergence of multi-scale vortices and chaotic motion, which is identified as active turbulence with a self-similar power-law decay in the energy spectrum, showing that the energy injected on the rotor scale is transported to larger scales, similar to the inverse energy cascade of classical two-dimensional turbulence. While either odd viscosity or active turbulence has been reported in chiral active matter previously, the system studied here shows that the emergence of both simultaneously is possible, resulting from the osmotic compressibility and the hydrodynamic mediation of odd and active stresses. The collective dynamics of colloids rotating out of phase, i.e., where a constant torque instead of a constant angular velocity is applied, is shown to be qualitatively very similar. However, at smaller densities, local density inhomogeneities imply position-dependent angular velocities of the rotors, resulting from inter-rotor friction.
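For context, one common two-dimensional convention for how odd viscosity enters the stress tensor (conventions differ, and this is not necessarily the one used in the thesis):

```latex
% With strain rate v_{ij} = (\partial_i v_j + \partial_j v_i)/2 and the
% 2D Levi-Civita symbol \epsilon_{ij}, shear viscosity \eta and odd
% viscosity \eta_o contribute to the deviatoric stress as
\sigma_{ij} = 2\eta\, v_{ij}
            + \eta_o\,(\epsilon_{ik}\, v_{kj} + \epsilon_{jk}\, v_{ki}).
% The odd term is non-dissipative: \sigma^{\mathrm{odd}}_{ij} v_{ij} = 0.
```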
While the friction of a quasi-2D layer of active colloids with the substrate is often not easily modifiable in experiments, incorporating substrate friction into simulation models typically implies a considerable increase in computational effort. In chapter 4, a very efficient way of incorporating friction with a substrate into a two-dimensional multiparticle collision dynamics solvent is introduced, allowing for an explicit investigation of the influence of the substrate on the active dynamics. For the rotor fluid, it is explicitly shown that substrate friction results in a cutoff of the hydrodynamic interaction length, such that the maximum size of the formed vortices is controlled by the substrate friction, also resulting in a cutoff in the energy spectrum, because energy is taken out of the system at the respective length scale. These findings are in agreement with the experiments.
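A minimal sketch of a two-dimensional multiparticle collision dynamics (SRD) step with substrate friction added as a simple exponential velocity damping; this is an illustration under stated assumptions, not the thesis's implementation (grid shifting and thermostatting are omitted):

```python
# 2D MPCD/SRD step with a simple substrate-friction damping (sketch).
import numpy as np

def srd_step(pos, vel, box, cell=1.0, alpha=np.pi / 2, dt=0.1, gamma=0.5,
             rng=np.random.default_rng()):
    # streaming step in a periodic square box of side `box`
    pos = (pos + vel * dt) % box
    # substrate friction as exponential damping (one simple choice; an
    # assumption, not the thesis's scheme)
    vel = vel * np.exp(-gamma * dt)
    # collision step: rotate velocities relative to the cell mean by +/- alpha
    cells = (pos // cell).astype(int)
    keys = cells[:, 0] * int(box / cell) + cells[:, 1]
    for k in np.unique(keys):
        idx = np.where(keys == k)[0]
        u = vel[idx].mean(axis=0)            # cell-average velocity
        s = rng.choice([-1.0, 1.0])          # random rotation sense per cell
        c, sn = np.cos(alpha), s * np.sin(alpha)
        R = np.array([[c, -sn], [sn, c]])
        vel[idx] = u + (vel[idx] - u) @ R.T  # rotate velocity fluctuations
    return pos, vel
```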
Since active particles in confinement are known to organise into states of collective dynamics, ensembles of rotationally actuated colloids are studied in circular confinement and in the presence of periodic obstacle lattices in chapters 5 and 6, respectively. The results show that the chaotic active turbulent transport of rotors in suspension can be enhanced and guided by edge flows generated at the boundaries, as has recently been reported for a related chiral active system. The resulting collective rotor dynamics can be regarded as a superposition of active turbulent and imposed flows, leading to flows that are stationary on average. In contrast to the bulk dynamics, the imposed flows inject additional energy into the system on the long length scales, and the same scaling behaviour of the energy spectrum as in bulk is only obtained if the energy injection scales, set by the mutual generation of rotor translational dynamics throughout the system and by the edge flows, are well separated. The combination of edge flow and entropic layering at the boundaries leads to oscillating hydrodynamic stresses and consequently to an oscillating vorticity profile. In the presence of odd viscosity, this leads to non-trivial steady-state density modulations at the boundary, resulting from a balance of osmotic pressure and odd stresses.
Relevant for the efficient dispersion and mixing of inert particles on the mesoscale by means of active turbulent mixing powered by rotors, a study of the dynamics of a binary mixture consisting of rotors and passive particles is presented in chapter 7. Because the rotors are not self-propelled, but their translational dynamics is induced by the surrounding rotors, the passive particles, which do not inject further energy into the system, are transported according to the same mechanism as the rotors. The collective dynamics thus resembles the pure rotor bulk dynamics at the respective density of rotors alone. However, since no odd stresses act between the passive particles, only mutual rotor interactions produce odd stresses, leading to the accumulation of rotors in the regions of positive vorticity. This density increase is associated with a pressure increase, which balances the odd stresses acting on the rotors. The passive particles, however, are only subject to the accumulation-induced pressure increase, such that they are transported into the areas of low rotor concentration, i.e., the regions of negative vorticity. Under conditions of sustained vortex flow, this results in segregation of the two particle types.
Since local symmetry breaking can convert injected rotational energy into translational energy, microswimmers can be constructed out of rotor materials when a suitable breaking of symmetry is maintained in the vicinity of a rotor. One hypothetical realisation, a coupled rotor pair consisting of two rotors with opposite angular velocities held at a fixed distance, termed a birotor, is studied in chapter 8. The birotor pumps the fluid in one direction and consequently translates in the opposite direction, creating a flow field reminiscent of a source doublet, or sliplet, flow field. Fixed in space, the birotor might be an interesting realisation of a microfluidic pump. The translational dynamics of a birotor can be mapped onto the active Brownian particle model for single swimmers. However, due to the hydrodynamic interactions among the rotors, the birotor ensemble dynamics do not show the emergence of stable motility-induced clustering. The reason for this is the flow created by birotors in small aggregates, which effectively pushes further arriving birotors away; the aggregates are eventually all dispersed by thermal fluctuations.
Construction of radon chamber to expose active and passive detectors
In this research and development work, we present the design and manufacture of a radon chamber
(the PUCP radon chamber), a necessary tool for the calibration of passive detectors, for verifying
the operation of active radon monitors, and for the calibration of diffusion chambers used in
radon measurements in air and soils. The first chapter is an introduction describing radon
gas and the national radon concentration levels given by many organizations. Parameters that
influence the calibration factor of the LR 115 type 2 film detector are studied, such as the
energy window, critical angle, and effective volumes. These are strongly related to the etching
process and to track counting, and are treated from a semi-empirical approach in the second
chapter. The third chapter presents a review of some radon chambers that have been reported
in the literature, based on their size and mode of operation as well as the radon source they use.
The design and construction of the radon chamber are presented, and the use of uranium ore
(autunite) as the chamber source is also discussed. In the fourth chapter, the characterization
of the radon chamber is presented through its leakage constant (lambda), the homogeneity of the
radon concentration, its regimes and operation modes, and the saturation concentrations that can
be reached. The procedures and methodology used in this work are contained in the fifth chapter,
where some uses and applications of the PUCP radon chamber are also presented: the calibration
of cylindrical metallic diffusion chambers based on CR-39 chip detectors, taking the overlapping
effect into account; the determination of the transmission factors of gaps and pinholes for the
same diffusion chambers; and the permeability of a glass fiber filter to 222Rn, obtained after
equilibrium is reached, through the Ramachandran model and taking a partition function as the
ratio of track densities. The results of this research have been published in indexed journals.
Finally, the conclusions and recommendations that reflect the fulfillment of the aims of this
thesis are presented.
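For context, the standard growth-with-leakage kinetics behind leakage constants and saturation concentrations (textbook material, not results of this thesis):

```latex
% Radon concentration C(t) in a chamber of volume V fed by a source of
% emanation rate Q, with decay constant \lambda_{\mathrm{Rn}} and
% leakage rate \lambda_v:
\frac{dC}{dt} = \frac{Q}{V} - (\lambda_{\mathrm{Rn}} + \lambda_v)\, C,
\qquad
C(t) = C_{\mathrm{sat}}\Big(1 - e^{-(\lambda_{\mathrm{Rn}} + \lambda_v)\, t}\Big),
\qquad
C_{\mathrm{sat}} = \frac{Q}{V\,(\lambda_{\mathrm{Rn}} + \lambda_v)},
% where \lambda_{\mathrm{Rn}} = \ln 2 / T_{1/2}, with
% T_{1/2} \approx 3.82\ \text{days for } {}^{222}\mathrm{Rn}.
```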
A study of uncertainty quantification in overparametrized high-dimensional models
Uncertainty quantification is a central challenge in reliable and trustworthy
machine learning. Naive measures such as last-layer scores are well-known to
yield overconfident estimates in the context of overparametrized neural
networks. Several methods, ranging from temperature scaling to different
Bayesian treatments of neural networks, have been proposed to mitigate
overconfidence, most often supported by the numerical observation that they
yield better calibrated uncertainty measures. In this work, we provide a sharp
comparison between popular uncertainty measures for binary classification in a
mathematically tractable model for overparametrized neural networks: the random
features model. We discuss a trade-off between classification accuracy and
calibration, unveiling a double-descent-like behavior in the calibration curve
of optimally regularized estimators as a function of overparametrization. This
is in contrast with the empirical Bayes method, which we show to be well
calibrated in our setting despite the higher generalization error and
overparametrization.
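A minimal sketch of the expected calibration error, the standard metric behind statements like "better calibrated"; this is a generic definition, not the paper's specific measures:

```python
# Expected calibration error (ECE) for binary classification (sketch).
import numpy as np

def ece(confidence, correct, n_bins=15):
    """confidence: predicted probability of the predicted label, shape (N,);
    correct: 0/1 array, 1 where the prediction was right."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (confidence > lo) & (confidence <= hi)
        if m.any():
            # |bin accuracy minus bin confidence|, weighted by bin mass
            err += m.mean() * abs(correct[m].mean() - confidence[m].mean())
    return err
```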
Discovering the hidden structure of financial markets through Bayesian modelling
Understanding what is driving the price of a financial asset is a question that is currently mostly unanswered. In this work we go beyond the classic one step ahead prediction and instead construct models that create new information on the behaviour of these time series. Our aim is to get a better understanding of the hidden structures that drive the moves of each financial time series and thus the market as a whole.
We propose a tool to decompose multiple time series into economically-meaningful variables to explain the endogenous and exogenous factors driving their underlying variability. The methodology we introduce goes beyond the direct model forecast. Indeed, since our model continuously adapts its variables and coefficients, we can study the time series of coefficients and selected variables. We also present a model to construct the causal graph of relations between these time series and include them in the exogenous factors.
Hence, we obtain a model able to explain what is driving the moves of both each specific time series and the market as a whole. In addition, the obtained graph of the time series provides new information on the underlying risk structure of this environment. With this deeper understanding of the hidden structure, we propose novel ways to detect and forecast risks in the market. We investigate our results with inferences up to one month into the future using stocks, FX futures and ETF futures, demonstrating the model's superior performance in terms of accuracy on large moves, longer-term prediction and consistency over time. We also go into more detail on the economic interpretation of the new variables and discuss the created graph structure of the market.
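One generic way to realize continuously adapting coefficients is a dynamic linear model with random-walk coefficients, filtered with a Kalman recursion; a sketch under these assumptions, not the authors' model:

```python
# Dynamic linear regression via a Kalman filter (generic sketch).
import numpy as np

def dynamic_regression(X, y, q=1e-4, r=1.0):
    """X: (T, d) exogenous factors; y: (T,) target series.
    Returns the (T, d) paths of the time-varying coefficients."""
    T, d = X.shape
    theta = np.zeros(d)              # coefficient state
    P = np.eye(d)                    # state covariance
    paths = np.zeros((T, d))
    for t in range(T):
        P = P + q * np.eye(d)        # random-walk drift of coefficients
        x = X[t]
        S = x @ P @ x + r            # innovation variance
        K = P @ x / S                # Kalman gain
        theta = theta + K * (y[t] - x @ theta)
        P = P - np.outer(K, x) @ P
        paths[t] = theta
    return paths
```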
Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond
[EN] This thesis is framed at the intersection between modern Machine Learning techniques, such as Deep Neural Networks, and reliable probabilistic modeling. In many machine learning applications, we care not only about the prediction made by a model (e.g. this lung image presents cancer) but also about how confident the model is in making this prediction (e.g. this lung image presents cancer with 67% probability). In such applications, the model assists the decision-maker (in this case a doctor) in making the final decision. As a consequence, the probabilities provided by a model need to reflect the true proportions present in the set to which these probabilities have been assigned; otherwise, the model is useless in practice. When this happens, we say that a model is perfectly calibrated.
In this thesis, three ways to provide more calibrated models are explored. First, it is shown how to implicitly calibrate models that are decalibrated by data augmentation techniques. A cost function is introduced that resolves this decalibration, taking as its starting point ideas derived from decision making with Bayes' rule. Second, it is shown how to calibrate models using a post-calibration stage implemented with a Bayesian neural network. Finally, based on the limitations studied in the Bayesian neural network, which we hypothesize stem from a misspecified prior, a new stochastic process is introduced that serves as a prior distribution in a Bayesian inference problem.
Maroñas Molano, J. (2022). Modeling Uncertainty for Reliable Probabilistic Modeling in Deep Learning and Beyond [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181582
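For contrast with the Bayesian post-calibration stage described above, the simplest post-hoc calibrator is temperature scaling; a standard baseline, sketched here under the assumption of logits collected on a validation set:

```python
# Post-hoc temperature scaling (standard baseline calibrator; sketch).
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(logits, labels):
    """logits: (N, K) validation logits; labels: (N,) integer classes.
    Returns the temperature T minimizing the validation NLL."""
    def nll(T):
        z = logits / T
        # numerically stable log-softmax cross-entropy
        logp = z - np.logaddexp.reduce(z, axis=1, keepdims=True)
        return -logp[np.arange(len(labels)), labels].mean()
    res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded")
    return res.x  # divide test logits by this T before the softmax
```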