arXiv.org e-Print Archive

    Optimization Algorithms as Robust Feedback Controllers

    Mathematical optimization is one of the cornerstones of modern engineering research and practice. Yet, throughout all application domains, mathematical optimization is, for the most part, considered to be a numerical discipline. Optimization problems are formulated to be solved numerically with specific algorithms running on microprocessors. An emerging alternative is to view optimization algorithms as dynamical systems. Besides being insightful in itself, this perspective liberates optimization methods from specific numerical and algorithmic aspects and opens up new possibilities to endow complex real-world systems with sophisticated self-optimizing behavior. Towards this goal, it is necessary to understand how numerical optimization algorithms can be converted into feedback controllers to enable robust "closed-loop optimization". In this article, we focus on recent control designs under the name of "feedback-based optimization" which implement optimization algorithms directly in closed loop with physical systems. In addition to a brief overview of selected continuous-time dynamical systems for optimization, our particular emphasis in this survey lies on closed-loop stability as well as the robust enforcement of physical and operational constraints in closed-loop implementations. To bypass accessing partial model information of physical systems, we further elaborate on fully data-driven and model-free operations. We highlight an emerging application in autonomous reserve dispatch in power systems, where the theory has transitioned to practice by now. We also provide short expository reviews of pioneering applications in communication networks and electricity grids, as well as related research streams, including extremum seeking and pertinent methods from model predictive and process control, to facilitate high-level comparisons with the main topic of this survey.
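    The closed-loop idea in this abstract can be made concrete with a toy example. The sketch below is illustrative and not taken from the survey: a continuous-time gradient flow acts as a feedback controller that steers the input of a simple stable first-order plant toward the minimizer of a cost defined on the measured output. The plant model, cost function, and gains are all assumptions chosen for clarity.

```python
# Minimal sketch of feedback-based optimization (illustrative assumptions only):
# a gradient-flow "controller" drives the input u of a stable first-order plant
# so that the measured output y approaches the minimizer of Phi(y) = 0.5*(y - y_ref)^2.
import numpy as np

def simulate(T=2000, dt=1e-2, eta=0.5):
    A, B = -1.0, 1.0             # plant: x_dot = A*x + B*u, output y = x
    x, u = 0.0, 0.0
    y_ref = 3.0                  # minimizer of the assumed cost Phi
    for _ in range(T):
        y = x                    # the measurement replaces a model of the steady-state map
        grad = y - y_ref         # dPhi/dy evaluated at the measured output
        u -= dt * eta * grad     # gradient flow implemented as a feedback controller
        x += dt * (A * x + B * u)
    return u, x

u_star, y_star = simulate()
print(f"converged input {u_star:.3f}, output {y_star:.3f}")  # output approaches y_ref = 3
```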

    Equilibration of quantum many-body fast neutrino flavor oscillations

    Neutrino gases are expected to form in high density astrophysical environments, and accurately modeling their flavor evolution is critical to understanding such environments. In this work we study a simplified model of such a dense neutrino gas in the regime for which neutrino-neutrino coherent forward scattering is the dominant mechanism contributing to the flavor evolution. We show evidence that the generic potential induced by this effect is non-integrable and that the statistics of its energy level spacings are in good agreement with the Wigner surmise. We also find that individual neutrinos rapidly entangle with all of the others present, which results in an equilibration of the flavor content of individual neutrinos. We show that the average neutrino flavor content can be predicted utilizing a thermodynamic partition function. A random phase approximation to the evolution gives a simple picture of this equilibration. In the case of neutrinos and antineutrinos, processes like $\nu_e \bar{\nu}_e \leftrightarrows \nu_\mu \bar{\nu}_\mu$ yield a rapid equilibrium satisfying $n(\nu_e)\,n(\bar{\nu}_e) = n(\nu_\mu)\,n(\bar{\nu}_\mu) = n(\nu_\tau)\,n(\bar{\nu}_\tau)$ in addition to the standard lepton number conservation in regimes where off-diagonal vacuum oscillations are small compared to $\nu$-$\nu$ interactions. Comment: 16 pages, 8 figures, 1 appendix
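    Two textbook relations help fix ideas for readers unfamiliar with the quantities above; neither is taken from the paper. The first is the Wigner surmise for nearest-neighbor level spacings (GOE form), and the second is one hedged way to read the quoted density relation, assuming the pair process reaches chemical equilibrium with flavor-independent single-particle spectra.

```latex
% Wigner surmise for nearest-neighbor level-spacing statistics (GOE form):
P(s) \;=\; \frac{\pi s}{2}\, e^{-\pi s^{2}/4}.

% Chemical-equilibrium reading of the pair process
% \nu_e \bar{\nu}_e \leftrightarrows \nu_\mu \bar{\nu}_\mu,
% assuming occupations n(\nu_\alpha) \propto e^{\mu_\alpha/T} with flavor-independent spectra:
\mu_{\nu_e} + \mu_{\bar{\nu}_e} \;=\; \mu_{\nu_\mu} + \mu_{\bar{\nu}_\mu}
\quad\Longrightarrow\quad
n(\nu_e)\, n(\bar{\nu}_e) \;=\; n(\nu_\mu)\, n(\bar{\nu}_\mu).
```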

    Lagged coherence: explicit and testable definition

    Measures of association between cortical regions based on activity signals provide useful information for studying brain functional connectivity. Difficulties occur with signals of electric neuronal activity, where an observed signal is a mixture, i.e. an instantaneous weighted average of the true, unobserved signals from all regions, due to volume conduction and low spatial resolution. This is why measures of lagged association are of interest, since at least theoretically, "lagged association" is of physiological origin. In contrast, the actual physiological instantaneous zero-lag association is masked and confounded by the mixing artifact. A minimum requirement for a measure of lagged association is that it must not tend to zero with an increase of strength of true instantaneous physiological association. Measures that violate this requirement are biased: they cannot tell apart whether a change in their value is due to a change in lagged association or a change in instantaneous association. An explicit testable definition for frequency domain lagged connectivity between two multivariate time series is proposed. It is endowed with two important properties: it is invariant to non-singular linear transformations of each vector time series separately, and it is invariant to instantaneous association. As a first sanity check: in the case of two univariate time series, the new definition leads back to the bivariate lagged coherence of 2007 (eqs 25 and 26 in https://doi.org/10.48550/arXiv.0706.1776). As a second stronger sanity check: in the case of a univariate and a multivariate vector time series, the new measure presented here leads back to the original multivariate lagged coherence of 2007 (eq 31 in https://doi.org/10.48550/arXiv.0711.1455), which again trivially includes the bivariate case. Comment: - (2023-11-24): First original version #1. - (2023-11-27): Second version #2: Added subsection "8. Lagged association of a univariate time series with a multivariate vector time series". - (2024-01-07): Third version #3: Current version. Eq. 44 now correct without "logarithm".
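    To illustrate the kind of quantity being generalized, here is a hedged numpy/scipy sketch of the bivariate lagged coherence in its commonly quoted form, Im(S_xy)^2 / (S_xx S_yy - Re(S_xy)^2); the exact normalization should be checked against eqs. 25-26 of the cited 2007 reference, and the Welch parameters below are arbitrary.

```python
# Hedged sketch of bivariate lagged coherence from cross-spectral estimates.
import numpy as np
from scipy.signal import csd, welch

def lagged_coherence(x, y, fs=256.0, nperseg=512):
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, sxx = welch(x, fs=fs, nperseg=nperseg)
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    num = np.imag(sxy) ** 2                    # only the lagged (imaginary) part contributes
    den = sxx * syy - np.real(sxy) ** 2        # instantaneous part removed from the denominator
    return f, num / den

# Toy check: a purely instantaneous (zero-lag) mixture should give near-zero lagged coherence.
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
x = s + 0.1 * rng.standard_normal(4096)
y = 0.8 * s + 0.1 * rng.standard_normal(4096)
f, lc = lagged_coherence(x, y)
print(float(np.median(lc)))                    # small values expected
```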

    Correct and Compositional Hardware Generators

    Hardware generators help designers explore families of concrete designs and their efficiency trade-offs. Both parameterized hardware description languages (HDLs) and higher-level programming models, however, can obstruct composability. Different concrete designs in a family can have dramatically different timing behavior, and high-level hardware generators rarely expose a consistent HDL-level interface. Composition, therefore, is typically only feasible at the level of individual instances: the user generates concrete designs and then composes them, sacrificing the ability to parameterize the combined design. We design Parafil, a system for correctly composing hardware generators. Parafil builds on Filament, an HDL with strong compile-time guarantees, and lifts those guarantees to generators to prove that all possible instantiations are free of timing bugs. Parafil can integrate with external hardware generators via a novel system of output parameters and a framework for invoking generator tools. We conduct experiments with two other generators, FloPoCo and Google's XLS, and we implement a parameterized FFT generator to show that Parafil ensures correct design space exploration. Comment: 13 pages

    Supervision by Denoising for Medical Image Segmentation

    Learning-based image reconstruction models, such as those based on the U-Net, require a large set of labeled images if good generalization is to be guaranteed. In some imaging domains, however, labeled data with pixel- or voxel-level label accuracy are scarce due to the cost of acquiring them. This problem is exacerbated further in domains like medical imaging, where there is no single ground truth label, resulting in large amounts of repeat variability in the labels. Therefore, training reconstruction networks to generalize better by learning from both labeled and unlabeled examples (called semi-supervised learning) is a problem of practical and theoretical interest. However, traditional semi-supervised learning methods for image reconstruction often necessitate handcrafting a differentiable regularizer specific to some given imaging problem, which can be extremely time-consuming. In this work, we propose "supervision by denoising" (SUD), a framework that enables us to supervise reconstruction models using their own denoised output as soft labels. SUD unifies stochastic averaging and spatial denoising techniques under a spatio-temporal denoising framework and alternates denoising and model weight update steps in an optimization framework for semi-supervision. As example applications, we apply SUD to two problems arising from biomedical imaging -- anatomical brain reconstruction (3D) and cortical parcellation (2D) -- to demonstrate a significant improvement in the image reconstructions over supervised-only and stochastic averaging baselines. Comment: To appear in the IEEE Transactions on Pattern Analysis and Machine Intelligence
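    The alternation described above can be sketched in a few lines of PyTorch; this is a rough outline under stated assumptions, not the paper's code. The EMA coefficient, the average-pooling stand-in for the spatial denoiser, and the loss weighting are placeholders.

```python
# Sketch of a supervision-by-denoising style training step (assumptions, not the paper's code):
# the model's own predictions on unlabeled images are temporally averaged (EMA) and
# spatially smoothed, then reused as soft labels.
import torch
import torch.nn.functional as F

def spatial_denoise(pred, kernel_size=5):
    # crude spatial denoiser: average pooling over (N, C, H, W) predictions
    return F.avg_pool2d(pred, kernel_size, stride=1, padding=kernel_size // 2)

def sud_step(model, opt, x_lab, y_lab, x_unlab, ema_pred, alpha=0.9, lam=0.1):
    opt.zero_grad()
    sup_loss = F.mse_loss(model(x_lab), y_lab)           # ordinary supervised term
    with torch.no_grad():
        raw = model(x_unlab)
        ema_pred = alpha * ema_pred + (1 - alpha) * raw  # temporal (stochastic) averaging
        soft = spatial_denoise(ema_pred)                 # spatial denoising -> soft labels
    unsup_loss = F.mse_loss(model(x_unlab), soft)        # supervise the model by its denoised output
    (sup_loss + lam * unsup_loss).backward()
    opt.step()
    return ema_pred                                      # carry the running average to the next step
```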

    The BHL-BCL crossover: from nonlinear to linear quantum amplification

    The black-hole laser (BHL) effect is the self-amplification of Hawking radiation between a pair of horizons which act as a resonant cavity. In a flowing atomic condensate, the BHL effect arises in a finite supersonic region, where Bogoliubov-Cherenkov-Landau (BCL) radiation is resonantly excited by any static perturbation. Thus, experimental attempts to produce a BHL unavoidably deal with the presence of a strong BCL background, making the observation of the BHL effect still a major challenge in the analogue gravity field. Here, we perform a theoretical study of the BHL-BCL crossover using an idealized model where both phenomena can be unambiguously isolated. By drawing an analogy with an unstable pendulum, we distinguish three main regimes according to the interplay between quantum fluctuations and classical stimulation: quantum BHL, classical BHL, and BCL. Based on quite general scaling arguments, the nonlinear amplification of quantum fluctuations up to saturation is identified as the most robust trait of a quantum BHL. A classical BHL behaves instead as a linear quantum amplifier, where the output is proportional to the input. The BCL regime also acts as a linear quantum amplifier, but its gain is exponentially smaller as compared to a classical BHL. Complementary signatures of black-hole lasing are a decrease in the amplification for increasing BCL amplitude or a nonmonotonic dependence of the growth rate with respect to the background parameters. We also identify interesting analogue phenomena such as Hawking-stimulated white-hole radiation or quantum BCL-stimulated Hawking radiation. The results of this work not only are of interest for analogue gravity, where they help to distinguish each phenomenon and to design experimental schemes for a clear observation of the BHL effect, but they also open the prospect of finding applications of analogue concepts in quantum technologies. Comment: 24 pages, 14 figures, 1 table. Accepted version of the manuscript

    Bayesian Optimization through Gaussian Cox Process Models for Spatio-temporal Data

    Bayesian optimization (BO) has established itself as a leading strategy for efficiently optimizing expensive-to-evaluate functions. Existing BO methods mostly rely on Gaussian process (GP) surrogate models and are not applicable to (doubly-stochastic) Gaussian Cox processes, where the observation process is modulated by a latent intensity function modeled as a GP. In this paper, we propose a novel maximum a posteriori inference of Gaussian Cox processes. It leverages the Laplace approximation and change of kernel technique to transform the problem into a new reproducing kernel Hilbert space, where it becomes more tractable computationally. It enables us to obtain both a functional posterior of the latent intensity function and the covariance of the posterior, thus extending existing works that often focus on specific link functions or estimating the posterior mean. Using the result, we propose a BO framework based on the Gaussian Cox process model and further develop a Nyström approximation for efficient computation. Extensive evaluations on various synthetic and real-world datasets demonstrate significant improvement over state-of-the-art inference solutions for Gaussian Cox processes, as well as effective BO with a wide range of acquisition functions designed through the underlying Gaussian Cox process model. Comment: 2024 International Conference on Learning Representations (ICLR)
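    The Nyström approximation mentioned in the last part of the abstract is a standard low-rank trick that can be shown in a few lines; the kernel, landmark count, and data below are toy assumptions, not the paper's setup.

```python
# Illustrative Nystrom approximation: a kernel matrix on n points is approximated
# from m << n landmark (inducing) points as K ~ K_nm K_mm^{-1} K_mn.
import numpy as np

def rbf(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 2))                        # toy spatio-temporal inputs
Z = X[rng.choice(500, size=50, replace=False)]        # landmark points

K = rbf(X, X)
Knm = rbf(X, Z)
Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))               # jitter for numerical stability
K_nystrom = Knm @ np.linalg.solve(Kmm, Knm.T)

print(np.linalg.norm(K - K_nystrom) / np.linalg.norm(K))  # small relative error expected
```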

    Atomic photoexcitation as a tool for probing purity of twisted light modes

    The twisted light modes used in modern atomic physics experiments can be contaminated by small admixtures of plane wave radiation. Although these admixtures hardly reveal themselves in the beam intensity profile, they may seriously affect the outcome of high precision spectroscopy measurements. In the present study we propose a method for diagnosing such a plane wave contamination, which is based on the analysis of the magnetic sublevel population of atoms or ions interacting with the "twisted + plane wave" radiation. In order to theoretically investigate the sublevel populations, we solve the Liouville-von Neumann equation for the time evolution of the atomic density matrix. The proposed method is illustrated for the electric dipole $5s\,{}^{2}\mathrm{S}_{1/2} - 5p\,{}^{2}\mathrm{P}_{3/2}$ transition in Rb induced by (linearly, radially, or azimuthally polarized) vortex light with just a small contamination. We find that even tiny admixtures of plane wave radiation can lead to remarkable variations in the populations of the ground-state magnetic sublevels. This opens up new opportunities for diagnostics of twisted light in atomic spectroscopy experiments. Comment: 12 pages, 11 figures
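    For readers who want to see what solving the Liouville-von Neumann equation involves, here is a minimal sketch for a driven two-level system with hbar = 1; it is a generic toy, not the Rb calculation from the abstract, and the detuning and Rabi frequency are assumed values.

```python
# Minimal sketch: propagate the Liouville-von Neumann equation d(rho)/dt = -i [H, rho]
# for a time-independent two-level Hamiltonian via rho(t) = U rho(0) U^dagger.
import numpy as np
from scipy.linalg import expm

delta, omega = 0.0, 1.0                           # assumed detuning and Rabi frequency
H = 0.5 * np.array([[-delta, omega],
                    [ omega, delta]], dtype=complex)
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # start in the lower state

for t in np.linspace(0.0, np.pi, 5):              # populations undergo Rabi flopping
    U = expm(-1j * H * t)
    rho_t = U @ rho0 @ U.conj().T
    print(f"t={t:.2f}  P_upper={rho_t[1, 1].real:.3f}")
```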

    A stabilizer free weak Galerkin method with implicit $\theta$-schemes for fourth order parabolic problems

    In this paper, we combine the stabilizer free weak Galerkin (SFWG) method and the implicit $\theta$-schemes in time for $\theta \in [\frac{1}{2}, 1]$ to solve the fourth-order parabolic problem. In particular, when $\theta = 1$, the fully discrete scheme is the first-order backward Euler scheme, and it is the second-order Crank-Nicolson scheme if $\theta = \frac{1}{2}$. Next, we analyze the well-posedness of the schemes and deduce the optimal convergence orders of the error in the $H^2$ and $L^2$ norms. Finally, numerical examples confirm the theoretical results.
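    The time discretization referred to above is the classical $\theta$-scheme, which can be illustrated on any semi-discrete linear system $u' = Au$: one solves $(I - \theta\,\Delta t\,A)\,u^{n+1} = (I + (1-\theta)\,\Delta t\,A)\,u^{n}$. The sketch below uses a toy 1-D Laplacian (a second-order heat problem), not the SFWG matrix for the fourth-order problem; step sizes and resolution are arbitrary.

```python
# Hedged illustration of the implicit theta-scheme on a toy semi-discrete heat equation.
import numpy as np

def theta_step(u, A, dt, theta):
    I = np.eye(len(u))
    lhs = I - theta * dt * A                 # implicit part
    rhs = (I + (1 - theta) * dt * A) @ u     # explicit part
    return np.linalg.solve(lhs, rhs)

n, dt = 50, 1e-3
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2          # 1-D Dirichlet Laplacian
u0 = np.sin(np.pi * np.linspace(0, 1, n + 2)[1:-1])         # interior values of sin(pi x)

for theta in (1.0, 0.5):                     # theta = 1: backward Euler; theta = 1/2: Crank-Nicolson
    u = u0.copy()
    for _ in range(100):
        u = theta_step(u, A, dt, theta)
    err = np.max(np.abs(u - np.exp(-np.pi**2 * 0.1) * u0))  # compare with exact heat decay at t = 0.1
    print(f"theta = {theta}: max error {err:.2e}")
```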

    TIFu: Tri-directional Implicit Function for High-Fidelity 3D Character Reconstruction

    Recent advances in implicit function-based approaches have shown promising results in 3D human reconstruction from a single RGB image. However, these methods are not sufficient to extend to more general cases, often generating dragged or disconnected body parts, particularly for animated characters. We argue that these limitations stem from the use of the existing point-level 3D shape representation, which lacks holistic 3D context understanding. Voxel-based reconstruction methods are better suited to capturing the entire 3D space at once; however, they are not practical for high-resolution reconstructions due to their excessive memory usage. To address these challenges, we introduce the Tri-directional Implicit Function (TIFu), a vector-level representation that increases global 3D consistency while significantly reducing memory usage compared to voxel representations. We also introduce a new algorithm for 3D reconstruction at an arbitrary resolution by aggregating vectors along three orthogonal axes, resolving the inherent problems of regressing fixed-dimensional vectors. Our approach achieves state-of-the-art performance on both our self-curated character dataset and the benchmark 3D human dataset. We provide both quantitative and qualitative analyses to support our findings.
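    The tri-directional aggregation can be pictured with a rough, assumption-laden sketch: three predictors (stubbed here with random fields, since the paper's networks are not reproduced) each output an occupancy vector along one axis for every pixel of the orthogonal plane, and the three estimates are re-indexed into a common volume and averaged. The array layouts and the 0.5 threshold are hypothetical choices for illustration.

```python
# Rough sketch of tri-directional vector aggregation into an occupancy volume.
import numpy as np

R = 32                                     # target resolution (arbitrary here)
rng = np.random.default_rng(0)

# Stub predictors: each "network" outputs a length-R occupancy vector along one axis.
pred_z = rng.random((R, R, R))             # indexed (x, y, z): vectors along z per (x, y) pixel
pred_y = rng.random((R, R, R))             # indexed (x, z, y): vectors along y per (x, z) pixel
pred_x = rng.random((R, R, R))             # indexed (y, z, x): vectors along x per (y, z) pixel

# Re-index each prediction into a common (x, y, z) volume and average the three views.
vol_z = pred_z
vol_y = np.transpose(pred_y, (0, 2, 1))    # (x, z, y) -> (x, y, z)
vol_x = np.transpose(pred_x, (2, 0, 1))    # (y, z, x) -> (x, y, z)
occupancy = (vol_z + vol_y + vol_x) / 3.0
surface_mask = occupancy > 0.5             # thresholded volume, e.g. before marching cubes
print(occupancy.shape, float(surface_mask.mean()))
```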
