3,364 research outputs found

    The Wiener maximum quadratic assignment problem

    We investigate a special case of the maximum quadratic assignment problem in which one matrix is a product matrix and the other is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial-time solution for the following problem from chemical graph theory: find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature. Comment: 11 pages, no figures
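    The Wiener index referred to above is just the sum of shortest-path distances over all unordered vertex pairs of a graph. A minimal sketch, with function and variable names chosen for illustration (not taken from the paper):

```python
from collections import deque

def wiener_index(adj):
    """Wiener index of a connected unweighted graph (e.g. a tree),
    given as an adjacency dict: sum of shortest-path distances
    over all unordered vertex pairs, computed by BFS from each vertex."""
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # each pair was counted from both endpoints

# Two trees on 4 vertices with different degree sequences:
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path: index 1+2+3+1+2+1 = 10
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # star: index 3*1 + 3*2 = 9
```

    Among all trees on a fixed vertex set, the path maximizes the Wiener index and the star minimizes it; the paper's result concerns the harder variant where the degree sequence is prescribed.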

    Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood, and numerical inverse extra-regularization, are derived and classified. The Bayesian methodology presented in this paper aims to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy, and inverse regularization techniques. The inverse techniques considered here are asymptotic regularization, the Jacobi, steepest-descent, Newton-Raphson, and Landweber-Fridman methods, and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribiere, and Hestenes-Stiefel conjugate gradients. The structures of the highest-performing algorithms to date are presented in an operator formulation that permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked on 1-, 2-, and 3-dimensional problems including structured white and Poissonian noise, data windowing, and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods will ultimately enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark-matter density field, the peculiar velocity field, and the power spectrum can be investigated jointly by a Gibbs-sampling process. Such a method can be applied to correct for the redshift distortions of the observed galaxies and for time-reversal reconstructions of the initial density field. Comment: 40 pages, 11 figures
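    As a toy illustration of one ingredient of such schemes, the sketch below solves the Wiener-filter normal equation (S^-1 + N^-1) s = N^-1 d with a hand-rolled Hestenes-Stiefel conjugate-gradient solver, assuming a trivial response operator and diagonal covariances; all matrix sizes and values are illustrative, not from the paper:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for a symmetric positive-definite matrix A
    by the (Hestenes-Stiefel) conjugate-gradient iteration."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 64
S = np.diag(1.0 / (1.0 + np.arange(n)))   # toy signal prior covariance
N = 0.1 * np.eye(n)                       # toy noise covariance
s_true = rng.multivariate_normal(np.zeros(n), S)
d = s_true + rng.multivariate_normal(np.zeros(n), N)   # data, response = identity

A = np.linalg.inv(S) + np.linalg.inv(N)   # Wiener-filter operator
b = np.linalg.inv(N) @ d
s_wf = conjugate_gradient(A, b)           # Wiener-filter reconstruction
```

    In realistic applications S and N are never built as dense matrices; as the abstract notes, the operator A is applied via fast Fourier transforms, which is exactly why matrix-free Krylov solvers like CG are attractive here.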

    Towards an Optimal Reconstruction of Baryon Oscillations

    The Baryon Acoustic Oscillations (BAO) in the large-scale structure of the universe leave a distinct peak in the two-point correlation function of the matter distribution. That acoustic peak is smeared and shifted by bulk flows and non-linear evolution. However, it has been shown that it is still possible to sharpen the peak and remove its shift by undoing the effects of the bulk flows. We propose an improvement to the standard acoustic-peak reconstruction. In contrast to the standard approach, the new scheme has no free parameters, treats the large-scale modes consistently, and uses optimal filters to extract the BAO information. At redshift zero, the reconstructed linear matter power spectrum leads to a markedly sharper reconstructed acoustic peak than standard reconstruction. Comment: 20 pages, 5 figures; footnote adde
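    The core step of the standard reconstruction referred to above is to estimate the bulk-flow displacement from the smoothed density field and move the tracers back. A one-dimensional sketch of that step (a linear-order Zel'dovich displacement; function name and parameters are illustrative):

```python
import numpy as np

def zeldovich_displacement(delta, box_size, smoothing):
    """1D sketch of the displacement step in standard BAO reconstruction:
    Gaussian-smooth the density contrast delta and return the linear-order
    Zel'dovich displacement field, psi_k = i W(k) delta_k / k, chosen so
    that delta = -d(psi)/dx to linear order."""
    n = delta.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    delta_k = np.fft.fft(delta)
    W = np.exp(-0.5 * (k * smoothing) ** 2)   # Gaussian smoothing window
    psi_k = np.zeros(n, dtype=complex)
    nz = k != 0                               # leave the k = 0 mode at zero
    psi_k[nz] = 1j * W[nz] * delta_k[nz] / k[nz]
    return np.fft.ifft(psi_k).real

# single-mode check: delta = cos(4x) should give psi = -sin(4x)/4 (no smoothing)
x = np.arange(256) * 2 * np.pi / 256
psi = zeldovich_displacement(np.cos(4 * x), 2 * np.pi, 0.0)
```

    The smoothing scale is precisely the kind of free parameter that the standard scheme requires and that the proposed optimal-filter approach dispenses with.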

    The Onsager--Machlup functional for data assimilation

    When taking model error into account in data assimilation, one needs to evaluate the prior distribution represented by the Onsager--Machlup functional. Through numerical experiments, this study clarifies how the prior distribution should be incorporated into cost functions for discrete-time estimation problems. Consistent with previous theoretical studies, the divergence of the drift term is essential in weak-constraint 4D-Var (w4D-Var), but it is not necessary in Markov chain Monte Carlo with the Euler scheme. Although the former property may cause difficulties when implementing w4D-Var in large systems, this paper proposes a new technique for estimating the divergence term and its derivative. Comment: Reprint from Nonlin. Processes Geophys. (ver. 5). 12 pages, 5 figures
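    For a scalar SDE dx = f(x) dt + sigma dW discretized by the Euler scheme, the discrete Onsager--Machlup cost adds a drift-divergence term (here just f'(x)) to the familiar least-squares misfit. A sketch, with all names and numbers illustrative rather than from the paper:

```python
def om_cost(traj, f, f_prime, sigma, dt, include_divergence=True):
    """Discrete Onsager-Machlup cost for a scalar SDE dx = f(x)dt + sigma dW
    under Euler discretization.  The f' (divergence) term is what
    distinguishes the w4D-Var cost from the plain Euler least-squares
    misfit used in MCMC."""
    cost = 0.0
    for k in range(len(traj) - 1):
        resid = traj[k + 1] - traj[k] - f(traj[k]) * dt
        cost += resid ** 2 / (2 * sigma ** 2 * dt)   # model-error misfit
        if include_divergence:
            cost += 0.5 * f_prime(traj[k]) * dt      # divergence correction
    return cost

# Ornstein-Uhlenbeck drift f(x) = -x, so f'(x) = -1 everywhere
f = lambda x: -x
fp = lambda x: -1.0
traj = [1.0, 0.9, 0.82, 0.75]
with_div = om_cost(traj, f, fp, sigma=0.5, dt=0.1)
without = om_cost(traj, f, fp, sigma=0.5, dt=0.1, include_divergence=False)
```

    For this constant-divergence drift the two costs differ only by a trajectory-independent offset; for nonlinear drifts the divergence term reshapes the cost surface, which is why its treatment matters in w4D-Var.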

    Boltzmann meets Nash: Energy-efficient routing in optical networks under uncertainty

    Motivated by the massive deployment of power-hungry data centers for service provisioning, we examine the problem of routing in optical networks with the aim of minimizing traffic-driven power consumption. To tackle this issue, routing must take into account energy efficiency as well as capacity considerations; moreover, in rapidly-varying network environments, this must be accomplished in a real-time, distributed manner that remains robust in the presence of random disturbances and noise. In view of this, we derive a pricing scheme whose Nash equilibria coincide with the network's socially optimum states, and we propose a distributed learning method based on the Boltzmann distribution of statistical mechanics. Using tools from stochastic calculus, we show that the resulting Boltzmann routing scheme exhibits remarkable convergence properties under uncertainty: specifically, the long-term average of the network's power consumption converges to within ε of its minimum value in time at most Õ(1/ε²), irrespective of the fluctuations' magnitude; additionally, if the network admits a strict, non-mixing optimum state, the algorithm converges to it - again, no matter the noise level. Our analysis is supplemented by extensive numerical simulations which show that Boltzmann routing can lead to a significant decrease in power consumption over basic, shortest-path routing schemes in realistic network conditions. Comment: 24 pages, 4 figures
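    The flavor of such a scheme can be conveyed by a toy exponential-weights loop: each route accumulates a score from noisy observed costs, and traffic is split according to the Boltzmann (Gibbs) distribution over scores. This is only a caricature of the paper's stochastic-approximation scheme; all names and parameters below are made up for illustration:

```python
import math
import random

def boltzmann_route_probs(scores):
    """Boltzmann/Gibbs distribution over candidate routes: routes with
    better (higher) scores get exponentially more traffic.  Subtracting
    the max score keeps the exponentials numerically stable."""
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    tot = sum(w)
    return [x / tot for x in w]

def boltzmann_routing(costs, noise=0.1, eta=0.1, steps=200, seed=0):
    """Toy learning loop: at each step, observe noisy per-route costs,
    decrement each route's score proportionally (eta is a learning rate),
    and re-derive the routing distribution."""
    rng = random.Random(seed)
    scores = [0.0] * len(costs)
    for _ in range(steps):
        noisy = [c + rng.gauss(0, noise) for c in costs]
        scores = [s - eta * c for s, c in zip(scores, noisy)]
    return boltzmann_route_probs(scores)

probs = boltzmann_routing([1.0, 2.0, 1.5])   # route 0 has the lowest cost
```

    Despite the injected noise, the distribution concentrates on the cheapest route, mirroring (in miniature) the noise-robust convergence the abstract describes.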

    Foreground separation methods for satellite observations of the cosmic microwave background

    A maximum entropy method (MEM) is presented for separating the emission due to different foreground components from simulated satellite observations of the cosmic microwave background radiation (CMBR). In particular, the method is applied to simulated observations by the proposed Planck Surveyor satellite. The simulations, performed by Bouchet and Gispert (1998), include emission from the CMBR, the kinetic and thermal Sunyaev-Zel'dovich (SZ) effects from galaxy clusters, as well as Galactic dust, free-free and synchrotron emission. We find that the MEM technique performs well and produces faithful reconstructions of the main input components. The method is also compared with traditional Wiener filtering and is shown to produce consistently better results, particularly in the recovery of the thermal SZ effect. Comment: 31 pages, 19 figures (bitmapped), accepted for publication in MNRAS
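    To fix ideas, the linear baseline that MEM is compared against can be sketched as generalized least-squares component separation: given multi-frequency maps d = A s + n with a known mixing matrix A, solve (A^T N^-1 A) s = A^T N^-1 d pixel by pixel. This is not the MEM itself (which adds an entropic prior), and the mixing-matrix values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
npix, nfreq = 500, 4
# assumed frequency scalings for two components (columns: CMB, "dust");
# the numbers are illustrative, not real Planck passband values
A = np.array([[1.0, 0.2],
              [1.0, 0.5],
              [1.0, 1.0],
              [1.0, 2.0]])
s_true = np.vstack([rng.normal(0.0, 1.0, npix),    # CMB-like component
                    rng.normal(0.0, 0.3, npix)])   # dust-like component
noise_std = 0.05
d = A @ s_true + rng.normal(0.0, noise_std, (nfreq, npix))

# generalized least-squares separation, solved for all pixels at once
Ninv = np.eye(nfreq) / noise_std**2
s_hat = np.linalg.solve(A.T @ Ninv @ A, A.T @ Ninv @ d)
```

    Wiener filtering adds a prior power spectrum for each component to this linear estimate; MEM replaces the Gaussian prior with an entropy term, which is what helps it with strongly non-Gaussian signals such as the thermal SZ effect.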