714 research outputs found

    A Hamiltonian Monte Carlo method for non-smooth energy sampling

    Get PDF
    Efficient sampling from high-dimensional distributions is a challenging issue encountered in many large data recovery problems. In this context, sampling using Hamiltonian dynamics is one of the recent techniques proposed to exploit the geometry of the target distribution. Such schemes have clearly been shown to be efficient for multidimensional sampling but are mainly adapted to distributions from the exponential family with smooth energy functions. In this paper, we address the problem of using Hamiltonian dynamics to sample from probability distributions having non-differentiable energy functions, such as those based on the ℓ1 norm. Such distributions are used intensively in sparse signal and image recovery applications. The technique studied in this paper uses a modified leapfrog transform involving a proximal step. The resulting nonsmooth Hamiltonian Monte Carlo method is tested and validated on a number of experiments. Results show its ability to sample accurately from various multivariate target distributions. The proposed technique is illustrated on synthetic examples and applied to an image denoising problem.
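    As a rough illustration of the idea, the sketch below combines a standard leapfrog integrator for a smooth energy term f with the proximal operator of the ℓ1 term (soft-thresholding). The function names and the exact placement of the proximal step are illustrative assumptions, not the paper's precise transform.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_leapfrog(x, p, grad_f, lam, eps, n_steps):
    """One proximal leapfrog trajectory for U(x) = f(x) + lam * ||x||_1.

    The smooth part f enters through standard leapfrog momentum kicks;
    the nonsmooth l1 part enters through its proximal map applied to
    the position update (illustrative placement, not the paper's exact
    transform).
    """
    for _ in range(n_steps):
        p = p - 0.5 * eps * grad_f(x)                # half kick (smooth part)
        x = soft_threshold(x + eps * p, eps * lam)   # drift + proximal step (l1 part)
        p = p - 0.5 * eps * grad_f(x)                # half kick
    return x, p

# example with a Gaussian smooth part f(x) = 0.5 * ||x||^2
x, p = prox_leapfrog(np.ones(5), np.zeros(5), lambda x: x, lam=1.0, eps=0.1, n_steps=20)
```

    A complete sampler would wrap this trajectory in a Metropolis accept-reject step using the total energy f(x) + λ‖x‖₁ + ‖p‖²/2.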

    Nearly deconfined spinon excitations in the square-lattice spin-1/2 Heisenberg antiferromagnet

    Get PDF
    We study the spin-excitation spectrum (dynamic structure factor) of the spin-1/2 square-lattice Heisenberg antiferromagnet and an extended model (the J−Q model) including four-spin interactions Q in addition to the Heisenberg exchange J. Using an improved method for stochastic analytic continuation of imaginary-time correlation functions computed with quantum Monte Carlo simulations, we can treat the sharp (δ-function) contribution to the structure factor expected from spin-wave (magnon) excitations, in addition to resolving a continuum above the magnon energy. Spectra for the Heisenberg model are in excellent agreement with recent neutron-scattering experiments on Cu(DCOO)2⋅4D2O, where a broad spectral-weight continuum at wave vector q=(π,0) was interpreted as deconfined spinons, i.e., fractional excitations carrying half of the spin of a magnon. Our results at (π,0) show a similar reduction of the magnon weight and a large continuum, while the continuum is much smaller at q=(π/2,π/2) (as also seen experimentally). We further investigate the reasons for the small magnon weight at (π,0) and the nature of the corresponding excitation by studying the evolution of the spectral functions in the J−Q model. Upon turning on the Q interaction, we observe a rapid reduction of the magnon weight to zero, well before the system undergoes a deconfined quantum phase transition into a nonmagnetic spontaneously dimerized state. Based on these results, we reinterpret the picture of deconfined spinons at (π,0) in the experiments as nearly deconfined spinons, a precursor to deconfined quantum criticality. To further elucidate the picture of a fragile (π,0)-magnon pole in the Heisenberg model and its depletion in the J−Q model, we introduce an effective model of the excitations in which a magnon can split into two spinons that do not separate but fluctuate in and out of the magnon space (in analogy to the resonance between a photon and a particle-hole pair in the exciton-polariton problem). The model can reproduce the reduction of magnon weight and lowered excitation energy at (π,0) in the Heisenberg model, as well as the energy maximum and smaller continuum at (π/2,π/2). It can also account for the rapid loss of the (π,0) magnon with increasing Q and the remarkable persistence of a large magnon pole at q=(π/2,π/2) even at the deconfined critical point. The fragility of the magnons close to (π,0) in the Heisenberg model suggests that various interactions that are likely important in many materials (e.g., longer-range pair exchange, ring exchange, and spin-phonon interactions) may also destroy these magnons and lead to even stronger spinon signatures than in Cu(DCOO)2⋅4D2O.
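    The inversion at the heart of the method can be illustrated by its forward direction: an imaginary-time correlator is a Laplace-type transform of the spectral function, so a sharp magnon pole plus a continuum produce only subtly different G(τ) data. The toy spectrum below (zero-temperature kernel, made-up parameters) is an assumption for illustration, not data from the paper.

```python
import numpy as np

# A delta-function magnon pole (approximated by a narrow Gaussian)
# plus a broad continuum above the pole energy; all parameters are
# invented for illustration.
omega = np.linspace(0.0, 5.0, 1000)
d_omega = omega[1] - omega[0]
pole = np.exp(-0.5 * ((omega - 1.0) / 0.01) ** 2)
continuum = np.where(omega > 1.0, np.exp(-(omega - 1.0)), 0.0)
S = 0.6 * pole / (pole.sum() * d_omega) + 0.4 * continuum / (continuum.sum() * d_omega)

# Forward map that stochastic analytic continuation inverts
# (zero-temperature kernel): G(tau) = integral of S(omega) e^{-tau*omega}
tau = np.linspace(0.0, 10.0, 200)
G = np.array([np.sum(S * np.exp(-t * omega)) * d_omega for t in tau])
```

    Stochastic analytic continuation samples candidate spectra consistent with noisy QMC data for G(τ); per the abstract, the improved method treats the magnon δ-function contribution explicitly rather than smearing it into the continuum.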

    Inverse problems in medical ultrasound images - applications to image deconvolution, segmentation and super-resolution

    Get PDF
    In the field of medical image analysis, ultrasound is a core imaging modality thanks to its real-time and easy-to-use nature and its non-ionizing, low-cost characteristics. Ultrasound imaging is used in numerous clinical applications, such as fetus monitoring, diagnosis of cardiac diseases, flow estimation, etc. Classical applications in ultrasound imaging involve tissue characterization, tissue motion estimation, and image quality enhancement (contrast, resolution, signal-to-noise ratio). However, one of the major problems with ultrasound images is the presence of noise in the form of a granular pattern called speckle. Speckle noise leads to relatively poor image quality compared with other medical imaging modalities, which limits the applications of medical ultrasound imaging. In order to better understand and analyze ultrasound images, several device-based techniques have been developed during the last 20 years. The objective of this PhD thesis is to propose new image processing methods that improve ultrasound image quality using post-processing techniques. First, we propose a Bayesian method for joint deconvolution and segmentation of ultrasound images based on their tight relationship. The problem is formulated as an inverse problem that is solved within a Bayesian framework. Due to the intractability of the posterior distribution associated with the proposed Bayesian model, we investigate a Markov chain Monte Carlo (MCMC) technique which generates samples distributed according to the posterior and uses these samples to build estimators of the ultrasound image. In a second step, we propose a fast single-image super-resolution framework using a new analytical solution to the ℓ2-ℓ2 problem (i.e., an ℓ2-norm regularized quadratic problem), which is applicable to both medical ultrasound and natural images. In a third step, blind deconvolution of ultrasound images is studied by considering two strategies: i) a Gaussian prior for the PSF within a Bayesian framework, and ii) an alternating optimization method.
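    For the ℓ2-ℓ2 step, the flavor of such an analytical solution can be sketched in the plain-deconvolution case: with a circulant (periodic) blur, the regularized least-squares problem diagonalizes in the Fourier domain. The decimation operator that makes the super-resolution case nontrivial, and which the thesis actually handles, is omitted here; names and boundary assumptions are illustrative.

```python
import numpy as np

def l2_l2_deconv(y, h, lam, xbar=None):
    """Closed-form minimizer of ||h * x - y||^2 + lam * ||x - xbar||^2
    under periodic boundary conditions (circulant blur), solved in the
    Fourier domain.  This is the plain-deconvolution special case; the
    super-resolution decimation operator is omitted in this sketch.
    """
    if xbar is None:
        xbar = np.zeros_like(y)
    H = np.fft.fft2(h, s=y.shape)    # transfer function of the blur kernel
    Y = np.fft.fft2(y)
    Xbar = np.fft.fft2(xbar)
    X = (np.conj(H) * Y + lam * Xbar) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# usage on a synthetic image with a box blur
y = np.random.default_rng(0).normal(size=(64, 64))
h = np.ones((5, 5)) / 25.0
x_hat = l2_l2_deconv(y, h, lam=1e-2)
```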

    Fast randomized iteration: diffusion Monte Carlo through the lens of numerical linear algebra

    Full text link
    We review the basic outline of the highly successful diffusion Monte Carlo technique commonly used in contexts ranging from electronic structure calculations to rare event simulation and data assimilation, and propose a new class of randomized iterative algorithms based on similar principles to address a variety of common tasks in numerical linear algebra. From the point of view of numerical linear algebra, the main novelty of the Fast Randomized Iteration schemes described in this article is that they require either linear or constant cost per iteration (and in total, under appropriate conditions) and are rather versatile: we will show how they apply to the solution of linear systems, eigenvalue problems, and matrix exponentiation, in dimensions far beyond the present limits of numerical linear algebra. While traditional iterative methods in numerical linear algebra were created in part to deal with instances where a matrix (of size O(n^2)) is too big to store, the algorithms that we propose are effective even in instances where the solution vector itself (of size O(n)) may be too big to store or manipulate. In fact, our work is motivated by recent DMC-based quantum Monte Carlo schemes that have been applied to matrices as large as 10^108 × 10^108. We provide basic convergence results, discuss the dependence of these results on the dimension of the system, and demonstrate dramatic cost savings on a range of test problems. Comment: 44 pages, 7 figures
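    A minimal sketch of the compression idea, assuming a simple unbiased sparsification rule (the paper analyzes more refined schemes): each iterate of a power iteration is randomly thinned to roughly m nonzero entries and reweighted so that the compression is unbiased in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(v, m):
    """Unbiased random sparsification: keep about m entries of v.

    Entries survive with probability proportional to |v_i| (capped at 1)
    and are reweighted so that E[compress(v, m)] = v.  Simplified version
    of the per-iteration compression used by fast randomized iteration.
    """
    p = np.minimum(1.0, m * np.abs(v) / np.abs(v).sum())
    keep = rng.random(v.shape) < p
    out = np.zeros_like(v)
    out[keep] = v[keep] / p[keep]
    return out

def fri_power_iteration(A, m, n_iter=200):
    """Estimate the dominant eigenvalue of A, storing only ~m nonzeros."""
    v = compress(rng.standard_normal(A.shape[0]), m)
    for _ in range(n_iter):
        v = compress(A @ v, m)
        v /= np.linalg.norm(v)
    return v @ (A @ v)   # Rayleigh quotient of the final iterate

A = rng.normal(size=(500, 500)); A = A @ A.T / 500   # symmetric test matrix
print(fri_power_iteration(A, m=50))
```

    With a sparse or matrix-free A, the per-iteration cost then scales with m rather than with the full dimension, which is the regime the article targets.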

    Asymptotically exact data augmentation : models and Monte Carlo sampling with applications to Bayesian inference

    Get PDF
    Numerous machine learning and signal/image processing tasks can be formulated as statistical inference problems. As an archetypal example, recommendation systems rely on the completion of a partially observed user/item matrix, which can be conducted via the joint estimation of latent factors and activation coefficients. More formally, the object to be inferred is usually defined as the solution of a variational or stochastic optimization problem. In particular, within a Bayesian framework, this solution is defined as the minimizer of a cost function referred to as the posterior loss. In the simple case when this function is chosen to be quadratic, the Bayesian estimator is known to be the posterior mean, which minimizes the mean square error and is defined as an integral with respect to the posterior distribution. In most real-world applicative contexts, computing such integrals is not straightforward. One alternative lies in making use of Monte Carlo integration, which consists in approximating any expectation with respect to the posterior distribution by an empirical average involving samples from the posterior. This so-called Monte Carlo integration requires efficient algorithmic schemes able to generate samples from a desired posterior distribution. A huge literature dedicated to random variable generation has proposed various Monte Carlo algorithms. For instance, Markov chain Monte Carlo (MCMC) methods, whose particular instances are the famous Gibbs sampler and the Metropolis-Hastings algorithm, define a wide class of algorithms which allow a Markov chain to be generated with the desired stationary distribution. Despite their seeming simplicity and genericity, conventional MCMC algorithms may be computationally inefficient for large-scale, distributed and/or highly structured problems. The main objective of this thesis is to introduce new models and related MCMC approaches to alleviate these issues. The intractability of the posterior distribution is tackled by proposing a class of approximate but asymptotically exact data augmentation (AXDA) models. Then, two Gibbs samplers targeting approximate posterior distributions based on the AXDA framework are proposed, and their benefits are illustrated on challenging signal processing, image processing, and machine learning problems. A detailed theoretical study of the convergence rates associated with one of these two Gibbs samplers is also conducted and reveals explicit dependencies on the dimension, the condition number of the negative log-posterior, and the prescribed precision. In this work, we also pay attention to the feasibility of the sampling steps involved in the proposed Gibbs samplers. Since one of these steps requires sampling from a possibly high-dimensional Gaussian distribution, we review and unify existing approaches by introducing a framework that stands as the stochastic counterpart of the celebrated proximal point algorithm. This strong connection between simulation and optimization is not isolated in this thesis. Indeed, we also show that the derived Gibbs samplers share tight links with quadratic penalty methods and that the AXDA framework yields a class of envelope functions related to the Moreau envelope.
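    The mechanics of an AXDA-style Gibbs sampler can be seen on a toy Gaussian model where the exact posterior is known. The prior is moved onto an auxiliary variable z coupled to x by a Gaussian of width ρ; as ρ → 0 the augmented model approaches the exact posterior. All parameters below are illustrative assumptions, not the thesis' applications.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: likelihood y ~ N(x, sigma^2 I), prior x ~ N(0, tau^2 I).
# AXDA splits the prior onto an auxiliary z coupled to x with width rho.
sigma, tau, rho = 1.0, 2.0, 0.1
y = rng.normal(size=100)

x = np.zeros_like(y)
z = np.zeros_like(y)
samples = []
for it in range(5000):
    # x | z: product of the likelihood and the x-z coupling (Gaussian)
    prec_x = 1.0 / sigma**2 + 1.0 / rho**2
    mean_x = (y / sigma**2 + z / rho**2) / prec_x
    x = mean_x + rng.normal(size=y.size) / np.sqrt(prec_x)
    # z | x: product of the prior and the x-z coupling (Gaussian)
    prec_z = 1.0 / tau**2 + 1.0 / rho**2
    mean_z = (x / rho**2) / prec_z
    z = mean_z + rng.normal(size=y.size) / np.sqrt(prec_z)
    if it > 1000:                     # discard burn-in
        samples.append(x.copy())

print(np.mean(samples, axis=0)[:3], (y * tau**2 / (sigma**2 + tau**2))[:3])
```

    For this toy model the exact posterior mean is y·τ²/(σ² + τ²), which the Gibbs averages approach for small ρ; the benefit of the splitting appears when the two conditionals are simpler or more parallelizable than sampling the original posterior directly.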

    Time-lapse seismic imaging and uncertainty quantification

    Get PDF
    Time-lapse (4D) seismic monitoring is to date the most commonly used technique for estimating changes of a reservoir under production. Full-Waveform Inversion (FWI) is a high-resolution technique that delivers Earth models by iteratively trying to match synthetic prestack seismic data with the observed data. Over the past decade the application of FWI to 4D data has been extensively studied, and a variety of strategies are currently available. However, 4D FWI still faces unsolved challenges. In addition, the standard outcome of a 4D FWI scheme is a single image, without any measurement of the associated uncertainty. These issues raise the following questions: (1) Can we go beyond the current FWI limitations and deliver more accurate 4D imaging? (2) How well do we know what we think we know? In this thesis, I take steps to answer both questions. I first compare the performance of three common 4D FWI approaches in the presence of model uncertainties. These results provide a preliminary understanding of the underlying uncertainty, but also highlight some of the limitations of pixel-by-pixel uncertainty quantification. I then introduce a hybrid inversion technique that I call Dual-Domain Waveform Inversion (DDWI), whose objective function joins traditional FWI with Image Domain Wavefield Tomography (IDWT). The new objective function combines diving-wave information in the data-domain FWI term with reflected-wave information in the image-domain IDWT term, resulting in more accurate 4D model reconstructions. Working with 4D data provides an ideal situation for testing and developing new algorithms: since surveys are repeated at the same location, the surrounding geology is well known, the results of interest are localized in small regions, and the repetition allows for better error analysis. Uncertainty quantification is very valuable for building knowledge but is not commonly done, owing to the computational challenge of exploring the range of all possible models that could fit the data. I exploit the structure of the 4D problem and propose the use of a focused modeling technique for a fast Metropolis-Hastings inversion. The proposed framework calculates time-lapse uncertainty quantification in a targeted way that is computationally feasible. Having the ground-truth 4D probability distributions, I propose a local 4D Hamiltonian Monte Carlo (HMC), a more advanced uncertainty quantification technique that can handle higher dimensionality while offering faster convergence.
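    Mechanically, the targeted Metropolis-Hastings idea reduces to a random-walk MH loop over a low-dimensional parameterization of the localized 4D change. The linear forward operator below is a stand-in assumption (the thesis uses focused wave-equation modeling); only the sampler mechanics are shown.

```python
import numpy as np

rng = np.random.default_rng(2)

n_params, n_data = 10, 50
G = rng.normal(size=(n_data, n_params))       # placeholder forward operator
m_true = np.zeros(n_params); m_true[3] = 0.5  # localized 4D perturbation
sigma = 0.05
d_obs = G @ m_true + sigma * rng.normal(size=n_data)

def log_post(m):
    # Gaussian data misfit plus a weak Gaussian prior on the perturbation
    return -0.5 * np.sum((G @ m - d_obs) ** 2) / sigma**2 - 0.5 * np.sum(m**2)

m = np.zeros(n_params)
lp = log_post(m)
chain = []
for it in range(20000):
    m_prop = m + 0.02 * rng.normal(size=n_params)   # random-walk proposal
    lp_prop = log_post(m_prop)
    if np.log(rng.random()) < lp_prop - lp:          # MH accept-reject
        m, lp = m_prop, lp_prop
    chain.append(m.copy())

chain = np.array(chain)[5000:]                 # discard burn-in
print(chain.mean(axis=0), chain.std(axis=0))   # posterior mean and spread
```

    The targeted aspect in the thesis comes from restricting both the model parameterization and the forward modeling to the small region where the 4D change occurs, which is what makes each log-posterior evaluation affordable.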

    Bayesian computation: a summary of the current state, and samples backwards and forwards

    Full text link

    Machine Learning applications in spectroscopy and dynamics

    Get PDF
    The discovery of useful molecules and new molecular phenomena is one of the cornerstones of human progress. Until the last two centuries, this process was largely driven by empirical evidence and serendipitous discovery. The understanding of physical phenomena at the macro level, driven by Newtonian mechanics, electromagnetism, and thermodynamics, and at the micro level, driven by quantum mechanics, has allowed for a more targeted approach to the discovery of new functional molecules for various applications. Despite these advances, the pace at which such molecules are discovered lags behind the rate of demand for green catalysts, sustainable materials, and effective medicines. A significant factor influencing this is the vastness of the chemical space of molecules. It has been estimated that this chemical space contains approximately 10^60 organic molecules (with a molecular weight less than 500, containing the atoms H, C, N, and S). This count is several orders of magnitude higher if larger molecules and extended structures are taken into account. Cataloging the properties of these molecules is not currently possible with our existing computational capabilities, but it is essential for finding better materials and more effective drugs. As a result, the search for methods that can help speed up the assessment of molecular properties and accelerate the discovery of new molecules is an issue of paramount importance in modern chemistry. Machine Learning (ML) algorithms for predicting chemical properties represent an important step in this direction. Not only are ML algorithms capable of learning accurate structure-property relationships, but they are also faster than experiments or quantum chemical simulations. Furthermore, some ML methods leverage the structure-property relationships learned from data to generate novel molecules with desired properties, providing a cost-efficient way to identify useful molecules for laboratory synthesis. The spectrum of a molecule is one such important molecular property, helping scientists identify different molecules without destroying them. Amongst the various techniques of spectroscopy, X-ray Absorption Spectroscopy (XAS) is a well-established technique that provides information about the structure and composition of various materials. The identification of materials using XAS, however, is not straightforward and requires a combination of experimental data and quantum-chemical calculations performed on large computing clusters. These computational evaluations are resource-intensive, and one often needs several such calculations to achieve successful molecular identification. Access to methods that accelerate the prediction of spectra through structure-property relationships can greatly enhance the ability to identify compounds synthesized in laboratories. Therefore, a major part of this dissertation is dedicated to employing and understanding ML methods that speed up the prediction of spectra by learning structure-property relationships from data. This work lays a foundation for future applications in which ML models can be used in experimental setups to identify molecules from spectra without human intervention, thereby helping accelerate the synthesis and identification of novel compounds. One downside of ML applications is the lack of model interpretability, which decreases the trust of end users.
    Investigations in this dissertation focus on devising a technique that helps humans understand why ML models make certain predictions, thereby helping build trust between the ML model and its end user. The creation of chemical data for ML applications itself usually requires quantum chemical calculations that involve solving the Schrödinger equation. The time-dependent Schrödinger equation (TDSE) helps us understand the behavior of quantum systems and allows for the calculation of time-dependent properties of molecules. The area of research concerned with techniques for solving the TDSE is termed quantum dynamics. Using numerical methods for solving this equation, researchers have modeled several quantum dynamical systems, which have improved our understanding of photocatalysis (reactions driven by light), surface phenomena such as chemisorption, and chemical reaction pathways. The second part of this dissertation focuses on using ML methods to solve the TDSE. The TDSE, a partial differential equation (PDE) in space and time, is one of many fundamental equations that help model the behavior of physical systems. Other notable PDEs that play an important role in physics and engineering are the Navier-Stokes equations for modeling fluids, the heat equation in thermodynamics, and the wave equation in acoustics. Numerical techniques for solving PDEs are based on the discretization of the coordinate space into finite elements. As the size and dimensionality of the grids increase, these methods become computationally expensive. As a result, solving PDEs such as the TDSE for large molecular systems is computationally demanding or even impossible. Advances in ML for solving PDEs aim at accelerating the solution of PDEs through a data-driven approach. In the second part of this thesis, ML models were trained on simulation data from quantum dynamical systems. Once trained, these models are capable of providing accurate descriptions of the behavior of systems that were not seen during training. A key advantage of such methods is their ability to generate novel simulations accurately and at high speed. As a proof of concept, the work in this dissertation shows how this speed can be exploited for downstream applications in quantum dynamics.
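    The structure-to-spectrum regression described above can be sketched with a self-contained kernel ridge model mapping fixed-size descriptors to discretized spectra. Descriptors and target spectra below are synthetic stand-ins; in practice the inputs would encode molecular geometry and composition and the targets would be computed XAS spectra.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: n_desc-dimensional descriptors per molecule,
# n_bins-point discretized "spectra" generated by an arbitrary map.
n_train, n_desc, n_bins = 200, 16, 64
X = rng.normal(size=(n_train, n_desc))   # molecular descriptors
W = rng.normal(size=(n_desc, n_bins))
Y = np.tanh(X @ W)                       # synthetic spectra

def rbf(A, B, gamma=0.05):
    """Gaussian (RBF) kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: alpha = (K + lam I)^{-1} Y
lam = 1e-3
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n_train), Y)

# predicted spectra for unseen "molecules"
X_new = rng.normal(size=(5, n_desc))
Y_pred = rbf(X_new, X) @ alpha
```

    Once trained on quantum-chemical reference data, a model of this kind can stand in for repeated spectrum calculations at a small fraction of their cost, which is the speed-up the dissertation exploits.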

    Long-term memory magnetic correlations in the Hubbard model: A dynamical mean-field theory analysis

    Full text link
    We investigate the onset of a non-decaying asymptotic behavior of temporal magnetic correlations in the Hubbard model in infinite dimensions. This long-term memory feature of dynamical spin correlations can be precisely quantified by computing the difference between the zero-frequency limit of the Kubo susceptibility and the corresponding static isothermal one. Here, we present a procedure for reliably evaluating this difference starting from imaginary-time-axis data, and apply it to the testbed case of the Mott-Hubbard metal-insulator transition (MIT). At low temperatures, we find long-term memory effects in the entire Mott regime, abruptly ending at the first-order MIT. This directly reflects the underlying local-moment physics and the associated degeneracy in the many-electron spectrum. At higher temperatures, a more gradual onset of non-decaying magnetic correlations occurs in the crossover regime, not too far from the Widom line emerging from the critical point of the MIT. Our work has relevant algorithmic implications for the analytical continuation of dynamical susceptibilities in strongly correlated regimes and offers a new perspective for unveiling fundamental properties of the many-particle spectrum of the problem under scrutiny. Comment: 36 pages, 14 figures, resubmitted to SciPost
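    A schematic version of the quantity being estimated: the isothermal susceptibility is the full integral of C(τ) over [0, β], while the zero-frequency Kubo susceptibility excludes the non-decaying constant part of C(τ), so their difference is β times the long-time plateau. The synthetic correlator and crude plateau fit below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

# Synthetic beta-periodic imaginary-time correlator with a non-decaying
# local-moment part c_inf on top of an exponentially decaying part.
beta = 20.0
tau = np.linspace(0.0, beta, 401)
c_inf = 0.3
C = c_inf + 0.7 * (np.exp(-tau) + np.exp(-(beta - tau)))

chi_iso = np.trapz(C, tau)                       # isothermal susceptibility
# estimate the plateau from the middle of the interval, tau ~ beta/2
plateau = C[np.abs(tau - beta / 2) < beta / 8].mean()
chi_kubo = chi_iso - beta * plateau              # Kubo zero-frequency limit

print(plateau, chi_iso, chi_kubo)                # plateau recovers c_inf
```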