
    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both the hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    String and Membrane Gaussian Processes

    In this paper we introduce a novel framework for exact nonparametric Bayesian inference on latent functions that is particularly suitable for Big Data tasks. Firstly, we introduce a class of stochastic processes we refer to as string Gaussian processes (string GPs), which are not to be mistaken for Gaussian processes operating on text. We construct string GPs so that their finite-dimensional marginals exhibit suitable local conditional independence structures, which allow for scalable, distributed, and flexible nonparametric Bayesian inference, without resorting to approximations, and while ensuring some mild global regularity constraints. Furthermore, string GP priors naturally cope with heterogeneous input data, and the gradient of the learned latent function is readily available for explanatory analysis. Secondly, we provide some theoretical results relating our approach to the standard GP paradigm. In particular, we prove that some string GPs are Gaussian processes, which provides a complementary global perspective on our framework. Finally, we derive a scalable and distributed MCMC scheme for supervised learning tasks under string GP priors. The proposed MCMC scheme has computational time complexity O(N) and memory requirement O(dN), where N is the data size and d the dimension of the input space. We illustrate the efficacy of the proposed approach on several synthetic and real-world datasets, including a dataset with 6 million input points and 8 attributes.
    Comment: To appear in the Journal of Machine Learning Research (JMLR), Volume 1
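For context, here is a minimal sketch of the standard exact GP regression that string GPs aim to scale past; the function names and the RBF kernel choice are our own illustration, not code from the paper. The Cholesky factorization below is what drives the usual O(N^3) time and O(N^2) memory costs that the paper's O(N)/O(dN) scheme avoids.

```python
import numpy as np

def rbf_kernel(xa, xb, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = xa[:, None] - xb[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """Exact GP posterior mean: O(N^3) time and O(N^2) memory in N = len(x_train)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    # This factorization is the bottleneck that scalable GP schemes try to avoid.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return rbf_kernel(x_test, x_train) @ alpha

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
print(gp_posterior_mean(x, y, np.array([0.25, 0.5, 0.75])))
```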

    Scaling Multidimensional Inference for Big Structured Data

    In information technology, big data is a collection of data sets so large and complex that it becomes difficult to process using traditional data processing applications [151]. In a world of increasing sensor modalities, cheaper storage, and more data-oriented questions, we are quickly passing the limits of tractable computation using traditional statistical analysis methods. Methods that often show great results on simple data have difficulty processing complicated multidimensional data. Accuracy alone can no longer justify unwarranted memory use and computational complexity. Improving the scaling properties of these methods for multidimensional data is the only way to keep them relevant. In this work we explore methods for improving the scaling properties of parametric and nonparametric models. Namely, we focus on the structure of the data to lower the complexity of a specific family of problems. The two types of structure considered in this work are distributed optimization with separable constraints (Chapters 2-3) and scaling Gaussian processes for multidimensional lattice input (Chapters 4-5). By improving the scaling of these methods, we can expand their use to a wide range of applications that were previously intractable and open the door to new research questions.
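As an illustration of the kind of structure the second half exploits, here is a hedged sketch of the classical Kronecker trick for GPs on lattice input, offered as context rather than code from the thesis (grid sizes, kernel, and names are our own): when the kernel is a product across dimensions, the covariance over an n1 x n2 grid factorizes as K = K1 ⊗ K2, and (K + noise·I)^{-1} y can be applied using only the small per-axis eigendecompositions.

```python
import numpy as np

def rbf(x, ls=0.3):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Per-axis grids: the lattice has N = 50 * 60 points, but we only ever
# factorize the 50 x 50 and 60 x 60 kernel matrices.
x1, x2 = np.linspace(0, 1, 50), np.linspace(0, 1, 60)
K1, K2 = rbf(x1), rbf(x2)
noise = 1e-2

# K = K1 kron K2 has eigenvectors Q1 kron Q2 and eigenvalues outer(e1, e2).
e1, Q1 = np.linalg.eigh(K1)
e2, Q2 = np.linalg.eigh(K2)

y = np.random.randn(50 * 60)        # observations on the lattice, flattened
Y = y.reshape(50, 60)

# alpha = (K + noise*I)^{-1} y without forming the 3000 x 3000 matrix:
T = Q1.T @ Y @ Q2                   # rotate into the joint eigenbasis
T = T / (np.outer(e1, e2) + noise)  # divide by eigenvalues of K + noise*I
alpha = (Q1 @ T @ Q2.T).ravel()
print(alpha.shape)
```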

    Entanglement & Correlations in exactly solvable models

    The phenomenon of entanglement is probably the most fundamental characteristic distinguishing the quantum from the classical world. It was one of the first aspects of quantum physics to be studied and discussed, and more than 75 years after the publication of the classic papers by Einstein, Podolsky and Rosen and by Schrödinger, interest in the properties of entanglement is still growing. The quantum nature of entanglement makes any intuitive description difficult, and it is better to consider directly what it implies. Entanglement means that the measurement of an observable of a subsystem may affect drastically and instantaneously the possible outcome of a measurement on another part of the system, no matter how far apart it is spatially. The weird and fascinating aspect is that the first measurement affects the second one with infinite speed. About 30 years after the concept of entanglement appeared, Bell published one of his most famous works, in which he showed that entanglement forbids an explanation of quantum randomness via hidden variables, unraveling the EPR paradox once and for all. Only 15 years later, when Hawking radiation was related to the entanglement entropy, was it realized that entanglement could provide unexpected information. Interest in understanding the properties of entangled states received an impressive boost with the advent of “quantum information” in the nineties. For quantum information, entanglement is a resource: quantum (non-local) correlations are fundamental for quantum teleportation and for enhancing the efficiency of quantum protocols. The progress made in quantum information on quantifying entanglement has found important applications in the study of extended quantum systems. In this context the entanglement entropy becomes an indicator of quantum phase transitions, and its behavior at different subsystem sizes and geometries uncovers universal quantities characterizing the critical points. In comparison with quantum correlation functions, the entanglement entropy measures the fundamental properties in the neighborhood of critical points in a “cleaner” way, e.g. the simple (linear) dependence of the entanglement entropy on the central charge in a conformal system.

    The first part of the thesis fits into this last line of research: the entanglement between a subsystem and the rest in the ground state of a 1D system is investigated. In particular, the dependence of the entanglement entropies on the geometry of the subsystem and on boundary conditions is discussed at length.

    The second part of the thesis is focused on non-equilibrium dynamics. The issue of equilibration of quantum systems was first posed in a seminal paper by von Neumann in 1929, but for a long time it remained only an academic problem. Indeed, in solid state physics there are many difficulties in designing experiments in which the system's parameters can be tuned. Moreover, the genuine quantum features of systems could not be preserved for long enough times, because of dissipation and decoherence. Consequently, research on quantum non-equilibrium problems languished. Only in the last decade has the many-body physics of ultracold atomic gases overcome these problems: these are highly tunable systems, weakly coupled to the environment, so that quantum coherence is preserved for long times. In fact, a unique feature of the many-body physics of cold atoms is the possibility to “simulate” quantum systems in which both the interactions and the external potentials can be modified dynamically. In addition, the experimental realization of low-dimensional structures has unveiled the role that dimensionality and conservation laws play in quantum non-equilibrium dynamics. These aspects were addressed recently in a fascinating experiment on the time evolution of non-equilibrium Bose gases in one dimension, interpreted as the quantum equivalent of Newton's cradle.

    One of the most important open problems is the characterization of a system that evolves from a non-equilibrium state prepared by suddenly tuning an external parameter. This is commonly called a quantum quench, and it is the simplest example of out-of-equilibrium dynamics. The time dependence of the various local observables could in principle be calculated from first principles, but in general this task is too hard even for the most powerful computers (incidentally, this is also the reason why quantum computers can be far more effective than classical ones). Insights can be obtained by exploiting the most advanced mathematical techniques for low-dimensional quantum systems to draw very general conclusions about quantum quenches. For example, if at very large times local observables become stationary (even though the entire system will never attain equilibrium), one can describe the system by an effective stationary state that can be obtained without solving the full non-equilibrium dynamics. This is an intriguing aspect of quantum quenches that led to vigorous research to clarify the role played by fundamental features of the system, first of all integrability, that is to say the existence of an infinite number of conservation laws. The common belief is that in non-integrable systems (i.e. with a finite number of conservation laws) the stationary state can be described by a single parameter: an effective temperature encoding the loss of information about non-local observables. Eventually the state at late times is to all intents and purposes equivalent to a thermal one at that temperature. This interesting picture opens the way to a quantum interpretation of thermalization as a local effective description in closed systems. When there are many (infinitely many) conserved quantities, as in integrable systems, the effective temperature is not sufficient to describe the system's features at late times. It is widely believed that the behavior of local observables can then be explained by generalizations of the celebrated Gibbs ensemble. In the thesis, this hypothesis has been tested and proved for the paradigmatic model of systems undergoing quantum phase transitions: the quantum Ising model. For quenches within the ordered phase of the Ising model, an analytic formula that describes the evolution of the equal-time two-point correlation function of the order parameter has been obtained.
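To make the quench protocol concrete, here is a toy exact-diagonalization sketch (our own illustration, not the thesis's analytic treatment): prepare the ground state of a small transverse-field Ising chain at one field value, suddenly change the field across the critical point, and follow a local observable in time.

```python
import numpy as np
from functools import reduce

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_at(op, site, n):
    """Embed a single-site operator at position `site` of an n-site chain."""
    return reduce(np.kron, [op if i == site else I2 for i in range(n)])

def tfim(n, g):
    """Open transverse-field Ising chain: H = -sum_i sz_i sz_{i+1} - g sum_i sx_i."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= op_at(sz, i, n) @ op_at(sz, i + 1, n)
    for i in range(n):
        H -= g * op_at(sx, i, n)
    return H

n = 8
g0, g1 = 0.5, 1.5                      # sudden quench across the critical point g = 1
_, v0 = np.linalg.eigh(tfim(n, g0))
psi0 = v0[:, 0]                        # pre-quench ground state
e1, v1 = np.linalg.eigh(tfim(n, g1))   # post-quench eigenbasis
mz = op_at(sz, n // 2, n)              # local order-parameter observable
for t in np.linspace(0.0, 5.0, 6):
    # |psi(t)> = exp(-i H1 t) |psi0>, evaluated in the eigenbasis of H1
    psit = v1 @ (np.exp(-1j * e1 * t) * (v1.T @ psi0))
    print(f"t = {t:.1f}  <sz> = {np.real(psit.conj() @ mz @ psit):+.4f}")
```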

    New Directions for Contact Integrators

    Contact integrators are a family of geometric numerical schemes which guarantee the conservation of the contact structure. In this work we review the construction of both the variational and Hamiltonian versions of these methods. We illustrate some of the advantages of geometric integration in the dissipative setting by focusing on models inspired by recent studies in celestial mechanics and cosmology.
    Comment: To appear as Chapter 24 in GSI 2021, Springer LNCS 1282
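For a flavor of structure-preserving integration with dissipation, the sketch below implements a conformal-symplectic splitting for linear damping, a close relative of the contact integrators reviewed in the chapter (an assumed illustration, not the authors' scheme): the dissipative flow is solved exactly and composed with a Störmer-Verlet step of the conservative part.

```python
import numpy as np

def contact_step(q, p, dt, grad_V, gamma):
    """One Strang-split step for q'' + gamma*q' + V'(q) = 0.

    The dissipative flow p -> p * exp(-gamma * dt) is integrated exactly,
    so the contraction of phase-space volume is reproduced without
    discretization error -- the kind of structural guarantee that
    contact/conformal-symplectic integrators provide.
    """
    p *= np.exp(-gamma * dt / 2)   # half step of the exact dissipative flow
    p -= dt / 2 * grad_V(q)        # Stormer-Verlet (leapfrog) step of the
    q += dt * p                    # conservative Hamiltonian part
    p -= dt / 2 * grad_V(q)
    p *= np.exp(-gamma * dt / 2)   # second dissipative half step
    return q, p

# Damped harmonic oscillator, V(q) = q^2 / 2: the decay rate is recovered.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = contact_step(q, p, 0.01, lambda x: x, gamma=0.1)
print(q, p)
```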

    Entanglement entropy in homogeneous, fermionic chains: some results and some conjectures

    The aim of this thesis is the study of the Rényi entanglement entropy in the stationary states of chains of spinless fermions described by a general quadratic, translationally invariant Hamiltonian with possible long-range couplings. Our investigation is based on the relation between the density matrix of the stationary states and the corresponding two-point correlation matrix. This property reduces the complexity of computing the entanglement entropy numerically and allows this quantity to be expressed in terms of the determinant of the resolvent of the correlation matrix. Since the chain is translationally invariant, the correlation matrix is a block Toeplitz matrix. In view of this fact, the philosophy we follow in this thesis is to exploit the asymptotic properties of this kind of determinant to investigate the Rényi entanglement entropy in the thermodynamic limit. An interesting aspect is that the known results on the asymptotic behavior of block Toeplitz determinants are not valid for some of the correlation matrices we consider. In an attempt to fill this gap, we obtain some original results on the asymptotic behavior of Toeplitz and block Toeplitz determinants. These new results, combined with those previously known, allow us to obtain analytically the dominant term in the expansion of the entanglement entropy, both for a single interval of contiguous sites of the chain and for subsystems formed by several disjoint intervals. In particular, we discover that long-range couplings give rise to new properties of the asymptotic behavior of the entropy, such as the appearance of a non-universal logarithmic term away from the critical points when the pairing terms decay following a power law, or a sublogarithmic growth when those couplings decay logarithmically. The study of the entanglement entropy through block Toeplitz determinants has also led us to discover a new symmetry of the entanglement entropy under Möbius transformations, which can be seen as transformations of the couplings of the theory. In particular, we find that for critical theories this symmetry presents an intriguing parallelism with conformal transformations in space-time.
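The correlation-matrix route described above is standard for fermionic Gaussian states and easy to sketch (a minimal illustration, not the thesis code): the Rényi entropy follows from the eigenvalues of the two-point correlation matrix restricted to the subsystem, shown here for the critical tight-binding chain at half filling, where CFT predicts the (1/3) log L growth of the von Neumann entropy.

```python
import numpy as np

def renyi_entropy(C_sub, alpha=2.0):
    """Rényi entropy of a fermionic Gaussian state from the restriction of
    the two-point correlation matrix C_ij = <c_i^dag c_j> to the subsystem."""
    nu = np.clip(np.linalg.eigvalsh(C_sub), 1e-12, 1 - 1e-12)
    if alpha == 1.0:  # von Neumann limit
        return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))
    return float(np.sum(np.log(nu**alpha + (1 - nu)**alpha)) / (1 - alpha))

# Ground state of the infinite tight-binding (XX) chain at half filling:
# C_ij = sin(pi (i - j) / 2) / (pi (i - j)), with C_ii = 1/2.
L = 100
d = np.arange(L)[:, None] - np.arange(L)[None, :]
with np.errstate(divide="ignore", invalid="ignore"):
    C = np.where(d == 0, 0.5, np.sin(np.pi * d / 2) / (np.pi * d))

print(renyi_entropy(C, alpha=1.0), renyi_entropy(C, alpha=2.0))
```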

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle the ill-posedness that is the crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. In spite of this steady development, image deblurring, especially the blind case, is still limited by complex application conditions that make the blur kernel hard to obtain and often spatially variant. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods and practical issues, as well as a discussion of promising future directions, are also presented.
    Comment: 53 pages, 17 figures
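As a concrete example of the simplest corner of this taxonomy (non-blind, spatially invariant), here is a hedged sketch of classical Wiener deconvolution, where a regularization constant tames the ill-posed spectral division; this is textbook material offered for orientation, not a method proposed in the review.

```python
import numpy as np

def wiener_deblur(blurred, kernel, balance=1e-2):
    """Non-blind, spatially invariant deblurring by Wiener filtering.

    Deconvolution in the Fourier domain; `balance` regularizes the
    ill-posed division where the kernel's spectrum is close to zero.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + balance)
    return np.real(np.fft.ifft2(X))

# Simulate: blur a random "image" with a box kernel plus noise, then restore.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
k = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
blurred += 0.01 * rng.standard_normal(img.shape)
restored = wiener_deblur(blurred, k)
print(np.mean((restored - img) ** 2))
```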

    Preconditioned fast solvers for large linear systems with specific sparse and/or Toeplitz-like structures and applications

    In this thesis, the design of the preconditioners we propose starts from applications instead of treating the problem in a completely general way. The reason is that not all types of linear systems can be addressed with the same tools. In this sense, the techniques for designing efficient iterative solvers depend mostly on properties inherited from the continuous problem that originated the discretized sequence of matrices. Classical examples are locality and isotropy in the PDE context, whose discrete counterparts are sparsity and constancy along the diagonals, respectively. Therefore, it is often important to take into account the properties of the originating continuous model to obtain better performance and to provide an accurate convergence analysis. We consider linear systems that arise in the solution of both linear and nonlinear partial differential equations of both integer and fractional type. For the latter case, an introduction to both the theory and the numerical treatment is given. All the algorithms and strategies presented in this thesis are developed with their parallel implementation in mind. In particular, we consider the processor-co-processor framework, in which the main part of the computation is performed on a Graphics Processing Unit (GPU) accelerator. In Part I we introduce our proposal for sparse approximate inverse preconditioners for the solution of both time-dependent Partial Differential Equations (PDEs), Chapter 3, and Fractional Differential Equations (FDEs) containing both classical and fractional terms, Chapter 5. More precisely, we propose a new technique for updating preconditioners for sequences of linear systems arising from PDEs and FDEs, which can also be used to compute matrix functions of large matrices via quadrature formulas in Chapter 4 and for the optimal control of FDEs in Chapter 6. Finally, in Part II, we consider structured preconditioners for quasi-Toeplitz systems. The focus is on the numerical treatment of discretized convection-diffusion equations in Chapter 7 and on the solution of FDEs with linear multistep formulas in boundary value form in Chapter 8.
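To illustrate the flavor of the Part II machinery, here is a minimal sketch (our own example, not from the thesis) of a Strang-type circulant preconditioner for a symmetric positive definite Toeplitz system: the circulant copies the central diagonals of T, is diagonalized by the FFT, and is applied inside conjugate gradients at O(n log n) cost per iteration.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, cg

n = 512
# SPD Toeplitz test matrix: a 1-D diffusion stencil with a small reaction
# shift, so both T and its circulant approximation are positive definite.
col = np.zeros(n)
col[0], col[1] = 2.5, -1.0
T = toeplitz(col)

# Strang preconditioner: wrap the central diagonals of T into a circulant C.
c = np.zeros(n)
c[: n // 2 + 1] = col[: n // 2 + 1]
c[n // 2 + 1 :] = col[1 : n // 2][::-1]
lam = np.fft.fft(c).real  # eigenvalues of C (real, since C is symmetric)

def apply_circulant_inverse(x):
    # Circulants are diagonalized by the FFT: C^{-1} x in O(n log n).
    return np.real(np.fft.ifft(np.fft.fft(x) / lam))

M = LinearOperator((n, n), matvec=apply_circulant_inverse)
b = np.ones(n)
x_pre, info = cg(T, b, M=M)  # preconditioned conjugate gradients
print(info, np.linalg.norm(T @ x_pre - b))
```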