
    Similar phenomena at different scales: Black Holes, the Sun, Gamma-ray Bursts, Supernovae, Galaxies and Galaxy Clusters

    Many similar phenomena occur in astrophysical systems with spatial and mass scales that differ by many orders of magnitude. For example, collimated outflows are produced by the Sun, proto-stellar systems, gamma-ray bursts, neutron star and black hole X-ray binaries, and supermassive black holes; various kinds of flares occur on the Sun, stellar coronae, X-ray binaries and active galactic nuclei; shocks and particle acceleration exist in supernova remnants, gamma-ray bursts, clusters of galaxies, etc. In this report I briefly summarize these phenomena and the possible physical mechanisms responsible for them. I emphasize the importance of using the Sun as an astrophysical laboratory in studying these physical processes, especially the roles magnetic fields play in them; it is quite likely that magnetic activity dominates the fundamental physical processes in all of these systems. As a case study, I show that X-ray lightcurves from solar flares, black hole binaries and gamma-ray bursts exhibit a common scaling law of non-linear dynamical properties over a dynamical range of several orders of magnitude in intensity, implying that many basic X-ray emission nodes or elements are inter-connected across multiple scales. A future solar X-ray instrument with high timing and imaging resolution, aimed at isolating and resolving the fundamental elements of solar X-ray lightcurves, may shed new light on the fundamental physical mechanisms that are common to astrophysical systems with vastly different mass and spatial scales. Using the Sun as an astrophysical laboratory, "Applied Solar Astrophysics" will deepen our understanding of many important astrophysical problems. Comment: 22 pages, 13 figures, invited discourse for the 26th IAU GA, Prague, Czech Republic, Aug. 2006, to be published in Vol. 14 IAU Highlights of Astronomy, Ed. K.A. van der Hucht. Revised slightly to match the final submitted version, after incorporating comments and suggestions from several colleagues. A full-resolution version is available on request from the author at [email protected]
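
    The abstract does not spell out how the scaling law is measured, but as a purely hypothetical illustration of reading a scaling index off an X-ray-like lightcurve, the sketch below builds a synthetic lightcurve from superposed flare pulses and fits a power-law slope to its power spectrum. Every name and parameter here is an assumption, not the author's method.

```python
# Hypothetical illustration: estimate a power-law scaling index from a synthetic
# X-ray-like lightcurve built from many superposed flare pulses, then fit the
# slope of its power spectral density in log-log space.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 4096, 1.0                      # samples and time step (arbitrary units)
t = np.arange(n) * dt

# Superpose exponential-decay "flares" with random onsets, heavy-tailed amplitudes.
lightcurve = np.zeros(n)
for _ in range(300):
    t0 = rng.uniform(0, n * dt)        # flare onset time
    amp = rng.pareto(1.8) + 1.0        # amplitudes spanning orders of magnitude
    tau = rng.uniform(5, 50)           # decay timescale
    mask = t >= t0
    lightcurve[mask] += amp * np.exp(-(t[mask] - t0) / tau)

# Power spectral density via FFT, then a power-law fit P(f) ~ f^(-alpha).
freqs = np.fft.rfftfreq(n, d=dt)[1:]   # drop the zero-frequency bin
psd = np.abs(np.fft.rfft(lightcurve - lightcurve.mean()))[1:] ** 2
slope, intercept = np.polyfit(np.log10(freqs), np.log10(psd), 1)
print(f"fitted scaling index alpha ≈ {-slope:.2f}")
```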

    A Survey on Bayesian Deep Learning

    A comprehensive artificial intelligence system needs not only to perceive the environment with different 'senses' (e.g., seeing and hearing) but also to infer the world's conditional (or even causal) relations and the corresponding uncertainty. The past decade has seen major advances in many perception tasks, such as visual object recognition and speech recognition, using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. In recent years, Bayesian deep learning has emerged as a unified probabilistic framework that tightly integrates deep learning and Bayesian models. In this general framework, the perception of text or images using deep learning can boost the performance of higher-level inference, and in turn, feedback from the inference process can enhance the perception of text or images. This survey provides a comprehensive introduction to Bayesian deep learning and reviews its recent applications to recommender systems, topic models, control, and other areas. We also discuss the relationship and differences between Bayesian deep learning and related topics such as the Bayesian treatment of neural networks. Comment: To appear in ACM Computing Surveys (CSUR) 202
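
    As a hypothetical, minimal sketch of the perception-plus-inference idea described in the abstract, the example below feeds a frozen stand-in for a deep feature extractor into an exact Bayesian linear model, so the top of the stack carries predictive uncertainty. It is far simpler than the models the survey covers; all names, priors, and sizes are assumptions.

```python
# Minimal sketch: "deep" features + Bayesian linear regression on top.
import numpy as np

rng = np.random.default_rng(1)
W_feat = rng.standard_normal((1, 16))           # frozen stand-in for a network's weights

def deep_features(x):
    # Stand-in for a pretrained network's last hidden layer.
    return np.tanh(x @ W_feat)

# Toy regression data.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

Phi = deep_features(X)                           # "perception" output
alpha, beta = 1.0, 100.0                         # prior precision, noise precision (assumed)

# Exact Gaussian posterior over last-layer weights (Bayesian linear regression).
S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
m = beta * S @ Phi.T @ y

# Predictive mean and uncertainty at a couple of test inputs.
for x_star in ([0.0], [6.0]):
    phi = deep_features(np.array([x_star]))[0]
    mean = m @ phi
    var = 1.0 / beta + phi @ S @ phi
    print(f"x = {x_star[0]:+.1f}: mean = {mean:+.3f}, std = {np.sqrt(var):.3f}")
```

    The design point is only that the Bayesian layer returns a full predictive distribution rather than a point estimate; richer Bayesian deep learning models push uncertainty through every layer rather than just the last one.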

    Deep Exponential Families

    We describe deep exponential families (DEFs), a class of latent variable models inspired by the hidden structures used in deep neural networks. DEFs capture a hierarchy of dependencies between latent variables and are easily generalized to many settings through exponential families. We perform inference using recent "black box" variational inference techniques. We then evaluate various DEFs on text and combine multiple DEFs into a model for pairwise recommendation data. In an extensive study, we show that going beyond one layer improves predictions for DEFs. We demonstrate that DEFs find interesting exploratory structure in large data sets and give better predictive performance than state-of-the-art models.
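
    A hedged sketch of the generative side of a two-layer DEF for word counts, loosely in the spirit of a gamma-Poisson construction: gamma-distributed latent layers chained through their means, with a Poisson observation layer. Layer sizes, hyperparameters, and link choices below are illustrative assumptions rather than the paper's exact specification.

```python
# Sampling a toy document from a two-layer gamma DEF with Poisson observations.
import numpy as np

rng = np.random.default_rng(2)
V, K1, K2 = 1000, 50, 10        # vocabulary size, layer-1 width, layer-2 width

# Nonnegative weights linking layers, as in count models.
W2 = rng.gamma(shape=0.3, scale=1.0, size=(K2, K1))   # layer 2 -> layer 1
W1 = rng.gamma(shape=0.1, scale=0.3, size=(K1, V))    # layer 1 -> observations

def sample_document(alpha=0.1):
    z2 = rng.gamma(shape=alpha, scale=1.0, size=K2)        # top latent layer (sparse)
    z1 = rng.gamma(shape=alpha, scale=(z2 @ W2) / alpha)    # mean coupled to the layer above
    counts = rng.poisson(z1 @ W1)                           # observed word counts
    return counts

doc = sample_document()
print("total words:", doc.sum(), "| nonzero vocabulary entries:", int((doc > 0).sum()))
```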

    Automatic Differentiation Variational Inference

    Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines it according to her analysis, and repeats. However, fitting complex models to large data sets is a bottleneck in this process. Deriving algorithms for new models can be both mathematically and computationally challenging, which makes it difficult to cycle efficiently through the steps. To this end, we develop automatic differentiation variational inference (ADVI). Using our method, the scientist provides only a probabilistic model and a dataset, nothing else. ADVI automatically derives an efficient variational inference algorithm, freeing the scientist to refine and explore many models. ADVI supports a broad class of models; no conjugacy assumptions are required. We study ADVI across ten different models and apply it to a dataset with millions of observations. ADVI is integrated into Stan, a probabilistic programming system, and is available for immediate use.
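
    The core ADVI recipe can be sketched in a few lines: map a constrained parameter to the real line, posit a Gaussian variational approximation there, and ascend a reparameterization-gradient estimate of the ELBO. The sketch below is not Stan's implementation; the model (exponential data with a Gamma prior on the rate), step sizes, and iteration counts are illustrative assumptions.

```python
# Minimal mean-field ADVI sketch for an exponential-rate model with a Gamma(a, b) prior.
import numpy as np

rng = np.random.default_rng(3)
data = rng.exponential(scale=1.0 / 2.5, size=200)   # synthetic data, true rate = 2.5
n, S = data.size, data.sum()
a, b = 2.0, 1.0                                     # Gamma prior hyperparameters

def dlogjoint_dzeta(zeta):
    # Derivative of log p(data, lambda) + log|d lambda / d zeta| with lambda = exp(zeta).
    return n + (a - 1.0) + 1.0 - (S + b) * np.exp(zeta)

m, omega = 0.0, 0.0        # variational mean and log-std of q(zeta) = N(m, exp(omega)^2)
lr, n_samples = 0.005, 10

for step in range(2000):
    eps = rng.standard_normal(n_samples)
    zeta = m + np.exp(omega) * eps                         # reparameterized samples
    g = dlogjoint_dzeta(zeta)
    grad_m = g.mean()                                      # ELBO gradient wrt m
    grad_omega = (g * eps * np.exp(omega)).mean() + 1.0    # + d(entropy)/d omega
    m += lr * grad_m
    omega += lr * grad_omega

print(f"ADVI rate estimate ≈ {np.exp(m):.2f} "
      f"(exact posterior mean ≈ {(a + n) / (b + S):.2f})")
```

    The appeal of the recipe is that only the log joint density (and, in practice, its automatic derivative) is needed; the same loop works for any model once its parameters are transformed to the unconstrained space.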

    Notes on the Riemann Hypothesis

    These notes were written from a series of lectures given in March 2010, first at the Universidad Complutense de Madrid and then in Barcelona, for the centennial anniversary of the Spanish Mathematical Society (RSME). Our aim is to give an introduction to the Riemann Hypothesis and a panoramic view of the world of zeta and L-functions. We first review Riemann's foundational article and discuss the mathematical background of the time and his possible motivations for making his famous conjecture. We then discuss some of the most relevant developments after Riemann that have contributed to a better understanding of the conjecture. Comment: 2 sections added, 55 pages, 6 figures.
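
    Not part of the notes, but as a small numerical companion to the conjecture they introduce, the snippet below checks that the first few nontrivial zeros of the zeta function lie on the critical line Re(s) = 1/2, using the mpmath library (an assumption about tooling, not something the notes prescribe).

```python
# Locate the first nontrivial zeros of zeta and verify they sit on Re(s) = 1/2.
from mpmath import mp, zetazero, zeta

mp.dps = 30                        # 30 significant digits of working precision

for k in range(1, 6):
    rho = zetazero(k)              # k-th nontrivial zero, ordered by height
    residual = abs(zeta(rho))      # should vanish up to the working precision
    print(f"zero {k}: {rho}   |zeta| = {float(residual):.1e}")
```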

    Uncertainty in Economic Growth and Inequality

    This work takes a step toward consilience: it starts with a deconstruction of the causality of uncertainty embedded in the fundamentals of growth and inequality, follows with a construction of aggregation laws that disclose an invariance principle across heterogeneous individuals, and ends with a reconstruction of metric models that yields deeper structural connections via U.S. GDP and income data.

    Rhythms of the nervous system: mathematical themes and variations

    The nervous system displays a variety of rhythms in both waking and sleep. These rhythms have been closely associated with different behavioral and cognitive states, but it is still unknown how the nervous system makes use of them to perform functionally important tasks. To address these questions, it is first useful to understand, in a mechanistic way, the origin of the rhythms, their interactions, the signals that create transitions among rhythms, and the ways in which rhythms filter signals to a network of neurons. This talk discusses how dynamical systems have been used to investigate the origin, properties and interactions of rhythms in the nervous system. It focuses on how the underlying physiology of the cells and synapses of a network shapes its dynamics in different contexts, allowing a variety of dynamical behaviors to be displayed by the same network. The work is presented through a series of related case studies on different rhythms, chosen to highlight mathematical issues and to suggest further mathematical work. The topics include: the different roles of excitation and inhibition in creating synchronous assemblies of cells, different kinds of building blocks for neural oscillations, and transitions among rhythms. The mathematical issues include the reduction of large networks to low-dimensional maps, the role of noise, global bifurcations, and the use of probabilistic formulations. Comment: Published version.
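
    As a hedged, minimal companion to the synchronization and model-reduction themes above, the sketch below uses a reduced phase-oscillator (Kuramoto-type) description of two rhythmic units: with sufficient coupling they phase-lock despite different natural frequencies. This is only one of many reduced descriptions of neural rhythms, and all parameters are illustrative assumptions.

```python
# Two coupled phase oscillators: phase-locking occurs once K > |Δω| / 2.
import numpy as np

omega = np.array([1.0, 1.4])     # natural frequencies (rad per time unit)
K = 0.5                          # coupling strength; locking threshold here is 0.2
theta = np.array([0.0, 2.0])     # initial phases
dt, steps = 0.01, 20000

for _ in range(steps):
    dtheta = omega + K * np.sin(theta[::-1] - theta)   # each oscillator pulled toward the other
    theta = theta + dtheta * dt

diff = (theta[1] - theta[0] + np.pi) % (2 * np.pi) - np.pi
print(f"steady phase difference ≈ {diff:.3f} rad (constant when phase-locked)")
```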