
    The Cardy-Verlinde Formula and Charged Topological AdS Black Holes

    We consider the brane universe in the bulk background of the charged topological AdS black holes. The evolution of the brane universe is described by the Friedmann equations for a flat or an open FRW universe containing radiation and stiff matter. We find that the temperature and entropy of the dual CFT are simply expressed in terms of the Hubble parameter and its time derivative, and that the Friedmann equations coincide with the thermodynamic formulas of the dual CFT at the moment the brane crosses the black hole horizon. We obtain the generalized Cardy-Verlinde formula for the CFT with an R-charge, for any value of the curvature parameter k in the Friedmann equations. Comment: 10 pages, LaTeX, references added
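For context, the original (uncharged) Cardy-Verlinde formula relates the entropy of a CFT to its total energy $E$ and Casimir energy $E_C$; the generalization obtained in this paper modifies the energies by the R-charge contribution. A sketch of the uncharged form, with $R$ the radius of the spatial sphere and $n$ the spatial dimension of the CFT:

```latex
S \;=\; \frac{2\pi R}{n}\,\sqrt{E_C\left(2E - E_C\right)}
```

The charged generalization replaces $E$ by the energy shifted by the chemical-potential term conjugate to the R-charge; the precise form is given in the paper.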

    Thermodynamic Geometry and Critical Behavior of Black Holes

    Based on the observation that there exists an analogy between the Reissner-Nordstr\"om-anti-de Sitter (RN-AdS) black holes and the van der Waals-Maxwell liquid-gas system, in which the correspondence of variables is $(\phi, q) \leftrightarrow (V, P)$, we study the Ruppeiner geometry, defined as the Hessian matrix of the black hole entropy with respect to the internal energy (not the mass) of the black hole and the electric potential (angular velocity), for the RN, Kerr and RN-AdS black holes. It is found that the geometry is curved and that the scalar curvature goes to negative infinity at the Davies phase transition point for the RN and Kerr black holes. Our result for the RN-AdS black holes is also in good agreement with the literature on the phase transition and its critical behavior. Comment: RevTeX, 18 pages including 4 figures
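The Ruppeiner geometry referred to here is the Hessian of the entropy with respect to the chosen fluctuation variables. Schematically, with $X^i$ running over the internal energy $U$ and the potential-type variable ($\phi$ or the angular velocity):

```latex
g^{R}_{ij} \;=\; -\,\frac{\partial^2 S(X)}{\partial X^i\,\partial X^j},
\qquad X = (U, \phi)
```

The scalar curvature of this metric is the diagnostic quantity whose divergence the abstract associates with the Davies phase transition point.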

    Improved Compressive Sensing Of Natural Scenes Using Localized Random Sampling

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the optimal parameter choice for localized random sampling is stable across diverse natural images and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.
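The sampling scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the exact distance-dependent inclusion probability is a free choice, and a Gaussian falloff with scale `radius` is assumed here.

```python
import numpy as np

def localized_random_mask(shape, n_centers, radius, rng=None):
    """Build a localized random sampling mask: repeatedly pick a random
    center pixel, then include each nearby pixel with a probability that
    decays with its distance from the center (Gaussian falloff assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_centers):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        p = np.exp(-d2 / (2.0 * radius ** 2))  # inclusion probability per pixel
        mask |= rng.random(shape) < p          # sample pixels near the center
    return mask

mask = localized_random_mask((64, 64), n_centers=10, radius=3.0,
                             rng=np.random.default_rng(0))
```

The mask marks which pixels contribute to one set of localized measurements; uniformly-random sampling corresponds to the limit of many centers with no spatial correlation.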

    Efficient Image Processing Via Compressive Sensing Of Integrate-And-Fire Neuronal Network Dynamics

    Integrate-and-fire (I&F) neuronal networks are ubiquitous in diverse image processing applications, including image segmentation and visual perception. While conventional I&F network image processing requires the number of nodes composing the network to be equal to the number of image pixels driving the network, we determine whether I&F dynamics can accurately transmit image information when there are significantly fewer nodes than network input-signal components. Although compressive sensing (CS) theory facilitates the recovery of images using very few samples through linear signal processing, it does not address whether similar signal recovery techniques facilitate reconstructions through measurement of the nonlinear dynamics of an I&F network. In this paper, we present a new framework for recovering sparse inputs of nonlinear neuronal networks via compressive sensing. By recovering both one-dimensional inputs and two-dimensional images, resembling natural stimuli, we demonstrate that input information can be well-preserved through nonlinear I&F network dynamics even when the number of network-output measurements is significantly smaller than the number of input-signal components. This work suggests an important extension of CS theory potentially useful in improving the processing of medical or natural images through I&F network dynamics and understanding the transmission of stimulus information across the visual system.
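The measurement step can be illustrated with a toy model. This sketch is not the paper's network: it assumes each of m neurons (m much smaller than the input dimension) receives a random linear projection of the input as a constant drive, and uses the resulting firing rates as the nonlinear CS measurements; all dynamical parameters are illustrative.

```python
import numpy as np

def lif_measurements(signal, n_neurons, T=2000, dt=0.1, tau=10.0,
                     v_th=1.0, seed=0):
    """Compress `signal` through leaky integrate-and-fire dynamics:
    each neuron integrates a random projection of the input and its
    firing rate serves as one (nonlinear) measurement."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_neurons, signal.size)) / np.sqrt(signal.size)
    drive = A @ signal                   # constant input current per neuron
    v = np.zeros(n_neurons)
    spikes = np.zeros(n_neurons)
    for _ in range(T):
        v += dt * (-v / tau + drive)     # leaky integration
        fired = v >= v_th
        spikes += fired
        v[fired] = 0.0                   # reset after each spike
    return spikes / (T * dt)             # firing rates as measurements

rates = lif_measurements(np.abs(np.random.default_rng(1).standard_normal(256)),
                         n_neurons=32)
```

Recovering the input from `rates` would then require a CS decoder adapted to the rate nonlinearity, which is the problem the paper addresses.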

    Adaptive Thresholding for Sparse Covariance Matrix Estimation

    In this paper we consider estimation of sparse covariance matrices and propose a thresholding procedure which is adaptive to the variability of individual entries. The estimators are fully data driven and enjoy excellent performance both theoretically and numerically. It is shown that the estimators adaptively achieve the optimal rate of convergence over a large class of sparse covariance matrices under the spectral norm. In contrast, the commonly used universal thresholding estimators are shown to be sub-optimal over the same parameter spaces. Support recovery is also discussed. The adaptive thresholding estimators are easy to implement. Numerical performance of the estimators is studied using both simulated and real data. Simulation results show that the adaptive thresholding estimators uniformly outperform the universal thresholding estimators. The method is also illustrated in an analysis of a dataset from a small round blue-cell tumors microarray experiment. A supplement to this paper, which contains additional technical proofs, is available online. Comment: To appear in Journal of the American Statistical Association
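A minimal sketch of entry-adaptive thresholding, assuming the usual construction in which each covariance entry gets its own threshold proportional to an estimate of that entry's variability (the tuning constant `delta=2.0` and hard thresholding are illustrative choices, not prescriptions from the paper):

```python
import numpy as np

def adaptive_threshold_cov(X, delta=2.0):
    """Entry-adaptive hard thresholding of the sample covariance.
    X: (n, p) data matrix; each entry S_ij is kept only if it exceeds
    a threshold scaled by the estimated variability of that entry."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                               # sample covariance
    # variability of each entry: theta_ij = mean((x_i x_j - S_ij)^2)
    theta = np.einsum('ki,kj->ij', Xc**2, Xc**2) / n - S**2
    theta = np.maximum(theta, 0.0)                  # guard against round-off
    lam = delta * np.sqrt(theta * np.log(p) / n)    # entry-specific thresholds
    Sigma = np.where(np.abs(S) >= lam, S, 0.0)      # hard thresholding
    np.fill_diagonal(Sigma, np.diag(S))             # never threshold variances
    return Sigma

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
Sigma_hat = adaptive_threshold_cov(X)
```

Universal thresholding would replace the matrix `lam` by a single scalar threshold for all off-diagonal entries, which is what the abstract shows to be sub-optimal.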

    Uncertainty quantification for radio interferometric imaging: II. MAP estimation

    Uncertainty quantification is a critical missing component in radio interferometric imaging that will only become increasingly important as the big-data era of radio interferometry emerges. Statistical sampling approaches to perform Bayesian inference, like Markov Chain Monte Carlo (MCMC) sampling, can in principle recover the full posterior distribution of the image, from which uncertainties can then be quantified. However, for massive data sizes, like those anticipated from the Square Kilometre Array (SKA), it will be difficult if not impossible to apply any MCMC technique due to its inherent computational cost. We formulate Bayesian inference problems with sparsity-promoting priors (motivated by compressive sensing), for which we recover maximum a posteriori (MAP) point estimators of radio interferometric images by convex optimisation. Exploiting recent developments in the theory of probability concentration, we quantify uncertainties by post-processing the recovered MAP estimate. Three strategies to quantify uncertainties are developed: (i) highest posterior density credible regions; (ii) local credible intervals (cf. error bars) for individual pixels and superpixels; and (iii) hypothesis testing of image structure. These forms of uncertainty quantification provide rich information for analysing radio interferometric observations in a statistically robust manner. Our MAP-based methods are approximately $10^5$ times faster computationally than state-of-the-art MCMC methods and, in addition, support highly distributed and parallelised algorithmic structures. For the first time, our MAP-based techniques provide a means of quantifying uncertainties for radio interferometric imaging for realistic data volumes and practical use, and scale to the emerging big-data era of radio astronomy. Comment: 13 pages, 10 figures, see companion article in this arXiv listing
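The MAP problem with a sparsity-promoting prior has the generic form min_x 0.5||y - Ax||^2 + lam||x||_1. As a sketch only, here is a plain proximal-gradient (ISTA) solver for that objective; the paper's measurement operator, prior, and optimisation algorithm are considerably more sophisticated.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def map_l1(A, y, lam, n_iter=200):
    """ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1 (a generic sketch
    of MAP estimation under a Laplace-type sparsity prior)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term
        x = soft(x - grad / L, lam / L)      # gradient step + prox of the prior
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x0 = np.zeros(100); x0[:5] = 3.0             # sparse ground truth
y = A @ x0
x_map = map_l1(A, y, lam=0.1)
```

The credible-region strategies in the abstract are then obtained by post-processing `x_map` using concentration bounds on the posterior, rather than by sampling.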

    Atomistic Simulations of Flash Memory Materials Based on Chalcogenide Glasses

    In this chapter, by using ab-initio molecular dynamics, we introduce the latest simulation results on two materials for flash memory devices: Ge2Sb2Te5 and Ge-Se-Cu-Ag. This chapter is a review of our previous work, including some of our published figures and text in Cai et al. (2010) and Prasai & Drabold (2011), and also includes several new results. Comment: 24 pages, 20 figures. This is a chapter submitted for the book under the working title "Flash Memory" (to be published by Intech, ISBN 978-953-307-272-2)

    Taming computational complexity: efficient and parallel SimRank optimizations on undirected graphs

    SimRank has been considered one of the most promising link-based ranking algorithms for evaluating similarities of web documents in many modern search engines. In this paper, we investigate the optimization problem of SimRank similarity computation on undirected web graphs. We first present a novel algorithm to estimate the SimRank between vertices in O(n^3 + Kn^2) time, where n is the number of vertices and K is the number of iterations. In comparison, the most efficient implementation of the SimRank algorithm in [1] takes O(Kn^3) time in the worst case. To efficiently handle large-scale computations, we also propose a parallel implementation of the SimRank algorithm on multiple processors. The experimental evaluations on both synthetic and real-life data sets demonstrate the improved computation time and parallel efficiency of our proposed techniques.
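For reference, the O(Kn^3)-style baseline that the paper improves on is the naive iterative SimRank recurrence s(a,b) = C/(|N(a)||N(b)|) * sum over neighbor pairs of s(.,.), with s(a,a)=1. A small, direct sketch (decay constant C=0.8 is the conventional choice; this is not the paper's optimized algorithm):

```python
import numpy as np

def simrank(adj, C=0.8, K=10):
    """Naive iterative SimRank on an undirected graph.
    adj: boolean (n, n) adjacency matrix; returns the similarity matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    S = np.eye(n)                            # s(a, a) = 1 initially
    for _ in range(K):
        T = np.zeros_like(S)
        for a in range(n):
            for b in range(n):
                if a == b:
                    T[a, b] = 1.0
                elif deg[a] and deg[b]:
                    Na = np.where(adj[a])[0]
                    Nb = np.where(adj[b])[0]
                    # average similarity over all neighbor pairs, scaled by C
                    T[a, b] = C * S[np.ix_(Na, Nb)].sum() / (deg[a] * deg[b])
        S = T
    return S

# Path graph 0 - 1 - 2: vertices 0 and 2 share their only neighbor.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=bool)
S = simrank(adj)
```

On this path graph, s(0,2) converges to C because 0 and 2 have identical neighborhoods; the paper's contribution is computing such scores in O(n^3 + Kn^2) rather than per-iteration cubic time.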