
    Multiscale Astronomical Image Processing Based on Nonlinear Partial Differential Equations

    Astronomical applications of recent advances in the field of nonastronomical image processing are presented. Applied to multiscale astronomical images, these methods increase the signal-to-noise ratio without smearing point sources or extended diffuse structures, and are thus a highly useful preliminary step for tasks such as point-source detection, smoothing of clumpy data, and removal of contaminants from background maps. We show how the new methods, combined with other image-processing algorithms, unveil fine diffuse structures while at the same time enhancing the detection of localized objects, thus facilitating interactive morphology studies and paving the way for the automated recognition and classification of different features. We have also developed a new application framework for astronomical image processing that implements some recent advances made in computer vision and modern image processing, along with original algorithms based on nonlinear partial differential equations. The framework enables the user to interactively set up and customize an image-processing pipeline; it offers a variety of standard and novel visualization features and provides access to many astronomy data archives. Altogether, the results presented here demonstrate the first implementation of a novel synergistic approach based on the integration of image processing, image visualization, and image quality assessment.
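
    The abstract does not reproduce the paper's particular PDE schemes, but the classic example of this family is Perona-Malik anisotropic diffusion: an edge-stopping function suppresses diffusion across sharp gradients, so noise is smoothed while point sources and filament edges survive. Below is a minimal NumPy sketch; the function name perona_malik and the parameters kappa, dt, and n_iter are illustrative choices, not taken from the paper.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
    """Edge-preserving (Perona-Malik) diffusion sketch -- the standard
    nonlinear-PDE example, not the paper's specific scheme."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the four nearest neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance g(d) = exp(-(d/kappa)^2):
        # near zero across strong gradients, close to 1 in flat regions
        flux = sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
        u += dt * flux  # explicit Euler step (stable for dt <= 0.25)
    return u

# Usage: denoised = perona_malik(noisy_image, n_iter=30, kappa=0.05)
```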

    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of their complexity. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis, and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics, and machine learning, and the resulting solutions have been utilized by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques, and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex data sets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence, and computer science on the one side and astronomy on the other.
    Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".

    The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

    While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification are remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
    Comment: Accepted to CVPR 2018; code and data available at https://www.github.com/richzhang/PerceptualSimilarity
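
    The core recipe is easy to reproduce in a few lines: pass both images through a frozen ImageNet-trained network, unit-normalize the activations at a few layers, and average the squared differences. The sketch below uses torchvision's VGG16 with uniform layer weights; the released LPIPS metric additionally learns per-channel weights on the judgment dataset, which this sketch omits.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Frozen VGG16 trained on ImageNet classification.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

RELU_IDX = {3, 8, 15, 22, 29}  # relu1_2, relu2_2, relu3_3, relu4_3, relu5_3

def deep_feature_distance(x, y):
    """Perceptual distance between image batches (N, 3, H, W):
    unit-normalized deep features compared layer by layer,
    with uniform (unlearned) layer weights."""
    d = 0.0
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in RELU_IDX:
            nx, ny = F.normalize(x, dim=1), F.normalize(y, dim=1)
            d = d + (nx - ny).pow(2).mean()
    return d
```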

    Multiscale Dictionary Learning for Estimating Conditional Distributions

    Nonparametric estimation of the conditional distribution of a response given high-dimensional features is a challenging problem. It is important to allow not only the mean but also the variance and shape of the response density to change flexibly with the massive-dimensional features. We propose a multiscale dictionary learning model, which expresses the conditional response density as a convex combination of dictionary densities, with the densities used and their weights dependent on the path through a tree decomposition of the feature space. A fast graph partitioning algorithm is applied to obtain the tree decomposition, with Bayesian methods then used to adaptively prune and average over different sub-trees in a soft probabilistic manner. The algorithm scales efficiently to approximately one million features. State-of-the-art predictive performance is demonstrated on toy examples and two neuroscience applications involving up to a million features.
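
    As a rough illustration of the path idea only (not the paper's algorithm, which uses a fast graph partitioner and Bayesian sub-tree averaging), the toy sketch below recursively splits the feature space with k-means, fits a Gaussian to the responses in each node, and predicts with a convex combination of the node densities along the query's root-to-leaf path. The fixed mixing weight alpha is a stand-in for the adaptively learned weights.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import norm

class TreeDensity:
    """Toy multiscale sketch: partition features recursively, fit a
    Gaussian per node, mix node densities along the query's path."""

    def __init__(self, depth=4, min_size=20, alpha=0.5):
        self.depth, self.min_size, self.alpha = depth, min_size, alpha

    def fit(self, X, y, level=0):
        self.mu, self.sd = y.mean(), y.std() + 1e-6
        self.children = None
        if level < self.depth and len(y) >= 2 * self.min_size:
            self.km = KMeans(n_clusters=2, n_init=5).fit(X)
            self.children = []
            for k in range(2):
                m = self.km.labels_ == k
                child = TreeDensity(self.depth, self.min_size, self.alpha)
                self.children.append(child.fit(X[m], y[m], level + 1))
        return self

    def pdf(self, x, t):
        """Conditional density p(t | x), mixed along the tree path."""
        p = norm.pdf(t, self.mu, self.sd)
        if self.children is None:
            return p
        k = int(self.km.predict(x.reshape(1, -1))[0])
        return self.alpha * p + (1 - self.alpha) * self.children[k].pdf(x, t)
```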

    Discrete Geometric Structures in Homogenization and Inverse Homogenization with application to EIT

    We introduce a new geometric approach for the homogenization and inverse homogenization of the divergence form elliptic operator with rough conductivity coefficients σ(x) in dimension two. We show that conductivity coefficients are in one-to-one correspondence with divergence-free matrices and convex functions s(x) over the domain Ω. Although homogenization is a non-linear and non-injective operator when applied directly to conductivity coefficients, homogenization becomes a linear interpolation operator over triangulations of Ω when re-expressed using convex functions, and is a volume averaging operator when re-expressed with divergence-free matrices. Using optimal weighted Delaunay triangulations for linearly interpolating convex functions, we obtain an optimally robust homogenization algorithm for arbitrary rough coefficients. Next, we consider inverse homogenization and show how to decompose it into a linear ill-posed problem and a well-posed non-linear problem. We apply this new geometric approach to Electrical Impedance Tomography (EIT). It is known that the EIT problem admits at most one isotropic solution. If an isotropic solution exists, we show how to compute it from any conductivity having the same boundary Dirichlet-to-Neumann map. It is known that the EIT problem admits a unique (stable with respect to G-convergence) solution in the space of divergence-free matrices. As such we suggest that the space of convex functions is the natural space in which to parameterize solutions of the EIT problem.
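
    The interpolation step is easy to picture with standard tools. The sketch below linearly interpolates a convex function over a triangulation with SciPy; note that SciPy builds an ordinary Delaunay triangulation, whereas the paper's optimality result concerns weighted Delaunay triangulations, and the convex function s(x) = |x|^2 used here is just a stand-in example.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(200, 2))   # scattered points in the domain
s_vals = (pts ** 2).sum(axis=1)              # convex example s(x) = |x|^2

# Ordinary Delaunay triangulation (the paper uses a weighted variant);
# in the convex-function representation, homogenization then acts as
# piecewise-linear interpolation of s over the triangulation.
tri = Delaunay(pts)
s_hat = LinearNDInterpolator(tri, s_vals)

print(s_hat([[0.3, 0.7]]))                   # interpolated value of s
```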