
    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning and have been utilized by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex datasets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other. Comment: 24 pages, 8 figures, 1 table. Accepted for publication in "Advances in Astronomy", special issue "Robotic Astronomy".

    Fast, Scalable, and Interactive Software for Landau-de Gennes Numerical Modeling of Nematic Topological Defects

    Numerical modeling of nematic liquid crystals using the tensorial Landau-de Gennes (LdG) theory provides detailed insights into the structure and energetics of the enormous variety of possible topological defect configurations that may arise when the liquid crystal is in contact with colloidal inclusions or structured boundaries. However, these methods can be computationally expensive, making it challenging to predict (meta)stable configurations involving several colloidal particles, and they are often restricted to system sizes well below the experimental scale. Here we present an open-source software package that exploits the embarrassingly parallel structure of the lattice discretization of the LdG approach. Our implementation, combining CUDA/C++ and OpenMPI, allows users to accelerate simulations using both CPU and GPU resources in either single- or multiple-core configurations. We make use of an efficient minimization algorithm, the Fast Inertial Relaxation Engine (FIRE) method, that is well-suited to large-scale parallelization, requiring little additional memory or computational cost while offering performance competitive with other commonly used methods. In multi-core operation we are able to scale simulations up to supra-micron length scales of experimental relevance, and in single-core operation the simulation package includes a user-friendly GUI environment for rapid prototyping of interfacial features and the multifarious defect states they can promote. To demonstrate this software package, we examine in detail the competition between curvilinear disclinations and point-like hedgehog defects as size scale, material properties, and geometric features are varied. We also study the effects of an interface patterned with an array of topological point-defects. Comment: 16 pages, 6 figures, 1 YouTube link. The full catastrophe.
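
The FIRE minimizer mentioned in this abstract is simple enough to sketch. Below is a minimal, self-contained Python/NumPy illustration of the FIRE update rule (an MD step plus velocity mixing and an adaptive time step), applied to a toy quadratic energy rather than a Landau-de Gennes functional; the integrator here is semi-implicit Euler rather than velocity Verlet, and the parameter values are commonly quoted defaults, not taken from the paper's implementation.

```python
import numpy as np

def fire_minimize(grad, x0, dt=0.01, dt_max=0.1, n_min=5,
                  f_inc=1.1, f_dec=0.5, alpha0=0.1, f_alpha=0.99,
                  tol=1e-6, max_steps=10000):
    """Minimal FIRE sketch: x evolves under the force F = -grad(x)."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    alpha, n_pos = alpha0, 0
    for _ in range(max_steps):
        F = -grad(x)
        if np.linalg.norm(F) < tol:
            break
        P = np.dot(F, v)                      # power: force . velocity
        if P > 0:                             # still moving "downhill"
            n_pos += 1
            if n_pos > n_min:
                dt = min(dt * f_inc, dt_max)  # accelerate
                alpha *= f_alpha
        else:                                 # uphill: freeze and restart
            v[:] = 0.0
            dt *= f_dec
            alpha, n_pos = alpha0, 0
        # mix the velocity toward the force direction, then take an MD step
        if np.linalg.norm(v) > 0:
            v = (1 - alpha) * v + alpha * np.linalg.norm(v) * F / np.linalg.norm(F)
        v += dt * F                           # unit mass assumed
        x += dt * v
    return x

# toy example: quadratic bowl with minimum at (1, -2)
grad = lambda x: 2 * (x - np.array([1.0, -2.0]))
print(fire_minimize(grad, x0=[5.0, 5.0]))
```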

    3D differential phase contrast microscopy

    We demonstrate 3D phase and absorption recovery from partially coherent intensity images captured with a programmable LED array source. Images are captured through-focus with four different illumination patterns. Using the first Born and weak object approximations (WOA), a linear 3D differential phase contrast (DPC) model is derived. The partially coherent transfer functions relate the sample's complex refractive index distribution to intensity measurements at varying defocus. Volumetric reconstruction is achieved by a global FFT-based method, without an intermediate 2D phase retrieval step. Because the illumination is spatially partially coherent, the reconstruction achieves transverse resolution corresponding to twice the NA of coherent systems, along with improved axial resolution.
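
For readers unfamiliar with the FFT-based inversion mentioned above, the sketch below shows the generic Tikhonov-regularized least-squares deconvolution used with this kind of linear transfer-function model. It is a simplification and an assumption about the reconstruction step, not the paper's code: a single real-valued unknown is recovered from several measured stacks, with synthetic random transfer functions standing in for the actual DPC transfer functions that the paper derives from the illumination patterns.

```python
import numpy as np

def tikhonov_deconvolve(measurements, transfer_fns, reg=1e-3):
    """Recover one 3D volume X from several stacks y_i with y_i = H_i * X
    (convolution), via the closed-form Tikhonov/least-squares solution
        X_hat = IFFT[ sum_i conj(H_i) Y_i / (sum_i |H_i|^2 + reg) ].
    measurements: list of real 3D arrays; transfer_fns: matching list of
    transfer functions already given in Fourier space."""
    num = np.zeros_like(transfer_fns[0], dtype=complex)
    den = np.zeros(transfer_fns[0].shape)
    for y, H in zip(measurements, transfer_fns):
        Y = np.fft.fftn(y)
        num += np.conj(H) * Y
        den += np.abs(H) ** 2
    return np.fft.ifftn(num / (den + reg)).real

# toy demonstration with synthetic transfer functions (an assumption, not the
# actual DPC transfer functions from the paper)
shape = (32, 32, 32)
rng = np.random.default_rng(0)
truth = rng.standard_normal(shape)
Hs = [np.fft.fftn(rng.standard_normal(shape)) * 0.05 for _ in range(4)]
ys = [np.fft.ifftn(H * np.fft.fftn(truth)).real for H in Hs]
recon = tikhonov_deconvolve(ys, Hs, reg=1e-6)
print("relative error:", np.linalg.norm(recon - truth) / np.linalg.norm(truth))
```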

    Matrix product states and variational methods applied to critical quantum field theory

    We study the second-order quantum phase transition of massive real scalar field theory with a quartic interaction ($\phi^4$ theory) in (1+1) dimensions on an infinite spatial lattice using matrix product states (MPS). We introduce and apply a naive variational conjugate gradient method, based on the time-dependent variational principle (TDVP) for imaginary time, to obtain approximate ground states. A related ansatz for excitations is used to calculate the particle and soliton masses and to obtain the spectral density. We also estimate the central charge using finite-entanglement scaling. Our value for the critical parameter agrees well with recent Monte Carlo results, improving on an earlier study which used the related DMRG method, and verifying that these techniques are well-suited to studying critical field systems. We also obtain critical exponents that agree, as expected, with those of the transverse Ising model. Additionally, we treat the special case of uniform product states (mean field theory) separately, showing that they may be used to investigate non-critical quantum field theories under certain conditions. Comment: 24 pages, 21 figures, with a minor improvement to the QFT section.
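
As a small illustration of the finite-entanglement scaling mentioned above, the sketch below fits a central charge from (correlation length, entanglement entropy) pairs obtained at increasing MPS bond dimension, assuming the standard leading scaling form S ≈ (c/6) ln ξ + const for a half-infinite bipartition at criticality; the data here are synthetic placeholders, not results from the paper.

```python
import numpy as np

def central_charge(xis, entropies):
    """Fit S = (c/6) * ln(xi) + const and return the estimated c.
    xis: correlation lengths at increasing bond dimension;
    entropies: corresponding half-chain entanglement entropies."""
    slope, _ = np.polyfit(np.log(xis), entropies, 1)
    return 6.0 * slope

# synthetic placeholder data consistent with c = 0.5 (Ising universality class)
xis = np.array([10.0, 30.0, 90.0, 270.0])
entropies = 0.5 / 6.0 * np.log(xis) + 0.8
print(central_charge(xis, entropies))  # ~0.5
```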

    Convolutional Color Constancy

    Color constancy is the problem of inferring the color of the light that illuminated a scene, usually so that the illumination color can be removed. Because this problem is underconstrained, it is often solved by modeling the statistical regularities of the colors of natural objects and illumination. In contrast, in this paper we reformulate the problem of color constancy as a 2D spatial localization task in a log-chrominance space, thereby allowing us to apply techniques from object detection and structured prediction to the color constancy problem. By directly learning how to discriminate between correctly white-balanced images and poorly white-balanced images, our model is able to improve performance on standard benchmarks by nearly 40%.
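
The log-chrominance reformulation mentioned above is easy to illustrate. The sketch below builds the 2D log-chroma histogram of an image, the representation in which a global illuminant change acts as a simple translation, so that estimating the illuminant becomes a localization problem; the particular chroma definitions and histogram bounds are assumptions for illustration, and the learned convolutional filter of the actual method is omitted.

```python
import numpy as np

def log_chroma_histogram(rgb, n_bins=64, lo=-2.0, hi=2.0):
    """2D histogram of per-pixel log-chrominance (u, v) for an RGB image.

    With u = log(g / r) and v = log(g / b), dividing the image by a global
    illuminant (r_L, g_L, b_L) shifts every (u, v) by a constant, so white
    balancing corresponds to locating a 2D translation in this space."""
    eps = 1e-6
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    u = np.log(g / r).ravel()
    v = np.log(g / b).ravel()
    hist, _, _ = np.histogram2d(u, v, bins=n_bins, range=[[lo, hi], [lo, hi]])
    return hist / hist.sum()

# toy usage: the histogram of an image lit by a reddish illuminant is a
# shifted copy of the histogram of the same image under white light
rng = np.random.default_rng(1)
img = rng.uniform(0.1, 1.0, size=(64, 64, 3))
tinted = img * np.array([1.5, 1.0, 0.8])        # hypothetical illuminant
h_white = log_chroma_histogram(img)
h_tint = log_chroma_histogram(tinted)
print(h_white.shape, h_tint.shape)
```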

    Feature Lines for Illustrating Medical Surface Models: Mathematical Background and Survey

    This paper provides a tutorial and survey of a specific kind of illustrative visualization technique: feature lines. We examine different feature line methods. For this, we present the differential geometry behind these concepts and adapt it to discrete differential geometry. All discrete differential geometry terms are explained for triangulated surface meshes. These utilities serve as the basis for the feature line methods. We provide the reader with the knowledge needed to re-implement every feature line method. Furthermore, we summarize the methods and suggest a guideline for which feature line algorithm is best suited to which kind of surface. Our work is motivated by, but not restricted to, medical and biological surface models. Comment: 33 pages.
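
As a concrete (if much simpler) companion to the feature line methods surveyed, the sketch below flags feature edges of a triangle mesh by thresholding the dihedral angle between adjacent face normals; the mesh representation and threshold are assumptions for illustration, and this is a basic sharp-edge criterion rather than one of the specific differential-geometry-based algorithms from the survey.

```python
import numpy as np
from collections import defaultdict

def feature_edges(vertices, faces, angle_deg=40.0):
    """Return edges (i, j) whose two adjacent faces meet at a dihedral angle
    above the threshold. vertices: (V, 3) floats; faces: (F, 3) vertex indices."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    # per-face unit normals
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # map each undirected edge to the faces that share it
    edge_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(f):
        for i, j in ((a, b), (b, c), (c, a)):
            edge_faces[(min(i, j), max(i, j))].append(fi)
    out = []
    cos_thresh = np.cos(np.radians(angle_deg))
    for edge, fs in edge_faces.items():
        if len(fs) == 2:
            cos_angle = np.clip(np.dot(n[fs[0]], n[fs[1]]), -1.0, 1.0)
            if cos_angle < cos_thresh:   # normals differ by more than the threshold
                out.append(edge)
    return out

# toy usage: two triangles folded along a shared edge at 90 degrees
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 1, 2), (0, 3, 1)]
print(feature_edges(verts, tris))  # expect the shared edge (0, 1)
```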