20,663 research outputs found

    Unsolvability Cores in Classification Problems

    Classification problems were introduced by M. Ziegler as a generalization of promise problems. In this paper we are concerned with solvability and unsolvability questions with respect to a given set or language family, especially with cores of unsolvability. We generalize the results about unsolvability cores in promise problems to classification problems. Our main results are a characterization of unsolvability cores via cohesiveness and existence theorems for such cores in unsolvable classification problems. In contrast to promise problems, we have to strengthen the conditions to assert the existence of such cores: in general there exist unsolvable classification problems with more than two components that possess no cores, even if the set family under consideration satisfies the assumptions needed to prove the existence of cores in unsolvable promise problems. However, if one of the components is fixed, we can use the results on unsolvability cores in promise problems to assert the existence of such cores in general. In this case we speak of conditional classification problems and conditional cores. The existence of conditional cores can be related to complexity cores. Using this connection we can prove, for language families, that conditional cores with recursive components exist, provided that the family admits a uniform solution for the word problem.
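    As a rough sketch of the setting (notation mine, not necessarily the paper's): a promise problem is a pair of disjoint languages, and a Ziegler-style classification problem generalizes this to k >= 2 pairwise disjoint components, a solution being a function from the underlying family that labels every promised input with the index of its component.

        % Illustrative definitions only; the precise notions of core and
        % cohesiveness are those of the paper.
        \[
          \mathcal{A} = (A_1,\dots,A_k), \qquad A_i \cap A_j = \emptyset \quad (i \neq j),
        \]
        \[
          f \ \text{solves} \ \mathcal{A} \ \text{w.r.t. the family } \mathcal{F}
          \iff f \in \mathcal{F} \ \text{and} \ \forall i\ \forall x \in A_i:\ f(x) = i.
        \]
        % An unsolvability core is then an unsolvable subproblem that remains
        % unsolvable under the restrictions considered in the paper; the main
        % theorem characterizes such cores via cohesiveness.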

    An Optimized and Scalable Eigensolver for Sequences of Eigenvalue Problems

    In many scientific applications, the solution of non-linear differential equations is obtained through the set-up and solution of a number of successive eigenproblems. These eigenproblems can be regarded as a sequence whenever the solution of one problem fosters the initialization of the next. In addition, in some eigenproblem sequences there is a connection between the solutions of adjacent eigenproblems. Whenever it is possible to unravel the existence of such a connection, the eigenproblem sequence is said to be correlated. When facing a sequence of correlated eigenproblems, the current strategy amounts to solving each eigenproblem in isolation. We propose an alternative approach which exploits this correlation through the use of an eigensolver based on subspace iteration and accelerated with Chebyshev polynomials (ChFSI). The resulting eigensolver is optimized by minimizing the number of matrix-vector multiplications and parallelized using the Elemental library framework. Numerical results show that ChFSI achieves excellent scalability and is competitive with current dense linear algebra parallel eigensolvers. Comment: 23 pages, 6 figures. First revision of an invited submission to a special issue of Concurrency and Computation: Practice and Experience.
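    A minimal, sequential NumPy sketch of Chebyshev-filtered subspace iteration may help make the idea concrete; the function names, the dense test matrix, and the full eigendecomposition used for the spectral bound are illustrative assumptions, not the parallel ChFSI/Elemental implementation described above.

        import numpy as np

        def chebyshev_filter(A, V, degree, lam_lo, lam_hi):
            # Three-term Chebyshev recurrence for p(A) @ V, where p stays small on
            # the unwanted interval [lam_lo, lam_hi] and grows below it.
            e = (lam_hi - lam_lo) / 2.0   # half-width of the damped interval
            c = (lam_hi + lam_lo) / 2.0   # center of the damped interval
            Y_prev, Y = V, (A @ V - c * V) / e
            for _ in range(2, degree + 1):
                Y_prev, Y = Y, 2.0 * (A @ Y - c * Y) / e - Y_prev
            return Y

        def chfsi_like(A, V0, nev, degree=10, tol=1e-8, maxiter=50):
            # V0: n x m starting block with m > nev columns, e.g. the eigenvectors
            # of the previous eigenproblem in a correlated sequence.
            V = np.linalg.qr(V0)[0]
            lam_hi = np.linalg.eigvalsh(A)[-1]   # upper bound; estimated cheaply in practice
            for _ in range(maxiter):
                H = V.T @ A @ V                  # Rayleigh-Ritz projection
                theta, S = np.linalg.eigh(H)
                V = V @ S
                resid = np.linalg.norm(A @ V[:, :nev] - V[:, :nev] * theta[:nev], axis=0)
                if resid.max() < tol:
                    break
                # Damp the unwanted spectrum [theta[nev], lam_hi], then re-orthonormalize.
                V = np.linalg.qr(chebyshev_filter(A, V, degree, theta[nev], lam_hi))[0]
            return theta[:nev], V[:, :nev]

        # Example: lowest 5 eigenpairs of a 500 x 500 symmetric matrix.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((500, 500)); A = (A + A.T) / 2.0
        vals, vecs = chfsi_like(A, rng.standard_normal((500, 8)), nev=5)

    In a correlated sequence, the eigenvectors returned for one problem would be passed as V0 to the next call, which is the reuse of information the abstract describes.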

    Determining the Parameters of Massive Protostellar Clouds via Radiative Transfer Modeling

    A one-dimensional method for reconstructing the structure of prestellar and protostellar clouds is presented. The method is based on radiative transfer computations and a comparison of theoretical and observed intensity distributions at both millimeter and infrared wavelengths. The radiative transfer of dust emission is modeled for specified parameters of the density distribution, central star, and external background, and the theoretical distribution of the dust temperature inside the cloud is determined. The intensity distributions at millimeter and IR wavelengths are computed and quantitatively compared with observational data. The best-fit model parameters are determined using a genetic minimization algorithm, which also makes it possible to reveal the ranges of parameter degeneracy. The method is illustrated by modeling the structure of the two infrared dark clouds IRDC-320.27+029 (P2) and IRDC-321.73+005 (P2). The derived density and temperature distributions can be used to model the chemical structure and spectral maps in molecular lines. Comment: Accepted for publication in Astronomy Reports.
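    The fitting loop can be sketched in Python as follows; the forward model below is a toy placeholder for the actual 1-D radiative transfer computation, the data arrays are synthetic, and SciPy's differential evolution stands in for the paper's genetic minimization algorithm.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Observed radial intensity profiles (placeholders): one at a millimeter
        # wavelength and one in the infrared.
        r_obs = np.linspace(0.0, 1.0, 30)          # normalized projected radius
        I_mm_obs = np.exp(-3.0 * r_obs)            # stand-in for real data
        I_ir_obs = 1.0 - 0.7 * np.exp(-5.0 * r_obs)

        def forward_model(params, r):
            # Placeholder for the 1-D radiative-transfer model: given cloud
            # parameters, return synthetic mm and IR intensity profiles. The real
            # model first solves for the dust temperature distribution and then
            # integrates the dust emission along lines of sight.
            density0, r_scale, depth = params
            I_mm = density0 * np.exp(-r / r_scale)
            I_ir = 1.0 - depth * np.exp(-5.0 * r / r_scale)
            return I_mm, I_ir

        def misfit(params):
            # Chi-square-like distance between synthetic and observed profiles.
            I_mm, I_ir = forward_model(params, r_obs)
            return np.sum((I_mm - I_mm_obs) ** 2) + np.sum((I_ir - I_ir_obs) ** 2)

        # Evolutionary minimization over physically motivated parameter bounds;
        # inspecting the population (or re-running with different seeds) helps
        # reveal parameter degeneracies, as done with the genetic algorithm.
        bounds = [(0.1, 10.0), (0.05, 2.0), (0.1, 1.0)]
        result = differential_evolution(misfit, bounds, seed=0, tol=1e-8)
        print(result.x, result.fun)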

    Spectral tensor-train decomposition

    The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT decomposition and analyze its properties. We obtain results on the convergence of the decomposition, revealing links between the regularity of the function, the dimension of the input space, and the TT ranks. We also show that the regularity of the target function is preserved by the univariate functions (i.e., the "cores") comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting spectral tensor-train decomposition combines the favorable dimension-scaling of the TT decomposition with the spectral convergence rate of polynomial approximations, yielding efficient and accurate surrogates for high-dimensional functions. To construct these decompositions, we use the sampling algorithm TT-DMRG-cross to obtain the TT decomposition of tensors resulting from suitable discretizations of the target function. We assess the performance of the method on a range of numerical examples: a modified set of Genz functions with dimension up to 100, and functions with mixed Fourier modes or with local features. We observe significant improvements in performance over an anisotropic adaptive Smolyak approach. The method is also used to approximate the solution of an elliptic PDE with random input data. The open source software and examples presented in this work are available online. Comment: 33 pages, 19 figures.
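    As a rough illustration of the functional TT format (not the paper's TT-DMRG-cross construction), the Python sketch below stores each core as a matrix of univariate Chebyshev expansions and evaluates the surrogate as a product of matrices; the names and the rank-1 toy example are assumptions made for illustration.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def eval_functional_tt(cores, x):
            # cores[k] has shape (r_prev, r_next, n_coeff): each (i, j) entry of
            # the k-th core is a univariate function stored by its Chebyshev
            # coefficients on [-1, 1]. The surrogate value is the matrix product
            # G_1(x_1) G_2(x_2) ... G_d(x_d).
            result = np.ones((1, 1))
            for core, xk in zip(cores, x):
                r_prev, r_next, _ = core.shape
                Gk = np.array([[C.chebval(xk, core[i, j]) for j in range(r_next)]
                               for i in range(r_prev)])
                result = result @ Gk
            return result[0, 0]

        # Tiny example: f(x, y) = sin(x) * cos(y) is exactly rank 1, so each core
        # is 1 x 1 and holds the Chebyshev fit of one univariate factor.
        nodes = np.cos(np.pi * (np.arange(17) + 0.5) / 17)   # Chebyshev points on [-1, 1]
        core_x = C.chebfit(nodes, np.sin(nodes), 12).reshape(1, 1, -1)
        core_y = C.chebfit(nodes, np.cos(nodes), 12).reshape(1, 1, -1)
        print(eval_functional_tt([core_x, core_y], [0.3, -0.5]),
              np.sin(0.3) * np.cos(-0.5))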

    Ideal evolution of MHD turbulence when imposing Taylor-Green symmetries

    We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists in implementing the four-fold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a re-gridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144^3 points, and three different configurations on grids of 4096^3 points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity. Comment: 18 pages, 13 figures, 2 tables; submitted to Physical Review.
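    The analyticity-strip diagnostic mentioned above can be sketched with a small least-squares fit; the functional form E(k) ~ C k^(-n) exp(-2 delta k) is the standard one for this method, while the synthetic spectrum and function names below are illustrative assumptions rather than the paper's data or code.

        import numpy as np

        def fit_log_decrement(k, E):
            # Fit E(k) = C * k**(-n) * exp(-2*delta*k) by linear least squares in
            # log space: ln E = ln C - n*ln k - 2*delta*k. Returns (C, n, delta);
            # delta is the logarithmic decrement (width of the analyticity strip).
            A = np.column_stack([np.ones_like(k), -np.log(k), -2.0 * k])
            coeffs, *_ = np.linalg.lstsq(A, np.log(E), rcond=None)
            lnC, n, delta = coeffs
            return np.exp(lnC), n, delta

        # Synthetic spectrum with known parameters, just to exercise the fit; in
        # the study, delta(t) is tracked in time and the singularity scenario is
        # assessed from how fast it approaches the smallest resolved scale.
        k = np.arange(2.0, 512.0)
        E = 3.0 * k**(-4.0) * np.exp(-2.0 * 0.01 * k)
        print(fit_log_decrement(k, E))   # ~ (3.0, 4.0, 0.01)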