
    Multiobjective Design Exploration in Space Engineering


    Local and non-local measures of acceleration in cosmology

    Current cosmological observations, when interpreted within the framework of a homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker (FLRW) model, strongly suggest that the Universe is entering a period of accelerating expansion. This is often taken to mean that the expansion of space itself is accelerating. In a general spacetime, however, this is not necessarily true. We attempt to clarify this point by considering a handful of local and non-local measures of acceleration in a variety of inhomogeneous cosmological models. Each of the chosen measures corresponds to a theoretical or observational procedure that has previously been used to study acceleration in cosmology, and all measures reduce to the same quantity in the limit of exact spatial homogeneity and isotropy. In statistically homogeneous and isotropic spacetimes, we find that the acceleration inferred from observations of the distance-redshift relation is closely related to the acceleration of the spatially averaged universe, but does not necessarily bear any resemblance to the average of the local acceleration of spacetime itself. For inhomogeneous spacetimes that do not display statistical homogeneity and isotropy, however, we find little correlation between the acceleration inferred from observations and the acceleration of the averaged spacetime. This shows that observations made in an inhomogeneous universe can imply acceleration without the existence of dark energy. (Comment: 19 pages, 10 figures. Several references added or amended, some minor clarifications made in the text.)
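    For context (standard definitions, not taken from the paper itself): in the exact FLRW limit, the quantity that all of these measures reduce to is the deceleration parameter of the scale factor a(t), and it is the same parameter that distance-redshift observations constrain through the low-redshift expansion of the luminosity distance:

```latex
% Deceleration parameter of the FLRW scale factor a(t);
% q_0 < 0 signals accelerating expansion:
\[ q_0 = -\left.\frac{\ddot{a}\,a}{\dot{a}^{2}}\right|_{t_0} \]
% The same q_0 enters distance-redshift observations via the
% low-redshift expansion of the luminosity distance:
\[ d_L(z) = \frac{c}{H_0}\left[\, z + \tfrac{1}{2}\,(1 - q_0)\,z^{2} + O(z^{3}) \,\right] \]
```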

    Using reconfigurable computing technology to accelerate matrix decomposition and applications

    Matrix decomposition plays an increasingly significant role in many scientific and engineering applications. Among the numerous techniques, Singular Value Decomposition (SVD) and Eigenvalue Decomposition (EVD) are widely used as factorization tools for Principal Component Analysis, which supports dimensionality reduction and pattern recognition in image processing, text mining, and wireless communications, while QR Decomposition (QRD) and sparse LU Decomposition (LUD) are employed to solve dense or sparse linear systems of equations in bioinformatics, power systems, and computer vision. Matrix decompositions are computationally expensive, and their sequential implementations often fail to meet the requirements of many time-sensitive applications. The emergence of reconfigurable computing has provided a flexible and low-cost opportunity to pursue high-performance parallel designs, and the use of FPGAs has shown promise in accelerating this class of computation. In this research, we have proposed and implemented several highly parallel FPGA-based architectures to accelerate matrix decompositions and their applications in data mining and signal processing. Specifically, this dissertation describes the following contributions:
    • We propose an efficient FPGA-based double-precision floating-point architecture for EVD, which can efficiently analyze large-scale matrices.
    • We implement a floating-point Hestenes-Jacobi architecture for SVD, which is capable of analyzing arbitrarily sized matrices.
    • We introduce a novel deeply pipelined reconfigurable architecture for QRD, which can be dynamically configured to perform either Householder transformations or Givens rotations in a manner that exploits the strengths of each.
    • We design a configurable architecture for sparse LUD that supports both symmetric and asymmetric sparse matrices with arbitrary sparsity patterns.
    • By further extending the proposed hardware solution for SVD, we parallelize a popular text mining tool, Latent Semantic Indexing, with an FPGA-based architecture.
    • We present a configurable architecture to accelerate Homotopy ℓ1-minimization, in which a modified version of the proposed FPGA architecture for sparse LUD is used at its core to parallelize both Cholesky decomposition and rank-1 updates.
    Our experimental results using an FPGA-based acceleration system demonstrate the efficiency of the proposed architectures, with application- and dimension-dependent speedups over an optimized software implementation ranging from 1.5× to 43.6× in terms of computation time.
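    To make the Hestenes-Jacobi approach mentioned above concrete, here is a minimal sequential Python sketch of the one-sided Jacobi SVD that architectures of this kind parallelize; column pairs can be rotated independently, which is what makes the method hardware-friendly. This is a plain software reference, not the dissertation's FPGA design:

```python
import numpy as np

def hestenes_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Hestenes-Jacobi SVD: rotate pairs of columns of A until
    all columns are mutually orthogonal. Returns U, singular values, V
    with A = U @ diag(sigma) @ V.T (for m >= n)."""
    A = A.astype(float).copy()
    m, n = A.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for i in range(n - 1):
            for j in range(i + 1, n):
                alpha = A[:, i] @ A[:, i]
                beta = A[:, j] @ A[:, j]
                gamma = A[:, i] @ A[:, j]
                if abs(gamma) > tol * np.sqrt(alpha * beta):
                    converged = False
                    # Rotation angle that orthogonalizes columns i and j.
                    tau = (beta - alpha) / (2.0 * gamma)
                    sgn = 1.0 if tau >= 0 else -1.0
                    t = sgn / (abs(tau) + np.sqrt(1.0 + tau * tau))
                    c = 1.0 / np.sqrt(1.0 + t * t)
                    s = c * t
                    J = np.array([[c, s], [-s, c]])
                    A[:, [i, j]] = A[:, [i, j]] @ J
                    V[:, [i, j]] = V[:, [i, j]] @ J
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)            # singular values
    U = A / np.where(sigma > 0, sigma, 1.0)      # normalized columns
    order = np.argsort(sigma)[::-1]              # sort descending
    return U[:, order], sigma[order], V[:, order]

# Quick check against LAPACK's singular values:
M = np.random.default_rng(0).normal(size=(6, 4))
_, s, _ = hestenes_jacobi_svd(M)
print(np.allclose(s, np.linalg.svd(M, compute_uv=False)))
```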

    Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition

    Hierarchical uncertainty quantification can reduce the computational cost of stochastic circuit simulation by employing spectral methods at different levels. This paper presents an efficient framework to hierarchically simulate some challenging stochastic circuits/systems that include high-dimensional subsystems. Due to the high parameter dimensionality, it is challenging both to extract surrogate models at the low level of the design hierarchy and to handle them in the high-level simulation. In this paper, we develop an efficient ANOVA-based stochastic circuit/MEMS simulator to efficiently extract the surrogate models at the low level. In order to avoid the curse of dimensionality, we employ tensor-train decomposition at the high level to construct the basis functions and Gauss quadrature points. As a demonstration, we verify our algorithm on a stochastic oscillator with four MEMS capacitors and 184 random parameters. This challenging example is simulated efficiently by our simulator at a cost of only 10 minutes in MATLAB on a regular personal computer. (Comment: 14 pages (IEEE double column), 11 figures; accepted by IEEE Trans. CAD of Integrated Circuits and Systems.)
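    As a rough illustration of the low-level ANOVA idea (a toy sketch under stated assumptions, not the paper's simulator): with independent standard-Gaussian parameters, an anchored first-order ANOVA decomposition varies one parameter at a time along 1-D Gauss-Hermite nodes, holds the rest at the anchor point, and ranks parameters by the variance of their main-effect terms, so that only the important dimensions need richer treatment. The test function f and the quadrature order below are illustrative:

```python
import numpy as np

def anova_main_effects(f, d, n_quad=9):
    """Anchored (cut-HDMR) first-order ANOVA screening for f(x) with d
    independent N(0,1) inputs: estimate each input's main-effect
    variance with 1-D Gauss-Hermite quadrature."""
    # numpy's rule uses the physicists' weight exp(-x^2); rescale the
    # nodes by sqrt(2) and the weights by 1/sqrt(pi) to target N(0,1).
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    nodes, weights = np.sqrt(2.0) * x, w / np.sqrt(np.pi)
    anchor = np.zeros(d)
    f0 = f(anchor)                      # zeroth-order (anchor) term
    var = np.zeros(d)
    for k in range(d):
        g = np.empty(n_quad)
        for q, xq in enumerate(nodes):
            point = anchor.copy()
            point[k] = xq
            g[q] = f(point) - f0        # main-effect term g_k(x_k)
        mu = weights @ g
        var[k] = weights @ g**2 - mu**2
    return var                          # large entries = important inputs

# Example: only the first two of ten parameters matter noticeably.
f = lambda x: np.sin(x[0]) + 0.5 * x[1]**2 + 1e-3 * x[2]
print(np.round(anova_main_effects(f, d=10), 4))
```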

    A proximal iteration for deconvolving Poisson noisy images using sparse representations

    We propose an image deconvolution algorithm for data contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are as follows. First, we handle the Poisson noise properly by using the Anscombe variance-stabilizing transform, leading to a non-linear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties and a non-smooth sparsity-promoting penalty over the image representation coefficients (e.g., the ℓ1-norm). Third, a fast iterative backward-forward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions for the solution, and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experimental results show the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise, such as astronomy and microscopy.
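    Below is a heavily simplified Python sketch of the two core ingredients, the Anscombe transform and the backward-forward (proximal-gradient) iteration. It assumes circular convolution via the FFT and uses the pixel basis in place of the paper's wavelet/curvelet dictionary; the step size gamma and threshold lam are illustrative, and this is not the authors' exact algorithm:

```python
import numpy as np

def anscombe(u):
    """Anscombe transform: maps Poisson(u) data to approximately
    unit-variance Gaussian data."""
    return 2.0 * np.sqrt(u + 3.0 / 8.0)

def fb_poisson_deconv(y, psf, lam=0.05, gamma=0.5, n_iter=200):
    """Proximal-gradient deconvolution sketch under Poisson noise:
    Gaussian fidelity in the Anscombe domain plus an l1 penalty on the
    pixels. y: observed counts; psf: centered kernel, same shape as y."""
    H = np.fft.fft2(np.fft.ifftshift(psf))      # circular blur operator
    conv = lambda u, K: np.real(np.fft.ifft2(np.fft.fft2(u) * K))
    x = np.maximum(y.astype(float), 0.0)        # initial estimate
    z = anscombe(y)                             # stabilized data
    for _ in range(n_iter):
        hx = np.maximum(conv(x, H), 0.0)
        # Gradient of 0.5 * || anscombe(H x) - anscombe(y) ||^2,
        # using d/du anscombe(u) = 1 / sqrt(u + 3/8).
        resid = (anscombe(hx) - z) / np.sqrt(hx + 3.0 / 8.0)
        x = x - gamma * conv(resid, np.conj(H))
        # Proximity operator of gamma*lam*||.||_1 (soft threshold),
        # followed by a positivity constraint on the image.
        x = np.sign(x) * np.maximum(np.abs(x) - gamma * lam, 0.0)
        x = np.maximum(x, 0.0)
    return x
```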

    Detecting and Assessing the Problems Caused by Multi-Collinearity: A Use of the Singular-Value Decomposition

    This paper presents a means for detecting the presence of multicollinearity and for assessing the damage that such collinearity may cause estimated coefficients in the standard linear regression model. The means of analysis is the singular value decomposition, a numerical-analytic device that directly exposes both the conditioning of the data matrix X and the linear dependencies that may exist among its columns. The same information is employed in the second part of the paper to determine the extent to which each regression coefficient is being adversely affected by each linear relation among the columns of X that leads to its ill-conditioning.
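    A compact Python sketch in the spirit of this diagnostic (the thresholds and example data are illustrative, not from the paper): condition indices σ_max/σ_k expose near-dependencies among the scaled columns of X, and variance-decomposition proportions show how much of each coefficient's variance each near-dependency accounts for. A common rule of thumb flags a condition index above roughly 30 together with two or more proportions above 0.5:

```python
import numpy as np

def collinearity_diagnostics(X):
    """SVD-based collinearity diagnostics: returns one condition index
    per singular value and a matrix of variance-decomposition
    proportions (rows = coefficients, columns = singular values)."""
    Xs = X / np.linalg.norm(X, axis=0)     # scale columns to unit length
    _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    cond_idx = s.max() / s                 # large index = near-dependency
    phi = (Vt.T ** 2) / s**2               # phi[j, k] = v_jk^2 / s_k^2
    props = phi / phi.sum(axis=1, keepdims=True)   # each row sums to 1
    return cond_idx, props

# Example: third column is nearly a combination of the first two.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] = X[:, 0] + X[:, 1] + 1e-4 * rng.normal(size=100)
idx, props = collinearity_diagnostics(X)
print(np.round(idx, 1))     # one very large index flags the dependency
print(np.round(props, 3))   # the damaged coefficients get large shares
```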