14,428 research outputs found

    Cosmological zoo -- accelerating models with dark energy

    Get PDF
    Recent observations of type Ia supernovae indicate that the Universe is in an accelerating phase of expansion. The fundamental quest in theoretical cosmology is to identify the origin of this phenomenon. In principle there are two possibilities: 1) the presence of matter which violates the strong energy condition (a substantial form of dark energy); 2) modified Friedmann equations (Cardassian models, a non-substantial form of dark energy). We classify all these models in terms of 2-dimensional dynamical systems of the Newtonian type and search for generic properties of the models. This is achieved with the help of Peixoto's theorem for dynamical systems on the Poincaré sphere. We find that the notion of structural stability can be useful to distinguish the generic cases of evolutional paths with acceleration. We find that, while the ΛCDM models and phantom models are typical accelerating models, cosmological models with a bouncing phase are non-generic in the space of all planar dynamical systems. We derive the universal shape of the potential function which gives rise to presently accelerating models. Our results show explicitly the advantages of using a potential function (instead of the equation of state) to probe the origin of the present acceleration. We argue that simplicity and genericity are the best guides in understanding our Universe and its acceleration.
    Comment: RevTeX4, 23 pages, 10 figures
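
    The Newtonian framing can be made concrete for the simplest accelerating case. Below is a minimal Python sketch (an illustration assuming flat ΛCDM, not the paper's code), in units where H0 = 1: the scale factor obeys a'' = -dV/da with potential V(a) = -(Ωm/a + ΩΛ a²)/2, and acceleration sets in where dV/da < 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch (not the paper's code): flat LambdaCDM written as a
# Newtonian system  a'' = -dV/da  with  V(a) = -(Om/a + OL*a**2)/2,
# in units where H0 = 1.
Om, OL = 0.3, 0.7  # assumed present-day density parameters

def dV_da(a):
    return Om / (2.0 * a**2) - OL * a

def rhs(t, y):
    a, adot = y
    return [adot, -dV_da(a)]

# Start today (a = 1) with adot fixed by the Friedmann constraint
# adot**2 / 2 + V(a) = 0, then integrate forward.
a0 = 1.0
adot0 = np.sqrt(Om / a0 + OL * a0**2)
sol = solve_ivp(rhs, (0.0, 2.0), [a0, adot0])

# Acceleration sets in where dV/da < 0, i.e. for a > (Om / (2*OL))**(1/3).
print("acceleration for a >", (Om / (2.0 * OL)) ** (1.0 / 3.0))
print("a at t = 2:", sol.y[0, -1])
```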

    Attribute Equilibrium Dominance Reduction Accelerator (DCCAEDR) Based on Distributed Coevolutionary Cloud and Its Application in Medical Records

    Full text link
    © 2013 IEEE. Aimed at the tremendous challenge of attribute reduction for big data mining and knowledge discovery, we propose a new attribute equilibrium dominance reduction accelerator (DCCAEDR) based on the distributed coevolutionary cloud model. First, the framework of an N-population distributed coevolutionary MapReduce model is designed to divide the entire population into N subpopulations, sharing the reward of the different subpopulations' solutions under a MapReduce cloud mechanism. Because the adaptive balance between exploration and exploitation can be achieved in a better way, the reduction performance is guaranteed to be the same as that obtained using the whole data set. Second, a novel Nash equilibrium dominance strategy of elitists under the N bounded rationality regions is adopted to help the subpopulations attain the stable status of Nash equilibrium dominance. This further enhances the accelerator's robustness against complex noise in big data. Third, an approximation parallelism mechanism based on MapReduce is constructed to implement rule reduction by accelerating the computation of attribute equivalence classes. Consequently, the entire attribute reduction set with the equilibrium dominance solution can be achieved. Extensive simulation results illustrate the effectiveness and robustness of the proposed DCCAEDR accelerator for attribute reduction on big data. Furthermore, the DCCAEDR is applied to attribute reduction for traditional Chinese medical records and to segmenting cortical surfaces in neonatal brain 3-D MRI records, where it shows superior results compared with representative algorithms.
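
    As a rough illustration of the N-subpopulation map/reduce split described above, here is a toy Python sketch; the bit-mask encoding, mutation rate, and placeholder fitness are assumptions for illustration only, not the DCCAEDR algorithm or its rough-set objective.

```python
import random

# Toy illustration of the N-subpopulation map/reduce split (not the
# DCCAEDR implementation): individuals are bit masks over attributes and
# the fitness below is a placeholder for a rough-set reduct measure.
N_SUBPOPS, SUBPOP_SIZE, N_ATTRS = 4, 10, 20

def fitness(mask):            # placeholder objective: keep few attributes
    return -sum(mask)

def evolve(subpop):           # "map" step: one local generation
    parent = random.choice(subpop)
    child = [bit ^ (random.random() < 0.05) for bit in parent]
    subpop = sorted(subpop + [child], key=fitness, reverse=True)
    return subpop[:SUBPOP_SIZE]

def share_elites(subpops):    # "reduce" step: broadcast the best solution
    elite = max((s[0] for s in subpops), key=fitness)
    return [[elite] + s[:-1] for s in subpops]

pops = [[[random.randint(0, 1) for _ in range(N_ATTRS)]
         for _ in range(SUBPOP_SIZE)] for _ in range(N_SUBPOPS)]
for _ in range(50):
    pops = share_elites([evolve(s) for s in pops])
print("best mask:", max((s[0] for s in pops), key=fitness))
```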

    Conditional t-SNE: Complementary t-SNE embeddings through factoring out prior information

    Get PDF
    Dimensionality reduction and manifold learning methods such as t-Distributed Stochastic Neighbor Embedding (t-SNE) are routinely used to map high-dimensional data into a 2-dimensional space to visualize and explore the data. However, two dimensions are typically insufficient to capture all structure in the data, the salient structure is often already known, and it is not obvious how to extract the remaining information in a similarly effective manner. To fill this gap, we introduce conditional t-SNE (ct-SNE), a generalization of t-SNE that discounts prior information, in the form of labels, from the embedding. To achieve this, we propose a conditioned version of the t-SNE objective, obtaining a single, integrated, and elegant method. ct-SNE has one extra parameter over t-SNE; we investigate its effects and show how to efficiently optimize the objective. Factoring out prior knowledge allows complementary structure to be captured in the embedding, providing new insights. Qualitative and quantitative empirical results on synthetic and (large) real data show that ct-SNE is effective and achieves its goal.
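
    One way to picture the conditioning is that the low-dimensional similarities are reweighted by a label-dependent factor before normalization, so proximity that is already explained by a shared label is discounted. The sketch below is an illustrative reading, not the authors' code; `alpha`, `beta`, and the normalization are placeholders standing in for ct-SNE's extra parameter.

```python
import numpy as np

# Illustrative reading of the conditioning idea (not the authors' code):
# low-dimensional Student-t similarities are reweighted by a label-dependent
# factor before normalization, so closeness that is already explained by a
# shared label is discounted in the KL objective.
def conditional_q(Y, labels, alpha=5.0, beta=1.0):
    # Student-t kernel on pairwise embedding distances, as in t-SNE.
    d2 = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)
    q = 1.0 / (1.0 + d2)
    np.fill_diagonal(q, 0.0)
    # alpha (same label) > beta (different label): placeholder values
    # standing in for ct-SNE's extra parameter.
    same = labels[:, None] == labels[None, :]
    q = q * np.where(same, alpha, beta)
    return q / q.sum()

Y = np.random.randn(6, 2)               # toy 2-D embedding
labels = np.array([0, 0, 0, 1, 1, 1])   # known prior labels
Q = conditional_q(Y, labels)
print(Q.shape, Q.sum())                 # (6, 6), sums to 1
```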

    Accelerating Eulerian Fluid Simulation With Convolutional Networks

    Full text link
    Efficient simulation of the Navier-Stokes equations for fluid flow is a long-standing problem in applied mathematics, for which state-of-the-art methods require large compute resources. In this work, we propose a data-driven approach that leverages the approximation power of deep learning with the precision of standard solvers to obtain fast and highly realistic simulations. Our method solves the incompressible Euler equations using the standard operator splitting method, in which a large sparse linear system with many free parameters must be solved. We use a Convolutional Network with a highly tailored architecture, trained using a novel unsupervised learning framework, to solve the linear system. We present real-time 2D and 3D simulations that outperform recently proposed data-driven methods; the obtained results are realistic and show good generalization properties.
    Comment: Significant revision
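
    The unsupervised objective can be sketched compactly: a network predicts pressure from the divergence of an intermediate velocity field, and the loss is the divergence left after the pressure-gradient correction, so no ground-truth pressures are needed. The PyTorch sketch below is a minimal stand-in, not the paper's tailored architecture; the grid, finite differences, and network shape are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the paper's tailored architecture): a small
# conv net predicts pressure from the divergence of an intermediate
# velocity field; the unsupervised loss is the divergence remaining after
# subtracting the pressure gradient.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def divergence(u, v):                      # central differences, unit grid
    du = (u[..., :, 2:] - u[..., :, :-2]) / 2.0
    dv = (v[..., 2:, :] - v[..., :-2, :]) / 2.0
    return du[..., 1:-1, :] + dv[..., :, 1:-1]

def grad(p):                               # pressure gradient, central diff
    px = (p[..., :, 2:] - p[..., :, :-2]) / 2.0
    py = (p[..., 2:, :] - p[..., :-2, :]) / 2.0
    return px[..., 1:-1, :], py[..., :, 1:-1]

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
u = torch.randn(8, 1, 32, 32)              # toy intermediate velocities
v = torch.randn(8, 1, 32, 32)
for _ in range(100):
    div = divergence(u, v)                 # (8, 1, 30, 30)
    p = net(div)
    px, py = grad(p)                       # (8, 1, 28, 28)
    # corrected interior velocities should be divergence-free
    u_c = u[..., 2:-2, 2:-2] - px
    v_c = v[..., 2:-2, 2:-2] - py
    loss = divergence(u_c, v_c).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```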

    Dimensional hyper-reduction of nonlinear finite element models via empirical cubature

    Get PDF
    We present a general framework for the dimensional reduction, in terms of the number of degrees of freedom as well as the number of integration points (“hyper-reduction”), of nonlinear parameterized finite element (FE) models. The reduction process is divided into two sequential stages. The first stage consists of a common Galerkin projection onto a reduced-order space, as well as the condensation of boundary conditions and external forces. For the second stage (reduction in the number of integration points), we present a novel cubature scheme that efficiently determines optimal points and associated positive weights so that the error in integrating the reduced internal forces is minimized. The distinguishing features of the proposed method are: (1) the minimization problem is posed in terms of orthogonal basis vectors (obtained via a partitioned Singular Value Decomposition) rather than in terms of snapshots of the integrand; (2) the volume of the domain is exactly integrated; (3) the selection algorithm need not solve a nonnegative least-squares problem at every iteration to force the positiveness of the weights. Furthermore, we show that the proposed method converges to the absolute minimum (zero integration error) when the number of selected points equals the number of internal force modes included in the objective function. We illustrate this model reduction methodology with two nonlinear structural examples (quasi-static bending and resonant vibration of elastoplastic composite plates). In both examples, the number of integration points is reduced by three orders of magnitude (with respect to FE analyses) without significantly sacrificing accuracy.
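
    The point-selection stage can be pictured with a small greedy sketch: given a matrix J whose rows are integrand modes evaluated at candidate points and a vector b of their exact integrals, pick points by correlation with the residual and fit nonnegative weights. This is a simplified illustration using SciPy's NNLS, not the paper's exact algorithm (which, as noted above, avoids solving NNLS at every iteration).

```python
import numpy as np
from scipy.optimize import nnls

# Simplified illustration of empirical cubature (not the paper's exact
# algorithm): choose integration points greedily and fit nonnegative
# weights w so that J[:, sel] @ w reproduces the exact mode integrals b.
rng = np.random.default_rng(0)
n_modes, n_points = 8, 200
J = rng.standard_normal((n_modes, n_points))   # modes at candidate points
w_true = rng.random(n_points)                  # stand-in "volume" weights
b = J @ w_true                                 # exact integrals of the modes

sel, r = [], b.copy()
while len(sel) < n_modes and np.linalg.norm(r) > 1e-10 * np.linalg.norm(b):
    # pick the candidate point most correlated with the current residual
    scores = J.T @ r
    scores[sel] = -np.inf
    sel.append(int(np.argmax(scores)))
    w, _ = nnls(J[:, sel], b)                  # nonnegative weights
    r = b - J[:, sel] @ w

print(len(sel), "points, residual", np.linalg.norm(r) / np.linalg.norm(b))
```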