14,519 research outputs found

    Alternative mechanism of avoiding the big rip or little rip for a scalar phantom field

    Depending on the choice of its potential, the scalar phantom field $\phi$ (with equation of state parameter $w<-1$) leads to various catastrophic fates of the universe, including the big rip, the little rip, and other future singularities. For example, the big rip results from the evolution of the phantom field with an exponential potential, while the little rip stems from a quadratic potential in general relativity (GR). Choosing the same potentials as in GR, we propose a new mechanism to avoid these unwanted fates (big and little rip) in inverse-$R$ gravity. As a pedagogical illustration, we give an exact solution in which the phantom field with an exponential-type potential leads to a power-law evolution of the scale factor. We also find a sufficient condition for a universe in which the equation of state parameter crosses the $w=-1$ divide. The phantom field with different potentials, including quadratic, cubic, quartic, exponential, and logarithmic potentials, is studied via numerical calculation in inverse-$R$ gravity with an $R^{2}$ correction. The singularity is avoidable under all these potentials. Hence, we conclude that the avoidance of the big or little rip hardly depends on the specific potential. Comment: 9 pages, 6 figures
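    For reference, the standard phantom-field relations behind the $w<-1$ statement (a textbook sketch, not specific to this paper's inverse-$R$ setup) are
    \[
      \rho_\phi = -\tfrac{1}{2}\dot\phi^2 + V(\phi), \qquad
      p_\phi = -\tfrac{1}{2}\dot\phi^2 - V(\phi), \qquad
      w = \frac{p_\phi}{\rho_\phi} = \frac{\tfrac{1}{2}\dot\phi^2 + V(\phi)}{\tfrac{1}{2}\dot\phi^2 - V(\phi)},
    \]
    so that $w+1 = \dot\phi^2/(\tfrac{1}{2}\dot\phi^2 - V) < 0$, i.e. $w<-1$, whenever $V(\phi) > \tfrac{1}{2}\dot\phi^2$ and the energy density is positive.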

    Efficient Optimization of Performance Measures by Classifier Adaptation

    In practical applications, machine learning algorithms are often needed to learn classifiers that optimize domain-specific performance measures. Previous research has focused on learning the needed classifier in isolation, yet learning a nonlinear classifier for nonlinear and nonsmooth performance measures remains hard. In this paper, rather than learning the needed classifier by optimizing the specific performance measure directly, we circumvent this problem with a novel two-step approach called CAPO: first train nonlinear auxiliary classifiers with existing learning methods, and then adapt the auxiliary classifiers to the specific performance measure. In the first step, the auxiliary classifiers can be obtained efficiently with off-the-shelf learning algorithms. For the second step, we show that the classifier adaptation problem can be reduced to a quadratic programming problem similar to linear SVMperf, which can be solved efficiently. By exploiting nonlinear auxiliary classifiers, CAPO can generate a nonlinear classifier that optimizes a large variety of performance measures, including all performance measures based on the contingency table as well as AUC, while keeping high computational efficiency. Empirical studies show that CAPO is effective and computationally efficient, and it is even more efficient than linear SVMperf. Comment: 30 pages, 5 figures, to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence, 201
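    As a rough illustration of the two-step idea, the sketch below (assuming scikit-learn; the dataset, the auxiliary learners, and the grid-search adaptation are illustrative stand-ins, not the paper's SVMperf-style quadratic program) trains two nonlinear auxiliary classifiers and then reweights and thresholds their scores to optimize F1 on held-out data:

    # Sketch of the CAPO-style two-step idea (simplified; not the paper's exact QP).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

    # Step 1: train nonlinear auxiliary classifiers with off-the-shelf learners.
    aux = [
        SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr),
        RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr),
    ]
    # Real-valued auxiliary outputs become the inputs for the adaptation step.
    scores_val = np.column_stack([c.predict_proba(X_val)[:, 1] for c in aux])

    # Step 2 (simplified adaptation): choose a convex combination of the auxiliary
    # scores and a decision threshold that maximize F1 on held-out data. In CAPO
    # this adaptation is instead a quadratic program akin to linear SVMperf.
    best = (-1.0, None, None)
    for alpha in np.linspace(0.0, 1.0, 21):  # weight on the first auxiliary classifier
        combined = alpha * scores_val[:, 0] + (1 - alpha) * scores_val[:, 1]
        for thr in np.quantile(combined, np.linspace(0.05, 0.95, 19)):
            f1 = f1_score(y_val, (combined >= thr).astype(int))
            if f1 > best[0]:
                best = (f1, alpha, thr)

    print("best F1 on validation: %.3f (alpha=%.2f, threshold=%.3f)" % best)

    In the paper, this second step is formulated as a quadratic program over the auxiliary outputs, which is what keeps the adaptation efficient for contingency-table measures and AUC.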