
    Stokes' first problem for some non-Newtonian fluids: Results and mistakes

    The well-known problem of unidirectional plane flow of a fluid in a half-space due to the impulsive motion of the plate it rests upon is discussed in the context of the second-grade and Oldroyd-B non-Newtonian fluids. The governing equations are derived from the conservation laws of mass and momentum, and three correct known representations of their exact solutions are given. Common mistakes made in the literature are identified. Simple numerical schemes that corroborate the analytical solutions are constructed.
    Comment: 10 pages, 2 figures; accepted for publication in Mechanics Research Communications; v2 corrects a few typos
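
    As an illustration of the kind of numerical check the abstract mentions, the sketch below solves the Newtonian limit of Stokes' first problem, u_t = nu * u_yy with an impulsively started plate, by an explicit finite-difference (FTCS) scheme and compares it with the classical exact solution u(y,t) = U erfc(y / (2 sqrt(nu t))). This is a minimal sketch only: the paper treats second-grade and Oldroyd-B fluids, and the parameter values and grid sizes here are illustrative assumptions, not taken from the paper.

    import numpy as np
    from math import erfc, sqrt

    nu, U = 1.0, 1.0           # kinematic viscosity and plate speed (illustrative)
    ny, depth, T = 201, 10.0, 1.0  # grid points, domain depth, final time (illustrative)
    dy = depth / (ny - 1)
    dt = 0.4 * dy**2 / nu      # respects the FTCS stability bound dt <= dy^2 / (2 nu)
    nsteps = int(round(T / dt))

    u = np.zeros(ny)
    u[0] = U                   # impulsively started plate at y = 0; far field stays at rest
    for _ in range(nsteps):
        u[1:-1] = u[1:-1] + nu * dt / dy**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

    t = nsteps * dt
    y = np.linspace(0.0, depth, ny)
    exact = np.array([U * erfc(yi / (2.0 * sqrt(nu * t))) for yi in y])
    print("max abs error vs erfc solution:", np.abs(u - exact).max())

    The domain depth is chosen large enough that the far-field boundary condition u = 0 is effectively exact at the final time, so the reported error reflects the scheme alone.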

    Linear classifier combination and selection using group sparse regularization and hinge loss

    The main principle of stacked generalization is to use a second-level generalizer to combine the outputs of the base classifiers in an ensemble. In this paper, after presenting a short survey of the literature on stacked generalization, we propose regularized empirical risk minimization (RERM) as a framework for learning the weights of the combiner; it generalizes earlier proposals and enables improved learning methods. Our main contribution is the use of group sparsity in the regularizer to facilitate classifier selection. In addition, we propose and analyze the hinge loss in place of the conventional least squares loss. We performed experiments on three ensemble setups of differing diversity on 13 real-world datasets from various applications. The results show the power of group sparse regularization over conventional norm regularization: we are able to reduce the number of selected classifiers in the diverse ensemble without sacrificing accuracy, and with the non-diverse ensembles we even gain accuracy on average. We also show that the hinge loss outperforms the least squares loss used in previous studies of stacked generalization.
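
    The combiner described in the abstract can be sketched as follows: hinge-loss empirical risk over the base classifiers' outputs plus a group-lasso penalty, with one group per base classifier, so that zeroing a group deselects that classifier. The sketch below solves this with proximal subgradient descent on a binary toy problem; the function names (fit_combiner, group_prox), the solver, and all parameter values are illustrative assumptions, not the paper's actual method or data.

    import numpy as np

    def group_prox(w, groups, tau):
        # Group soft-thresholding: shrink each group's l2 norm by tau,
        # zeroing the whole group (i.e., deselecting a classifier) if it falls below tau.
        w = w.copy()
        for g in groups:
            norm = np.linalg.norm(w[g])
            w[g] = 0.0 if norm <= tau else w[g] * (1.0 - tau / norm)
        return w

    def fit_combiner(S, y, groups, lam=0.1, lr=0.01, iters=500):
        # S: (n, d) matrix of base-classifier outputs; y: labels in {-1, +1}.
        n, d = S.shape
        w = np.zeros(d)
        for _ in range(iters):
            margins = y * (S @ w)
            active = margins < 1                                       # margin violators
            grad = -(S[active] * y[active, None]).sum(axis=0) / n      # hinge subgradient
            w = group_prox(w - lr * grad, groups, lr * lam)            # proximal step
        return w

    # Toy usage: 5 hypothetical base classifiers, each contributing 2 meta-features.
    rng = np.random.default_rng(0)
    S = rng.normal(size=(100, 10))
    y = np.sign(S[:, 0] + 0.5 * S[:, 1] + 0.1 * rng.normal(size=100))
    groups = [np.arange(2 * k, 2 * k + 2) for k in range(5)]
    w = fit_combiner(S, y, groups)
    print("selected classifiers:",
          [k for k, g in enumerate(groups) if np.linalg.norm(w[g]) > 1e-8])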