5 research outputs found

    Good pivots for small sparse matrices

    For sparse matrices up to size 8×8, we determine optimal choices for pivot selection in Gaussian elimination. It turns out that they are slightly better than the pivots chosen by a popular pivot selection strategy, so there is some room for improvement. We then create a pivot selection strategy using machine learning and find that it indeed leads to a small improvement over the classical strategy.
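
    As a point of reference, here is a minimal sketch of a fill-in-minimising pivot choice for sparse Gaussian elimination. The abstract does not name the popular strategy it compares against; the Markowitz count used below is one standard sparse-pivoting heuristic and serves only as a plausible stand-in, not as the paper's method or its learned strategy.

    import numpy as np

    def markowitz_pivot(A, k):
        # Choose a pivot in the trailing submatrix A[k:, k:] that minimises
        # the Markowitz count (r_i - 1) * (c_j - 1), where r_i and c_j are
        # the numbers of nonzeros in the candidate's row and column.  Low
        # counts bound the fill-in a pivot step can create.
        sub = A[k:, k:]
        nz = sub != 0
        row_counts = nz.sum(axis=1)
        col_counts = nz.sum(axis=0)
        best, best_cost = None, None
        for i, j in zip(*np.nonzero(sub)):
            cost = (row_counts[i] - 1) * (col_counts[j] - 1)
            if best_cost is None or cost < best_cost:
                best, best_cost = (k + i, k + j), cost
        return best  # (row, col) of the chosen pivot in A; None if sub is empty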

    Chevalley-Warning type results on abelian groups

    We develop a notion of degree for functions between two abelian groups that allows us to generalize the Chevalley-Warning theorems from fields to noncommutative rings or abelian groups of prime power order.
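
    For reference, a standard LaTeX formulation of the classical result being generalized (a textbook statement, not quoted from the paper):

    % Chevalley--Warning theorem over a finite field $\mathbb{F}_q$ of characteristic $p$:
    Let $f_1,\dots,f_r \in \mathbb{F}_q[x_1,\dots,x_n]$ with
    $\sum_{i=1}^{r} \deg f_i < n$. Then $p$ divides the number of common zeros:
    \[
      p \mid \#\{\, x \in \mathbb{F}_q^{\,n} : f_1(x) = \cdots = f_r(x) = 0 \,\}.
    \]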

    A novel method for in situ measurement of solubility via impedance scanning quartz crystal microbalance studies

    We introduce here a novel in situ measurement method for the solubility of solids in various liquids. Without any calibration, the saturation point can be determined in a relative manner. We demonstrate the new method on four systems, with water, organic carbonates, and an ionic liquid as the solvents and various salts as the dissolved solids.
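
    To illustrate how a saturation point can be found "in a relative manner", here is a hypothetical sketch: below saturation each added portion of solid dissolves and shifts the measured response, while above saturation the response levels off, so intersecting two fitted lines locates the break. Neither the variable names nor the fitting scheme come from the paper.

    import numpy as np

    def saturation_point(added, response):
        # added, response: 1-D numpy arrays (amount of solute added vs.
        # measured signal).  Fit one line to the pre-saturation points and
        # one to the post-saturation points for every candidate split, keep
        # the split with the smallest total residual, and return the solute
        # amount where the two lines intersect.
        best = None
        for k in range(2, len(added) - 2):
            a1, b1 = np.polyfit(added[:k], response[:k], 1)
            a2, b2 = np.polyfit(added[k:], response[k:], 1)
            resid = (np.sum((np.polyval([a1, b1], added[:k]) - response[:k]) ** 2)
                     + np.sum((np.polyval([a2, b2], added[k:]) - response[k:]) ** 2))
            if best is None or resid < best[0]:
                best = (resid, (b2 - b1) / (a1 - a2))  # line intersection
        return best[1]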

    Chevalley-Warning Theorems on abelian groups

    A theorem of Chevalley states that a system of polynomial equations over a finite field cannot have exactly one solution if the number of variables is strictly greater than the sum of their total degrees. We show a generalisation of this theorem to functions between abelian p-groups. To this end, we describe a concept of degree for functions on abelian groups. We also discuss other improvements and generalisations of Chevalley's Theorem. Submitted by Jakob Moosbauer. Universität Linz, Masterarbeit, 2019. (VLID 468161)
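
    The verbal statement above corresponds to the following classical formulation (a standard rendering, not taken from the thesis):

    If $f_1,\dots,f_r \in \mathbb{F}_q[x_1,\dots,x_n]$ satisfy
    $n > \sum_{i=1}^{r} \deg f_i$, then
    \[
      \#\{\, x \in \mathbb{F}_q^{\,n} : f_1(x) = \cdots = f_r(x) = 0 \,\} \neq 1.
    \]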

    Multi-objective hyperparameter optimization in machine learning – an overview

    Hyperparameter optimization constitutes a large part of typical modern machine learning (ML) workflows. This arises from the fact that ML methods and the corresponding preprocessing steps often only yield optimal performance when hyperparameters are properly tuned. In many applications, however, we are not interested in optimizing ML pipelines solely for predictive accuracy; additional metrics or constraints must be considered when determining an optimal configuration, resulting in a multi-objective optimization problem. This is often neglected in practice, due to a lack of knowledge and of readily available software implementations for multi-objective hyperparameter optimization. In this work, we introduce the reader to the basics of multi-objective hyperparameter optimization and motivate its usefulness in applied ML. Furthermore, we provide an extensive survey of existing optimization strategies from the domains of evolutionary algorithms and Bayesian optimization. We illustrate the utility of multi-objective optimization in several specific ML applications, considering objectives such as operating conditions, prediction time, sparseness, fairness, interpretability, and robustness.
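
    As a minimal illustration of the multi-objective setting, the following sketch scores random hyperparameter configurations on two objectives and filters them to a Pareto front. The evaluate function is a toy stand-in, not any benchmark from the paper; real multi-objective optimizers (evolutionary or Bayesian, as surveyed) would replace the random sampling.

    import numpy as np

    rng = np.random.default_rng(0)

    def evaluate(config):
        # Toy objectives, both to be minimized: a synthetic validation
        # error with a known optimum, and a prediction time that grows
        # with the first hyperparameter.
        c, gamma = config
        error = (np.log10(c) - 1) ** 2 + (np.log10(gamma) + 2) ** 2 + rng.normal(0, 0.05)
        pred_time = 0.01 * c
        return error, pred_time

    # Random search over a log-scaled 2-D hyperparameter space.
    configs = [(10 ** rng.uniform(-2, 3), 10 ** rng.uniform(-5, 1)) for _ in range(200)]
    scores = np.array([evaluate(cfg) for cfg in configs])

    # A configuration is Pareto-optimal if no other configuration is at
    # least as good in every objective and strictly better in one.
    pareto = [i for i, s in enumerate(scores)
              if not any(np.all(t <= s) and np.any(t < s) for t in scores)]
    for i in pareto:
        print(configs[i], scores[i])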