
    Model and Algorithm Selection in Statistical Learning and Optimization.

    Modern data-driven statistical techniques, e.g., non-linear classification and regression machine learning methods, play an increasingly important role in applied data analysis and quantitative research. For real-world applications we do not know a priori which methods will work best. Furthermore, most of the available models depend on so-called hyper- or control parameters, which can drastically influence their performance. This leads to a vast space of potential models, which cannot be explored exhaustively. Modern optimization techniques, often either evolutionary or model-based, are employed to speed up this process. A very similar problem occurs in continuous and discrete optimization and, in general, in many other areas where problem instances are solved by algorithmic approaches: many competing techniques exist, some of them heavily parametrized, and again little knowledge exists about how, for a given application, one makes the correct choice. These general problems are called algorithm selection and algorithm configuration. Instead of relying on tedious, manual trial-and-error, one should rather employ available computational power in a methodical fashion to obtain an appropriate algorithmic choice, while supporting this process with machine-learning techniques to discover and exploit as much of the search space structure as possible. In this cumulative dissertation I summarize nine papers that deal with the problem of model and algorithm selection in the areas of machine learning and optimization. Issues in benchmarking, resampling, efficient model tuning, feature selection and automatic algorithm selection are addressed and solved using modern techniques. I apply these methods to tasks from engineering, music data analysis and black-box optimization. The dissertation concludes by summarizing my published R packages for such tasks and specifically discusses two packages for parallelization on high performance computing clusters and for parallel statistical experiments.
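    To make the tuning-by-resampling idea concrete, the following is a minimal sketch in Python using scikit-learn, not the author's R packages: a small grid of SVM hyperparameters is scored by cross-validation, with the grid and data set chosen purely for illustration.

    # A minimal sketch of hyperparameter tuning via cross-validated grid
    # search; the parameter grid and data set are illustrative, not taken
    # from the dissertation.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Each (C, gamma) pair is one candidate model; the grid is a tiny
    # slice of the "vast space of potential models" mentioned above.
    param_grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}

    # 5-fold cross-validation resamples the data so each candidate is
    # scored on held-out folds rather than on its own training data.
    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)

    print(search.best_params_, search.best_score_)

    Model-based or evolutionary optimizers, as mentioned in the abstract, would replace the exhaustive grid with a smarter search over the same space.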

    First Steps out of the Ivory Tower: Linguists Show Themselves Open to New Impulses at the 34th Annual Conference of the IDS

    The linguists seem to have been set in motion. More practical relevance in training and professional work, as well as more interest in the concerns of the public: these were central demands formulated at the 34th annual conference of the Institut für deutsche Sprache. They are unmistakable signs of a discussion that, while still in its early stages, nevertheless gives reason to hope for change. The conference title, »Sprache - Sprachwissenschaft - Öffentlichkeit« (»Language - Linguistics - Public«), was thus well chosen, as it reflected a trend within the discipline.

    Frequency estimation by DFT interpolation: a comparison of methods

    This article comments on a frequency estimator proposed by [6] and shows empirically that it exhibits a much larger mean squared error than the well-known frequency estimator of [8]. It is demonstrated that a heuristic adjustment [2] greatly improves its performance. Furthermore, references to two modern techniques are given, both of which nearly attain the Cramér-Rao bound for this estimation problem.
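    To illustrate the general technique the article compares, here is a minimal sketch in Python of frequency estimation by DFT interpolation for a single noisy sinusoid; parabolic interpolation on the log-magnitude spectrum is a generic stand-in and not necessarily one of the estimators [2], [6], or [8].

    # A minimal sketch of frequency estimation by DFT interpolation,
    # assuming a single real sinusoid in white noise.
    import numpy as np

    fs, n = 1000.0, 256                      # sample rate (Hz), length
    f_true = 123.4                           # true frequency (Hz)
    t = np.arange(n) / fs
    x = np.cos(2 * np.pi * f_true * t) + 0.1 * np.random.randn(n)

    mag = np.abs(np.fft.rfft(x))
    k = int(np.argmax(mag[1:-1])) + 1        # coarse peak bin (skip DC/Nyquist)

    # Refine the peak location by fitting a parabola through the
    # log-magnitudes of the peak bin and its two neighbours.
    a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)  # fractional bin offset

    f_hat = (k + delta) * fs / n
    print(f"estimated {f_hat:.3f} Hz vs true {f_true} Hz")

    The coarse estimate is quantized to the bin width fs/n; the interpolation step recovers the fractional offset, which is what separates the estimators compared in the article in terms of mean squared error.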