PMLB: A Large Benchmark Suite for Machine Learning Evaluation and Comparison
Selecting, developing, or comparing machine learning methods in data mining
can be difficult, depending on the target problem and the goals of a
particular study. Numerous publicly available real-world and simulated
benchmark datasets have emerged from different sources, but their organization
and adoption as standards have been inconsistent. As a result, selecting and
curating specific benchmarks remains an unnecessary burden on machine learning
practitioners and data scientists. The present study introduces an accessible,
curated, and developing public benchmark resource to facilitate identification
of the strengths and weaknesses of different machine learning methodologies. We
compare meta-features among the current set of benchmark datasets in this
resource to characterize the diversity of available data. Finally, we apply a
number of established machine learning methods to the entire benchmark suite
and analyze how datasets and algorithms cluster in terms of performance. This
work is an important first step towards understanding the limitations of
popular benchmarking suites and developing a resource that connects existing
benchmarking standards to more diverse and efficient standards in the future.

Comment: 14 pages, 5 figures, submitted for review to JML
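The evaluation described in the abstract (applying established machine learning methods across a benchmark suite and collecting performance for later clustering) can be sketched as follows. This is not the authors' code: it uses three built-in scikit-learn datasets as stand-ins for the PMLB suite (the real resource exposes datasets via the `pmlb` package's `fetch_data` function), and two illustrative classifiers rather than the full set used in the study.

```python
# Sketch of the benchmarking idea: run several established classifiers
# across a set of benchmark datasets and collect a score matrix whose
# rows (datasets) and columns (algorithms) can later be clustered.
# Built-in scikit-learn datasets stand in for the PMLB suite here.
import numpy as np
from sklearn.datasets import load_breast_cancer, load_iris, load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

datasets = {
    "breast_cancer": load_breast_cancer(return_X_y=True),
    "iris": load_iris(return_X_y=True),
    "wine": load_wine(return_X_y=True),
}
classifiers = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}

# Rows = datasets, columns = algorithms; each cell holds the mean
# 5-fold cross-validated accuracy of that algorithm on that dataset.
scores = np.zeros((len(datasets), len(classifiers)))
for i, (X, y) in enumerate(datasets.values()):
    for j, clf in enumerate(classifiers.values()):
        scores[i, j] = cross_val_score(clf, X, y, cv=5).mean()
```

Clustering this matrix along its rows groups datasets on which the same methods succeed; clustering along its columns groups methods with similar performance profiles, which is the analysis the abstract refers to.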
Where are we now? A large benchmark study of recent symbolic regression methods
In this paper we provide a broad benchmarking of recent genetic programming
approaches to symbolic regression in the context of state-of-the-art machine
learning approaches. We use a set of nearly 100 regression benchmark problems
culled from open source repositories across the web. We conduct a rigorous
benchmarking of four recent symbolic regression approaches as well as nine
machine learning approaches from scikit-learn. The results suggest that
symbolic regression performs strongly compared to state-of-the-art gradient
boosting algorithms, although it is among the slowest of the available
methodologies in terms of running time. We discuss the results in detail and point to
future research directions that may allow symbolic regression to gain wider
adoption in the machine learning community.

Comment: 8 pages, 4 figures. GECCO 201
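The head-to-head comparison the abstract describes can be sketched as a single cross-validation loop over regressors. This is an illustration, not the paper's benchmark: a synthetic Friedman #1 problem stands in for the ~100 open-source benchmark datasets, and only two of the nine scikit-learn methods appear. A symbolic regression package that follows the scikit-learn fit/predict interface (gplearn's `SymbolicRegressor` is one such tool, named here as an assumption about tooling, not as the paper's choice) would slot into the same loop unchanged.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic nonlinear regression problem standing in for one of the
# open-source benchmark datasets used in the study.
X, y = make_friedman1(n_samples=200, noise=0.1, random_state=0)

regressors = {
    "linear_regression": LinearRegression(),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

# Mean 5-fold cross-validated R^2 per method.  Any symbolic regression
# estimator exposing fit/predict could be added to this dictionary and
# compared under the identical protocol.
r2 = {name: cross_val_score(reg, X, y, cv=5, scoring="r2").mean()
      for name, reg in regressors.items()}
```

Running the same protocol over many datasets, and timing each fit, yields both the accuracy comparison and the running-time comparison the abstract summarizes.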