Human-in-the-Loop AI Reviewing: Feasibility, Opportunities, and Risks
The promise of AI for academic work is bewitching and easy to envisage, but the risks involved are often hard to detect and usually not readily exposed. In this opinion piece, we explore the feasibility, opportunities, and risks of using large language models (LLMs) to review academic submissions while keeping the human in the loop. We experiment with GPT-4 in the role of a reviewer to demonstrate the opportunities and risks we encountered and ways to mitigate them. The reviews are structured according to a conference review form with the dual purpose of evaluating submissions for editorial decisions and providing authors with constructive feedback against predefined criteria, which include contribution, soundness, and presentation. We demonstrate feasibility by evaluating and comparing LLM reviews with human reviews, concluding that current AI-augmented reviewing is sufficiently accurate to alleviate some of the reviewing burden, but not entirely and not in all cases. We then enumerate the opportunities of AI-augmented reviewing and present open questions. Next, we identify the risks of AI-augmented reviewing, highlighting bias, value misalignment, and misuse. We conclude with recommendations for managing these risks.
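The abstract does not include the prompt or tooling used; the following is a minimal sketch of how a form-structured GPT-4 review could be requested, assuming the OpenAI Python client. The review form text, the 1-5 scales, and the model name are hypothetical stand-ins for the conference review form described above, not the authors' actual setup.

```python
# Illustrative sketch only; the prompt, scales, and model name are assumptions.
# Requires the OpenAI Python client and an API key in the environment.
from openai import OpenAI

REVIEW_FORM = """You are a reviewer for an academic conference.
Review the submission below using exactly these sections:
1. Summary of the contribution
2. Contribution (score 1-5 with justification)
3. Soundness (score 1-5 with justification)
4. Presentation (score 1-5 with justification)
5. Constructive feedback for the authors
Base every claim on the text of the submission; do not speculate."""

def draft_review(submission_text: str) -> str:
    """Ask the LLM for a form-structured first-pass review.

    The output is a draft for a human reviewer to verify and edit
    (the 'human in the loop'), not an editorial decision.
    """
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": REVIEW_FORM},
            {"role": "user", "content": submission_text},
        ],
        temperature=0.2,  # keep the draft review relatively deterministic
    )
    return response.choices[0].message.content
```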
Automatic Machine Learning by Pipeline Synthesis using Model-Based Reinforcement Learning and a Grammar
Automatic machine learning is an important problem at the forefront of machine learning. The strongest AutoML systems are based on neural networks, evolutionary algorithms, and Bayesian optimization. Recently, AlphaD3M reached state-of-the-art results with an order-of-magnitude speedup using reinforcement learning with self-play. In this work, we extend AlphaD3M by using a pipeline grammar and a pre-trained model which generalizes from many different datasets and similar tasks. Our results demonstrate improved performance compared with our earlier work and existing methods on AutoML benchmark datasets for classification and regression tasks. In the spirit of reproducible research, we make our data, models, and code publicly available.
Comment: ICML Workshop on Automated Machine Learning
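The abstract does not spell out the grammar itself; the sketch below shows, under stated assumptions, what a pipeline grammar can look like: production rules that constrain the search so only syntactically valid pipelines (e.g. preprocessing before an estimator) are ever proposed. The non-terminals and primitive names are hypothetical placeholders, not AlphaD3M's grammar.

```python
# Illustrative pipeline grammar, not the grammar used by AlphaD3M.
# Terminals (lowercase names) stand in for concrete ML primitives.
import random

GRAMMAR = {
    "PIPELINE":   [["PREPROCESS", "ESTIMATOR"], ["ESTIMATOR"]],
    "PREPROCESS": [["IMPUTER"], ["IMPUTER", "SCALER"], ["SCALER"]],
    "IMPUTER":    [["mean_imputer"], ["median_imputer"]],
    "SCALER":     [["standard_scaler"], ["min_max_scaler"]],
    "ESTIMATOR":  [["random_forest"], ["gradient_boosting"], ["logistic_regression"]],
}

def expand(symbol: str) -> list[str]:
    """Recursively expand a non-terminal into a sequence of primitive names."""
    if symbol not in GRAMMAR:  # terminal: a concrete ML primitive
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [p for s in production for p in expand(s)]

# Sample a few grammar-valid candidate pipelines for the search to evaluate.
for _ in range(3):
    print(" -> ".join(expand("PIPELINE")))
```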
DeepLine: AutoML Tool for Pipelines Generation using Deep Reinforcement Learning and Hierarchical Actions Filtering
Automatic machine learning (AutoML) is an area of research aimed at automating machine learning (ML) activities that currently require human experts. One of the most challenging tasks in this field is the automatic generation of end-to-end ML pipelines: combining multiple types of ML algorithms into a single architecture used for end-to-end analysis of previously unseen data. This task has two challenging aspects: the first is the need to explore a large search space of algorithms and pipeline architectures; the second is the computational cost of training and evaluating multiple pipelines. In this study we present DeepLine, a reinforcement-learning-based approach for automatic pipeline generation. Our proposed approach utilizes an efficient representation of the search space and leverages past knowledge gained from previously analyzed datasets to make the problem more tractable. Additionally, we propose a novel hierarchical-actions algorithm that serves as a plugin, mediating the environment-agent interaction in deep reinforcement learning problems. The plugin significantly speeds up the training process of our model. Evaluation on 56 datasets shows that DeepLine outperforms state-of-the-art approaches both in accuracy and in computational cost.
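The abstract describes the hierarchical-actions plugin only at a high level; the sketch below illustrates the general idea under simplifying assumptions: instead of choosing among N primitive actions at once, the agent first picks a cluster and then an action within it, so each decision ranges over far fewer options. The clustering-by-slicing and class name are hypothetical, not DeepLine's implementation.

```python
# Illustrative sketch of hierarchical action selection, not DeepLine's plugin.
import math

class HierarchicalActionWrapper:
    """Mediates between an agent and an environment with a large discrete
    action space by splitting each step into (cluster, within-cluster) choices."""

    def __init__(self, primitive_actions: list[str]):
        self.primitive_actions = primitive_actions
        size = max(1, math.isqrt(len(primitive_actions)))
        self.clusters = [
            primitive_actions[i:i + size]
            for i in range(0, len(primitive_actions), size)
        ]

    @property
    def num_clusters(self) -> int:
        return len(self.clusters)

    def cluster_size(self, cluster_id: int) -> int:
        return len(self.clusters[cluster_id])

    def resolve(self, cluster_id: int, within_id: int) -> str:
        """Map the agent's two small choices back to one primitive action."""
        return self.clusters[cluster_id][within_id]

# Usage: 100 primitives become a 10-way choice followed by a <=10-way choice.
wrapper = HierarchicalActionWrapper([f"primitive_{i}" for i in range(100)])
print(wrapper.num_clusters, wrapper.resolve(3, 4))
```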