From Human Days to Machine Seconds: Automatically Answering and Generating Machine Learning Final Exams
A final exam in machine learning at a top institution such as MIT, Harvard,
or Cornell typically takes faculty days to write, and students hours to solve.
We demonstrate that large language models pass machine learning finals at a
human level on finals available online after the models were trained, and
automatically generate new human-quality final exam questions in seconds.
Previous work has developed program synthesis and few-shot learning methods to
solve university-level problem set questions in mathematics and STEM courses.
In this work, we develop and compare methods that solve final exams, which
differ from problem sets in several ways: the questions are longer, have
multiple parts, are more complicated, and span a broader set of topics. We
curate a dataset and benchmark of questions from machine learning final exams
available online and code for answering these questions and generating new
questions. We show how to generate new questions from other questions and
course notes. For reproducibility and future research on this final exam
benchmark, we use automatic checkers for multiple-choice, numeric, and
expression-answer questions. We perform ablation studies comparing
zero-shot learning with few-shot learning and chain-of-thought prompting using
GPT-3, OPT, Codex, and ChatGPT across machine learning topics and find that
few-shot learning methods perform best. We highlight the transformative
potential of language models to streamline the writing and solution of
large-scale assessments, significantly reducing the workload from human days to
mere machine seconds. Our results suggest that, rather than banning large
language models such as ChatGPT in class, instructors should teach students to
harness them by posing meta-questions about the correctness, completeness,
and originality of the generated responses, encouraging critical thinking in
academic studies.
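The automatic checking the abstract describes can be sketched roughly as follows. This is a minimal stdlib-only illustration, not the paper's actual implementation: the function names, the letter-choice normalization, the numeric tolerance, and the sample-point strategy for expression equivalence are all assumptions made here for concreteness.

```python
import math

def check_multiple_choice(predicted, expected):
    # Normalize letter choices like "(B)" or "b." to a bare lowercase letter.
    norm = lambda s: s.strip().lower().strip("().")
    return norm(predicted) == norm(expected)

def check_numeric(predicted, expected, rel_tol=1e-3):
    # Compare numeric answers within a relative tolerance.
    try:
        return math.isclose(float(predicted), float(expected), rel_tol=rel_tol)
    except ValueError:
        return False

def check_expression(predicted, expected, var="x", points=(0.5, 1.7, 3.2)):
    # Heuristic equivalence test: evaluate both expressions at a few
    # sample points in a restricted namespace and compare the values.
    env = {"__builtins__": {}, "exp": math.exp, "log": math.log,
           "sin": math.sin, "cos": math.cos, "sqrt": math.sqrt, "pi": math.pi}
    try:
        for p in points:
            scope = dict(env, **{var: p})
            if not math.isclose(eval(predicted, scope), eval(expected, scope),
                                rel_tol=1e-6):
                return False
        return True
    except Exception:
        return False
```

For example, `check_expression("x**2 + 2*x", "x*(x+2)")` accepts an algebraically equivalent answer that a plain string comparison would reject; a symbolic-algebra library would give a sounder equivalence test than point sampling, at the cost of an extra dependency.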
Deep Neural Networks for Learning Protein Vibrational Behaviors to Characterize Structure and Function
Proteins’ structures and motions are essential for nearly all biological functions and malfunctions, making them prime targets for uncovering and controlling processes associated with metabolism and disease. Normal mode analysis is a powerful method that allows us to understand the mechanisms of these functions in high detail, but not without significant cost. Replacing this method with inference by a machine learning model could potentially eliminate this restriction while still providing useful accuracy. Prior work has demonstrated success on a simplified version of this problem that used features computed from each protein’s structure and predicted parameters for a geometric function of best fit relating the modes, rather than the explicit modes themselves. In this work, we seek to develop a fully end-to-end model that will allow researchers to predict a protein’s normal mode spectrum directly from its peptide sequence, allowing us to bypass the costs associated with both normal mode analysis and protein structure determination. We additionally explore the parallels between protein science and music theory, and provide analysis of a deep neural network trained to understand Bach’s highly structured Goldberg Variations.
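As background for the cost the abstract refers to, normal mode analysis amounts to an eigenproblem on the system's stiffness (Hessian) matrix: each eigenvalue λ gives a vibrational frequency ω = √λ, and each eigenvector is the corresponding mode shape. A minimal stdlib-only sketch for a toy system (a three-mass chain with fixed ends and unit masses and spring constants, chosen here purely for illustration) is:

```python
import math

def power_iteration(K, iters=200):
    # Dominant eigenvalue/eigenvector of a symmetric matrix via power iteration.
    n = len(K)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
        # Rayleigh quotient v^T K v (v is unit-norm) estimates the eigenvalue.
        lam = sum(v[i] * sum(K[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

# Stiffness matrix of a 3-mass chain with fixed ends (unit masses and springs);
# its exact eigenvalues are 2 - sqrt(2), 2, and 2 + sqrt(2).
K = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
lam, mode = power_iteration(K)
omega = math.sqrt(lam)  # highest vibrational frequency of the chain
```

For a protein, the matrix has thousands of rows and the full spectrum is needed, which is why the eigendecomposition (and the structure determination that precedes it) dominates the cost that an end-to-end sequence-to-spectrum model would bypass.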