Are large language models superhuman chemists?
Large language models (LLMs) have gained widespread interest due to their
ability to process human language and perform tasks on which they have not been
explicitly trained. This is relevant for the chemical sciences, which face the
problem of small and diverse datasets that are frequently in the form of text.
LLMs have shown promise in addressing these issues and are increasingly being
harnessed to predict chemical properties, optimize reactions, and even design
and conduct experiments autonomously. However, we still have only a very
limited systematic understanding of the chemical reasoning capabilities of
LLMs, which would be required to improve models and mitigate potential harms.
Here, we introduce "ChemBench," an automated framework designed to rigorously
evaluate the chemical knowledge and reasoning abilities of state-of-the-art
LLMs against the expertise of human chemists. We curated more than 7,000
question-answer pairs for a wide array of subfields of the chemical sciences,
evaluated leading open and closed-source LLMs, and found that the best models
outperformed the best human chemists in our study on average. The models,
however, struggle with some chemical reasoning tasks that are easy for human
experts and provide overconfident, misleading predictions, such as about
chemicals' safety profiles. These findings underscore the dual reality that,
although LLMs demonstrate remarkable proficiency in chemical tasks, further
research is critical to enhancing their safety and utility in chemical
sciences. Our findings also indicate a need for adaptations to chemistry
curricula and highlight the importance of continuing to develop evaluation
frameworks to improve safe and useful LLMs.
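
To make the evaluation procedure concrete, the sketch below shows a minimal automated loop over curated question-answer pairs of the kind ChemBench uses. This is only an illustration, not the ChemBench API: the question format, the regex-based answer parser, and the query_llm stub are all assumptions.

import re

def query_llm(prompt: str) -> str:
    """Stand-in for a call to any LLM API; replace with a real client."""
    return "The answer is B."  # canned response so the sketch runs end to end

# Hypothetical multiple-choice items in the style of curated QA pairs.
questions = [
    {"question": "Which gas is evolved when Zn reacts with dilute HCl? "
                 "A) O2  B) H2  C) Cl2  D) N2",
     "answer": "B"},
]

def parse_choice(completion: str) -> str | None:
    """Extract the first standalone option letter from the model output."""
    match = re.search(r"\b([A-D])\b", completion)
    return match.group(1) if match else None

correct = 0
for item in questions:
    prompt = "Answer with a single letter.\n" + item["question"]
    correct += int(parse_choice(query_llm(prompt)) == item["answer"])

print(f"accuracy: {correct / len(questions):.2%}")
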
SELFIES and the future of molecular string representations
Artificial intelligence (AI) and machine learning (ML) are expanding in popularity for broad applications to challenging tasks in chemistry and materials science. Examples include the prediction of properties, the discovery of new reaction pathways, or the design of new molecules. The machine needs to read and write fluently in a chemical language for each of these tasks. Strings are a common tool to represent molecular graphs, and the most popular molecular string representation, SMILES, has powered cheminformatics since the late 1980s. However, in the context of AI and ML in chemistry, SMILES has several shortcomings -- most pertinently, most combinations of symbols lead to invalid results with no valid chemical interpretation. To overcome this issue, a new language for molecules was introduced in 2020 that guarantees 100% robustness: SELFIES (SELF-referencIng Embedded Strings). SELFIES has since simplified and enabled numerous new applications in chemistry. In this manuscript, we look to the future and discuss molecular string representations, along with their respective opportunities and challenges. We propose 16 concrete Future Projects for robust molecular representations. These involve the extension toward new chemical domains, exciting questions at the interface of AI and robust languages, and interpretability for both humans and machines. We hope that these proposals will inspire several follow-up works exploiting the full potential of molecular string representations for the future of AI in chemistry and materials science.
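
The robustness claim is easy to check with the open-source selfies Python package. Below is a minimal sketch, assuming selfies (v2) and RDKit are installed; treat the exact function names as assumptions drawn from their public APIs.

import random

import selfies as sf
from rdkit import Chem

# Round trip: SMILES -> SELFIES -> SMILES.
benzene = "C1=CC=CC=C1"
encoded = sf.encoder(benzene)
print(benzene, "->", encoded, "->", sf.decoder(encoded))

# Robustness: a *random* string of SELFIES tokens still decodes to a valid
# molecule, whereas a random SMILES string almost never parses.
alphabet = list(sf.get_semantic_robust_alphabet())
random_selfies = "".join(random.choices(alphabet, k=20))
smiles = sf.decoder(random_selfies)
assert Chem.MolFromSmiles(smiles) is not None  # always a valid molecular graph
print("random SELFIES decoded to:", smiles)
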
Big-Data Science in Porous Materials: Materials Genomics and Machine Learning
By combining metal nodes with organic linkers we can potentially synthesize
millions of possible metal organic frameworks (MOFs). At present, we have
libraries of over ten thousand synthesized materials and millions of in-silico
predicted materials. The fact that we have so many materials opens many
exciting avenues to tailor make a material that is optimal for a given
application. However, from an experimental and computational point of view we
simply have too many materials to screen using brute-force techniques. In this
review, we show that having so many materials allows us to use big-data methods
as a powerful technique to study these materials and to discover complex
correlations. The first part of the review gives an introduction to the
principles of big-data science. We emphasize the importance of data collection,
of methods to augment small datasets, and of how to select appropriate training
sets. An important part of this review covers the different approaches used to
represent these materials in feature space. The review also includes a general
overview of ML techniques, but as most applications in porous materials use
supervised ML, our review focuses on the approaches used for supervised ML. In
particular, we review the methods used to optimize the ML process and to
quantify the performance of the resulting models. In
the second part, we review how the different approaches of ML have been applied
to porous materials. In particular, we discuss applications in the field of gas
storage and separation, the stability of these materials, their electronic
properties, and their synthesis. This range illustrates the wide variety of
questions that can be studied with big-data science. Given the increasing
interest of the scientific community in ML, we expect this list to expand
rapidly in the coming years.
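
As a concrete illustration of the supervised-ML workflow the review describes -- represent each material as a feature vector, fit a model, and quantify its performance on held-out data -- here is a minimal sketch. The descriptors and target are synthetic stand-ins, not real MOF data, and scikit-learn is assumed available.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical geometric descriptors: pore diameter, surface area, density.
X = rng.uniform([3.0, 100.0, 0.2], [20.0, 4000.0, 2.0], size=(n, 3))
# Hypothetical target (e.g. gas uptake) with a nonlinear dependence plus noise.
y = 0.01 * X[:, 1] / X[:, 2] + rng.normal(scale=2.0, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Quantify performance on held-out data.
print("test MAE:", mean_absolute_error(y_test, model.predict(X_test)))
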
14 Examples of How LLMs Can Transform Materials Science and Chemistry: A Reflection on a Large Language Model Hackathon
Chemistry and materials science are complex. Recently, there have been great
successes in addressing this complexity using data-driven or computational
techniques. Yet the need for input structured in very specific forms and the
ever-growing number of tools create usability and accessibility challenges.
Because much of the data in these disciplines is unstructured, the
effectiveness of these tools is further limited.
Motivated by recent works that indicated that large language models (LLMs)
might help address some of these issues, we organized a hackathon event on the
applications of LLMs in chemistry, materials science, and beyond. This article
chronicles the projects built as part of this hackathon. Participants employed
LLMs for various applications, including predicting properties of molecules and
materials, designing novel interfaces for tools, extracting knowledge from
unstructured data, and developing new educational applications.
The diverse topics and the fact that working prototypes could be generated in
less than two days highlight that LLMs will profoundly impact the future of our
fields. The rich collection of ideas and projects also indicates that the
applications of LLMs are not limited to materials science and chemistry but
offer potential benefits to a wide range of scientific disciplines.
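
One recurring hackathon pattern, extracting structured knowledge from unstructured text, can be sketched with a single chat-completion call. The snippet below uses the openai Python client; the model name, prompt, and JSON keys are assumptions, and any LLM API could be swapped in.

import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

passage = ("The MOF was synthesized at 120 C in DMF for 24 h, "
           "yielding a BET surface area of 1850 m2/g.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute your own
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "Extract synthesis conditions as JSON with keys "
                    "temperature_C, solvent, time_h, bet_area_m2_g."},
        {"role": "user", "content": passage},
    ],
)

print(json.loads(response.choices[0].message.content))
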