69 research outputs found
Bio-JOIE: Joint Representation Learning of Biological Knowledge Bases
The widespread outbreak of coronavirus has led to a worldwide pandemic with a high
mortality rate. Currently, the knowledge accumulated from different studies
about this virus is very limited. Leveraging a wide range of biological
knowledge, such as gene ontology (GO) and protein-protein interaction (PPI)
networks from other closely related species, presents a vital approach to
inferring the molecular impact of a new species. In this paper, we propose the
transferred multi-relational embedding model Bio-JOIE to capture the knowledge
of gene ontology and PPI networks, and demonstrate its superb capability in
modeling SARS-CoV-2-human protein interactions. Bio-JOIE jointly trains two
model components. The knowledge model encodes the relational facts from the
protein and GO domains into separate embedding spaces, with a hierarchy-aware
encoding technique for the GO terms. On top of that, the transfer model learns
a non-linear transformation to transfer knowledge of PPIs and gene ontology
annotations across the two embedding spaces. By leveraging only structured
knowledge, Bio-JOIE significantly outperforms existing state-of-the-art methods
in PPI type prediction on multiple species. Furthermore, we demonstrate the
potential of the learned representations for clustering proteins with enzymatic
function into enzyme commission families. Finally, we show that Bio-JOIE can
accurately identify PPIs between SARS-CoV-2 proteins and human proteins,
providing valuable insights for advancing research on this new disease.
Comment: ACM BCB 2020, Best Student Paper
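The joint design described in the abstract, separate embedding spaces for the two domains tied together by a learned non-linear transfer function, can be sketched roughly as follows. This is a minimal illustration, not the actual Bio-JOIE implementation: the protein names, GO term IDs, embedding dimension, and tanh transfer map are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16  # illustrative embedding dimension

# Toy vocabularies: proteins (PPI domain) and GO terms (ontology domain),
# each with its own embedding space (the "knowledge model" side).
proteins = ["P1", "P2", "P3"]
go_terms = ["GO:0003674", "GO:0008150"]
prot_emb = {p: rng.normal(size=dim) for p in proteins}
go_emb = {g: rng.normal(size=dim) for g in go_terms}

# Transfer model: a non-linear map from the protein space to the GO space.
W = rng.normal(scale=0.1, size=(dim, dim))
b = np.zeros(dim)

def transfer(v):
    """tanh(W v + b): the learned cross-space transformation (assumed form)."""
    return np.tanh(W @ v + b)

def transfer_loss(protein, go_term):
    """Squared distance between the mapped protein and its GO annotation."""
    diff = transfer(prot_emb[protein]) - go_emb[go_term]
    return float(diff @ diff)

# One gradient step on a single hypothetical annotation: P1 -> GO:0003674.
lr = 0.01
v, t = prot_emb["P1"], go_emb["GO:0003674"]
before = transfer_loss("P1", "GO:0003674")
h = np.tanh(W @ v + b)
grad_h = 2.0 * (h - t)              # dL/dh for the squared-distance loss
grad_pre = grad_h * (1.0 - h ** 2)  # back-propagate through tanh
W -= lr * np.outer(grad_pre, v)
b -= lr * grad_pre
after = transfer_loss("P1", "GO:0003674")
```

Minimizing this distance pulls a protein's mapped embedding toward the embeddings of its annotated GO terms, which is the intuition behind transferring annotation knowledge across the two spaces.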
Language Models can be Logical Solvers
Logical reasoning is a fundamental aspect of human intelligence and a key
component of tasks like problem-solving and decision-making. Recent
advancements have enabled Large Language Models (LLMs) to potentially exhibit
reasoning capabilities, but complex logical reasoning remains a challenge.
State-of-the-art solver-augmented language models first use LLMs to parse
natural-language logical questions into symbolic representations, and then
adopt external logical solvers that take in the symbolic representations and
output the answers. Despite their impressive performance, any parsing error
inevitably causes the execution of the external logical solver to fail,
leaving the logical question unanswered. In this paper, we introduce LoGiPT, a
novel language model that directly emulates the reasoning processes of logical
solvers and bypasses parsing errors by learning to adhere strictly to solver
syntax and grammar. LoGiPT is fine-tuned on a newly constructed
instruction-tuning dataset derived from revealing and refining the invisible
reasoning process of deductive solvers. Experimental results on two public
deductive reasoning datasets demonstrate that LoGiPT outperforms
state-of-the-art solver-augmented LMs and few-shot prompting methods on
competitive LLMs like ChatGPT and GPT-4.
Comment: Preprint
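The data-construction idea in the abstract, exposing a deductive solver's normally invisible reasoning steps and turning them into instruction-tuning pairs, can be sketched with a toy forward-chaining solver. The facts, rules, and prompt format below are invented for illustration and are not LoGiPT's actual pipeline.

```python
# Toy Horn-clause knowledge: each rule is (body set, head atom). Hypothetical.
facts = {"rainy", "cold"}
rules = [({"rainy"}, "wet_ground"),
         ({"wet_ground", "cold"}, "icy")]

def solve_with_trace(facts, rules):
    """Forward-chain to a fixpoint, recording each derivation step in English."""
    known = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                trace.append(f"Since {' and '.join(sorted(body))} hold, conclude {head}.")
                changed = True
    return known, trace

known, trace = solve_with_trace(facts, rules)

# Turn the solver's step-by-step derivation into one instruction-tuning
# example: the model is trained to emit the reasoning, not just the answer.
example = {
    "instruction": "Given the facts rainy and cold and the rules above, "
                   "what new atoms can be derived? Show your reasoning.",
    "output": "\n".join(trace)
              + f"\nDerived: {', '.join(sorted(known - facts))}",
}
```

Fine-tuning on many such (instruction, reasoning-trace) pairs is what lets a language model emulate the solver directly, so a malformed symbolic parse can no longer cause a hard execution failure.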
2014-2015 Master Class - Elmar Oliveira (Violin)
2014-2015 Master Class - Elmar Oliveira (Violin)
2015-2016 Master Class - Elmar Oliveira (Violin)
2016-2017 Master Class - Elmar Oliveira (Violin)
2013-2014 Master Class - Elmar Oliveira (Violin)
- …