Lemur: Harmonizing Natural Language and Code for Language Agents
We introduce Lemur and Lemur-Chat, openly accessible language models
optimized for both natural language and coding capabilities to serve as the
backbone of versatile language agents. The evolution from language chat models
to functional language agents demands that models not only master human
interaction, reasoning, and planning but also ensure grounding in the relevant
environments. This calls for a harmonious blend of language and coding
capabilities in the models. Lemur and Lemur-Chat are proposed to address this
necessity, demonstrating balanced proficiencies in both domains, unlike
existing open-source models that tend to specialize in either. Through
meticulous pre-training using a code-intensive corpus and instruction
fine-tuning on text and code data, our models achieve state-of-the-art averaged
performance across diverse text and coding benchmarks among open-source models.
Comprehensive experiments demonstrate Lemur's superiority over existing
open-source models and its proficiency across various agent tasks involving
human communication, tool usage, and interaction in fully and partially
observable environments. The harmonization between natural and programming
languages enables Lemur-Chat to significantly narrow the gap with proprietary
models on agent abilities, providing key insights into developing advanced
open-source agents adept at reasoning, planning, and operating seamlessly
across environments. https://github.com/OpenLemur/Lemur
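
For readers who want to try the released checkpoints, the sketch below shows one way to query Lemur-Chat through the Hugging Face transformers library. The hub id OpenLemur/lemur-70b-chat-v1 is an assumption based on the repository name, not a detail from the abstract; check the linked repository for the actual released weights.

```python
# Minimal sketch: generating from Lemur-Chat with Hugging Face transformers.
# The checkpoint id below is assumed, not confirmed by the abstract.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenLemur/lemur-70b-chat-v1"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Plan the steps to list all Python files in a directory, then write the code."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```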
Using Sparse Semantic Embeddings Learned from Multimodal Text and Image Data to Model Human Conceptual Knowledge
Distributional models provide a convenient way to model semantics using dense
embedding spaces derived from unsupervised learning algorithms. However, the
dimensions of dense embedding spaces are not designed to resemble human
semantic knowledge. Moreover, embeddings are often built from a single source
of information (typically text data), even though neurocognitive research
suggests that semantics is deeply linked to both language and perception. In
this paper, we combine multimodal information from both text and image-based
representations derived from state-of-the-art distributional models to produce
sparse, interpretable vectors using Joint Non-Negative Sparse Embedding.
Through in-depth analyses comparing these sparse models to human-derived
behavioural and neuroimaging data, we demonstrate their ability to predict
interpretable linguistic descriptions of human ground-truth semantic knowledge.
Comment: Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL 2018), pages 260-270. Brussels, Belgium, October 31 - November 1, 2018. Association for Computational Linguistics.
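
The authors' Joint Non-Negative Sparse Embedding objective is not reproduced here, but the underlying idea of factoring concatenated text and image embeddings into non-negative, sparse, interpretable codes can be approximated with ordinary non-negative dictionary learning. The sketch below uses scikit-learn as a stand-in for the paper's method, with random matrices in place of real embeddings.

```python
# Illustrative stand-in for joint non-negative sparse embedding:
# learn one non-negative sparse code per word from concatenated
# text + image embeddings. This is plain non-negative dictionary
# learning, NOT the authors' exact JNNSE objective.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_words = 500
text_emb = rng.random((n_words, 300))   # e.g., word2vec-style vectors
image_emb = rng.random((n_words, 128))  # e.g., CNN image features

X = np.hstack([text_emb, image_emb])    # joint multimodal input

dl = DictionaryLearning(
    n_components=100,               # number of interpretable dimensions
    fit_algorithm="cd",             # coordinate descent supports positivity
    transform_algorithm="lasso_cd",
    positive_code=True,             # sparse codes constrained >= 0
    positive_dict=True,             # dictionary atoms constrained >= 0
    alpha=1.0,                      # sparsity strength
    random_state=0,
)
sparse_codes = dl.fit_transform(X)  # (n_words, 100), sparse, non-negative
print((sparse_codes > 0).mean())    # fraction of active dimensions
```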
Auto-Detection of Programming Code Vulnerabilities With Natural Language Processing
Security vulnerabilities in source code are traditionally detected manually by software developers because there are no effective auto-detection tools. Current vulnerability detection tools require great human effort, and their results are flawed in many respects. Deep learning models could be a solution to this problem for two reasons: (1) deep learning models are relatively accurate at text classification and text summarization for source code, and (2) once deployed on cloud servers, deep learning-based auto-detection can be far more efficient than manual review. We therefore developed two Natural Language Processing (NLP) models: the first is a text-classification model that takes source code as input and outputs a classification of the security vulnerability of the input; the second is a text-to-text model that takes source code as input and outputs a completely machine-generated summary of the security vulnerability of the input. Our evaluation shows that both models achieve impressive results.
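
For a concrete picture of what the first (classification) model's inference loop could look like, here is a hedged sketch built on a generic code-aware encoder; the checkpoint name, label set, and example snippet are placeholders rather than the paper's actual configuration.

```python
# Hedged sketch of a source-code vulnerability classifier in the spirit
# of the paper's first model. Checkpoint, labels, and data are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "microsoft/codebert-base"      # assumed code-aware encoder
labels = ["not_vulnerable", "vulnerable"]   # hypothetical label set

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=len(labels)
)

snippet = "strcpy(buf, user_input);  /* no bounds check */"
inputs = tokenizer(snippet, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
# The classification head is freshly initialized here; real use would
# fine-tune on labeled vulnerable/benign snippets first.
print(labels[int(logits.argmax(dim=-1))])
```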
Text Segmentation Using Exponential Models
This paper introduces a new statistical approach to partitioning text
automatically into coherent segments. Our approach enlists both short-range and
long-range language models to help it sniff out likely sites of topic changes
in text. To aid its search, the system consults a set of simple lexical hints
it has learned to associate with the presence of boundaries through inspection
of a large corpus of annotated data. We also propose a new probabilistically
motivated error metric for use by the natural language processing and
information retrieval communities, intended to supersede precision and recall
for appraising segmentation algorithms. Qualitative assessment of our algorithm
as well as evaluation using this new metric demonstrate the effectiveness of
our approach in two very different domains, Wall Street Journal articles and
the TDT Corpus, a collection of newswire articles and broadcast news
transcripts.
Comment: 12 pages, LaTeX source and PostScript figures for EMNLP-2 paper.
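
The proposed error metric is a windowed probe: pick two points a fixed distance k apart and check whether the reference and the hypothesized segmentation agree on whether those points fall in the same segment; the disagreement rate over all probes is the error. In later literature this family of metrics is commonly called P_k. A minimal sketch, assuming segmentations encoded as strings with '1' marking a boundary:

```python
# Minimal sketch of the windowed probabilistic segmentation metric
# (commonly called P_k): slide a window of width k over the text and
# count how often reference and hypothesis disagree about whether the
# window's endpoints lie in the same segment.
def pk(reference, hypothesis, k=None):
    """reference/hypothesis: strings like '0010000100', with '1'
    marking a segment boundary at that position. Lower is better."""
    if k is None:
        # one common convention: half the mean reference segment length
        k = max(1, round(len(reference) / (reference.count("1") + 1) / 2))
    errors = 0
    probes = len(reference) - k
    for i in range(probes):
        ref_same = "1" not in reference[i:i + k]   # endpoints co-segmented?
        hyp_same = "1" not in hypothesis[i:i + k]
        errors += ref_same != hyp_same
    return errors / probes

# Example: hypothesis misses one boundary and inserts a spurious one.
print(pk("0010000100", "0000100100"))
```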