NLSC: Unrestricted Natural Language-based Service Composition through Sentence Embeddings
Current approaches for service composition (assemblies of atomic services)
require developers to use: (a) domain-specific semantics to formalize services
that restrict the vocabulary for their descriptions, and (b) translation
mechanisms for service retrieval to convert unstructured user requests to
strongly-typed semantic representations. In our work, we argue that the
effort of developing service descriptions, request translations, and matching
mechanisms could be reduced by using unrestricted natural language, allowing both: (1)
end-users to intuitively express their needs using natural language, and (2)
service developers to develop services without relying on syntactic/semantic
description languages. Although there are some natural language-based service
composition approaches, they restrict service retrieval to syntactic/semantic
matching. With recent developments in machine learning and natural language
processing, we motivate the use of sentence embeddings, which leverage richer
semantic representations of sentences for service description, matching, and
retrieval. Experimental results show that service composition development
effort may be reduced by more than 44% while maintaining high precision/recall
when matching high-level user requests with low-level service method
invocations.
Comment: This paper will appear at SCC'19 (IEEE International Conference on Services Computing) on July 1
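The retrieval idea can be sketched minimally: score a free-form user request against each service description by cosine similarity of sentence vectors and pick the best match. The hashed bag-of-words `embed` below is a toy stand-in of our own (an assumption), not the learned sentence-embedding model the paper relies on, and the service descriptions are invented examples.

```python
import hashlib
import math

def embed(sentence, dim=64):
    """Toy stand-in for a sentence encoder: a hashed bag-of-words vector.
    A real system would use a learned sentence-embedding model instead."""
    vec = [0.0] * dim
    for token in sentence.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors are already unit-normalized, so the dot product suffices.
    return sum(x * y for x, y in zip(a, b))

def match_service(request, services):
    """Return the service description most similar to the user request."""
    scored = [(cosine(embed(request), embed(s)), s) for s in services]
    return max(scored)[1]

services = [
    "send an email message to a contact",
    "book a taxi ride to a destination",
    "play music from a playlist",
]
print(match_service("send email message to a contact", services))
```

Because both request and descriptions live in the same vector space, no domain-specific semantics or typed request translation is needed for the lookup itself.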
Exploratory topic modeling with distributional semantics
As we continue to collect and store textual data in a multitude of domains,
we are regularly confronted with material whose largely unknown thematic
structure we want to uncover. With unsupervised, exploratory analysis, no prior
knowledge about the content is required and highly open-ended tasks can be
supported. In the past few years, probabilistic topic modeling has emerged as a
popular approach to this problem. Nevertheless, the representation of the
latent topics as aggregations of semi-coherent terms limits their
interpretability and level of detail.
This paper presents an alternative approach to topic modeling that maps
topics as a network for exploration, based on distributional semantics using
learned word vectors. From the granular level of terms and their
semantic-similarity relations, global topic structures emerge as clustered regions and
gradients of concepts. Moreover, the paper discusses the visual interactive
representation of the topic map, which plays an important role in supporting
its exploration.
Comment: Conference: The Fourteenth International Symposium on Intelligent Data Analysis (IDA 2015)
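The mapping from word vectors to a topic network can be sketched roughly as follows: link terms whose vector similarity exceeds a threshold, then read off connected components as clustered topic regions. The three-dimensional vectors and the terms below are invented for illustration; a real pipeline would use learned word vectors of much higher dimension, and the paper's interactive map is far richer than plain components.

```python
import math
from itertools import combinations

# Toy word vectors standing in for learned embeddings (assumption:
# real vectors would come from word2vec/GloVe-style training).
vectors = {
    "neuron":  [0.90, 0.10, 0.00],
    "synapse": [0.80, 0.20, 0.10],
    "cortex":  [0.85, 0.15, 0.05],
    "market":  [0.10, 0.90, 0.10],
    "stock":   [0.05, 0.95, 0.00],
    "invoice": [0.10, 0.80, 0.20],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def topic_regions(vectors, threshold=0.95):
    """Link terms with similarity >= threshold; connected components
    then play the role of clustered topic regions."""
    adj = {t: set() for t in vectors}
    for a, b in combinations(vectors, 2):
        if cosine(vectors[a], vectors[b]) >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    seen, regions = set(), []
    for start in vectors:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:            # depth-first traversal of one component
            term = stack.pop()
            if term in comp:
                continue
            comp.add(term)
            stack.extend(adj[term] - comp)
        seen |= comp
        regions.append(comp)
    return regions

print(topic_regions(vectors))
```

With these vectors, the neuroscience terms and the finance terms fall into two separate regions, mirroring how global topic structure emerges from local term similarity.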
Patent Analytics Based on Feature Vector Space Model: A Case of IoT
The number of approved patents worldwide increases rapidly each year, which
requires new patent analytics to efficiently mine the valuable information
attached to these patents. Vector space model (VSM) represents documents as
high-dimensional vectors, where each dimension corresponds to a unique term.
While originally proposed for information retrieval systems, VSM has also seen
wide application in patent analytics and is used as a fundamental tool to map
patent documents to structured data. However, the VSM method suffers from several
limitations when applied to patent analysis tasks, such as loss of
sentence-level semantics and curse-of-dimensionality problems. In order to
address the above limitations, we propose a patent analytics approach based on a
feature vector space model (FVSM), where the FVSM is constructed by mapping patent
documents to feature vectors extracted by convolutional neural networks (CNN).
The applications of FVSM to three typical patent analysis tasks, i.e., patent
similarity comparison, patent clustering, and patent map generation, are
discussed. A case study using patents related to Internet of Things (IoT)
technology is presented to demonstrate the performance and effectiveness of
FVSM. The proposed FVSM can be adopted by other patent analysis studies to
replace VSM, based on which various big data learning tasks can be performed.
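One of the listed tasks, patent clustering, can be sketched under stated assumptions: the two-dimensional `patents` vectors below stand in for CNN-extracted features (the paper's actual extractor is a trained convolutional encoder), and plain k-means groups them into clusters.

```python
import math

# Toy "CNN feature" vectors for six patents; an assumption standing in
# for the low-dimensional features FVSM obtains from a convolutional
# encoder, in contrast to VSM's high-dimensional term vectors.
patents = [
    [0.90, 0.10], [1.00, 0.20], [0.95, 0.05],  # e.g. IoT sensing patents
    [0.10, 0.90], [0.20, 1.00], [0.00, 0.95],  # e.g. networking patents
]

def kmeans(points, centroids, iters=10):
    """Plain k-means over feature vectors: assign each point to its
    nearest centroid, then move each centroid to its cluster mean."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        updated = []
        for i, members in enumerate(clusters):
            if members:
                updated.append([sum(dim) / len(members)
                                for dim in zip(*members)])
            else:
                updated.append(centroids[i])  # keep an empty centroid
        centroids = updated
    return clusters

clusters = kmeans(patents, [[1.0, 0.0], [0.0, 1.0]])
print([len(c) for c in clusters])
```

Because the feature vectors are dense and low-dimensional, distance computations like these avoid the curse-of-dimensionality issues the abstract attributes to term-based VSM.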
Application of pre-training and fine-tuning AI models to machine translation: a case study of multilingual text classification in Baidu
With the development of international information technology, we are producing
a huge amount of information all the time. The scarcest resource is no longer
information itself but the ability to process information in each language.
How to obtain the most effective information from such a large and complex amount of
multilingual textual information is a major goal of multilingual information
processing.
Multilingual text classification helps users to break the language barrier and
accurately locate and triage the required information. At the same time,
the rapid development of the Internet has accelerated the communication among users
of various languages, giving rise to a large number of multilingual texts, such as book
and movie reviews, online chats, product introductions and other forms, which
contain a large amount of valuable implicit information and urgently need automated
tools to categorize and process those multilingual texts.
This work describes the Natural Language Processing (NLP) sub-task known as
Multilingual Text Classification (MTC), performed within the context of Baidu, a
leading Chinese AI company with a strong Internet base, whose NLP division led the
industry in bringing deep learning technology online for Machine Translation (MT)
and search. Multilingual text classification is an important module in NLP machine
translation and a basic module in NLP tasks. It can be applied to many fields,
such as fake review detection, news headline category classification, and
sentiment analysis of positive and negative reviews.
In the following work, we first define the AI model paradigm of
'pre-training and fine-tuning' in deep learning at the Baidu NLP department, and
then investigate the application scenarios of multilingual text classification.
Most of the text classification systems currently available in the Chinese market
are designed for a single language, such as Alibaba's text classification system.
If users need to classify texts of the same category in multiple languages, they
need to train multiple single-language text classification systems and then
classify the texts one by one.
However, many internationalized products do not have a single text language,
such as AliExpress cross-border e-commerce business, Airbnb B&B business, etc.
Industry needs to understand and classify users' reviews in various languages in
order to conduct in-depth statistical analysis and marketing strategy development,
and multilingual text classification is particularly important in this scenario.
Therefore, we focus on interpreting the methodology of the multilingual text
classification model for machine translation in the Baidu NLP department. We
collect sets of multilingual data, including reviews and news headlines, for
manual classification and labeling, use the labeling results to fine-tune the
multilingual text classification model, and report the quality evaluation data
of the Baidu multilingual text classification model after fine-tuning. We will
discuss whether the pre-training and
fine-tuning of the large model can substantially improve the quality and performance
of multilingual text classification.
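The 'pre-training and fine-tuning' split described above can be sketched minimally: a frozen "pretrained" encoder supplies fixed features, and only a small classification head is trained on labeled multilingual examples. Everything in this sketch is a toy assumption of ours: the lookup-table encoder, the invented review snippets, and the logistic-regression head stand in for Baidu's large multilingual model and its actual fine-tuning procedure.

```python
import math

# Hypothetical frozen "pretrained" sentence embeddings: during
# fine-tuning, these features stay fixed and only the head is trained.
PRETRAINED = {
    "great movie":      [0.90, 0.10],
    "excelente filme":  [0.85, 0.15],  # Portuguese, same sentiment
    "terrible service": [0.10, 0.90],
    "péssimo serviço":  [0.15, 0.85],  # Portuguese, same sentiment
}
LABELS = {"great movie": 1, "excelente filme": 1,
          "terrible service": 0, "péssimo serviço": 0}

def fine_tune(epochs=200, lr=0.5):
    """Train a logistic-regression head on the frozen features via
    stochastic gradient descent on the log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, y in LABELS.items():
            x = PRETRAINED[text]
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid prediction
            g = p - y                        # gradient of log-loss wrt z
            w = [w[0] - lr * g * x[0], w[1] - lr * g * x[1]]
            b -= lr * g
    return w, b

def classify(text, w, b):
    x = PRETRAINED[text]
    return int(w[0] * x[0] + w[1] * x[1] + b > 0)

w, b = fine_tune()
print(classify("excelente filme", w, b))
```

Because the multilingual encoder places reviews with the same meaning near each other regardless of language, one small fine-tuned head serves every language at once, instead of one classifier per language.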
Finally, based on the machine translation multilingual text classification
model, we derive the application method of the pre-training and fine-tuning
paradigm in current cutting-edge deep learning AI models under the NLP system,
and verify the generality and state-of-the-art performance of the pre-training
and fine-tuning paradigm in the deep learning and intelligent search field.