A Survey of AI Text-to-Image and AI Text-to-Video Generators
Text-to-Image and Text-to-Video AI generation models are revolutionary
technologies that use deep learning and natural language processing (NLP)
techniques to create images and videos from textual descriptions. This paper
investigates cutting-edge approaches in the discipline of Text-to-Image and
Text-to-Video AI generations. The survey provides an overview of the existing
literature as well as an analysis of the approaches used in various studies. It
covers data preprocessing techniques, neural network types, and evaluation
metrics used in the field. In addition, the paper discusses the challenges and
limitations of Text-to-Image and Text-to-Video AI generations, as well as
future research directions. Overall, these models have promising potential for
a wide range of applications such as video production, content creation, and
digital marketing.
Comment: 4 pages, 2 tables, 4th International Conference on Artificial Intelligence, Robotics and Control (AIRC 2023)
Deep Character-Level Click-Through Rate Prediction for Sponsored Search
Predicting the click-through rate of an advertisement is a critical component
of online advertising platforms. In sponsored search, the click-through rate
estimates the probability that a displayed advertisement is clicked by a user
after she submits a query to the search engine. Commercial search engines
typically rely on machine learning models trained with a large number of
features to make such predictions. This inevitably requires significant
engineering effort to define, compute, and select the appropriate features. In
this paper, we propose two novel approaches (one working at character level and
the other working at word level) that use deep convolutional neural networks to
predict the click-through rate of a query-advertisement pair. Specifically, the
proposed architectures only consider the textual content appearing in a
query-advertisement pair as input, and produce as output a click-through rate
prediction. By comparing the character-level model with the word-level model,
we show that language representation can be learnt from scratch at character
level when trained on enough data. Through extensive experiments using billions
of query-advertisement pairs of a popular commercial search engine, we
demonstrate that both approaches significantly outperform a baseline model
built on well-selected text features and a state-of-the-art word2vec-based
approach. Finally, by combining the predictions of the deep models introduced
in this study with the prediction of the model in production of the same
commercial search engine, we significantly improve the accuracy and the
calibration of the click-through rate prediction of the production system.
Comment: SIGIR2017, 10 pages
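The character-level idea above can be sketched in a few lines: one-hot encode the raw characters of the query and the advertisement, run a 1-D convolution over the character axis, max-pool, and squash to a click probability. This is a minimal NumPy sketch under assumed shapes (alphabet, filter sizes, and the `predict_ctr` helper are all hypothetical); the paper's actual architecture is a much deeper convolutional network trained on billions of pairs.

```python
import numpy as np

# Hypothetical alphabet; the paper's character set is not specified here.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 "
CHAR_TO_IDX = {c: i for i, c in enumerate(ALPHABET)}

def one_hot(text, max_len=64):
    """Encode a string as a (max_len, |alphabet|) one-hot matrix."""
    mat = np.zeros((max_len, len(ALPHABET)))
    for pos, ch in enumerate(text.lower()[:max_len]):
        if ch in CHAR_TO_IDX:
            mat[pos, CHAR_TO_IDX[ch]] = 1.0
    return mat

def conv1d(x, filters):
    """Valid 1-D convolution over the character axis with ReLU.
    x: (L, A), filters: (n_filters, width, A) -> (L - width + 1, n_filters)."""
    n_filters, width, _ = filters.shape
    out_len = x.shape[0] - width + 1
    out = np.empty((out_len, n_filters))
    for t in range(out_len):
        window = x[t:t + width]                    # (width, A) slice of text
        out[t] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)

def predict_ctr(query, ad, filters, w, b):
    """Click probability from only the raw text of a query-ad pair."""
    x = np.concatenate([one_hot(query), one_hot(ad)])  # stack both texts
    pooled = conv1d(x, filters).max(axis=0)            # global max-pooling
    return 1.0 / (1.0 + np.exp(-(pooled @ w + b)))     # logistic output

rng = np.random.default_rng(0)
filters = rng.normal(0, 0.1, size=(8, 3, len(ALPHABET)))  # untrained weights
w, b = rng.normal(0, 0.1, size=8), 0.0
p = predict_ctr("cheap flights", "book cheap flights online", filters, w, b)
```

With untrained weights the output is meaningless, but the pipeline shows why no feature engineering is needed: the only input is the character stream itself.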
A computer-based strategy for foreign-language vocabulary-learning
This work sets out to establish principles for the design and evaluation of a computer-based vocabulary-learning strategy for foreign-language learners. The strategy is intended to assist non-beginner learners who are working on their own to acquire new words in such a way that they will be available when needed in subsequent communicative situations.

The nature of vocabulary-learning is examined from linguistic, psychological and educational perspectives, and a strategy for autonomous learning is derived which emphasizes the processes of: selection of new items from text; mental lexicon-building through the association of items on the basis of their lexical-structural features; and practising productive recall of items by activating the same associations as were used to build the mental network. This strategy is considered from the point of view of the support it would need from a computer-based interaction, and the field of Computer-Assisted Language Learning (CALL) for vocabulary is reviewed for examples of system design which meet the strategic and interactional requirements. Specifications are produced, based on general principles for the design of computer-assisted learning, and on current technological capability to integrate large text databases and on-line lexical tools such as dictionaries within an interface which facilitates learner control and exploration. Questions of evaluation are considered in the light of the computer's ability to record interaction data, and a psycholinguistic model of word production is proposed as a basis for assessing the learner's performance in terms of processes as well as the quantitative 'end product'.
A general model of deep and surface approaches to learning is then adduced to provide a way of interpreting learner subjective data, and an independent means of evaluating the quality of the learning outcome.

A system implementing the strategy is tested with learners of Spanish and English, and the quantitative and qualitative data on learning process and outcome are analyzed in depth. The system is shown to support the learning objectives for learners who adopt a deep approach, or whose approach complements the assumptions of the design in some way, and the general design principles are therefore considered validated. Some aspects of the strategy related to lexicon-building, however, are shown to be inadequately supported, as is the capability of the system to help learners remediate surface approaches. The main conclusion of the study is that, whilst learner exploration of powerful lexical information resources is essential for autonomous vocabulary-learning, on-line tutorial help of the kind that will encourage deep rather than surface approaches is needed to optimise the quality of the learning outcome.
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures adopted in which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them.
Comment: Published in Journal of AI Research (JAIR), volume 61, pp 75-170. 118 pages, 8 figures, 1 table
COTA: Improving the Speed and Accuracy of Customer Support through Ranking and Deep Networks
For a company looking to provide delightful user experiences, it is of
paramount importance to take care of any customer issues. This paper proposes
COTA, a system to improve speed and reliability of customer support for end
users through automated ticket classification and answer selection for support
representatives. Two machine learning and natural language processing
techniques are demonstrated: one relying on feature engineering (COTA v1) and
the other exploiting raw signals through deep learning architectures (COTA v2).
COTA v1 employs a new approach that converts the multi-classification task into
a ranking problem, demonstrating significantly better performance in the case
of thousands of classes. For COTA v2, we propose an Encoder-Combiner-Decoder, a
novel deep learning architecture that allows for heterogeneous input and output
feature types and injection of prior knowledge through network architecture
choices. This paper compares these models and their variants on the task of
ticket classification and answer selection, showing model COTA v2 outperforms
COTA v1, and analyzes their inner workings and shortcomings. Finally, an A/B
test is conducted in a production setting validating the real-world impact of
COTA in reducing issue resolution time by 10 percent without reducing customer
satisfaction.
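The Encoder-Combiner-Decoder pattern described above can be illustrated with a small forward pass: each heterogeneous input type gets its own encoder, the encodings are merged by a combiner, and separate decoder heads produce each output type. This is a minimal NumPy sketch under assumed layer sizes and input names (`text_vec`, `feat_vec`, `ecd_forward` are all hypothetical); the abstract does not specify the paper's actual dimensions or feature types.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(42)
params = {
    "W_text": rng.normal(0, 0.1, size=(50, 16)),   # encoder: ticket-text vector
    "W_feat": rng.normal(0, 0.1, size=(6, 16)),    # encoder: numeric features
    "W_comb": rng.normal(0, 0.1, size=(32, 24)),   # combiner over concatenation
    "W_class": rng.normal(0, 0.1, size=(24, 10)),  # decoder head: ticket class
    "W_ans": rng.normal(0, 0.1, size=(24, 5)),     # decoder head: answer scores
}

def ecd_forward(text_vec, feat_vec, params):
    """Encode each input type separately, combine, then decode per output type."""
    h_text = relu(text_vec @ params["W_text"])                     # encoder 1
    h_feat = relu(feat_vec @ params["W_feat"])                     # encoder 2
    h = relu(np.concatenate([h_text, h_feat]) @ params["W_comb"])  # combiner
    class_probs = softmax(h @ params["W_class"])   # classification output
    answer_scores = h @ params["W_ans"]            # ranking scores for answers
    return class_probs, answer_scores

probs, scores = ecd_forward(rng.normal(size=50), rng.normal(size=6), params)
```

The design point the abstract emphasizes is visible in the shapes: heterogeneous inputs and outputs each get a dedicated branch, while prior knowledge is injected through which branches exist and how they connect, rather than through hand-built features.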