Neural Collaborative Ranking
Recommender systems aim to generate a personalized ranked list of items that an end user might be interested in. With the unprecedented success of deep learning in computer vision and speech recognition, bridging the gap between recommender systems and deep neural networks has recently become a hot topic, and deep learning methods have been shown to achieve state-of-the-art results on many recommendation tasks. For example, a recent model, NeuMF, first projects users and items into a shared low-dimensional latent feature space, and then employs neural nets to model the interaction between the user and item latent features, obtaining state-of-the-art performance on recommendation tasks. NeuMF assumes that non-interacted items are inherently negative and uses negative sampling to relax this assumption. In this paper, we examine an alternative approach which does not assume that non-interacted items are necessarily negative, only that they are less preferred than interacted items. Specifically, we develop a new classification strategy based on the widely used pairwise ranking assumption. We combine our classification strategy with the recently proposed neural collaborative filtering framework and propose a general collaborative ranking framework called Neural Network based Collaborative Ranking (NCR). We resort to a neural network architecture to model a user's pairwise preference between items, with the belief that the neural network will effectively capture the latent structure of the factors. The experimental results on two real-world datasets show the superior performance of our models in comparison with several state-of-the-art approaches.
Comment: Proceedings of the 2018 ACM Conference on Information and Knowledge Management
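The pairwise ranking assumption the abstract refers to can be illustrated with a minimal sketch. The tiny latent vectors and dot-product scorer below are illustrative stand-ins, not the NCR architecture, which models the interaction with a neural network:

```python
import math
import random

random.seed(0)

# Hypothetical tiny latent-factor setup: each user/item gets a K-dim vector.
K = 4
user = [random.gauss(0, 0.1) for _ in range(K)]
item_pos = [random.gauss(0, 0.1) for _ in range(K)]  # an interacted item
item_neg = [random.gauss(0, 0.1) for _ in range(K)]  # a non-interacted item

def score(u, v):
    """Interaction score; a dot product stands in for the neural interaction net."""
    return sum(a * b for a, b in zip(u, v))

def pairwise_loss(u, i_pos, i_neg):
    """-log sigmoid(s_pos - s_neg): penalize ranking a non-interacted item
    above an interacted one, without treating the non-interacted item as
    an outright negative example."""
    diff = score(u, i_pos) - score(u, i_neg)
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

loss = pairwise_loss(user, item_pos, item_neg)
```

Minimizing this loss pushes interacted items above non-interacted ones in the ranking, which is exactly the weaker preference assumption the paper adopts instead of labeling non-interacted items negative.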
Automatic Handling of Imbalanced Datasets for Classification
Imbalanced data is present in various business areas, and facing it without proper knowledge can have undesired negative consequences. In addition, the most common evaluation metrics in machine learning for measuring the desired solution can be inappropriate and misleading. Multiple combinations of methods have been proposed to handle imbalanced data; however, they often require specialised knowledge to be used correctly.
In imbalanced classification, correctly classifying the underrepresented class tends to be more important than the overrepresented class, while also being more challenging and time-consuming. Several approaches, ranging from the more accessible to the more advanced, in the domains of data resampling and cost-sensitive techniques, are considered for handling imbalanced data.
The application developed delivers recommendations of the most suitable combinations of techniques for the specific dataset imported, by extracting and comparing meta-feature values recorded in a knowledge base. It facilitates effortless classification and automates part of the machine learning pipeline, with comparable or better results than a state-of-the-art solution and with a much smaller execution time.
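As a minimal illustration of the cost-sensitive side of the techniques the abstract mentions, a common baseline is to weight each class inversely to its frequency. The helper name below is hypothetical, not from the application described:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency, so the minority class
    contributes roughly as much total loss as the majority class."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# A 90/10 imbalanced label set: the minority class 1 gets a larger weight.
labels = [0] * 90 + [1] * 10
w = inverse_frequency_weights(labels)
```

These weights can then be passed to a cost-sensitive learner so that misclassifying a minority example costs proportionally more than misclassifying a majority one.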
Is "Better Data" Better than "Better Data Miners"? (On the Benefits of Tuning SMOTE for Defect Prediction)
We report and fix an important systematic error in prior studies that ranked classifiers for software analytics. Those studies did not (a) assess classifiers on multiple criteria, nor (b) study how variations in the data affect the results. Hence, this paper applies (a) multi-criteria tests while (b) fixing the weaker regions of the training data (using SMOTUNED, a self-tuning version of SMOTE). This approach leads to dramatically large improvements in software defect prediction. When applied in a 5x5 cross-validation study for 3,681 Java classes (containing over a million lines of code) from open source systems, SMOTUNED increased AUC and recall by 60% and 20% respectively. These improvements are independent of the classifier used to predict quality. The same pattern of improvement was observed when SMOTE and SMOTUNED were compared against the most recent class imbalance technique. In conclusion, for software analytics tasks like defect prediction, (1) data pre-processing can be more important than classifier choice, (2) ranking studies are incomplete without such pre-processing, and (3) SMOTUNED is a promising candidate for pre-processing.
Comment: 10 pages + 2 references. Accepted to the International Conference on Software Engineering (ICSE), 2018
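The core SMOTE idea the abstract builds on, synthesizing minority samples by interpolating toward nearby minority neighbours, can be sketched as follows. This is a simplified pure-Python illustration; SMOTUNED would additionally tune parameters such as the number of neighbours and the amount of oversampling:

```python
import random

random.seed(1)

def smote_like(minority, n_new, k=2):
    """Generate synthetic minority samples by interpolating between a random
    minority point and one of its k nearest minority neighbours (the core
    SMOTE idea; not the authors' exact implementation)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = random.choice(minority)
        # k nearest minority neighbours of x (excluding x itself).
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist2(x, p))[:k]
        nn = random.choice(neighbours)
        gap = random.random()  # position along the segment x -> nn
        synthetic.append(tuple(xi + gap * (ni - xi)
                               for xi, ni in zip(x, nn)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new = smote_like(minority, n_new=4)
```

Each synthetic point lies on a line segment between two real minority points, so the oversampled class stays inside the region the minority data already occupies.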
Focusing on the Big Picture: Insights into a Systems Approach to Deep Learning for Satellite Imagery
Deep learning tasks are often complicated and require a variety of components
working together efficiently to perform well. Due to the often large scale of
these tasks, there is a necessity to iterate quickly in order to attempt a
variety of methods and to find and fix bugs. While participating in IARPA's
Functional Map of the World challenge, we identified challenges along the
entire deep learning pipeline and found various solutions to these challenges.
In this paper, we present the performance, engineering, and deep learning
considerations with processing and modeling data, as well as underlying
infrastructure considerations that support large-scale deep learning tasks. We
also discuss insights and observations with regard to satellite imagery and
deep learning for image classification.
Comment: Accepted to IEEE Big Data 2018
Privacy-Aware Recommender Systems Challenge on Twitter's Home Timeline
Recommender systems constitute the core engine of most social network
platforms nowadays, aiming to maximize user satisfaction along with other key
business objectives. Twitter is no exception. Despite the fact that Twitter
data has been extensively used to understand socioeconomic and political
phenomena and user behaviour, the implicit feedback provided by users on Tweets
through their engagements on the Home Timeline has only been explored to a
limited extent. At the same time, there is a lack of large-scale public social
network datasets that would enable the scientific community to both benchmark
and build more powerful and comprehensive models that tailor content to user
interests. By releasing an original dataset of 160 million Tweets along with
engagement information, Twitter aims to address exactly that. During this
release, special attention is drawn to maintaining compliance with existing
privacy laws. Apart from user privacy, this paper touches on the key challenges
faced by researchers and professionals striving to predict user engagements. It
further describes the key aspects of the RecSys 2020 Challenge that was
organized by ACM RecSys in partnership with Twitter using this dataset.
Comment: 16 pages, 2 tables