Using the Mean Absolute Percentage Error for Regression Models
We study in this paper the consequences of using the Mean Absolute Percentage
Error (MAPE) as a measure of quality for regression models. We show that
finding the best model under the MAPE is equivalent to doing weighted Mean
Absolute Error (MAE) regression. We show that universal consistency of
Empirical Risk Minimization remains possible using the MAPE instead of the MAE.
Comment: European Symposium on Artificial Neural Networks, Computational
Intelligence and Machine Learning (ESANN), Apr 2015, Bruges, Belgium. 2015,
Proceedings of the 23rd European Symposium on Artificial Neural Networks,
Computational Intelligence and Machine Learning (ESANN 2015)
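The stated equivalence can be checked numerically: the MAPE of a prediction is, term by term, a mean absolute error in which each residual is weighted by the inverse magnitude of its target. A minimal sketch (the data here is illustrative, not from the paper):

```python
import numpy as np

# Hypothetical data: strictly positive targets, as MAPE requires y != 0.
rng = np.random.default_rng(0)
y = rng.uniform(1.0, 10.0, size=100)
y_hat = y + rng.normal(0.0, 1.0, size=100)

# MAPE: mean of |y - y_hat| / |y|
mape = np.mean(np.abs(y - y_hat) / np.abs(y))

# The same quantity, written as a weighted MAE with weights w_i = 1 / |y_i|
w = 1.0 / np.abs(y)
weighted_mae = np.mean(w * np.abs(y - y_hat))

assert np.isclose(mape, weighted_mae)
```

Since the two losses coincide for every prediction, any minimizer of one is a minimizer of the other, which is the sense in which MAPE regression reduces to weighted MAE regression.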
Reducing offline evaluation bias of collaborative filtering algorithms
Recommendation systems have been integrated into the majority of large online
systems to filter and rank information according to user profiles. They thus
influence the way users interact with the system and, as a consequence, bias
the evaluation of the performance of a recommendation algorithm computed using
historical data (via offline evaluation). This paper presents a new application
of a weighted offline evaluation to reduce this bias for collaborative
filtering algorithms.
Comment: European Symposium on Artificial Neural Networks, Computational
Intelligence and Machine Learning (ESANN), Apr 2015, Bruges, Belgium.
pp.137-142, 2015, Proceedings of the 23rd European Symposium on Artificial
Neural Networks, Computational Intelligence and Machine Learning (ESANN 2015)
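The general idea of a weighted offline evaluation can be sketched as follows; the specific weighting scheme of the paper is not reproduced here, so this uses a generic inverse-exposure reweighting as an illustrative assumption (all names and data are hypothetical):

```python
import numpy as np

# Logged interactions from a deployed recommender. Items the system
# exposed more often are over-represented in the historical data.
rng = np.random.default_rng(0)
n = 1000
exposure_prob = rng.uniform(0.1, 1.0, size=n)  # chance the item was shown
reward = rng.binomial(1, 0.3, size=n)          # logged feedback (click or not)

# Naive offline score: biased toward what the old system chose to expose.
naive_score = reward.mean()

# Weighted offline score: reweight each logged interaction by the inverse
# of its exposure probability so rarely shown items count more.
weights = 1.0 / exposure_prob
weighted_score = (weights * reward).sum() / weights.sum()
```

Normalizing by the weight sum keeps the weighted score on the same scale as the naive one, so the two remain directly comparable.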
Exact ICL maximization in a non-stationary time extension of the latent block model for dynamic networks
The latent block model (LBM) is a flexible probabilistic tool to describe
interactions between node sets in bipartite networks, but it does not account
for interactions of time varying intensity between nodes in unknown classes. In
this paper we propose a non-stationary temporal extension of the LBM that
simultaneously clusters the two node sets of a bipartite network and constructs
classes of time intervals on which interactions are stationary. The number of
clusters as well as the membership to classes are obtained by maximizing the
exact complete-data integrated likelihood relying on a greedy search approach.
Experiments on simulated and real data are carried out in order to assess the
proposed methodology.
Comment: European Symposium on Artificial Neural Networks, Computational
Intelligence and Machine Learning (ESANN), Apr 2015, Bruges, Belgium.
pp.225-230, 2015, Proceedings of the 23rd European Symposium on Artificial
Neural Networks, Computational Intelligence and Machine Learning (ESANN 2015)
Generating Artificial Data for Private Deep Learning
In this paper, we propose generating artificial data that retain statistical
properties of real data as the means of providing privacy with respect to the
original dataset. We use a generative adversarial network to draw
privacy-preserving artificial data samples and derive an empirical method to
assess the risk of information disclosure in a differential-privacy-like way.
Our experiments show that we are able to generate artificial data of high
quality and successfully train and validate machine learning models on this
data while limiting potential privacy loss.
Comment: Privacy-Enhancing Artificial Intelligence and Language Technologies,
AAAI Spring Symposium Series, 201
Dissimilarity Clustering by Hierarchical Multi-Level Refinement
We introduce in this paper a new way of optimizing the natural extension of
the quantization error used in k-means clustering to dissimilarity data. The
proposed method is based on hierarchical clustering analysis combined with
multi-level heuristic refinement. The method is computationally efficient and
achieves better quantization errors than the
Comment: 20th European Symposium on Artificial Neural Networks, Computational
Intelligence and Machine Learning (ESANN 2012), Bruges, Belgium (2012)
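The "natural extension" of the k-means quantization error to dissimilarity data is commonly taken to be the normalized sum of within-cluster pairwise dissimilarities, which agrees with the classical error when the dissimilarities are squared Euclidean distances. A minimal sketch of that quantity (function name and data are illustrative, not from the paper):

```python
import numpy as np

def dissimilarity_quantization_error(D, labels):
    """For each cluster c with n_c members, add
    (1 / (2 * n_c)) * sum of pairwise dissimilarities inside c.
    When D holds squared Euclidean distances, this equals the
    usual k-means quantization error around cluster means."""
    err = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        err += D[np.ix_(idx, idx)].sum() / (2 * len(idx))
    return err

# Sanity check against the classical k-means error on Euclidean data.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
labels = rng.integers(0, 3, size=30)
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # squared distances

kmeans_err = sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
                 for c in np.unique(labels))
assert np.isclose(dissimilarity_quantization_error(D, labels), kmeans_err)
```

Because the criterion needs only the matrix D, it can be optimized for arbitrary dissimilarities where no cluster mean exists, which is the setting the abstract addresses.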
Frontiers in Precision Medicine IV: Artificial Intelligence, Assembling Large Cohorts, and the Population Data Revolution
Large cohort studies and, more recently, electronic medical records (EMR) are being used to collect massive amounts of genetic information. Implementation of artificial intelligence has become increasingly necessary to interpret these data with the goal of augmenting patient care. While it is impossible to predict what the future holds, policy makers are challenged to create guiding principles and responsibly roll out these new technologies. On March 22, 2019, the University of Utah hosted its fourth annual Precision Medicine Symposium, focusing on artificial intelligence, assembling large cohorts, and the population data revolution. The symposium brought together experts in medicine, science, law, and ethics to discuss and debate these emerging issues.
Electronic Journal of SADIO Special Issue on ASAI 2006
ASAI, the Argentine Symposium on Artificial Intelligence, is an annual event intended to be the main forum of the Artificial Intelligence (AI) community in Argentina. The symposium provides a venue for researchers and AI community members to discuss and exchange ideas and experiences on diverse topics of AI. The Eighth Argentine Symposium on Artificial Intelligence, ASAI 2006, was held during 4–5 September 2006 in Mendoza, Argentina. ASAI 2006 was part of the 35th JAIIO, the 35th Argentine Meetings on Informatics and Operations Research, organized by SADIO.
Sociedad Argentina de Informática e Investigación Operativa