Deep Feature Factorization For Concept Discovery
We propose Deep Feature Factorization (DFF), a method capable of localizing
similar semantic concepts within an image or a set of images. We use DFF to
gain insight into a deep convolutional neural network's learned features, where
we detect hierarchical cluster structures in feature space. This is visualized
as heat maps, which highlight semantically matching regions across a set of
images, revealing what the network 'perceives' as similar. DFF can also be used
to perform co-segmentation and co-localization, and we report state-of-the-art
results on these tasks.
Comment: The European Conference on Computer Vision (ECCV), 201
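The core idea behind DFF is to apply non-negative matrix factorization to the (non-negative, post-ReLU) activations of a late CNN layer, so that each factor becomes a spatial heat map for one discovered concept. A minimal sketch, assuming activations have already been extracted as an `(N, H, W, C)` array; the function name and shapes are illustrative, not the authors' released code:

```python
import numpy as np
from sklearn.decomposition import NMF

def dff_heatmaps(activations, k=3):
    """Factorize non-negative CNN activations of shape (N, H, W, C)
    into k concept heat maps via NMF over spatial positions."""
    n, h, w, c = activations.shape
    flat = activations.reshape(n * h * w, c)   # one row per spatial position
    model = NMF(n_components=k, init="nndsvda", max_iter=400)
    weights = model.fit_transform(flat)        # (N*H*W, k) concept strengths
    return weights.reshape(n, h, w, k)         # per-image heat map per concept

# Toy usage: random ReLU-like activations standing in for 2 images.
acts = np.random.rand(2, 4, 4, 8)
maps = dff_heatmaps(acts, k=3)
print(maps.shape)  # (2, 4, 4, 3)
```

Because the same factorization is run over positions pooled from several images, the same concept channel lights up on semantically matching regions across the whole set.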
Learning from Multi-View Multi-Way Data via Structural Factorization Machines
Real-world relations among entities can often be observed and determined by
different perspectives/views. For example, the decision made by a user on
whether to adopt an item relies on multiple aspects such as the contextual
information of the decision, the item's attributes, the user's profile and the
reviews given by other users. Different views may exhibit multi-way
interactions among entities and provide complementary information. In this
paper, we introduce a multi-tensor-based approach that can preserve the
underlying structure of multi-view data in a generic predictive model.
Specifically, we propose structural factorization machines (SFMs) that learn
the common latent spaces shared by multi-view tensors and automatically adjust
the importance of each view in the predictive model. Furthermore, the
complexity of SFMs is linear in the number of parameters, which makes SFMs
suitable to large-scale problems. Extensive experiments on real-world datasets
demonstrate that the proposed SFMs outperform several state-of-the-art methods
in terms of prediction accuracy and computational cost.
Comment: 10 pages
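The linear-complexity claim echoes the classic factorization-machine reformulation, in which the quadratic pairwise-interaction sum is computed in O(d·k) time via shared latent factors. A minimal sketch of a plain second-order factorization machine, the building block that the paper's SFMs generalize to multi-view tensors; all names and dimensions here are illustrative:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine: bias + linear term + pairwise
    interactions <V_i, V_j> x_i x_j, computed via shared latent factors V
    (d features x k factors) in O(d*k) instead of O(d^2)."""
    linear = w0 + x @ w
    # Identity: sum_{i<j} <V_i,V_j> x_i x_j
    #         = 0.5 * sum_f [ (sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2 ]
    inter = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return linear + inter

rng = np.random.default_rng(0)
d, k = 6, 3
x, w, V = rng.random(d), rng.random(d), rng.random((d, k))
pred = fm_predict(x, 0.1, w, V)
print(float(pred))
```

The reformulated interaction term touches each parameter once, which is what keeps the model usable on large, sparse multi-view feature vectors.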
Deep recommender engine based on efficient product embeddings neural pipeline
Predictive analytics systems are currently one of the most important areas of
research and development within the Artificial Intelligence domain and
particularly in Machine Learning. One of the "holy grails" of predictive
analytics is the research and development of the "perfect" recommendation
system. In our paper, we propose an advanced pipeline model for the multi-task
objective of determining product complementarity and similarity and predicting
sales, using deep neural models applied to big-data sequential transaction
systems. Our highly parallelized hybrid model pipeline consists of both
unsupervised and supervised models, used for generating semantic product
embeddings and predicting sales, respectively. Our experiments and benchmarks
use real-life transactional big-data streams from the pharmaceutical retail
industry.
Comment: 2018 17th RoEduNet Conference: Networking in Education and Research (RoEduNet)
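Unsupervised product embeddings of this kind are typically trained word2vec-style, treating each transaction basket as a "sentence" of product IDs and each product as a "word". A minimal sketch of the first step, generating skip-gram training pairs from baskets; the basket contents and function name are illustrative, not the paper's data:

```python
def skipgram_pairs(baskets, window=2):
    """Generate (target, context) product pairs from transaction sequences --
    the training data for word2vec-style product embeddings."""
    pairs = []
    for basket in baskets:
        for i, target in enumerate(basket):
            lo, hi = max(0, i - window), min(len(basket), i + window + 1)
            pairs.extend((target, basket[j]) for j in range(lo, hi) if j != i)
    return pairs

# Toy baskets standing in for sequential transaction data.
baskets = [["aspirin", "vitamin_c", "bandage"], ["aspirin", "ibuprofen"]]
print(skipgram_pairs(baskets)[:3])
```

Feeding such pairs to a skip-gram model yields embeddings in which frequently co-purchased products land near each other, which is what makes the downstream complementarity and similarity tasks tractable.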
A deep matrix factorization method for learning attribute representations
Semi-Non-negative Matrix Factorization is a technique that learns a
low-dimensional representation of a dataset that lends itself to a clustering
interpretation. The mapping between this new representation and the original
data matrix may contain rather complex hierarchical information with implicit
lower-level hidden attributes that classical one-level clustering methods
cannot interpret. In this work we propose a novel model, Deep Semi-NMF, that
learns hidden representations which lend themselves to a clustering
interpretation according to different, unknown attributes of a given dataset.
We also present a semi-supervised version of the algorithm, named Deep WSF,
which incorporates (partial) prior information for each of the known
attributes of a dataset, allowing the model to be used on datasets with mixed
attribute knowledge. Finally, we show that our models learn low-dimensional
representations that are better suited not only for clustering but also for
classification, outperforming Semi-Non-negative Matrix Factorization as well
as other state-of-the-art methods.
Comment: Submitted to TPAMI (16-Mar-2015
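The one-layer building block that Deep Semi-NMF stacks is Semi-NMF itself: X ≈ F·Gᵀ with only G constrained to be non-negative, so X may contain negative values. A minimal sketch of the standard alternating updates (least squares for F, a multiplicative rule that keeps G non-negative); this follows the commonly used Ding-style update, not the paper's exact multi-layer implementation:

```python
import numpy as np

def semi_nmf(X, k, iters=200, eps=1e-9):
    """Semi-NMF: X (n x m) ~ F @ G.T with G >= 0 and F unconstrained.
    Alternates exact least squares for F with a multiplicative update for G."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    G = rng.random((m, k))
    for _ in range(iters):
        F = X @ G @ np.linalg.pinv(G.T @ G)        # least-squares solve for F
        A, B = X.T @ F, G @ (F.T @ F)
        Ap, An = (np.abs(A) + A) / 2, (np.abs(A) - A) / 2   # positive/negative parts
        Bp, Bn = (np.abs(B) + B) / 2, (np.abs(B) - B) / 2
        G *= np.sqrt((Ap + Bn) / (An + Bp + eps))  # multiplicative step keeps G >= 0
    return F, G

# Toy usage on a small signed matrix (NMF proper could not handle this X).
X = np.random.default_rng(1).standard_normal((8, 6))
F, G = semi_nmf(X, k=3)
print(np.linalg.norm(X - F @ G.T))
```

Deep Semi-NMF then factorizes F itself recursively, so each layer exposes clusters at a different attribute granularity.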