
    Sequential compact code learning for unsupervised image hashing

    Effective hashing for large-scale image databases is a popular research area, attracting much attention in computer vision and visual information retrieval. Several recent methods attempt to learn either graph embedding or semantic coding for fast and accurate applications. In this paper, a novel unsupervised framework, termed evolutionary compact embedding (ECE), is introduced to automatically learn task-specific binary hash codes. It can be regarded as an optimization algorithm that combines genetic programming (GP) with a boosting trick. In our architecture, each bit of ECE is iteratively computed by a weak binary classification function, generated through GP evolution by jointly minimizing its empirical risk under the AdaBoost strategy on a training set. We treat this as greedy optimization, embedding high-dimensional data points into a similarity-preserving Hamming space of low dimension. We systematically evaluate ECE on two data sets, SIFT 1M and GIST 1M, demonstrating the effectiveness and accuracy of our method for large-scale similarity search.
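    A minimal sketch of the boosted, bit-by-bit learning loop described above, under loudly stated assumptions: the GP-evolved weak binary functions are replaced here with random hyperplane candidates, and the toy labels stand in for the similarity-derived supervision the framework would construct internally; `weak_candidates`, `learn_bits`, and `hash_codes` are hypothetical names, not the authors' code.

```python
# Sketch only: random hyperplanes stand in for GP-evolved weak binary
# functions (an assumption, not ECE's actual weak learners).
import numpy as np

rng = np.random.default_rng(0)

def weak_candidates(d, n_cand=50):
    """Random unit-norm hyperplanes as stand-in weak binary functions."""
    W = rng.normal(size=(n_cand, d))
    return W / np.linalg.norm(W, axis=1, keepdims=True)

def learn_bits(X, y, n_bits=8, n_cand=50):
    """Greedily pick one weak binary function per hash bit, AdaBoost-style.

    X: (N, d) data; y: (N,) +/-1 toy targets standing in for the
    similarity-derived supervision the paper builds internally.
    """
    N, d = X.shape
    w = np.full(N, 1.0 / N)                    # AdaBoost sample weights
    planes = []
    for _ in range(n_bits):
        cands = weak_candidates(d, n_cand)
        preds = np.sign(X @ cands.T)           # (N, n_cand) in {-1, +1}
        errs = (w[:, None] * (preds != y[:, None])).sum(axis=0)
        best = int(np.argmin(errs))            # least weighted empirical risk
        eps = max(errs[best], 1e-12)
        alpha = 0.5 * np.log((1 - eps) / eps)
        w *= np.exp(-alpha * y * preds[:, best])   # upweight hard samples
        w /= w.sum()
        planes.append(cands[best])
    return np.array(planes)                    # each row yields one hash bit

def hash_codes(X, planes):
    """Embed points into a low-dimensional binary Hamming space."""
    return (X @ planes.T > 0).astype(np.uint8)

# Toy usage: 1,000 points in 32-D, target = sign of the first coordinate.
X = rng.normal(size=(1000, 32))
y = np.sign(X[:, 0])
codes = hash_codes(X, learn_bits(X, y))
print(codes.shape)  # (1000, 8): one 8-bit code per point
```

    The greedy outer loop mirrors the abstract's description: each bit is chosen to minimize weighted empirical risk, and the reweighting step is what makes successive bits complementary rather than redundant.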

    Deep Learning for Metagenomic Data: using 2D Embeddings and Convolutional Neural Networks

    Deep learning (DL) techniques have had unprecedented success when applied to images, waveforms, and text, to name a few. In general, when the sample size (N) is much greater than the number of features (d), DL outperforms previous machine learning (ML) techniques, often through the use of convolutional neural networks (CNNs). However, in many bioinformatics ML tasks, we encounter the opposite situation, where d is greater than N. In these situations, applying DL techniques (such as feed-forward networks) would lead to severe overfitting. Thus, sparse ML techniques (such as the LASSO) usually yield the best results on these tasks. In this paper, we show how to apply CNNs to data that do not originally have an image structure (in particular, metagenomic data). Our first contribution is to show how to map metagenomic data in a meaningful way to 1D or 2D images. Based on this representation, we then apply a CNN with the aim of predicting various diseases. The proposed approach is applied to six different datasets comprising in total over 1,000 samples from various diseases. This approach could be a promising one for prediction tasks in the bioinformatics field.
    Comment: Accepted at NIPS 2017 Workshop on Machine Learning for Health (https://ml4health.github.io/2017/); in Proceedings of the NIPS ML4H 2017 Workshop in Long Beach, CA, USA
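    A hedged sketch of the pipeline the abstract outlines: an abundance vector is reshaped row-wise into a square grayscale "image" (one simple fill-up-style layout; the paper's actual 1D/2D embeddings may differ) and fed to a small CNN. The network shape, `to_image`, and `SmallCNN` are illustrative assumptions, not the authors' architecture.

```python
# Sketch: map a d-dimensional abundance vector to a 2D image, then
# classify it with a small CNN. Sizes and layout are assumptions.
import numpy as np
import torch
import torch.nn as nn

def to_image(abundances, side=32):
    """Zero-pad a feature vector and reshape it row-wise into side x side."""
    img = np.zeros(side * side, dtype=np.float32)
    img[: len(abundances)] = abundances
    return img.reshape(1, side, side)  # single grayscale channel

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Toy usage: 100 samples with 900 species abundances each (d >> N,
# the regime the abstract describes).
rng = np.random.default_rng(0)
X = np.stack([to_image(rng.random(900)) for _ in range(100)])
model = SmallCNN()
logits = model(torch.from_numpy(X))
print(logits.shape)  # torch.Size([100, 2]): one disease score pair per sample
```

    The point of the mapping is only to give convolutions a spatial grid to slide over; any consistent feature-to-pixel assignment would let the same CNN machinery apply.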

    Hunter-gatherers in a howling wilderness: Neoliberal capitalism as a language that speaks itself

    Get PDF
    The 'self-referential' character of evolutionary process noted by Goldenfeld and Woese (2010) can be restated in the context of a generalized Darwinian theory applied to economic process through a 'language' model: the underlying inherited and learned culture of the firm, the short-time cognitive response of the firm to patterns of threat and opportunity that is sculpted by that culture, and the embedding socioeconomic environment are represented as interacting information sources constrained by the asymptotic limit theorems of information theory. If unregulated, the larger, compound source that characterizes high-probability evolutionary paths of this composite then becomes, literally, a self-dynamic language that speaks itself. Such a structure is, for those enmeshed in it, more akin to a primitive hunter-gatherer society at the mercy of internal ecological dynamics than to, say, a neolithic agricultural community in which a highly ordered, deliberately adapted ecosystem is consciously farmed so as to match its productivity to human needs.
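    For concreteness, the canonical asymptotic limit theorem constraining such information sources is the Shannon-McMillan-Breiman theorem (the asymptotic equipartition property); the sketch below states it in standard form, which is a textbook fact rather than anything specific to this paper's model.

```latex
% Shannon-McMillan-Breiman: for a stationary ergodic source X with
% entropy rate H(X),
\[
  -\frac{1}{n}\,\log p(X_1, \dots, X_n) \;\longrightarrow\; H(X)
  \quad \text{a.s. as } n \to \infty,
\]
% so the high-probability paths form a set of roughly 2^{n H(X)}
% "grammatical" sequences -- the sense in which the compound source
% can be treated as a language.
```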