3,360 research outputs found

    Knowledge Extraction from Unsupervised Multi-topographic Neural Network Models

    This paper presents a new approach whose aim is to extend the scope of numerical models by providing them with knowledge extraction capabilities. The basic model considered in this paper is a multi-topographic neural network model. One of the most powerful features of this model is its generalization mechanism, which allows rule extraction to be performed. The extraction of association rules is itself based on original quality measures that evaluate to what extent a numerical classification model behaves as a natural symbolic classifier such as a Galois lattice. A first experimental illustration of rule extraction on documentary data, constituted by a set of patents issued from a patent database, is presented.
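
    The paper's own quality measures are not reproduced here. As a rough illustration of how rule extraction from a clustered document collection can be scored, the sketch below derives "cluster => keyword" association rules and filters them with standard support and confidence thresholds; the data structures (`doc_clusters`, `doc_keywords`) and the thresholds are illustrative assumptions, not the authors' method.

```python
from collections import defaultdict

def extract_rules(doc_clusters, doc_keywords, min_support=0.05, min_confidence=0.7):
    """Derive 'cluster => keyword' association rules from a clustered document set.

    doc_clusters: dict mapping doc id -> cluster label (assumed output of the model)
    doc_keywords: dict mapping doc id -> set of index terms (e.g. patent keywords)
    """
    n_docs = len(doc_clusters)
    cluster_docs = defaultdict(set)
    for doc, cluster in doc_clusters.items():
        cluster_docs[cluster].add(doc)

    rules = []
    for cluster, docs in cluster_docs.items():
        keyword_counts = defaultdict(int)
        for doc in docs:
            for kw in doc_keywords.get(doc, ()):
                keyword_counts[kw] += 1
        for kw, count in keyword_counts.items():
            support = count / n_docs        # fraction of all documents covered by the rule
            confidence = count / len(docs)  # fraction of the cluster carrying the keyword
            if support >= min_support and confidence >= min_confidence:
                rules.append((cluster, kw, support, confidence))
    return rules

# Toy usage with made-up patent documents.
clusters = {"p1": 0, "p2": 0, "p3": 1, "p4": 1}
keywords = {"p1": {"laser"}, "p2": {"laser", "optics"}, "p3": {"polymer"}, "p4": {"polymer"}}
for rule in extract_rules(clusters, keywords, min_support=0.25, min_confidence=0.9):
    print(rule)
```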

    Representation Learning: A Review and New Perspectives

    The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
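
    As a minimal companion to the auto-encoder strand of unsupervised feature learning surveyed here, the PyTorch sketch below trains a one-hidden-layer auto-encoder on unlabeled inputs and reuses its encoder activations as a learned representation; the architecture, code size, and training loop are illustrative assumptions rather than anything prescribed by the paper.

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Single-hidden-layer auto-encoder: input -> code -> reconstruction."""
    def __init__(self, n_features=784, n_code=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_code), nn.ReLU())
        self.decoder = nn.Linear(n_code, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(256, 784)          # stand-in for a batch of unlabeled inputs
for _ in range(100):              # minimise reconstruction error
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    codes = model.encoder(x)      # learned representation for downstream tasks
```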

    Unsupervised Network Pretraining via Encoding Human Design

    Over the years, computer vision researchers have spent an immense amount of effort on designing image features for the visual object recognition task. We propose to incorporate this valuable experience to guide the task of training deep neural networks. Our idea is to pretrain the network through the task of replicating the process of hand-designed feature extraction. By learning to replicate the process, the neural network integrates previous research knowledge and learns to model visual objects in a way similar to the hand-designed features. In the succeeding finetuning step, it further learns object-specific representations from labeled data, and this boosts its classification power. We pretrain two convolutional neural networks, where one replicates the process of histogram of oriented gradients feature extraction and the other replicates the process of region covariance feature extraction. After finetuning, we achieve substantially better performance than the baseline methods. Comment: 9 pages, 11 figures, WACV 2016: IEEE Conference on Applications of Computer Vision.
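
    The authors' implementation is not reproduced here; the sketch below only illustrates the general idea of pretraining by regressing a hand-designed descriptor, using HOG features from scikit-image as the target for a tiny PyTorch CNN. The network architecture, descriptor settings, and training details are assumptions, and the regression head would be swapped for a classifier before finetuning on labeled data.

```python
import numpy as np
import torch
from torch import nn
from skimage.feature import hog

def hog_targets(images):
    """Compute HOG descriptors for a batch of 64x64 grayscale images (NumPy arrays)."""
    return np.stack([hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for img in images])

class SmallConvNet(nn.Module):
    """Tiny CNN whose head regresses the HOG descriptor during pretraining."""
    def __init__(self, hog_dim):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, hog_dim)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Pretraining on unlabeled images (random data stands in for a real image corpus).
images = np.random.rand(32, 64, 64).astype(np.float32)
targets = torch.from_numpy(hog_targets(images)).float()
inputs = torch.from_numpy(images).unsqueeze(1)

model = SmallConvNet(hog_dim=targets.shape[1])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()
# After pretraining, the head would be replaced by a classifier and finetuned on labels.
```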

    Knowledge Base Data Mining and Machine Learning in a Parallel Computing Environment

    The expectation of this research is to greatly broaden the use of remotely sensed imagery by providing a novitiate user access to embedded information and knowledge without embarking upon a full-scale research project to complete the content extraction, storage and retrieval process. The intent of our approach is to develop an intelligent system that can adapt to changes or new information and learn from these changes. This will drastically alter the approach researchers take in using any digital imagery by opening the scientific discovery process, particularly to disciplines that have not traditionally used imagery due to the complexity of the image processing techniques. We hope to accomplish this by the judicious use of declarative and procedural knowledge engineering, and automatic feature or image object labeling using recent classification techniques on BEOWULF parallel computing architectures.
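
    A Beowulf cluster is not reproduced here; as a loose, single-machine analogue of parallel, automatic image-object labeling, the Python sketch below splits a scene into tiles and labels each tile in a pool of worker processes. The tile size and the placeholder `classify_tile` rule are illustrative assumptions standing in for a trained classifier.

```python
import numpy as np
from multiprocessing import Pool

TILE = 128  # tile edge length in pixels (illustrative choice)

def classify_tile(tile):
    """Placeholder classifier: label a tile by its mean brightness.

    In practice this would be a trained classifier applied to features
    or objects extracted from the remotely sensed tile.
    """
    return "bright" if tile.mean() > 0.5 else "dark"

def label_image(image, n_workers=4):
    """Split an image into tiles and label the tiles in parallel worker processes."""
    tiles = [image[r:r + TILE, c:c + TILE]
             for r in range(0, image.shape[0], TILE)
             for c in range(0, image.shape[1], TILE)]
    with Pool(n_workers) as pool:
        return pool.map(classify_tile, tiles)

if __name__ == "__main__":
    scene = np.random.rand(512, 512)   # stand-in for a remotely sensed scene
    print(label_image(scene))
```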

    Integrating Remote Sensing and Geographic Information Systems

    Remote sensing and geographic information systems (GIS) comprise the two major components of geographic information science (GISci), an overarching field of endeavor that also encompasses global positioning systems (GPS) technology, geodesy and traditional cartography (Goodchild 1992, Estes and Star 1993, Hepner et al. 2005). Although remote sensing and GIS developed quasi-independently, the synergism between them has become increasingly apparent (Aronoff 2005). Today, GIS software almost always includes tools for display and analysis of images, and image processing software commonly contains options for analyzing ‘ancillary’ geospatial data (Faust 1998). The significant progress made in ‘integration’ of remote sensing and GIS has been well summarized in several reviews (Ehlers 1990, Mace 1991, Hinton 1996, Wilkinson 1996). Nevertheless, advances are so rapid that periodic reassessment of the state of the art is clearly warranted.