
    Efficient Deep Feature Learning and Extraction via StochasticNets

    Deep neural networks are a powerful tool for feature learning and extraction given their ability to model high-level abstractions in highly complex data. One area worth exploring in feature learning and extraction using deep neural networks is efficient neural connectivity formation for faster feature learning and extraction. Motivated by findings of stochastic synaptic connectivity formation in the brain as well as the brain's uncanny ability to efficiently represent information, we propose the efficient learning and extraction of features via StochasticNets, where sparsely-connected deep neural networks can be formed via stochastic connectivity between neurons. To evaluate the feasibility of such a deep neural network architecture for feature learning and extraction, we train deep convolutional StochasticNets to learn abstract features using the CIFAR-10 dataset, and extract the learned features from images to perform classification on the SVHN and STL-10 datasets. Experimental results show that features learned using deep convolutional StochasticNets, with fewer neural connections than conventional deep convolutional neural networks, can allow for classification accuracy better than or comparable to conventional deep neural networks: a relative test error decrease of ~4.5% for classification on the STL-10 dataset and ~1% for classification on the SVHN dataset. Furthermore, the deep features extracted using deep convolutional StochasticNets provide comparable classification accuracy even when only 10% of the training data is used for feature learning. Finally, significant gains in feature extraction speed can be achieved in embedded applications using StochasticNets. As such, StochasticNets allow for faster feature learning and extraction while providing better or comparable accuracy.
    Comment: 10 pages. arXiv admin note: substantial text overlap with arXiv:1508.0546
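    The core idea of stochastic connectivity formation can be sketched in a few lines of NumPy: each possible neuron-to-neuron connection in a layer is kept or dropped by an independent Bernoulli draw, and the forward pass only carries signal over the surviving connections. This is a minimal illustration, not the paper's implementation; the layer sizes, connection probability `p`, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_layer_mask(n_in, n_out, p=0.5):
    """Bernoulli mask deciding which of the n_in * n_out possible
    connections actually exist in the sparsely-connected layer."""
    return rng.random((n_in, n_out)) < p

def sparse_forward(x, weights, mask):
    """Forward pass in which masked-out connections carry no signal."""
    return x @ (weights * mask)

# A toy fully-connected layer, sparsified at formation time.
mask = stochastic_layer_mask(8, 4, p=0.5)
w = rng.standard_normal((8, 4))
x = rng.standard_normal((1, 8))
y = sparse_forward(x, w, mask)
```

    With roughly half the connections removed at formation time, the layer performs proportionally fewer effective multiply-accumulates, which is the source of the feature extraction speedups reported for embedded applications.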

    Incremental Art: A Neural Network System for Recognition by Incremental Feature Extraction

    Incremental ART extends adaptive resonance theory (ART) by incorporating mechanisms for efficient recognition through incremental feature extraction. The system achieves efficient, confident prediction through the controlled acquisition of only those features necessary to discriminate an input pattern. These capabilities are achieved through three modifications to the fuzzy ART system: (1) a partial feature vector complement coding rule extends fuzzy ART logic to allow recognition based on partial feature vectors; (2) an F2 decision criterion measures ART predictive confidence; (3) an incremental feature extraction layer computes the next feature to extract based on a measure of predictive value. Our system is demonstrated on a face recognition problem but has general applicability as a machine vision solution and as a model for studying scanning patterns.
    Office of Naval Research (N00014-92-J-4015, N00014-92-J-1309, N00014-91-4100); Air Force Office of Scientific Research (90-0083); National Science Foundation (IRI 90-00530)
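    One plausible reading of modification (1) can be sketched in NumPy: a feature vector is complement-coded as in standard fuzzy ART, while features that have not yet been extracted are coded as (1, 1), so the fuzzy-AND with category weights imposes no constraint on them. The (1, 1) convention and all names below are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def partial_complement_code(a, extracted):
    """Complement-code feature vector a as [a, 1 - a]; positions whose
    feature has not been extracted are coded (1, 1) so that
    min(I, w) there reduces to w — i.e., no evidence either way.
    (The (1, 1) convention is an assumption, not the paper's rule.)"""
    a = np.asarray(a, dtype=float)
    code = np.concatenate([a, 1.0 - a])
    n = len(a)
    for i in range(n):
        if not extracted[i]:
            code[i] = 1.0
            code[i + n] = 1.0
    return code

def choice(code, w, alpha=0.001):
    """Standard fuzzy ART choice function T_j = |I ^ w| / (alpha + |w|),
    where ^ is the component-wise minimum (fuzzy AND)."""
    return np.minimum(code, w).sum() / (alpha + w.sum())
```

    Under this reading, category choice and match can be computed after each extracted feature, and extraction stops as soon as the decision criterion of modification (2) signals sufficient confidence.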

    Generating and visualizing a soccer knowledge base

    This demo abstract describes the SmartWeb Ontology-based Information Extraction System (SOBIE). A key feature of SOBIE is that all information is extracted and stored with respect to the SmartWeb ontology. In this way, other components of the system, which use the same ontology, can access this information in a straightforward way. We will show how information extracted by SOBIE is visualized within its original context, thus enhancing the browsing experience of the end user.

    Feature and viewpoint selection for industrial car assembly

    Quality assurance programs of today’s car manufacturers show increasing demand for automated visual inspection tasks. A typical example is just-in-time checking of assemblies along production lines. Since high throughput must be achieved, object recognition and pose estimation rely heavily on offline preprocessing stages of available CAD data. In this paper, we propose a complete, universal framework for CAD model feature extraction and entropy-index-based viewpoint selection that is developed in cooperation with a major German car manufacturer.
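    The entropy-index idea — scoring a candidate viewpoint by the Shannon entropy of the feature types visible from it, so that diverse, discriminative views are preferred — might be sketched as follows. This is a toy illustration; the feature labels and the exact scoring are assumptions, not the paper's actual index.

```python
import math
from collections import Counter

def viewpoint_entropy(visible_features):
    """Shannon entropy (bits) of the distribution of feature types
    visible from a viewpoint; higher entropy means a more varied,
    and thus more discriminative, view of the part."""
    counts = Counter(visible_features)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

def best_viewpoint(candidates):
    """Pick the candidate viewpoint with the highest entropy index."""
    return max(candidates, key=lambda vp: viewpoint_entropy(candidates[vp]))

# Hypothetical CAD features visible from two candidate camera poses.
views = {
    "front": ["edge", "edge", "hole", "corner"],
    "top":   ["edge", "edge", "edge", "edge"],
}
```

    Because the scoring depends only on the CAD model, the whole ranking can be precomputed offline, which matches the throughput requirement stated in the abstract.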

    Margin maximizing discriminant analysis

    Abstract. We propose a new feature extraction method called Margin Maximizing Discriminant Analysis (MMDA) which seeks to extract features suitable for classification tasks. MMDA is based on the principle that an ideal feature should convey the maximum information about the class labels and it should depend only on the geometry of the optimal decision boundary and not on those parts of the distribution of the input data that do not participate in shaping this boundary. Further, distinct feature components should convey unrelated information about the data. Two feature extraction methods are proposed for calculating the parameters of such a projection that are shown to yield equivalent results. The kernel mapping idea is used to derive non-linear versions. Experiments with several real-world, publicly available data sets demonstrate that the new method yields competitive results.
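    As a rough illustration of the margin-maximization principle (not the paper's algorithm), one can extract orthogonal feature directions by repeatedly fitting a soft-margin linear classifier, taking its normal vector as a feature direction, and deflating the data onto the orthogonal complement so the next direction conveys unrelated information. The hinge-loss gradient solver and the deflation scheme below are simplified assumptions.

```python
import numpy as np

def svm_direction(X, y, lr=0.1, lam=0.01, epochs=200):
    """Full-batch gradient descent on the regularized hinge loss;
    returns the unit normal of an approximately margin-maximizing
    separating hyperplane (labels y in {-1, +1})."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(X.shape[1]) * 0.01
    for _ in range(epochs):
        viol = y * (X @ w) < 1          # margin violators
        if viol.any():
            grad = lam * w - (X[viol] * y[viol][:, None]).mean(axis=0)
        else:
            grad = lam * w
        w -= lr * grad
    return w / np.linalg.norm(w)

def mmda_features(X, y, k=2):
    """Extract k mutually orthogonal margin-based directions by
    deflation: after each direction, remove it from the data."""
    dirs = []
    Xd = X.copy()
    for _ in range(k):
        w = svm_direction(Xd, y)
        for d in dirs:                  # Gram-Schmidt against earlier dirs
            w = w - (w @ d) * d
        w = w / np.linalg.norm(w)
        dirs.append(w)
        Xd = Xd - np.outer(Xd @ w, w)   # deflate the found direction
    return np.array(dirs)
```

    The orthogonalization step mirrors the requirement in the abstract that distinct feature components convey unrelated information, and the margin-based direction depends only on points near the decision boundary rather than on the full input distribution.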

    Computer-aided tongue image diagnosis and analysis

    Title from PDF of title page (University of Missouri--Columbia, viewed on May 14, 2013). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Dissertation advisor: Dr. Ye Duan. Includes bibliographical references. Vita. Ph.D., University of Missouri--Columbia, 2012 ("May 2012").
    This work focuses on computer-aided tongue image analysis, specifically as it relates to Traditional Chinese Medicine (TCM). Computerized tongue diagnosis aids medical practitioners in capturing quantitative features to improve the reliability and consistency of diagnosis. A complete computer-aided tongue analysis framework consists of tongue detection, tongue segmentation, tongue feature extraction, and tongue classification and analysis, all of which are included in our work. We propose a new hybrid image segmentation algorithm that integrates the region-based method with the boundary-based method. We apply this segmentation algorithm in designing an automatic tongue detection and segmentation framework. We also develop a novel color space based feature set for tongue feature extraction to implement an automated ZHENG (TCM syndrome) classification system using machine learning techniques. To further enhance the performance of our classification system, we propose preprocessing the tongue images using the Modified Specular-free technique prior to feature extraction, and explore the extraction of geometry features from the petechiae. Lastly, we propose a new feature set for automated tongue shape classification.
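    The abstract does not specify the color space based feature set, so the sketch below is a hypothetical stand-in: a tongue-region feature vector built from per-channel RGB statistics plus normalized chromaticity, the kind of illumination-tolerant color descriptor such a classifier might consume. All names and choices here are assumptions.

```python
import numpy as np

def color_features(img):
    """Hypothetical color feature vector for a tongue image region:
    per-channel mean/std in RGB plus mean/std of normalized r and g
    chromaticity (chromaticity has only 2 degrees of freedom, since
    the three channels sum to 1)."""
    img = img.astype(float)
    feats = []
    for c in range(3):
        feats += [img[..., c].mean(), img[..., c].std()]
    s = img.sum(axis=-1) + 1e-9          # avoid division by zero
    for c in range(2):
        chroma = img[..., c] / s
        feats += [chroma.mean(), chroma.std()]
    return np.array(feats)
```

    A vector like this, computed over the segmented tongue region, could then feed any standard classifier in the ZHENG classification stage.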