    Blend recognition from CAD mesh models using pattern matching

    This paper reports a unique, platform-independent approach for blend recognition from CAD mesh models using pattern matching. On average, blend features account for about 60% of the total facets in a CAD mesh model, so extracting these features is essential for seamless CAD/CAM integration. Facets belonging to the same region exhibit similar patterns, and the focus of this paper is to recognize blends using hybrid mesh segmentation based on pattern matching. Blend recognition is carried out in three phases: preprocessing, pattern-matching hybrid mesh segmentation, and blend feature identification. In preprocessing, adjacency relationships are established among the facets of the CAD mesh model, and Artificial Neural Network based threshold prediction is employed for the hybrid mesh segmentation. In the second phase, pattern-matching hybrid mesh segmentation clusters the facets into patches based on distinct geometrical properties. After segmentation, each facet group is subjected to several conformal tests to identify the type of analytical surface, such as a cylinder, cone, sphere, or torus. In the blend feature recognition phase, rule-based reasoning is used for blend feature extraction. The proposed method has been implemented in VC++ and extensively tested on benchmark test cases for prismatic surfaces; the algorithm extracts features with coverage of more than 95%. The innovation lies in the “Facet Area” based pattern-matching hybrid mesh segmentation and the blend recognition rules. The extracted feature information can be utilized for downstream applications such as tool path generation, computer-aided process planning, FEA, reverse engineering, and additive manufacturing.
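
    A minimal sketch (assumed details, not the paper's implementation) of one conformal test of the kind described above: given a segmented facet cluster, it decides whether the cluster fits a cylinder by estimating the axis from the facet normals and checking that the projected facet centroids lie on a common circle. All names and tolerances here are illustrative.

```python
import numpy as np

def is_cylinder(centroids, normals, tol=1e-2):
    """Crude conformal test for one segmented facet cluster.
    centroids, normals: (n, 3) arrays of facet centroids and normals."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    # On a cylinder every facet normal is perpendicular to the axis, so the
    # axis is the direction of least spread of the unit normals.
    _, _, vt = np.linalg.svd(n, full_matrices=False)
    axis = vt[-1]
    # Orthonormal basis of the plane perpendicular to the axis.
    u = np.cross(axis, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:          # axis nearly parallel to x
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    x, y = centroids @ u, centroids @ v
    # Algebraic least-squares circle fit: x^2 + y^2 = 2ax + 2by + c.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    r = np.sqrt(c + a * a + b * b)
    # Conformal: projected centroids lie on a common circle of radius r.
    return bool(np.abs(np.hypot(x - a, y - b) - r).max() < tol * r)
```

    Analogous fits (a plane for cones' normals, a common center point for spheres) would cover the other analytical surfaces the abstract lists.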

    Deviating Angular Feature for Image Recognition System Using the Improved Neural Network Classifier.

    The ability to recognize images makes it possible to abstractly conceptualize the world. Many in the field of machine learning have attempted to build an image recognition system with the recognition capabilities of a human. This dissertation presents a set of modifications to existing image recognition systems that greatly improves the efficiency of existing data imaging methods. The modification, the Deviating Angular Feature (DAF), has two obvious applications: (1) the recognition of handwritten numerals; and (2) the automatic identification of aircraft. Modifying the feature extraction and classification processes of current image recognition systems can lead to systemic enhancement of data imaging. This research proposes a customized blend of image curvature extraction algorithms and neural network classifiers trained with the Epoch Gradual Increase in Accuracy (EGIA) training algorithm. Using the DAF, both the recognition of handwritten numerals and the automatic identification of aircraft have been improved. According to preliminary results, the recognition system achieved an accuracy rate of 98.7% on handwritten numeral recognition and a 100% recognition rate on automatic aircraft identification. The novel design of the prototype is quite flexible; thus, the system is easy to maintain, modify, and distribute.
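
    The abstract does not define the DAF precisely, so the following is only an illustrative sketch of an angular contour feature in its spirit: turning angles measured along an ordered shape boundary, pooled into a histogram that could feed a neural network classifier. The function name and binning scheme are assumptions, not the dissertation's definition.

```python
import numpy as np

def angular_features(contour, n_bins=16):
    """contour: (n, 2) array of ordered boundary points of a glyph or
    silhouette. Returns a normalized histogram of turning angles."""
    # Edge vectors of the closed contour.
    d = np.diff(contour, axis=0, append=contour[:1])
    headings = np.arctan2(d[:, 1], d[:, 0])
    # Turning angle: how each edge deviates from continuing straight on.
    turning = np.diff(headings, append=headings[:1])
    turning = (turning + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    hist, _ = np.histogram(turning, bins=n_bins, range=(-np.pi, np.pi))
    return hist / hist.sum()   # scale- and translation-invariant descriptor
```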

    Hierarchical Deep Learning Architecture For 10K Objects Classification

    The evolution of visual object recognition architectures based on the Convolutional Neural Network and Convolutional Deep Belief Network paradigms has revolutionized artificial vision science. These architectures extract and learn real-world hierarchical visual features using supervised and unsupervised learning approaches, respectively. Neither approach, however, can yet scale realistically to recognition over a very large number of objects, as high as 10K. We propose a two-level hierarchical deep learning architecture, inspired by the divide-and-conquer principle, that decomposes the large-scale recognition architecture into root- and leaf-level model architectures. Each of the root- and leaf-level models is trained exclusively, providing superior results to those possible with any one-level deep learning architecture prevalent today. The proposed architecture classifies objects in two steps. In the first step, the root-level model classifies the object into a high-level category. In the second step, the leaf-level recognition model for the recognized high-level category is selected among all the leaf models; this leaf-level model is presented with the same input object image and classifies it into a specific category. We also propose a blend of leaf-level models trained with either supervised or unsupervised learning approaches; unsupervised learning is suitable whenever labelled data is scarce for a specific leaf-level model. Training of the leaf-level models is in progress; 25 of the total 47 leaf-level models have been trained so far, with a best-case top-5 error rate of 3.2% on the validation data set for particular leaf models. We also demonstrate that the validation error of the leaf-level models saturates towards the above-mentioned accuracy as the number of epochs is increased beyond sixty.
    Comment: As appeared in proceedings for CS & IT 2015 - Second International Conference on Computer Science & Engineering (CSEN 2015)
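
    A minimal sketch of the two-step root/leaf inference described above; the model objects and their predict methods are placeholders rather than the paper's API.

```python
def classify(image, root_model, leaf_models):
    """root_model maps an image to a coarse category id;
    leaf_models[coarse_id] maps the same image to a fine-grained label."""
    coarse_id = root_model.predict(image)   # step 1: high-level category
    leaf = leaf_models[coarse_id]           # step 2: pick the leaf expert
    fine_label = leaf.predict(image)        # classify within that category
    return coarse_id, fine_label
```

    The divide-and-conquer payoff is that each leaf model only discriminates among the classes of one high-level category, so 10K classes never meet in a single softmax.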

    Accurate detection of dysmorphic nuclei using dynamic programming and supervised classification

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape, and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.
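
    The abstract does not spell out BleND's two-pass thresholding, so the following is a hedged sketch of a generic version under assumed details: a global threshold proposes candidate nuclei, then each candidate region is re-thresholded locally to refine its contour. The dynamic-programming contour refinement and the downstream classifier are omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def two_pass_threshold(img):
    """img: 2D grayscale fluorescence image; returns a boolean nucleus mask."""
    # Pass 1: a global Otsu threshold yields a rough foreground mask.
    rough = img > threshold_otsu(img)
    labels, _ = ndi.label(rough)
    refined = np.zeros_like(rough)
    # Pass 2: re-threshold each candidate object within its bounding box,
    # which adapts to local intensity and recovers dim nuclear contours.
    for sl in ndi.find_objects(labels):
        patch = img[sl]
        refined[sl] |= patch > threshold_otsu(patch)
    return refined
```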

    Learning Deep Structured Models

    Many problems in real-world applications involve predicting several random variables that are statistically related. Markov random fields (MRFs) are a great mathematical tool to encode such relationships. The goal of this paper is to combine MRFs with deep learning algorithms to estimate complex representations while taking into account the dependencies between the output random variables. Towards this goal, we propose a training algorithm that is able to learn structured models jointly with deep features that form the MRF potentials. Our approach is efficient, as it blends learning and inference and makes use of GPU acceleration. We demonstrate the effectiveness of our algorithm on the tasks of predicting words from noisy images, as well as multi-class classification of Flickr photographs. We show that joint learning of the deep features and the MRF parameters results in significant performance gains.
    Comment: 11 pages including references
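
    A hedged sketch (PyTorch, all names assumed) of the kind of joint model the abstract describes, specialized to a chain-structured output such as the characters of a word read from noisy glyph images: a deep network supplies the unary MRF potentials, a learned transition matrix supplies the pairwise potentials, and exact inference over the chain yields a likelihood that trains both jointly.

```python
import torch
import torch.nn as nn

class ChainMRF(nn.Module):
    def __init__(self, unary_net, n_labels):
        super().__init__()
        self.unary_net = unary_net   # deep net: images -> (T, n_labels) scores
        self.pairwise = nn.Parameter(torch.zeros(n_labels, n_labels))

    def score(self, images, labels):
        """Unnormalized log-score of one labeling (labels: LongTensor of length T)."""
        unary = self.unary_net(images)                     # (T, n_labels)
        u = unary[torch.arange(len(labels)), labels].sum() # unary potentials
        p = self.pairwise[labels[:-1], labels[1:]].sum()   # pairwise potentials
        return u + p

    def log_partition(self, images):
        """Exact forward algorithm (sum-product in log space) over the chain."""
        unary = self.unary_net(images)
        alpha = unary[0]
        for t in range(1, unary.shape[0]):
            alpha = unary[t] + torch.logsumexp(alpha[:, None] + self.pairwise, dim=0)
        return torch.logsumexp(alpha, dim=0)

# Joint training maximizes the log-likelihood of the annotated labeling:
# loss = model.log_partition(images) - model.score(images, labels)
# Backprop through both terms updates the deep features and the MRF
# parameters together, which is the joint learning the abstract refers to.
```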

    FEAFA: A Well-Annotated Dataset for Facial Expression Analysis and 3D Facial Animation

    Facial expression analysis based on machine learning requires large amounts of well-annotated data to reflect different changes in facial motion. Publicly available datasets help to accelerate research in this area by providing benchmark resources, but all of these datasets, to the best of our knowledge, are limited to rough annotations of action units, recording only their absence, presence, or a five-level intensity according to the Facial Action Coding System. To meet the need for videos labeled in great detail, we present a well-annotated dataset named FEAFA for Facial Expression Analysis and 3D Facial Animation. One hundred and twenty-two participants, including children, young adults, and elderly people, were recorded in real-world conditions. In addition, 99,356 frames were manually labeled using an Expression Quantitative Tool we developed to quantify 9 symmetrical FACS action units, 10 asymmetrical (unilateral) FACS action units, 2 symmetrical FACS action descriptors, and 2 asymmetrical FACS action descriptors; each action unit or action descriptor is annotated with a floating-point number between 0 and 1. To provide a baseline for future research, a benchmark for the regression of action unit values based on Convolutional Neural Networks is presented. We also demonstrate the potential of the FEAFA dataset for 3D facial animation. Almost all state-of-the-art facial animation algorithms rely on 3D face reconstruction; we hence propose a novel method that drives virtual characters based only on action unit value regression from the 2D video frames of source actors.
    Comment: 9 pages, 7 figures
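
    A hedged sketch of the sort of CNN regression baseline the abstract mentions; the backbone, layer sizes, and loss are assumptions, not the paper's architecture. A sigmoid head keeps each of the 23 predicted values in [0, 1], matching the floating-point annotations, and a mean-squared-error loss fits them.

```python
import torch
import torch.nn as nn

N_AU = 9 + 10 + 2 + 2   # action units and action descriptors listed above

# Placeholder backbone: any image-feature CNN could stand in here.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, N_AU), nn.Sigmoid(),   # each AU intensity lands in [0, 1]
)
loss_fn = nn.MSELoss()

frames = torch.rand(8, 3, 128, 128)      # a batch of 2D video frames
target = torch.rand(8, N_AU)             # annotated AU/AD intensities
loss = loss_fn(model(frames), target)    # regression objective per frame
```

    The same per-frame regression output is what would drive a virtual character directly from 2D video, as the abstract proposes.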