
    Feature recognition & tool path generation for 5 axis STEP-NC machining of free form / irregular contoured surfaces

    This research paper presents a five-step algorithm to generate tool paths for machining Free form / Irregular Contoured Surfaces (FICS) by adopting the STEP-NC (AP-238) format. In the first step, a parametrized CAD model with FICS is created or imported in the UG-NX6.0 CAD package. The second step recognizes the features and calculates a Closeness Index (CI) by comparing them with B-Spline / Bezier surfaces. The third step utilizes the CI and extracts the necessary data to formulate the blending functions for the identified features. In the fourth step, Z-level 5 axis tool paths are generated using flat and ball end mill cutters. Finally, in the fifth step, the tool paths are integrated into the STEP-NC format and validated. All these steps are discussed and explained through a validated industrial component.
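The abstract does not give the formula for the Closeness Index, but its role in step two can be sketched: compare recognized surface points against a candidate Bezier shape and score the deviation. The sketch below uses an RMS-distance definition of CI that is purely an assumption, evaluating a Bezier curve by de Casteljau's algorithm and measuring how closely sampled points follow it:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t via de Casteljau's algorithm."""
    pts = np.array(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def closeness_index(surface_pts, ctrl, samples=100):
    """Hypothetical Closeness Index: RMS distance between measured surface
    points and the nearest of `samples` points on the Bezier curve.
    (The paper's actual CI definition is not given in the abstract.)"""
    ts = np.linspace(0.0, 1.0, samples)
    bezier = np.array([de_casteljau(ctrl, t) for t in ts])
    # pairwise distances: each surface point vs. each curve sample
    d = np.linalg.norm(surface_pts[:, None, :] - bezier[None, :, :], axis=2)
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))
```

A CI near zero would indicate that the recognized feature closely matches the candidate Bezier geometry; larger values signal a poorer match.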

    On Horizontal and Vertical Separation in Hierarchical Text Classification

    Hierarchy is a common and effective way of organizing data and representing their relationships at different levels of abstraction. However, hierarchical data dependencies cause difficulties in the estimation of "separable" models that can distinguish between the entities in the hierarchy. Extracting separable models of hierarchical entities requires us to take their relative position into account and to consider the different types of dependencies in the hierarchy. In this paper, we present an investigation of the effect of separability in text-based entity classification and argue that in hierarchical classification, a separation property should be established between entities not only in the same layer, but also in different layers. Our main findings are as follows. First, we analyse the importance of separability of the data representation in the task of classification and, based on that, introduce a "Strong Separation Principle" for optimizing the expected effectiveness of classifier decisions based on the separation property. Second, we present Hierarchical Significant Words Language Models (HSWLM), which capture all, and only, the essential features of hierarchical entities according to their relative position in the hierarchy, resulting in horizontally and vertically separable models. Third, we validate our claims on real-world data and demonstrate how HSWLM improves the accuracy of classification and how it provides transferable models over time. Although discussions in this paper focus on the classification problem, the models are applicable to any information access task on data that has, or can be mapped to, a hierarchical structure.
    Comment: Full paper (10 pages) accepted for publication in the proceedings of the ACM SIGIR International Conference on the Theory of Information Retrieval (ICTIR'16).
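As a rough illustration of the separation idea behind HSWLM (not the paper's actual estimation procedure, which re-estimates language models iteratively), the toy sketch below keeps only the terms of an entity that are relatively more probable in the entity's own documents than in its siblings' documents or in the general, parent-level collection:

```python
from collections import Counter

def significant_words(entity_docs, sibling_docs, general_docs, k=5):
    """Toy sketch of horizontal/vertical separation: a term is kept only
    if the entity's own model explains it better than the sibling model
    (horizontal) and the general model (vertical). The real HSWLM uses
    iterative re-estimation; this version just compares relative freqs."""
    def freqs(docs):
        c = Counter(w for d in docs for w in d.split())
        total = sum(c.values()) or 1
        return {w: n / total for w, n in c.items()}
    fe = freqs(entity_docs)
    fs = freqs(sibling_docs)
    fg = freqs(general_docs)
    scored = {w: p for w, p in fe.items()
              if p > fs.get(w, 0.0) and p > fg.get(w, 0.0)}  # separable terms
    return sorted(scored, key=scored.get, reverse=True)[:k]
```

Terms shared across siblings or dominant in the whole collection are filtered out, leaving a small, discriminative vocabulary for the entity.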

    Analysis of Feature Models using Generalised Feature Trees

    This paper introduces the concept of generalised feature trees, which are feature trees where features can have multiple occurrences. It is shown how an important class of feature models can be transformed into generalised feature trees. We present algorithms which, after transforming a feature model to a generalised feature tree, compute properties of the corresponding software product line. We discuss the computational complexity of these algorithms and provide executable specifications in the functional programming language Miranda.
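To illustrate the kind of product-line property such algorithms compute (the paper's executable specifications are in Miranda; the toy version below uses Python and plain feature trees without the multiple occurrences that make the paper's trees "generalised"), here is a sketch that counts the products of a tree built from mandatory, optional, xor and or nodes:

```python
def count_products(node):
    """Count distinct products of a simple feature tree.

    A node is a dict {"kind": ..., "children": [...]}, with kinds:
      leaf     - a single feature, one configuration
      and      - all children are mandatory
      optional - its single child may be present or absent
      xor      - exactly one child alternative is chosen
      or       - at least one child alternative is chosen
    NOTE: a toy sketch; the paper's generalised feature trees also allow
    multiple occurrences of a feature, which is not modelled here."""
    counts = [count_products(c) for c in node.get("children", [])]
    kind = node["kind"]
    if kind == "leaf":
        return 1
    if kind == "and":
        total = 1
        for n in counts:
            total *= n          # independent mandatory subtrees multiply
        return total
    if kind == "optional":
        return 1 + counts[0]    # absent, or any configuration of the child
    if kind == "xor":
        return sum(counts)      # pick exactly one alternative
    if kind == "or":
        total = 1
        for n in counts:
            total *= 1 + n      # each alternative absent or configured...
        return total - 1        # ...minus the all-absent case
    raise ValueError(f"unknown node kind: {kind}")
```

This bottom-up traversal runs in time linear in the tree size, which is the appeal of tree-shaped (rather than general graph-shaped) feature models.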

    Unsupervised Feature Learning through Divergent Discriminative Feature Accumulation

    Unlike unsupervised approaches such as autoencoders that learn to reconstruct their inputs, this paper introduces an alternative approach to unsupervised feature learning, called divergent discriminative feature accumulation (DDFA), that instead continually accumulates features that make novel discriminations among the training set. Thus DDFA features are inherently discriminative from the start, even though they are trained without knowledge of the ultimate classification problem. Interestingly, DDFA also continues to add new features indefinitely (so it does not depend on a hidden layer size), is not based on minimizing error, and is inherently divergent instead of convergent, thereby providing a unique direction of research for unsupervised feature learning. In this paper the quality of its learned features is demonstrated on the MNIST dataset, where its performance confirms that DDFA is indeed a viable technique for learning useful features.
    Comment: Corrected citation formatting.
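The accumulation criterion can be illustrated with a drastic simplification: keep a randomly generated feature only if its response pattern over the training set is one no accepted feature has produced before. The sketch below uses random linear-threshold features for this purpose (the actual DDFA work accumulates features via novelty search over evolved networks; only the novelty-of-discrimination idea is reproduced here):

```python
import numpy as np

def ddfa_sketch(X, n_features=50, max_tries=5000, seed=0):
    """Toy sketch of divergent discriminative feature accumulation:
    a candidate feature is accepted only if its on/off pattern over
    the training set X differs from every feature kept so far, so the
    feature set grows without a fixed size and without an error signal."""
    rng = np.random.default_rng(seed)
    feats, patterns = [], set()
    for _ in range(max_tries):
        w = rng.normal(size=X.shape[1])
        pattern = tuple((X @ w > 0).astype(int))
        if pattern not in patterns:   # a novel discrimination of the data
            patterns.add(pattern)
            feats.append(w)
        if len(feats) >= n_features:
            break
    return np.array(feats)
```

Because acceptance depends only on novelty, the accumulator never converges: as long as unseen discriminations exist, new features keep being added.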

    Effect of dietary supplement of sugar beet, neem leaf, linseed and coriander on growth performance and carcass trait of Vanaraja chicken

    Aim: This study was planned to investigate the effect of sugar beet, neem leaf, linseed and coriander on growth parameters such as feed intake, body weight gain, feed conversion ratio (FCR) and performance index (PI), and on carcass characteristics in broiler birds. Materials and Methods: The experiment was conducted for a period of 42 days on the Vanaraja strain of broiler birds. Dietary supplements of sugar beet meal, neem leaf meal, linseed meal and coriander seed meal were used in the basal diet. All 150 day-old male chicks were individually weighed and distributed into five groups of 30 birds each. Each group was further sub-divided into triplicates of 10 birds each. Group T1 served as control and the remaining groups T2, T3, T4 and T5 as treatment groups. Birds in the T1 group were fed the basal ration only, whereas the T2, T3, T4 and T5 groups were fed the basal ration mixed with 2.5% sugar beet meal, neem leaf meal, linseed meal and coriander seed meal, respectively. Results: Broilers supplemented with herbs/spices showed improvement in growth attributes and carcass characteristics. Broilers fed herbs at the rate of 2.5% had higher feed intake, except for the sugar beet and coriander seed meal fed groups. Body weight and weight gain were also significantly (p<0.05) higher than in the control. Both FCR and PI improved in the supplemented groups in comparison to the control. Dressing percentage was not significantly (p>0.05) affected. The average giblet percentage of all supplemented groups was significantly (p<0.05) higher than the control and was highest in the neem leaf meal fed group. The average by-product percentage was highest in the linseed fed group. Conclusion: Supplementation with sugar beet, neem leaf, linseed and coriander seed meals improved growth performance, and carcass traits showed a positive inclination toward the supplemented groups in broilers. The exact mode of action of these herbs/spices is still not clear; however, one or more active compounds present in these supplements may be responsible.

    Feature refinement

    Development by formal stepwise refinement offers a guarantee that an implementation satisfies a specification. But refinement is frequently defined in such a restrictive way as to disallow some useful development steps. Here we define feature refinement to overcome some limitations of refinement and show its usefulness by applying it to examples taken from the literature. Using partial relations as a canonical state-based semantics and labelled transition systems as a canonical event-based semantics, we define functions formally linking the state- and event-based operational semantics. We can then use this link to move notions of refinement between the event- and state-based worlds. An advantage of this abstract approach is that it is not restricted to a specific syntax or even a specific interpretation of the operational semantics.
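As a concrete, if much simplified, example of an event-based refinement notion of the kind the paper generalises, the sketch below checks finite-depth trace inclusion between two labelled transition systems (the paper's feature refinement deliberately relaxes such standard definitions; this is only the classical baseline, written in Python rather than in a process-algebra tool):

```python
def traces(lts, state, depth):
    """All action sequences of length <= depth from `state` in a labelled
    transition system given as {state: [(action, next_state), ...]}."""
    if depth == 0:
        return {()}
    out = {()}
    for act, nxt in lts.get(state, []):
        out |= {(act,) + t for t in traces(lts, nxt, depth - 1)}
    return out

def refines(impl, spec, init, depth=5):
    """Bounded trace refinement: every behaviour the implementation can
    exhibit (up to `depth` steps) is permitted by the specification."""
    return traces(impl, init, depth) <= traces(spec, init, depth)
```

Restrictiveness of such definitions is exactly the paper's complaint: an implementation that adds a genuinely useful new feature produces traces the specification lacks, so classical trace refinement rejects it.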

    Improving feature selection algorithms using normalised feature histograms

    The proposed feature selection method builds a histogram of the most stable features from random subsets of a training set and ranks the features based on classifier-based cross-validation. This approach reduces the instability of features obtained by conventional feature selection methods that occurs with variation in training data and selection criteria. Classification results on four microarray and three image datasets, using three major feature selection criteria and a naive Bayes classifier, show considerable improvement over benchmark results.
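A minimal sketch of the histogram-building step, assuming a simple class-mean-difference score as a stand-in for the paper's selection criteria (the subsequent classifier-based cross-validation ranking is omitted):

```python
import numpy as np

def stability_histogram(X, y, n_subsets=50, subset_frac=0.8, top_k=10, seed=0):
    """Normalised selection histogram over random training subsets:
    on each subset, score every feature (here by absolute difference of
    class means, an assumed stand-in criterion) and tally which features
    land in the top-k. Features selected consistently across subsets
    get a stability score near 1; unstable ones score near 0."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    hist = np.zeros(d)
    for _ in range(n_subsets):
        idx = rng.choice(n, int(subset_frac * n), replace=False)
        Xs, ys = X[idx], y[idx]
        score = np.abs(Xs[ys == 0].mean(axis=0) - Xs[ys == 1].mean(axis=0))
        hist[np.argsort(score)[-top_k:]] += 1   # tally this subset's top-k
    return hist / n_subsets                     # normalise to [0, 1]
```

Features can then be ranked by this histogram, so that a feature chosen by chance on one particular data split is down-weighted relative to features that survive resampling.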

    Feature extraction from electroencephalograms for Bayesian assessment of newborn brain maturity

    We explored feature extraction techniques for Bayesian assessment of the EEG maturity of newborns, in the context that the continuity of the EEG is the most important feature for assessment of brain development. Continuity is associated with EEG "stationarity", which we propose to evaluate by adaptively segmenting the EEG into pseudo-stationary intervals. Histograms of these intervals are then used as new features for the assessment of EEG maturity. In our experiments, we used Bayesian model averaging over decision trees to differentiate two age groups, each including 110 EEG recordings. The use of the proposed EEG features has shown, on average, a 6% increase in the accuracy of age differentiation.
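A toy version of the proposed feature pipeline, with an assumed variance-ratio rule for adaptive segmentation (the abstract does not specify the actual segmentation criterion) followed by an interval-length histogram feature:

```python
import numpy as np

def adaptive_segments(x, win=50, thresh=3.0):
    """Toy adaptive segmentation: start a new pseudo-stationary interval
    whenever the variance of the current window deviates from the running
    segment's variance by more than a factor of `thresh`. (An assumed
    stand-in for the paper's segmentation rule.)"""
    bounds = [0]
    ref_var = np.var(x[:win])
    for start in range(win, len(x) - win, win):
        v = np.var(x[start:start + win])
        if v > thresh * ref_var or v < ref_var / thresh:
            bounds.append(start)      # stationarity break detected here
            ref_var = v
    bounds.append(len(x))
    return bounds

def interval_histogram(bounds, bins=(0, 100, 200, 400, 10**9)):
    """Feature vector: normalised histogram of interval lengths."""
    lengths = np.diff(bounds)
    hist, _ = np.histogram(lengths, bins=bins)
    return hist / hist.sum()
```

A more continuous (mature) EEG yields fewer, longer pseudo-stationary intervals, which shifts mass toward the long-length bins of the histogram, making it a plausible input for the decision-tree ensemble.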