
    A Study in Rule-Specific Issue Categorization for e-Rulemaking

    We address the e-rulemaking problem of categorizing public comments according to the issues that they address. In contrast to previous text categorization research in e-rulemaking [5, 6], and in an attempt to more closely duplicate the comment analysis process in federal agencies, we employ a set of rule-specific categories, each of which corresponds to a significant issue raised in the comments. We describe the creation of a corpus to support this text categorization task and report interannotator agreement results for a group of six annotators. We outline those features of the task and of the e-rulemaking context that engender both a non-traditional text categorization corpus and a correspondingly difficult machine learning problem. Finally, we investigate the application of standard and hierarchical text categorization techniques to the e-rulemaking data sets and find that automatic categorization methods show promise as a means of reducing the manual labor required to analyze large comment sets: the automatic annotation methods approach the performance of human annotators for both flat and hierarchical issue categorization
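
The flat-versus-hierarchical distinction in the abstract can be sketched with a toy classifier. The issue hierarchy and keyword lists below are hypothetical illustrations, not the rule-specific categories or corpus from the paper; a real system would use a trained text classifier rather than keyword counts.

```python
# Minimal sketch of flat vs. hierarchical issue categorization of public
# comments. Categories and keywords are invented for illustration.

# Hypothetical two-level issue hierarchy: top-level issue -> sub-issues.
HIERARCHY = {
    "emissions": {"monitoring": ["monitor", "measure"],
                  "limits": ["limit", "cap", "threshold"]},
    "costs": {"compliance": ["compliance", "burden"],
              "fees": ["fee", "charge"]},
}

def score(text, keywords):
    """Count keyword hits as a crude relevance score."""
    words = text.lower().split()
    return sum(words.count(k) for k in keywords)

def classify_flat(text):
    """Flat: score every leaf category directly."""
    best, best_s = None, -1
    for top, subs in HIERARCHY.items():
        for sub, kws in subs.items():
            s = score(text, kws)
            if s > best_s:
                best, best_s = (top, sub), s
    return best

def classify_hierarchical(text):
    """Hierarchical: pick the best top-level issue first, then its sub-issue."""
    top = max(HIERARCHY,
              key=lambda t: sum(score(text, kws) for kws in HIERARCHY[t].values()))
    sub = max(HIERARCHY[top], key=lambda s: score(text, HIERARCHY[top][s]))
    return (top, sub)

comment = "the proposed cap is too strict and the compliance burden is high"
print(classify_flat(comment))
print(classify_hierarchical(comment))
```

The two strategies can disagree when evidence for a top-level issue is spread thinly across its sub-issues, which is part of what makes comparing flat and hierarchical categorization interesting.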

    Hierarchical meta-rules for scalable meta-learning

    The Pairwise Meta-Rules (PMR) method proposed in [18] has been shown to improve the predictive performance of several meta-learning algorithms for the algorithm ranking problem. Given m target objects (e.g., algorithms), the training complexity of the PMR method with respect to m is quadratic: O(m(m-1)/2) = O(m^2). This is usually not a problem when m is moderate, such as when ranking 20 different learning algorithms. However, for problems with a much larger m, such as the meta-learning-based parameter ranking problem, where m can be 100+, the PMR method is less efficient. In this paper, we propose a novel method named Hierarchical Meta-Rules (HMR), which is based on the theory of orthogonal contrasts. The proposed HMR method has a linear training complexity with respect to m, providing a way of dealing with a large number of objects that the PMR method cannot handle efficiently. Our experimental results demonstrate the benefit of the new method in the context of meta-learning
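
The quadratic-versus-linear scaling claimed in the abstract follows from counting models. The sketch below only counts; it is not the actual PMR or HMR training procedure.

```python
# Why pairwise meta-rules scale quadratically while a hierarchical
# (orthogonal-contrast-style) scheme scales linearly in the number of
# target objects m. Illustrative counting only.

def pairwise_models(m):
    """PMR trains one binary meta-rule per unordered pair: m(m-1)/2."""
    return m * (m - 1) // 2

def hierarchical_models(m):
    """A binary hierarchy over m leaves needs one contrast per internal
    node: m - 1 models, i.e. linear in m."""
    return m - 1

for m in (20, 100):
    print(m, pairwise_models(m), hierarchical_models(m))
```

For m = 20 the gap is modest (190 vs 19 models), but at m = 100 the pairwise scheme already needs 4950 models against 99, which matches the abstract's motivation for parameter ranking.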

    Self-Organizing Information Fusion and Hierarchical Knowledge Discovery: A New Framework Using Artmap Neural Networks

    Classifying novel terrain or objects from sparse, complex data may require the resolution of conflicting information from sensors working at different times, locations, and scales, and from sources with different goals and situations. Information fusion methods can help resolve inconsistencies, as when evidence variously suggests that an object's class is car, truck, or airplane. The methods described here address a complementary problem, supposing that information from sensors and experts is reliable though inconsistent, as when evidence suggests that an object's class is car, vehicle, and man-made. Underlying relationships among classes are assumed to be unknown to the automated system or the human user. The ARTMAP information fusion system uses distributed code representations that exploit the neural network's capacity for one-to-many learning in order to produce self-organizing expert systems that discover hierarchical knowledge structures. The fusion system infers multi-level relationships among groups of output classes, without any supervised labeling of these relationships. The procedure is illustrated with two image examples, but is not limited to the image domain. Air Force Office of Scientific Research (F49620-01-1-0423); National Geospatial-Intelligence Agency (NMA 201-01-1-2016, NMA 501-03-1-2030); National Science Foundation (SBE-0354378, DGE-0221680); Office of Naval Research (N00014-01-1-0624); Department of Homeland Security
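
The hierarchy-discovery idea in this abstract can be illustrated with a simple co-occurrence rule: if every input labeled "car" is also labeled "vehicle", then "car" is plausibly a subclass of "vehicle". The label data below are invented, and this is only a sketch in the spirit of the ARTMAP fusion idea, not the ARTMAP algorithm itself.

```python
# Infer multi-level class relationships from one-to-many labels, without
# any supervised labeling of the relationships themselves.

observations = [
    {"car", "vehicle", "man-made"},
    {"car", "vehicle", "man-made"},
    {"truck", "vehicle", "man-made"},
    {"airplane", "vehicle", "man-made"},
    {"building", "man-made"},
]

labels = set().union(*observations)

def implies(a, b):
    """a -> b if b appears in every observation containing a."""
    rows = [obs for obs in observations if a in obs]
    return bool(rows) and all(b in obs for obs in rows)

# Strict subclass relations: a -> b holds but b -> a does not.
subclass = {(a, b) for a in labels for b in labels
            if a != b and implies(a, b) and not implies(b, a)}
print(sorted(subclass))
```

From these five observations the procedure recovers the intended hierarchy (car, truck, airplane under vehicle; vehicle and building under man-made) with no supervision beyond the multi-labels themselves.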

    Modeling and Optimal Design of Machining-Induced Residual Stresses in Aluminium Alloys Using a Fast Hierarchical Multiobjective Optimization Algorithm

    The residual stresses induced during shaping and machining play an important role in determining the integrity and durability of metal components. An important issue in producing safety-critical components is finding the machining parameters that create compressive surface stresses or minimise tensile surface stresses. In this paper, a systematic data-driven fuzzy modelling methodology is proposed, which allows constructing transparent fuzzy models considering both the accuracy and interpretability attributes of fuzzy systems. The new method employs a hierarchical optimisation structure to improve the modelling efficiency, where two learning mechanisms cooperate: NSGA-II is used to improve the model’s structure, while the gradient descent method is used to optimise the numerical parameters. This hybrid approach is then successfully applied to the prediction of machining-induced residual stresses in aerospace aluminium alloys. Based on the developed reliable prediction models, NSGA-II is further applied to the multi-objective optimal design of aluminium alloys in a ‘reverse-engineering’ fashion. It is revealed that the optimal machining regimes to minimise the residual stress and the machining cost simultaneously can be successfully located
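
At the core of NSGA-II-style multi-objective optimisation is non-dominated (Pareto) filtering, which determines which machining regimes survive selection. The sketch below shows only that filtering step with two objectives to minimise, say residual stress and machining cost; the candidate points are invented and this is not the full NSGA-II algorithm (no crowding distance, no genetic operators).

```python
# Non-dominated (Pareto) filtering for two minimisation objectives,
# e.g. (residual stress, machining cost). Candidate values are invented.

def dominates(a, b):
    """a dominates b if a is no worse in all objectives and strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (residual stress, cost) pairs for hypothetical machining regimes.
candidates = [(3.0, 9.0), (1.0, 8.0), (2.0, 5.0), (4.0, 4.0), (5.0, 6.0)]
print(pareto_front(candidates))
```

The surviving points are exactly the trade-off curve the abstract refers to: no regime on the front can reduce stress without paying more in cost, or vice versa.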

    The computational magic of the ventral stream

    I argue that the sample complexity of (biological, feedforward) object recognition is mostly due to geometric image transformations and conjecture that a main goal of the ventral stream – V1, V2, V4 and IT – is to learn-and-discount image transformations.

In the first part of the paper I describe a class of simple and biologically plausible memory-based modules that learn transformations from unsupervised visual experience. The main theorems show that these modules provide (for every object) a signature which is invariant to local affine transformations and approximately invariant for other transformations. I also prove that,
in a broad class of hierarchical architectures, signatures remain invariant from layer to layer. The identification of these memory-based modules with complex (and simple) cells in visual areas leads to a theory of invariant recognition for the ventral stream.
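
The memory-based module described above can be caricatured in a few lines: store all transformed versions of a template (the "unsupervised visual experience"), then define an image's signature as its maximum correlation with the stored set. The sketch below uses 1-D cyclic shifts as a stand-in for image translations; it is a toy illustration of the invariance argument, not the model in the paper.

```python
# Toy memory-based invariance module: the signature (max correlation with
# all stored translated templates) is unchanged when the input translates.

def shifts(v):
    """All cyclic translations of a pattern."""
    return [v[i:] + v[:i] for i in range(len(v))]

def signature(image, template):
    """Max dot product of the image with every stored shifted template."""
    return max(sum(a * b for a, b in zip(image, t)) for t in shifts(template))

template = [1, 2, 0, 0]
image = [0, 1, 2, 0]           # a shifted copy of the template
shifted_image = [0, 0, 1, 2]   # the same image, translated further

# Translating the input does not change its signature.
assert signature(image, template) == signature(shifted_image, template)
```

The max-pooling over stored transformed templates is the step identified with complex cells, while the individual stored templates play the role of simple cells.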

In the second part, I outline a theory about hierarchical architectures that can learn invariance to transformations. I show that the memory complexity of learning affine transformations is drastically reduced in a hierarchical architecture that factorizes transformations in terms of the subgroup of translations and the subgroups of rotations and scalings. I then show how translations are automatically selected as the only learnable transformations during development by enforcing small apertures – eg small receptive fields – in the first layer.
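
The memory saving from factorisation is back-of-the-envelope arithmetic: a monolithic module must store every combination of translation, rotation, and scale, while a hierarchy that factorizes the group stores each subgroup once per layer. The grid sizes below are illustrative, not from the paper.

```python
# Memory cost of storing affine transformations: all combinations
# (monolithic) vs. one subgroup per layer (factorized hierarchy).
# Grid sizes are invented for illustration.

n_translations, n_rotations, n_scales = 100, 36, 10

monolithic = n_translations * n_rotations * n_scales   # product of grids
factorized = n_translations + n_rotations + n_scales   # sum of grids

print(monolithic, factorized)
```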

In a third part I show that the transformations represented in each area can be optimized in terms of storage and robustness, as a consequence determining the tuning of the neurons in the area, rather independently (under normal conditions) of the statistics of natural images. I describe a model of learning that can be proved to have this property, linking in an elegant way the spectral properties of the signatures with the tuning of receptive fields in different areas. A surprising implication of these theoretical results is that the computational goals and some of the tuning properties of cells in the ventral stream may follow from symmetry properties (in the sense of physics) of the visual world through a process of unsupervised correlational learning, based on Hebbian synapses. In particular, simple and complex cells do not directly care about oriented bars: their tuning is a side effect of their role in translation invariance. Across the whole ventral stream the preferred features reported for neurons in different areas are only a symptom of the invariances computed and represented.

The results of each of the three parts stand on their own independently of each other. Together this theory-in-fieri makes several broad predictions, some of which are:

-invariance to small transformations in early areas (eg translations in V1) may underlie stability of visual perception (suggested by Stu Geman);

-each cell’s tuning properties are shaped by visual experience of image transformations during developmental and adult plasticity;

-simple cells are likely to be the same population as complex cells, arising from different convergence of the Hebbian learning rule. The inputs to complex “complex” cells are dendritic branches with simple cell properties;

-class-specific transformations are learned and represented at the top of the ventral stream hierarchy; thus class-specific modules such as faces, places and possibly body areas should exist in IT;

-the type of transformations that are learned from visual experience depend on the size of the receptive fields and thus on the area (layer in the models) – assuming that the size increases with layers;

-the mix of transformations learned in each area influences the tuning properties of the cells: oriented bars in V1+V2, radial and spiral patterns in V4, up to class-specific tuning in AIT (eg face-tuned cells);

-features must be discriminative and invariant: invariance to transformations is the primary determinant of the tuning of cortical neurons rather than statistics of natural images.

The theory is broadly consistent with the current version of HMAX. It explains HMAX and extends it in terms of unsupervised learning, a broader class of transformation invariances, and higher-level modules. The goal of this paper is to sketch a comprehensive theory with little regard for mathematical niceties. If the theory turns out to be useful there will be scope for deep mathematics, ranging from group representation tools to wavelet theory to dynamics of learning

    Development of accident prediction model by using artificial neural network (ANN)

    Statistical crash prediction models have frequently been used in highway safety studies. They can be used to identify major contributing factors or to establish relationships between crashes and explanatory accident variables. Measures to prevent accidents include speed reduction, road widening, speed enforcement, and the construction of road dividers. The purpose of this study is therefore to develop an accident prediction model for federal road FT 050 from Batu Pahat to Kluang. The study process involves the identification of accident blackspot locations, establishment of general accident patterns, analysis of the factors involved, site studies, and development of an accident prediction model using an Artificial Neural Network (ANN) software package named NeuroShell2. The significance of the variables selected from these accident factors is checked to ensure the developed model gives good prediction results. The performance of the neural network is evaluated using the Mean Absolute Percentage Error (MAPE). The results showed that the best neural network for the accident prediction model at federal road FT 050 is a 4-10-1 architecture with a 0.1 learning rate and 0.2 momentum rate. This network model yields the lowest MAPE and the highest linear correlation, r = 0.8986. The study has established accident point weightage as the rank of blackspot sections by kilometre along the FT 050 road (km 1 – km 103). Several main accident factors have also been determined along this road, and the collected data were successfully analysed using the artificial neural network
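
The MAPE criterion used to rank candidate network architectures is straightforward to state exactly. The accident counts below are invented illustrations, not data from the FT 050 study.

```python
# Mean Absolute Percentage Error: mean of |actual - predicted| / |actual|,
# expressed in percent. Lower is better; actual values must be nonzero.

def mape(actual, predicted):
    """MAPE in percent over paired observations."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

actual = [10.0, 20.0, 40.0]      # hypothetical observed accident counts
predicted = [9.0, 22.0, 38.0]    # hypothetical network predictions
print(round(mape(actual, predicted), 2))
```

Because MAPE is scale-free, it lets the study compare architectures (such as the 4-10-1 network) across road sections with very different accident frequencies, though it is undefined when an actual count is zero.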