16,360 research outputs found

    Intrusion Detection Systems Using Adaptive Regression Splines

    The past few years have witnessed growing recognition of intelligent techniques for constructing efficient and reliable intrusion detection systems. Due to increasing incidents of cyber attacks, building effective intrusion detection systems (IDS) is essential for protecting information systems security, yet it remains an elusive goal and a great challenge. In this paper, we report a performance analysis between Multivariate Adaptive Regression Splines (MARS), neural networks, and support vector machines. The MARS procedure builds flexible regression models by fitting separate splines to distinct intervals of the predictor variables. A brief comparison of different neural network learning algorithms is also given.
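    The hinge-function idea behind MARS can be illustrated in a few lines. The sketch below is a toy under stated assumptions (the names `hinge_basis` and `fit_mars_1d` are ours, and a real MARS implementation searches for knots adaptively rather than fixing one); it fits a single knot at zero by least squares.

```python
import numpy as np

# Toy sketch of the MARS building block: piecewise-linear "hinge" basis
# functions max(0, x - t) and max(0, t - x), so separate linear pieces
# cover distinct intervals of the predictor variable.
def hinge_basis(x, knot):
    """Return the two mirrored hinge functions at a knot."""
    return np.maximum(0.0, x - knot), np.maximum(0.0, knot - x)

def fit_mars_1d(x, y, knot):
    """Least-squares fit of y ~ b0 + b1*max(0, x-t) + b2*max(0, t-x)."""
    h_pos, h_neg = hinge_basis(x, knot)
    X = np.column_stack([np.ones_like(x), h_pos, h_neg])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

x = np.linspace(-1, 1, 101)
y = np.abs(x)                       # V-shaped target: exact with a knot at 0
coef = fit_mars_1d(x, y, knot=0.0)  # coefficients ~ [0, 1, 1]
```

    Because |x| = max(0, x) + max(0, -x), the single-knot model recovers the V-shaped target exactly, which is the intuition behind fitting separate splines per interval.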

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and, most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    An Adaptive Locally Connected Neuron Model: Focusing Neuron

    This paper presents a new artificial neuron model capable of learning its receptive field in the topological domain of its inputs. The model provides adaptive and differentiable local connectivity (plasticity) applicable to any domain. It requires no tool other than the backpropagation algorithm to learn the parameters that control the receptive field locations and apertures. This research explores whether this ability makes the neuron focus on informative inputs and yields any advantage over fully connected neurons. The experiments include tests of focusing-neuron networks with one or two hidden layers on synthetic and well-known image recognition data sets. The results demonstrate that focusing neurons can move their receptive fields towards more informative inputs. In the simple two-hidden-layer networks, the focusing layers outperformed the dense layers in the classification of the 2D spatial data sets. Moreover, the focusing networks performed better than the dense networks even when 70% of the weights were pruned. The tests on convolutional networks revealed that using focusing layers instead of dense layers for the classification of convolutional features may work better on some data sets. Comment: 45 pages; a national patent filed with the Turkish Patent Office, No: -2017/17601, Date: 09.11.201
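    As a hedged illustration of the general idea (not the authors' exact model), a neuron with a learnable receptive field can be sketched by gating each input weight with a differentiable Gaussian window whose centre `mu` and aperture `sigma` would, in training, be updated by backpropagation like any other parameter:

```python
import numpy as np

# Illustrative "focusing" forward pass: the connection to input position i
# is scaled by a Gaussian window over the (normalized) input topology, so a
# small sigma yields local connectivity and a large sigma approaches a
# fully connected neuron. All names here are our assumptions.
def focusing_neuron(x, w, mu, sigma):
    idx = np.linspace(0.0, 1.0, x.size)            # normalized input positions
    window = np.exp(-0.5 * ((idx - mu) / sigma) ** 2)
    return float(np.dot(window * w, x))            # effective local connectivity

x = np.ones(9)
w = np.ones(9)
narrow = focusing_neuron(x, w, mu=0.5, sigma=0.05)   # attends near the centre
wide = focusing_neuron(x, w, mu=0.5, sigma=10.0)     # ~fully connected
```

    With uniform inputs and weights, the narrow aperture passes roughly one input's worth of signal while the wide aperture passes nearly all nine, which is the plasticity the abstract describes.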

    Learning, Categorization, Rule Formation, and Prediction by Fuzzy Neural Networks

    National Science Foundation (IRI 94-01659); Office of Naval Research (N00014-91-J-4100, N00014-92-J-4015); Air Force Office of Scientific Research (90-0083, N00014-92-J-4015)

    Cyclic Self-Organizing Map for Object Recognition

    Object recognition is an important machine learning (ML) application. A robust ML application requires three major steps: (1) preprocessing, i.e. preparing the data for the ML algorithms; (2) applying appropriate segmentation and feature extraction algorithms to extract the core features of the data; and (3) applying feature classification or feature recognition algorithms. The quality of the ML algorithm depends on a good representation of the data. Data representation requires the extraction of features with an appropriate learning rate. The learning rate influences how the algorithm learns about the data and how the data are processed and treated. Generally, this parameter is found on a trial-and-error basis, and scholars sometimes set it to a constant. This paper presents a new optimization technique for object recognition problems, called Cyclic-SOM, that accelerates the learning process of the self-organizing map (SOM) using a non-constant learning rate. SOM uses the Euclidean distance to measure the similarity between the inputs and the feature maps. Our algorithm instead measures image correlation using the mean absolute difference, and it uses cyclical learning rates to achieve high performance with a better recognition rate. Cyclic-SOM possesses the following merits: (1) it accelerates the learning process and eliminates the need to experimentally find the best values and schedule for the learning rates; (2) it offers improvement in both results and training; (3) it requires no manual tuning of the learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities, and the selection of hyper-parameters; and (4) it shows promising results compared to other methods on different datasets. Three widely used benchmark databases illustrate the efficiency of the proposed technique: AHD Base for Arabic digits, MNIST for English digits, and CMU-PIE for faces.
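    The two ingredients described above, a mean-absolute-difference similarity and a cyclical learning rate, can be sketched as follows. This is a minimal assumption-based illustration, not the paper's code; the triangular schedule and all parameter values are our choices.

```python
import numpy as np

# Triangular cyclical learning rate: sweeps base -> peak -> base each cycle,
# so no single fixed rate has to be found by trial and error.
def triangular_lr(step, base=0.01, peak=0.5, cycle=20):
    pos = abs((step % cycle) / (cycle / 2) - 1.0)   # 1 -> 0 -> 1 over a cycle
    return base + (peak - base) * (1.0 - pos)

# Best-matching unit chosen by mean absolute difference rather than the
# traditional Euclidean distance.
def bmu_mad(weights, x):
    return int(np.argmin(np.mean(np.abs(weights - x), axis=1)))

rng = np.random.default_rng(0)
weights = rng.random((16, 4))          # 16 map units, 4-dim features
x = rng.random(4)
for step in range(40):                 # toy SOM update on a single input
    j = bmu_mad(weights, x)
    weights[j] += triangular_lr(step) * (x - weights[j])
```

    After a few cycles the best-matching unit's weight vector has converged onto the input, without any hand-tuned constant learning rate.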

    Neural Network Detection of Fatigue Crack Growth in Riveted Joints Using Acoustic Emission

    The purpose of this research was to demonstrate the capability of neural networks to discriminate between individual acoustic emission (AE) signals originating from crack growth and rivet rubbing (fretting) in aluminum lap joints. AE waveforms were recorded during tensile fatigue cycling of six notched and riveted 7075-T6 specimens using a broadband piezoelectric transducer and a computer-interfaced oscilloscope. The source of 1,311 signals was identified based on triggering logic, amplitude relationships, and time-of-arrival data collected from the broadband transducer and three additional 300 Hz resonant transducers bonded to the specimens. The power spectrum of each waveform was calculated and normalized to correct for variable specimen geometry and wave propagation effects. To determine the variation between individual signals of the same class, the normalized spectra were clustered onto a two-dimensional feature space using a Kohonen self-organizing map (SOM). Then 132 crack growth and 137 rivet rubbing spectra were used to train a back-propagation neural network to provide automatic pattern classification. Although there was some overlap between the clusters mapped in the Kohonen feature space, the trained back-propagation neural network classified the remaining 463 crack growth signals with 94% accuracy and the 367 rivet rubbing signals with 99% accuracy.
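    The normalization step described above might look like the following sketch (details such as the FFT length and the toy burst signal are assumptions on our part, not taken from the paper):

```python
import numpy as np

# Assumed preprocessing sketch: compute each recorded waveform's power
# spectrum via an FFT, then normalize to unit total power so that amplitude
# differences from specimen geometry and wave propagation cancel out.
def normalized_power_spectrum(waveform):
    spectrum = np.abs(np.fft.rfft(waveform)) ** 2
    return spectrum / spectrum.sum()

# Toy AE-like burst: a damped 50 Hz sinusoid sampled at 1024 Hz for 1 s.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
burst = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)
p = normalized_power_spectrum(burst)   # peak near the 50 Hz bin, sums to 1
```

    The resulting unit-power spectra are scale-free feature vectors, which is what makes them suitable inputs for the SOM clustering and back-propagation classification stages.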

    Visual pathways from the perspective of cost functions and multi-task deep neural networks

    Vision research has been shaped by the seminal insight that we can understand the higher-tier visual cortex from the perspective of multiple functional pathways with different goals. In this paper, we try to give a computational account of the functional organization of this system by reasoning from the perspective of multi-task deep neural networks. Machine learning has shown that tasks become easier to solve when they are decomposed into subtasks with their own cost functions. We hypothesize that the visual system optimizes multiple cost functions of unrelated tasks, and that this causes the emergence of a ventral pathway dedicated to vision for perception and a dorsal pathway dedicated to vision for action. To evaluate the functional organization in multi-task deep neural networks, we propose a method that measures the contribution of a unit towards each task, applying it to two networks that have been trained on either two related or two unrelated tasks, using an identical stimulus set. Results show that the network trained on the unrelated tasks exhibits a decreasing degree of feature-representation sharing towards higher-tier layers, while the network trained on related tasks uniformly shows a high degree of sharing. We conjecture that the proposed method can be used to analyze the anatomical and functional organization of the visual system and beyond. We predict that the degree to which tasks are related is a good descriptor of the degree to which they share downstream cortical units. Comment: 16 pages, 5 figures
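    One simple way to realize a per-unit, per-task contribution measure, offered here only as an assumed illustration of the general idea rather than the paper's actual method, is to score each hidden unit by the weight mass it sends to each task head and then quantify how balanced the two scores are:

```python
import numpy as np

# Assumed sketch: contribution of hidden unit i to a task head is the L1
# norm of its outgoing weights to that head; "sharing" is how evenly the
# unit's contribution splits between the two heads.
def task_contributions(w_task_a, w_task_b):
    """Rows = hidden units, columns = head outputs."""
    c_a = np.abs(w_task_a).sum(axis=1)   # per-unit mass toward task A
    c_b = np.abs(w_task_b).sum(axis=1)   # per-unit mass toward task B
    return c_a, c_b

def sharing_score(c_a, c_b, eps=1e-12):
    # 1.0 = unit used equally by both tasks, 0.0 = used by only one task
    return 2.0 * np.minimum(c_a, c_b) / (c_a + c_b + eps)

w_a = np.array([[1.0, -1.0], [0.5, 0.0]])  # 2 hidden units -> task A head
w_b = np.array([[1.0, 1.0], [0.0, 0.0]])   # same units -> task B head
c_a, c_b = task_contributions(w_a, w_b)
s = sharing_score(c_a, c_b)  # unit 0 fully shared, unit 1 A-exclusive
```

    Averaging such a score per layer would yield the kind of layer-wise sharing profile the abstract compares between related-task and unrelated-task networks.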

    ANOMALY NETWORK INTRUSION DETECTION SYSTEM BASED ON DISTRIBUTED TIME-DELAY NEURAL NETWORK (DTDNN)

    In this research, a hierarchical off-line anomaly network intrusion detection system based on a Distributed Time-Delay Artificial Neural Network (DTDNN) is introduced. The research aims to solve a hierarchical multi-class problem in which the type of attack (DoS, U2R, R2L, or Probe) is detected by a dynamic neural network. The results indicate that dynamic neural networks (Distributed Time-Delay Artificial Neural Networks) can achieve a high detection rate, with an overall average classification accuracy of 97.24%.
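    The core of any time-delay architecture is the tapped delay line that turns a stream of feature vectors into sliding windows over recent time steps; a minimal sketch (the function name and window layout are our assumptions):

```python
import numpy as np

# Tapped delay line: for each valid time step t, stack the current sample
# and its `delays` predecessors, newest first, so a downstream feedforward
# network sees temporal context rather than a single time step.
def delay_line_windows(sequence, delays):
    """Return rows [x_t, x_{t-1}, ..., x_{t-delays}] for every valid t."""
    T = len(sequence)
    return np.array([sequence[t - delays:t + 1][::-1]
                     for t in range(delays, T)])

seq = np.arange(6)                        # toy feature stream: 0..5
windows = delay_line_windows(seq, delays=2)
# rows: [2,1,0], [3,2,1], [4,3,2], [5,4,3]
```

    Feeding such windows to an ordinary feedforward classifier is what lets a time-delay network react to temporal patterns in connection features, as in the intrusion detection setting above.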