1,261 research outputs found

    Intrinsically Evolvable Artificial Neural Networks

    Dedicated hardware implementations of neural networks promise faster, lower-power operation than software implementations executing on processors. Unfortunately, most custom hardware implementations do not support intrinsic training of these networks on-chip. The training is typically done using offline software simulations, and the obtained network is synthesized and targeted to the hardware offline. The FPGA design presented here facilitates on-chip intrinsic training of artificial neural networks. Block-based neural networks (BbNNs), the type of artificial neural network implemented here, are grid-based networks of neuron blocks. These networks are trained using genetic algorithms to simultaneously optimize the network structure and the internal synaptic parameters. The design supports online structure and parameter updates and is an intrinsically evolvable BbNN platform supporting functional-level hardware evolution. Functional-level evolvable hardware (EHW) uses evolutionary algorithms to evolve the interconnections and internal parameters of functional modules in reconfigurable computing (RC) systems such as FPGAs. Functional modules can be any hardware modules, such as multipliers, adders, and trigonometric functions. In the implementation presented, the functional module is a neuron block. The designed platform is suitable for applications in dynamic environments and can be adapted and retrained online. The online training capability has been demonstrated using a case study. A performance characterization model for RC implementations of BbNNs has also been presented.
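
    The following minimal sketch (not the paper's FPGA design) illustrates the core idea of evolving a grid of neuron blocks with a genetic algorithm: each genome carries both a structure gene and synaptic weights, and the two are optimized simultaneously. The grid size, mutation rates, and toy task are illustrative assumptions.

```python
# Hypothetical simplification of GA-evolved block-based networks: a genome holds
# a connection mask (structure) and a weight matrix (parameters); both mutate.
import numpy as np

rng = np.random.default_rng(0)
GRID, POP, GENS = 4, 30, 100          # grid dimension, population size, generations

def random_genome():
    return {
        "structure": rng.integers(0, 2, size=(GRID, GRID)),  # active-connection mask
        "weights":   rng.normal(0, 1, size=(GRID, GRID)),    # synaptic parameters
    }

def forward(genome, x):
    # Propagate an input vector through the grid; masked weights act as one layer here.
    w = genome["weights"] * genome["structure"]
    return np.tanh(w @ x)

def fitness(genome, X, Y):
    # Negative mean squared error on the training set (higher is better).
    preds = np.array([forward(genome, x) for x in X])
    return -np.mean((preds - Y) ** 2)

def mutate(genome, p=0.1):
    child = {k: v.copy() for k, v in genome.items()}
    flip = rng.random((GRID, GRID)) < p
    child["structure"][flip] ^= 1                                 # structural mutation
    child["weights"] += rng.normal(0, 0.2, (GRID, GRID)) * flip   # parameter mutation
    return child

# Toy task: learn to reproduce the input with a sign flip.
X = rng.normal(size=(20, GRID))
Y = np.tanh(-X)

pop = [random_genome() for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=lambda g: fitness(g, X, Y), reverse=True)
    pop = pop[:POP // 2] + [mutate(g) for g in pop[:POP // 2]]    # elitism + mutation

best = max(pop, key=lambda g: fitness(g, X, Y))
print("best fitness:", fitness(best, X, Y))
```

    In the platform described above, the same evaluate-select-mutate loop would run intrinsically, with fitness measured on-chip rather than in a software simulation.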

    Framework of hierarchy for neural theory

    Efficient channel equalization algorithms for multicarrier communication systems

    A blind adaptive algorithm that updates time-domain equalizer (TEQ) coefficients by Adjacent Lag Auto-correlation Minimization (ALAM) is proposed to shorten the channel for multicarrier modulation (MCM) systems. ALAM is an addition to the family of existing correlation-based algorithms and achieves similar or better performance with lower complexity. This is achieved by designing a cost function without the sum-square term and exploiting the symmetric-TEQ property to halve the complexity of TEQ adaptation relative to the existing approach. Furthermore, to avoid the limitations of an unstable, lower bit rate and high complexity, an adaptive TEQ using equal-taps constraints (ETC) is introduced to maximize the bit rate with the lowest complexity. An IP core is developed for the low-complexity ALAM (LALAM) algorithm to be implemented on an FPGA. This implementation is extended to include a moving-average (MA) estimate for the ALAM algorithm, referred to as ALAM-MA. A unit-tap constraint (UTC) is used instead of a unit-norm constraint (UNC) while updating the adaptive algorithm, to avoid the all-zero solution for the TEQ taps. The IP core is implemented on a Xilinx Virtex II Pro XC2VP7-FF672-5 for ADSL receivers, and gate-level simulation confirmed successful operation at maximum frequencies of 27 MHz and 38 MHz for the ALAM-MA and LALAM algorithms, respectively. After channel shortening with the TEQ, a frequency-domain equalizer (FEQ) is used to recover QAM signals distorted by channel effects. A new analytical learning-based framework is proposed to jointly solve the equalization and symbol-detection problems in orthogonal frequency division multiplexing (OFDM) systems with QAM signals. The framework utilizes an extreme learning machine (ELM) to achieve fast training, high performance, and low error rates. The proposed framework operates in the real domain by transforming a complex signal into a single 2-tuple real-valued vector. This transformation offers equalization in the real domain with minimal computational load and high accuracy. Simulation results show that the proposed framework outperforms other learning-based equalizers in terms of symbol error rates and training speed.
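
    As a rough illustration of the ELM-based equalization idea, the sketch below maps complex received samples to real 2-tuples, trains a random-hidden-layer network with a closed-form least-squares solve, and slices the output to the nearest constellation point. The two-tap channel, QPSK constellation, noise level, and layer sizes are assumptions made for the example, not the paper's configuration.

```python
# Hedged sketch of ELM equalization in the real domain: one complex sample
# becomes one [Re, Im] 2-tuple, and only the output weights are learned.
import numpy as np

rng = np.random.default_rng(1)
N_TRAIN, N_HIDDEN = 500, 64

# Transmit random QPSK symbols through a toy 2-tap channel with additive noise.
symbols = (rng.integers(0, 2, N_TRAIN) * 2 - 1) + 1j * (rng.integers(0, 2, N_TRAIN) * 2 - 1)
channel = np.array([1.0, 0.4 + 0.3j])
received = np.convolve(symbols, channel)[:N_TRAIN] \
    + 0.05 * (rng.normal(size=N_TRAIN) + 1j * rng.normal(size=N_TRAIN))

# Real-domain transformation: complex samples -> 2-tuple real-valued vectors.
X = np.column_stack([received.real, received.imag])
T = np.column_stack([symbols.real, symbols.imag])

# ELM training: random input weights stay fixed; output weights are solved
# in closed form by least squares (this is the "fast training" property).
W_in = rng.normal(size=(2, N_HIDDEN))
b = rng.normal(size=N_HIDDEN)
H = np.tanh(X @ W_in + b)                      # hidden-layer activations
W_out, *_ = np.linalg.lstsq(H, T, rcond=None)  # closed-form solve

# Equalize and detect: project, then slice to the nearest QPSK point.
est = np.tanh(X @ W_in + b) @ W_out
detected = np.sign(est[:, 0]) + 1j * np.sign(est[:, 1])
print("symbol error rate:", np.mean(detected != symbols))
```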

    Towards a scalable and efficient data classification technique.

    Data classification is a task found in many life activities. In general, the term covers any activity that derives a decision or forecast from the currently available information. More precisely, a classification procedure constructs a method for making judgments on a continuing sequence of cases, where each new case must be assigned to one of a set of pre-defined classes. This type of construction has been termed supervised learning, to distinguish it from unsupervised learning, or clustering, in which the classes are not pre-defined but are inferred from the available data. This thesis is divided into five chapters, analyzing three classification techniques, namely the nearest neighbor technique, the perceptron learning algorithm, and multi-layer perceptrons with backpropagation, in terms of performance and scalability. Chapter one gives an introduction to the research topic of this thesis; in addition, it states the problem at the core of this thesis and defines the objective of this study, namely selecting the most efficient and scalable classification algorithm for a given classification task. Chapter two explores a historical review of the literature in the classification domain, focusing mainly on topics related to this study and presenting some of the newer classification approaches. Chapter three introduces the methodology on which this thesis is based; the technical methodology used to analyze and investigate the three classification algorithms is clearly described. Different experiments are introduced to support the findings; the datasets used are real-life datasets covering sports-player and car classification tasks. Chapters four and five represent the main core of this thesis, as they contain the data analysis, main findings, and conclusions derived from the experiments. The nearest neighbor technique is one of the lazy learners, because it must store all of the training samples before the classification process starts. Although it takes more time to classify an unknown sample, it is considered the most efficient technique among those studied. A natural next step is the single-layer perceptron algorithm, which does not need to store the data samples to reach an acceptable convergence rate; instead, it speeds up recognition and learning because it learns and stores only the weights of the neural network used to implement it. This algorithm has a major deficiency, however: it works only for linearly separable data. This motivates a more scalable and efficient technique, the multi-layer perceptron network with backpropagation, which has the power to solve complex, non-linearly separable classification tasks.
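
    As a concrete illustration of the perceptron learning algorithm discussed above, the sketch below stores only a weight vector and a bias (no training samples) and converges when, and only when, the two classes are linearly separable. The synthetic two-blob dataset and epoch limit are assumptions made for the example.

```python
# Minimal single-layer perceptron learning rule on linearly separable data:
# the model is just (w, b), in contrast to a nearest-neighbor classifier,
# which must keep every training sample.
import numpy as np

rng = np.random.default_rng(2)

# Two linearly separable Gaussian blobs labelled -1 and +1.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
for epoch in range(100):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:    # misclassified: nudge the boundary toward xi
            w += yi * xi
            b += yi
            errors += 1
    if errors == 0:                   # converged: every training sample is correct
        break

print(f"converged after {epoch + 1} epochs, weights={w}, bias={b:.2f}")
```

    On non-linearly separable data the inner loop never reaches zero errors, which is exactly the deficiency that motivates the multi-layer perceptron with backpropagation.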

    Meta-Heuristic Optimization Methods for Quaternion-Valued Neural Networks

    In recent years, real-valued neural networks have demonstrated promising, and often striking, results across a broad range of domains. This has driven a surge of applications utilizing high-dimensional datasets. While many techniques exist to alleviate issues of high dimensionality, they all incur a cost in network size or computational runtime. This work examines the use of quaternions, a form of hypercomplex numbers, in neural networks. The constructed networks demonstrate the ability of quaternions to encode high-dimensional data in an efficient neural network structure, showing that hypercomplex neural networks reduce the total number of trainable parameters compared to their real-valued equivalents. Finally, this work introduces a novel training algorithm using a meta-heuristic approach that bypasses the need for analytic quaternion loss or activation functions. This algorithm allows a broader range of activation functions than current quaternion networks and presents a proof of concept for future work.
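
    The sketch below illustrates the general idea at toy scale: a quaternion-valued neuron built on the Hamilton product, trained by a simple (1+1) evolution strategy so that no analytic quaternion gradient is required. The neuron size, task, and mutation scale are illustrative assumptions, not the configuration used in this work.

```python
# Hedged sketch: meta-heuristic (gradient-free) training of a quaternion neuron.
import numpy as np

rng = np.random.default_rng(3)

def hamilton(q, p):
    # Hamilton product of quaternions stored as (..., 4) arrays [w, x, y, z].
    w1, x1, y1, z1 = np.moveaxis(q, -1, 0)
    w2, x2, y2, z2 = np.moveaxis(p, -1, 0)
    return np.stack([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ], axis=-1)

def forward(weight, inputs):
    # One quaternion neuron: weight times each input, summed, then a split
    # activation applied componentwise (no quaternion-differentiable form needed).
    return np.tanh(hamilton(weight, inputs).sum(axis=0))

# Toy target: recover a fixed quaternion weight from input/output pairs.
target_w = rng.normal(size=4)
inputs = rng.normal(size=(8, 4))
targets = np.tanh(hamilton(target_w, inputs).sum(axis=0))

def loss(weight):
    return np.sum((forward(weight, inputs) - targets) ** 2)

# (1+1) evolution strategy: keep the parent unless the mutant does at least as well.
parent = rng.normal(size=4)
for step in range(2000):
    child = parent + 0.1 * rng.normal(size=4)
    if loss(child) <= loss(parent):
        parent = child
print("final loss:", loss(parent))
```

    Because the search only compares loss values, any activation can be swapped into forward(), which is the flexibility the meta-heuristic approach is after.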

    Technology Directions for the 21st Century

    The Office of Space Communications (OSC) is tasked by NASA to conduct a planning process to meet NASA's science mission and other communications and data processing requirements. A set of technology trend studies was undertaken by Science Applications International Corporation (SAIC) for OSC to identify quantitative data that can be used to predict performance of electronic equipment in the future to assist in the planning process. Only commercially available, off-the-shelf technology was included. For each technology area considered, the current state of the technology is discussed, future applications that could benefit from use of the technology are identified, and likely future developments of the technology are described. The impact of each technology area on NASA operations is presented together with a discussion of the feasibility and risk associated with its development. An approximate timeline is given for the next 15 to 25 years to indicate the anticipated evolution of capabilities within each of the technology areas considered. This volume contains four chapters: one each on technology trends for database systems, computer software, neural and fuzzy systems, and artificial intelligence. The principal study results are summarized at the beginning of each chapter.