
    Design Of Neural Network Circuit Inside High Speed Camera Using Analog CMOS 0.35 µm Technology

    Get PDF
    Analog VLSI on-chip learning Neural Networks represent a mature technology for a large number of applications involving industrial as well as consumer appliances. This is particularly the case when low power consumption, small size and/or very high speed are required. This approach exploits the computational features of Neural Networks, the implementation efficiency of analog VLSI circuits, and the adaptation capabilities of the on-chip learning feedback scheme. High-speed video cameras are powerful tools for investigating, for instance, biomechanics or the movement of mechanical parts in manufacturing processes. In recent years, the use of CMOS sensors instead of CCDs has enabled the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this paper, we propose a high-speed smart camera based on a CMOS sensor with an embedded Analog Neural Network

    Methods of parallel-vertical data processing in neural networks

    No full text
    The operational basis of neural networks is defined, and the case for developing hardware neural networks of the parallel-vertical type is substantiated. A VLSI-oriented parallel-vertical method of data processing in neural elements (neural networks) is developed; it reduces the number of interface pins, the bit width of inter-neuron connections, and hardware costs. Principles for the VLSI implementation of neural networks are proposed
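The parallel-vertical idea described above can be sketched in software: rather than processing each operand's full word at once, one bit plane of all operands is handled per step, which is what allows the hardware to use one-bit inter-neuron links. A minimal sketch under that interpretation (the function name and word length are illustrative, not from the paper):

```python
def vertical_sum(xs, nbits=8):
    """Bit-serial ("vertical") summation: process one bit plane of all
    operands per step, so the datapath per operand is one bit wide."""
    acc = 0
    for k in range(nbits):                          # LSB to MSB
        bit_plane = sum((x >> k) & 1 for x in xs)   # popcount of plane k
        acc += bit_plane << k                       # weight by bit position
    return acc

# vertical_sum([3, 5, 10]) → 18
```

Each step touches only a single bit of every operand, so an N-input neural element needs N one-bit links per cycle instead of N full-width buses.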

    Automated implementation of rule-based expert systems with neural networks for time-critical applications

    Get PDF
    In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or take appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have seen a revival, and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule base. This significantly simplifies the translation from conventional rule-based systems to neural network expert systems. Results comparing the performance of the proposed neural network approach with the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed
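The rule-to-network translation rests on the fact that Boolean connectives map directly onto threshold units with fixed weights. A minimal sketch of that idea (the paper's actual translation rules are richer; the names and weight choices here are illustrative):

```python
def threshold_unit(weights, theta):
    """Fixed-weight threshold unit: fires (returns 1) when the weighted
    sum of binary inputs reaches the threshold theta."""
    def fire(inputs):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)
    return fire

# "IF a AND b THEN c": all n antecedents must hold -> weights of 1, theta = n
rule_and = threshold_unit([1, 1], 2)
# "IF a OR b THEN c": any single antecedent suffices -> theta = 1
rule_or = threshold_unit([1, 1], 1)
# "IF a AND NOT b THEN c": a negated antecedent gets weight -1
rule_and_not = threshold_unit([1, -1], 1)
```

Chained rules become layers: the output of one unit feeds the antecedent inputs of the next, so the whole rule base evaluates in a fixed number of parallel steps, which is the source of the time-critical advantage.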

    VLSI implementation of neural networks: a switched-capacitor approach

    Full text link
    Recent research has indicated that Neural Networks may offer powerful and effective solutions to certain classes of problems which von Neumann machines do not handle effectively. These problems include the areas of visual and speech recognition. This thesis describes the basic principles behind neural networks and neural network implementation. The main aim of the thesis is to investigate the implementation of neural networks in silicon integrated technology. To this end, the design of a CMOS VLSI test neural network is described and the problems of VLSI implementation are discussed. The neural network design utilizes switched-capacitor techniques to implement the basic McCulloch-Pitts neuron. A test chip has been fabricated using this circuit technique and tested successfully. Comparisons between the switched-capacitor technique and other neural network implementations are also discussed
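A behavioural view of the switched-capacitor neuron: each input voltage is sampled onto a weight capacitor, and sharing the charge yields a capacitance-ratio weighted sum that is then thresholded, as in the McCulloch-Pitts model. A hedged software sketch of that behaviour (ideal capacitors, no parasitics or clock feedthrough; names are illustrative):

```python
def sc_weighted_sum(caps, vins):
    """Charge redistribution: sampling each input voltage onto its
    capacitor and connecting them gives a capacitance-ratio weighted
    average of the inputs, i.e. the neuron's weighted sum."""
    charge = sum(c * v for c, v in zip(caps, vins))
    return charge / sum(caps)

def mp_neuron(caps, vins, vth):
    """McCulloch-Pitts neuron: fire when the weighted sum crosses vth."""
    return int(sc_weighted_sum(caps, vins) >= vth)
```

The weights are set by capacitor ratios, which match far more accurately on-chip than absolute component values; that is the usual motivation for the switched-capacitor approach.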

    Analysis of Analog Neural Network Model with CMOS Multipliers

    Get PDF
    Analog neural networks have some very useful advantages over digital neural networks, but implementations built from discrete elements cannot fully realize these advantages because of the large variation in the characteristics of discrete semiconductors. VLSI implementation of neural network algorithms is a new direction in the development and application of analog neural networks. Analog design can be very difficult because of the need to compensate for variations in manufacturing, temperature, etc., so it is necessary to study the characteristics and effectiveness of this implementation. In this article, the influence of parameter variation on analog neural network behavior is investigated
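A common way to study this kind of parameter-variation influence is a Monte Carlo sweep: perturb each weight with a relative Gaussian error (a first-order model of analog component mismatch) and measure the spread of the network output. A minimal sketch of that procedure (the neuron model, mismatch level, and names are assumptions, not from the article):

```python
import math
import random

def neuron(ws, xs):
    """Ideal neuron: tanh of the weighted input sum."""
    return math.tanh(sum(w * x for w, x in zip(ws, xs)))

def mismatch_spread(ws, xs, sigma=0.05, trials=1000, seed=1):
    """Monte Carlo over device mismatch: perturb each weight by a
    relative Gaussian error of std 'sigma' and return the mean and
    standard deviation of the resulting neuron outputs."""
    random.seed(seed)
    outs = []
    for _ in range(trials):
        pw = [w * (1.0 + random.gauss(0.0, sigma)) for w in ws]
        outs.append(neuron(pw, xs))
    mean = sum(outs) / trials
    std = (sum((o - mean) ** 2 for o in outs) / trials) ** 0.5
    return mean, std
```

The output standard deviation as a function of `sigma` indicates how much component matching the fabrication process must guarantee for the network to stay within its accuracy budget.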

    SIRENA: A CAD environment for behavioural modelling and simulation of VLSI cellular neural network chips

    Get PDF
    This paper presents SIRENA, a CAD environment for the simulation and modelling of mixed-signal VLSI parallel processing chips based on cellular neural networks. SIRENA includes capabilities for: (a) the description of nominal and non-ideal operation of CNN analogue circuitry at the behavioural level; (b) performing realistic simulations of the transient evolution of physical CNNs, including deviations due to second-order effects of the hardware; and (c) evaluating sensitivity figures and performing noise and Monte Carlo simulations in the time domain. These capabilities make SIRENA better suited for CNN chip development than algorithmic simulation packages (such as OpenSimulator, Sesame) or conventional neural network simulators (RCS, GENESIS, SFINX), which are not oriented to the evaluation of hardware non-idealities. As compared to conventional electrical simulators (such as HSPICE or ELDO-FAS), SIRENA provides easier modelling of the hardware parasitics, a significant reduction in computation time, and similar accuracy levels. Consequently, iteration during the design procedure becomes possible, supporting decision making regarding design strategies and dimensioning. SIRENA has been developed using object-oriented programming techniques in C, and currently runs under the UNIX operating system and X-Windows framework. It employs a dedicated high-level hardware description language, DECEL, fitted to the description of non-idealities arising in CNN hardware. This language has been developed aiming at generality, in the sense of making no restrictions on the network models that can be implemented. SIRENA is highly modular and composed of independent tools, which simplifies future expansions and improvements. Comisión Interministerial de Ciencia y Tecnología TIC96-1392-C02-0
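SIRENA models CNN hardware through its own DECEL descriptions; as a rough illustration of the kind of behavioural transient simulation involved, here is one forward-Euler step of the standard Chua-Yang CNN cell dynamics in 1-D. This is a generic sketch, not SIRENA's model, and the template values and names are illustrative:

```python
def f(x):
    """Standard CNN output nonlinearity: piecewise-linear saturation."""
    return 0.5 * (abs(x + 1) - abs(x - 1))

def cnn_step(x, u, A, B, I, dt=0.01):
    """One forward-Euler step of 1-D Chua-Yang CNN dynamics with zero
    boundary cells: dx_i/dt = -x_i + A*y + B*u + I over a 3-cell window."""
    n = len(x)
    y = [f(v) for v in x]
    def win(arr, i):  # 3-cell neighbourhood, zero outside the array
        return [(arr[i + k] if 0 <= i + k < n else 0.0) for k in (-1, 0, 1)]
    out = []
    for i in range(n):
        dx = (-x[i]
              + sum(a * v for a, v in zip(A, win(y, i)))
              + sum(b * v for b, v in zip(B, win(u, i)))
              + I)
        out.append(x[i] + dt * dx)
    return out
```

A behavioural simulator of this kind can inject hardware non-idealities (template mismatch, offsets, noise) directly into the `A`, `B`, and `I` terms, which is far cheaper than a full transistor-level transient analysis.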

    Power Aware Learning for Class AB Analogue VLSI Neural Network

    No full text
    Recent research into Artificial Neural Networks (ANN) has highlighted the potential of using compact analogue ANN hardware cores in embedded mobile devices, where power consumption of ANN hardware is a very significant implementation issue. This paper proposes a learning mechanism suitable for low-power class AB type analogue ANN that not only tunes the network to obtain minimum error, but also adaptively learns to reduce power consumption. Our experiments show substantial reductions in the power budget (30% to 50%) for a variety of example networks as a result of our power-aware learning
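One way to read the power-aware learning idea is as a regularized objective J = E + λ·P, minimizing error and a power proxy together. A minimal sketch under the assumption (mine, not stated in the abstract) that power in a class AB analogue cell scales with weight magnitude, so P = Σ|w_i|; all names and constants are illustrative:

```python
def power_aware_step(w, grad_err, lam=0.1, lr=0.05):
    """One gradient step on J = E + lam * P with power proxy
    P = sum(|w_i|): the error gradient is augmented with lam * sign(w_i),
    which shrinks weight magnitudes (and thus the modelled bias currents)
    whenever doing so costs little accuracy."""
    def sign(v):
        return (v > 0) - (v < 0)
    return [wi - lr * (g + lam * sign(wi)) for wi, g in zip(w, grad_err)]
```

With a zero error gradient every step strictly shrinks the weights, so the trade-off between the two terms is controlled entirely by λ, mirroring the error-versus-power-budget trade-off the abstract describes.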