
    Aggregation of Descriptive Regularization Methods with Hardware/Software Co-Design for Remote Sensing Imaging

    This study considers the problem of high-resolution imaging of the remote sensing (RS) environment, formalized as a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the wavefield scattered from an extended remotely sensed scene (referred to as the scene image). However, existing reconstructive-imaging techniques in many RS application areas are poorly suited to (near) real-time implementation. In this work, we propose a new aggregated descriptive-regularization (DR) method together with a Hardware/Software (HW/SW) co-design for SSP reconstruction from uncertain, speckle-corrupted measurement data in a computationally efficient parallel fashion that meets (near) real-time image processing requirements. The hardware design is performed via efficient systolic arrays (SAs). Finally, the efficiency of the aggregated descriptive-regularized and HW/SW co-design method, both in resolution enhancement and in computational complexity reduction, is demonstrated via numerical simulations and by performance analysis of an implementation based on a Xilinx Field Programmable Gate Array (FPGA) XC4VSX35-10ff668.
    Universidad de Guadalajara; Universidad Autónoma de Yucatán; Instituto Tecnológico de Mérida
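
    As a rough illustration of the descriptive-regularization idea behind this kind of method (not the authors' actual algorithm, operators, or hardware mapping), the sketch below reconstructs a toy SSP by solving a Tikhonov-style regularized least-squares problem; the signal-formation operator A, the smoothness constraint D, and all sizes are assumptions for demonstration only.

# Minimal sketch of a descriptive-regularization (DR) style reconstruction:
# estimate the SSP b by solving  min_b ||y - A b||^2 + lam * ||D b||^2,
# where A is a (hypothetical) signal-formation operator, D a smoothness
# ("descriptive") constraint operator, and y the speckle-corrupted data.
import numpy as np

def dr_reconstruct(A, y, lam=1.0):
    """Regularized SSP estimate: b = (A^H A + lam * D^T D)^(-1) A^H y."""
    n = A.shape[1]
    # First-difference operator as a simple descriptive (smoothness) constraint.
    D = np.eye(n) - np.eye(n, k=1)
    lhs = A.conj().T @ A + lam * (D.T @ D)
    rhs = A.conj().T @ y
    return np.linalg.solve(lhs, rhs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 64, 128
    A = rng.standard_normal((m, n))                       # toy signal-formation operator
    b_true = np.exp(-((np.arange(n) - 32) / 6.0) ** 2)    # smooth "scene" SSP
    y = A @ b_true + 0.05 * rng.standard_normal(m)        # noisy measurements
    b_hat = dr_reconstruct(A, y, lam=1.0)
    print("relative error:", np.linalg.norm(b_hat - b_true) / np.linalg.norm(b_true))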

    Efficient ConvNets for Analog Arrays

    Analog arrays are a promising upcoming hardware technology with the potential to drastically speed up deep learning. Their main advantage is that they compute matrix-vector products in constant time, irrespective of the size of the matrix. However, early convolution layers in ConvNets map very unfavorably onto analog arrays, because kernel matrices are typically small and the constant-time operation must be iterated sequentially a large number of times, reducing the speed-up advantage for ConvNets. Here, we propose to replicate the kernel matrix of a convolution layer on distinct analog arrays and to randomly divide parts of the compute among them, so that multiple kernel matrices are trained in parallel. With this modification, analog arrays execute ConvNets with an acceleration factor that is proportional to the number of kernel matrices used per layer (here tested: 16-128). Despite having more free parameters, we show analytically and in numerical experiments that this convolution architecture is self-regularizing and implicitly learns similar filters across arrays. We also report superior performance on a number of datasets and increased robustness to adversarial attacks. Our investigation suggests revising the notion that mixed analog-digital hardware is not suitable for ConvNets.
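
    A hedged PyTorch sketch of the replicated-kernel idea described above: K copies of a convolution kernel are kept, and each output position is randomly assigned to one copy, so the patches are divided among the replicas. The class name ReplicatedConv2d, the per-pixel routing policy, and the replica count are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ReplicatedConv2d(nn.Module):
    """Convolution whose kernel is replicated across n_replicas (analog) arrays."""

    def __init__(self, in_ch, out_ch, kernel_size, n_replicas=16, **kw):
        super().__init__()
        self.replicas = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size, **kw) for _ in range(n_replicas)
        )

    def forward(self, x):
        # On analog hardware each replica would process only its share of the
        # patches, in parallel; here we emulate that by masking a full compute.
        outs = torch.stack([conv(x) for conv in self.replicas])   # (K, N, C, H, W)
        k, n, _, h, w = outs.shape
        # Randomly choose which replica supplies each output position.
        choice = torch.randint(k, (1, n, 1, h, w), device=x.device)
        mask = torch.zeros(k, n, 1, h, w, device=x.device).scatter_(0, choice, 1.0)
        return (outs * mask).sum(dim=0)

# Example usage: a layer with 16 kernel replicas.
layer = ReplicatedConv2d(3, 32, 3, n_replicas=16, padding=1)
out = layer(torch.randn(8, 3, 32, 32))    # -> shape (8, 32, 32, 32)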

    Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective

    On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass mainstream computing technologies in tasks where real-time functionality, adaptability, and autonomy are essential. While algorithmic advances in neuromorphic computing are proceeding successfully, the potential of memristors to improve neuromorphic computing has not yet borne fruit, primarily because they are often used as a drop-in replacement for conventional memory. However, interdisciplinary approaches anchored in machine learning theory suggest that multifactor plasticity rules matching neural and synaptic dynamics to the device capabilities can take better advantage of memristor dynamics and their stochasticity. Furthermore, such plasticity rules generally show much higher performance than classical Spike Time Dependent Plasticity (STDP) rules. This chapter reviews recent developments in learning with spiking neural network models and their possible implementation with memristor-based hardware.
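
    To make the contrast concrete, here is a minimal toy sketch (not the chapter's specific rules) of a pair-based STDP update next to a simple multifactor rule in which a third, modulatory signal gates the same correlational term; the trace dynamics, constants, and modulator are assumptions chosen only for clarity.

import numpy as np

def stdp_step(w, pre_spk, post_spk, pre_tr, post_tr,
              a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0):
    """Pair-based STDP on a weight matrix w of shape (n_post, n_pre)."""
    # Exponentially decaying spike traces.
    pre_tr = pre_tr * np.exp(-dt / tau) + pre_spk
    post_tr = post_tr * np.exp(-dt / tau) + post_spk
    # Potentiate when post fires after pre; depress when pre fires after post.
    dw = a_plus * np.outer(post_spk, pre_tr) - a_minus * np.outer(post_tr, pre_spk)
    return np.clip(w + dw, 0.0, 1.0), pre_tr, post_tr

def three_factor_step(w, pre_tr, post_spk, modulator, lr=0.01):
    """Multifactor rule: a third factor (e.g. reward/error) gates the update."""
    dw = lr * modulator * np.outer(post_spk, pre_tr)
    return np.clip(w + dw, 0.0, 1.0)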