2 research outputs found

    Partition Pruning: Parallelization-Aware Pruning for Deep Neural Networks

    The parameters of recent neural networks require a huge amount of memory, and a network must read these parameters for every input it processes. To speed up inference, we develop Partition Pruning, an innovative scheme that reduces the number of parameters while taking parallelization into consideration. We evaluated the performance and energy consumption of parallel inference on partitioned models: for the pruned layers of TinyVGG16, our method achieved a 7.72x speedup and a 2.73x reduction in energy consumption compared with running the unpruned model on a single accelerator. In addition, partitioning fully connected layers caused only a limited reduction in accuracy.
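    The abstract above combines two ideas: partitioning a layer's weights across accelerators and magnitude-pruning within each partition. A minimal sketch of that combination, assuming equal-size partitions and per-partition magnitude pruning (the function name and details are illustrative, not the paper's exact algorithm):

    ```python
    def partition_prune(weights, num_partitions, sparsity):
        """Split a layer's weights into equal partitions, one per accelerator,
        then magnitude-prune each partition independently so every accelerator
        keeps the same number of nonzero weights (a balanced parallel load)."""
        size = len(weights) // num_partitions
        partitions = [weights[i * size:(i + 1) * size] for i in range(num_partitions)]
        pruned = []
        for part in partitions:
            keep = len(part) - int(len(part) * sparsity)  # weights that survive
            # indices of the `keep` largest-magnitude weights in this partition
            survivors = set(
                sorted(range(len(part)), key=lambda i: abs(part[i]), reverse=True)[:keep]
            )
            pruned.append([w if i in survivors else 0.0 for i, w in enumerate(part)])
        return pruned

    layer = [0.9, -0.1, 0.4, -0.8, 0.05, 0.7, -0.3, 0.2]
    print(partition_prune(layer, num_partitions=2, sparsity=0.5))
    # -> [[0.9, 0.0, 0.0, -0.8], [0.0, 0.7, -0.3, 0.0]]
    ```

    Pruning inside each partition, rather than globally, keeps the nonzero counts identical across accelerators, so no accelerator becomes a straggler during parallel inference.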

    Mathematical Analysis and Design of Carbon Nanotubes based Nantennas

    Recent advances in the fabrication and characterization of nanomaterials have led to viable applications of such materials in next-generation flexible electronics and highly efficient photovoltaic devices. Nano devices are moving toward ever smaller designs, which helps scientists extend the efficiency of devices such as antennas, sensors, and nano robots. At the same time, the excellent electron transport properties of graphene make it an attractive choice for next-generation electronics and nanotechnology applications. In this paper we present a mathematical analysis of Carbon Nanotube (CNT) based nano antennas (nantennas), and we further present some applications relating to a novel design at the nanometer scale.
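    As background for the kind of analysis the abstract describes, a standard first-order model (not necessarily this paper's exact formulation) treats the nanotube as a dipole whose resonant length is set by the propagation speed of the guided wave along it:

    ```latex
    % Half-wave resonance of a dipole of length L at frequency f.
    % For a classical metallic dipole the wave travels at the speed of
    % light c; on a carbon nanotube the guided wave is a slow surface
    % plasmon with velocity v_p << c, which shrinks the resonant length
    % by the slow-wave factor v_p / c:
    \begin{align}
      L_{\text{res}} &= \frac{v_p}{2f}, \qquad v_p \ll c
    \end{align}
    ```

    This slow-wave behavior is the usual reason CNT-based antennas can resonate at physical lengths far below the free-space half wavelength, which is what makes nantenna designs at the nanometer scale plausible.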