16 research outputs found

    On a unified theory of numbers

    Full text link
    We prove that several results in different areas of number theory, such as divergent series, summation of arithmetic functions, uniform distribution modulo one, and summation over prime numbers, which are currently considered independent, can be unified under a single equation. We apply our method to derive several new results in these areas of number theory.
    Comment: 18 pages

    Hardware Aware Evolutionary Neural Architecture Search using Representation Similarity Metric

    Get PDF
    Peer reviewed
    Hardware-aware Neural Architecture Search (HW-NAS) is a technique for automatically designing the architecture of a neural network for a specific task and target hardware. However, evaluating the performance of candidate architectures is a key challenge in HW-NAS, as it requires significant computational resources. To address this challenge, we propose an efficient hardware-aware evolution-based NAS approach called HW-EvRSNAS. Our approach reframes the neural architecture search problem as finding an architecture whose performance is similar to that of a reference model for the target hardware, while adhering to a cost constraint for that hardware. This is achieved through a representation similarity metric known as Representation Mutual Information (RMI), employed as a proxy performance evaluator. It measures the mutual information between the hidden layer representations of a reference model and those of sampled architectures using a single training batch. We also use a penalty term that penalizes the search process in proportion to how far an architecture's hardware cost is from the desired hardware cost threshold. This significantly reduces search time compared to the literature, achieving speedups of up to 8000x and correspondingly lower CO2 emissions. The proposed approach is evaluated on two different search spaces while using lower computational resources. Furthermore, our approach is thoroughly examined on six different edge devices under various hardware cost constraints.
    Enabling Learning And Inferring Compact Deep Neural Network Topologies On Edge Devices (ELITE)
    9. Industry, innovation and infrastructure
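    The abstract's combination of a similarity reward and a hardware-cost penalty might look roughly like the following fitness function. This is a minimal sketch: the function names, the penalty form, and the use of per-example activation-norm correlation as a cheap stand-in for the paper's mutual-information metric are all illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def representation_similarity(ref_feats, cand_feats):
        """Simplified stand-in for an RMI-style proxy: per-layer correlation
        between per-example activation norms of a reference model and a
        candidate architecture, computed on a single batch of features."""
        scores = []
        for r, c in zip(ref_feats, cand_feats):
            r = r.reshape(r.shape[0], -1)
            c = c.reshape(c.shape[0], -1)
            rv = np.linalg.norm(r, axis=1)  # one scalar per example
            cv = np.linalg.norm(c, axis=1)
            scores.append(abs(np.corrcoef(rv, cv)[0, 1]))
        return float(np.mean(scores))

    def fitness(ref_feats, cand_feats, hw_cost, cost_threshold, penalty_weight=1.0):
        """Similarity reward minus a penalty proportional to how far the
        candidate's hardware cost exceeds the desired cost threshold."""
        overshoot = max(0.0, hw_cost - cost_threshold) / cost_threshold
        return representation_similarity(ref_feats, cand_feats) - penalty_weight * overshoot
    ```

    An evolutionary search loop would rank sampled architectures by this score instead of training each one, which is what makes the proxy evaluation cheap.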

    Impact of Disentanglement on Pruning Neural Networks

    Get PDF
    Efficient model compression techniques are required to deploy deep neural networks (DNNs) on edge devices for task-specific objectives. A variational autoencoder (VAE) framework is combined with a pruning criterion to investigate the impact of having the network learn disentangled representations on the pruning process for the classification task.
    Enabling Learning And Inferring Compact Deep Neural Network Topologies On Edge Devices (ELITE)

    Impact of Disentanglement on Pruning Neural Networks

    Get PDF
    Deploying deep learning neural networks on edge devices, to accomplish task-specific objectives in the real world, requires a reduction in their memory footprint, power consumption, and latency. This can be realized via efficient model compression. Disentangled latent representations produced by variational autoencoder (VAE) networks are a promising approach for achieving model compression because they mainly retain task-specific information, discarding information useless for the task at hand. We make use of the Beta-VAE framework combined with a standard criterion for pruning to investigate the impact of forcing the network to learn disentangled representations on the pruning process for the task of classification. In particular, we perform experiments on the MNIST and CIFAR10 datasets, examine disentanglement challenges, and propose a path forward for future work.
    Enabling Learning And Inferring Compact Deep Neural Network Topologies On Edge Devices (ELITE)
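    A common "standard criterion for pruning" of the kind the abstract refers to is magnitude pruning: zero out the weights with the smallest absolute values. A minimal sketch of that criterion (illustrative; the abstract does not specify which criterion the authors use):

    ```python
    import numpy as np

    def magnitude_prune(weights, sparsity):
        """Magnitude-based pruning: zero out the fraction `sparsity` of
        weights with the smallest absolute values.

        Returns the pruned weight array and the boolean keep-mask."""
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)  # number of weights to remove
        if k == 0:
            return weights.copy(), np.ones(weights.shape, dtype=bool)
        # k-th smallest magnitude is the pruning threshold
        threshold = np.partition(flat, k - 1)[k - 1]
        mask = np.abs(weights) > threshold
        return weights * mask, mask
    ```

    In the disentanglement setting studied above, the hope is that a disentangled encoder concentrates task-relevant signal in fewer weights, so a criterion like this can prune more aggressively with less accuracy loss.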

    Compression of Deep Neural Networks for Space Autonomous Systems

    Get PDF
    Efficient compression techniques are required to deploy deep neural networks (DNNs) on edge devices for space resource utilization tasks. Two approaches are investigated.
    Enabling Learning And Inferring Compact Deep Neural Network Topologies On Edge Devices (ELITE)

    On the Half Line: K Ramachandra.

    No full text
    This is a short biographical note on the life and works of K. Ramachandra, one of the leading mathematicians in the field of analytic number theory in the second half of the twentieth century.