
    Influence of quantum confinement on the ferromagnetism of (Ga,Mn)As diluted magnetic semiconductor

    We investigate the effect of quantum confinement on the ferromagnetism of the diluted magnetic semiconductor Ga$_{1-x}$Mn$_x$As using a combination of tight-binding and density functional methods. We observe strong majority-spin Mn $d$-As $p$ hybridization, as well as half-metallic behavior, down to sizes as small as 20 \AA in diameter. Below this critical size, the doped holes are self-trapped by the Mn sites, signalling both valence and electronic transitions. Our results imply that magnetically doped III-V nanoparticles will provide a medium for manipulating the electronic structure of dilute magnetic semiconductors while conserving their ferromagnetic properties, and even enhancing them in certain size regimes.

    Simultaneous control of nanocrystal size and nanocrystal-nanocrystal separation in CdS nanocrystal assembly

    We report an easy, one-pot synthesis to prepare ordered CdS nanocrystals with varying inter-particle separation, and characterize the particle separation using X-ray diffraction at low and wide angles.

    On Achieving Privacy-Preserving State-of-the-Art Edge Intelligence

    Deep Neural Network (DNN) inference in Edge Computing, often called Edge Intelligence, requires solutions to ensure that the confidentiality of sensitive data and intellectual property is not compromised in the process. Privacy-preserving Edge Intelligence is only emerging, despite the growing prevalence of Edge Computing as a context of Machine-Learning-as-a-Service. Solutions are yet to be applied, and possibly adapted, to state-of-the-art DNNs. This position paper provides an original assessment of the compatibility of existing techniques for privacy-preserving DNN inference with the characteristics of an Edge Computing setup, highlighting the appropriateness of secret sharing in this context. We then address the future role of model compression methods in the research towards secret sharing on DNNs with state-of-the-art performance.
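    The position paper argues for secret sharing as the fitting technique for this setting. As a rough illustration of the underlying idea (not the paper's own protocol), the Python sketch below additively splits a fixed-point-encoded input tensor into two shares, each of which reveals nothing about the input on its own; the ring size, scaling factor, and two-party setup are assumptions made for the example.

```python
# Minimal sketch of additive secret sharing of a model input, assuming two
# non-colluding parties; tensor values are fixed-point encoded so the shares
# live in an integer ring. All parameters here are illustrative.
import numpy as np

RING = 2 ** 32    # size of the integer ring holding the shares
SCALE = 2 ** 16   # fixed-point scaling factor for float inputs

def share(x: np.ndarray, rng: np.random.Generator):
    """Split a float tensor into two additive shares modulo RING."""
    fixed = (np.round(x * SCALE).astype(np.int64) % RING).astype(np.uint64)
    share0 = rng.integers(0, RING, size=x.shape, dtype=np.uint64)  # random mask
    share1 = (fixed - share0) % RING                               # masked value
    return share0, share1

def reconstruct(share0: np.ndarray, share1: np.ndarray) -> np.ndarray:
    """Recombine the shares and decode back to floats."""
    fixed = (share0 + share1) % RING
    signed = fixed.astype(np.int64)
    signed[signed >= RING // 2] -= RING   # map back to the signed range
    return signed / SCALE

rng = np.random.default_rng(0)
x = np.array([0.5, -1.25, 3.0])
s0, s1 = share(x, rng)
print(reconstruct(s0, s1))   # ~ [ 0.5  -1.25  3.0 ]
```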

    CPU-GPU Layer-Switched Low Latency CNN Inference

    Convolutional Neural Network (CNN) inference on Heterogeneous Multi-Processor System-on-Chips (HMPSoCs) in edge devices represents cutting-edge embedded machine learning. The embedded CPU and GPU within an HMPSoC can both perform inference using CNNs. However, common practice is to run a CNN on whichever HMPSoC component (CPU or GPU) provides the best performance (lowest latency) for that CNN. CNNs are not monolithic; they are composed of several layers of different types. Some of these layers have lower latency on the CPU, while others execute faster on the GPU. In this work, we investigate the reason behind this observation. We also propose a CNN execution that switches between the CPU and the GPU at layer granularity, wherein each CNN layer executes on the component that provides it with the lowest latency. Switching back and forth between the CPU and the GPU mid-inference introduces additional overhead (delay) into the inference. Despite this overhead, we show in this work that CPU-GPU layer-switched execution results, on average, in 4.72% lower CNN inference latency on the Khadas VIM 3 board with the Amlogic A311D HMPSoC.
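    The abstract describes assigning each CNN layer to whichever processor runs it fastest while accounting for the cost of switching mid-inference. The sketch below shows one way such an assignment could be formulated, as a small dynamic program over per-layer latencies with a fixed switching penalty; the latency numbers, the switch cost, and the dynamic-programming formulation are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of layer-level CPU/GPU assignment for CNN inference, assuming
# per-layer latency measurements and a fixed switching overhead are available.
def assign_layers(cpu_ms, gpu_ms, switch_ms):
    """Pick CPU or GPU per layer to minimize total latency including switch cost."""
    n = len(cpu_ms)
    INF = float("inf")
    # best[d] = lowest total latency so far with the current layer on device d
    # (0 = CPU, 1 = GPU); choice[d] remembers the corresponding assignment.
    best = [cpu_ms[0], gpu_ms[0]]
    choice = [[0], [1]]
    for i in range(1, n):
        lat = (cpu_ms[i], gpu_ms[i])
        new_best, new_choice = [INF, INF], [None, None]
        for d in (0, 1):
            for prev in (0, 1):
                cost = best[prev] + lat[d] + (switch_ms if prev != d else 0.0)
                if cost < new_best[d]:
                    new_best[d] = cost
                    new_choice[d] = choice[prev] + [d]
        best, choice = new_best, new_choice
    d = 0 if best[0] <= best[1] else 1
    return best[d], ["CPU" if x == 0 else "GPU" for x in choice[d]]

# Hypothetical per-layer latencies (ms): convolutions faster on the GPU,
# small fully-connected layers faster on the CPU.
cpu = [4.0, 3.5, 0.6, 0.5]
gpu = [1.2, 1.0, 0.9, 0.8]
total, plan = assign_layers(cpu, gpu, switch_ms=0.4)
print(total, plan)   # 3.7 ['GPU', 'GPU', 'CPU', 'CPU']
```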

    Scenario Based Run-time Switching for Adaptive CNN-based Applications at the Edge

    Convolutional Neural Networks (CNNs) are biologically inspired computational models that are at the heart of many modern computer vision and natural language processing applications. Some CNN-based applications are executed on mobile and embedded devices. Execution of CNNs on such devices places numerous demands on the CNNs, such as high accuracy, high throughput, low memory cost, and low energy consumption. These requirements are very difficult to satisfy at the same time, so CNN execution at the edge typically involves trade-offs (e.g., high CNN throughput is achieved at the cost of decreased CNN accuracy). In existing methodologies, such trade-offs are either chosen once and remain unchanged during a CNN-based application's execution, or are adapted to the properties of the CNN input data. However, the application's needs can also be significantly affected by changes in the application environment, such as a change in the battery level of the edge device. Thus, CNN-based applications need a mechanism that allows them to dynamically adapt their characteristics to changes in the application environment at run-time. Therefore, in this article, we propose a scenario-based run-time switching (SBRS) methodology that implements such a mechanism.
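    As a rough illustration of the run-time switching idea (not the SBRS methodology itself), the sketch below pairs each scenario with an environment condition and a CNN variant, and switches the deployed variant only when the active scenario changes; the scenario names, battery thresholds, and model identifiers are hypothetical.

```python
# Minimal sketch of scenario-based run-time switching, assuming each scenario
# pairs an environment condition (here, battery level) with a pre-selected CNN
# variant. All names, thresholds, and model identifiers are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    name: str
    model_id: str                       # which CNN variant to load and run
    is_active: Callable[[float], bool]  # predicate over the battery level (%)

SCENARIOS = [
    Scenario("high-accuracy", "resnet50_fp32",  lambda batt: batt > 60.0),
    Scenario("balanced",      "resnet50_int8",  lambda batt: 20.0 < batt <= 60.0),
    Scenario("low-power",     "mobilenet_int8", lambda batt: batt <= 20.0),
]

def select_scenario(battery_level: float) -> Scenario:
    """Return the first scenario whose condition matches the current environment."""
    for scenario in SCENARIOS:
        if scenario.is_active(battery_level):
            return scenario
    return SCENARIOS[-1]   # fall back to the most conservative scenario

current = None
for battery in (95.0, 55.0, 12.0):     # simulated environment changes
    scenario = select_scenario(battery)
    if scenario is not current:        # switch the deployed CNN only on change
        print(f"battery={battery:>5.1f}%  ->  {scenario.name}: {scenario.model_id}")
        current = scenario
```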