
    Strong micro-macro entanglement from a weak cross-Kerr nonlinearity

    We study the entanglement generated by a weak cross-Kerr nonlinearity between two initial coherent states, one with an amplitude close to the single-photon level and the other macroscopic. We show that strong micro-macro entanglement is possible even for weak phase shifts, provided the amplitude of the macroscopic beam is chosen sufficiently large. We analyze the effects of loss and discuss possible experimental demonstrations of the micro-macro entanglement based on homodyne tomography and on a new entanglement witness.
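    The amplitude trade-off the abstract exploits can be seen in a standard textbook sketch (our notation, not taken from the paper): a cross-Kerr phase shift θ entangles the photon-number branches of the microscopic mode with phase-rotated coherent states of the macroscopic mode.

```latex
% Standard cross-Kerr sketch (illustrative notation, not the paper's):
% theta is the weak conditional phase shift per photon.
\[
  U_{\mathrm{cK}} = e^{i\theta\,a^{\dagger}a\,b^{\dagger}b}, \qquad
  U_{\mathrm{cK}}\,|\alpha\rangle_a|\beta\rangle_b
  = e^{-|\alpha|^2/2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}
    \,|n\rangle_a\,|\beta e^{i n\theta}\rangle_b .
\]
% The macroscopic branches become nearly orthogonal once |beta|*theta ~ 1:
\[
  \bigl|\langle\beta|\beta e^{i\theta}\rangle\bigr|
  = e^{-|\beta|^{2}(1-\cos\theta)} \approx e^{-|\beta|^{2}\theta^{2}/2},
\]
% so a sufficiently large macroscopic amplitude |beta| compensates a weak
% nonlinearity theta.
```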

    Survey of Meta-Heuristic Algorithms for Deep Learning Training

    Deep learning (DL) is a type of machine learning that mimics the information processing of the human brain to learn abstract features automatically through deep, hierarchical layers. DL is implemented by deep neural networks (DNNs), which have multiple hidden layers and developed from the traditional artificial neural network (ANN). However, DL training is inefficient because of the very long training time required. Meta-heuristics aim to find good or near-optimal solutions at a reasonable computational cost. This article reviews meta-heuristic algorithms, such as the genetic algorithm (GA) and particle swarm optimization (PSO), for training and parameter optimization of traditional neural networks, and then discusses the possibilities of applying meta-heuristic algorithms to DL training and parameter optimization.
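    As a concrete illustration of the PSO approach the survey covers, here is a minimal, hypothetical sketch (not taken from the article) that uses a particle swarm to fit the weights of a tiny one-hidden-layer network on toy data; all hyperparameters are illustrative.

```python
# Minimal PSO sketch: a swarm of flat weight vectors searches for a
# 1-8-1 network that fits a toy regression target, with no gradients.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 1))
y = np.sin(3 * X)                        # toy regression target

def loss(w):
    # Unpack a flat 25-dim particle into a 1-8-1 network; return the MSE.
    W1, b1 = w[:8].reshape(1, 8), w[8:16]
    W2, b2 = w[16:24].reshape(8, 1), w[24]
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

n, dim = 30, 25                          # particles, weights per particle
pos = rng.normal(0, 1, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for it in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Standard PSO update: inertia plus pulls toward personal/global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best MSE:", pbest_val.min())
```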

    Cost-Volume-Profit Analysis as a Basis for Planning a Company's Expected Profit (A Case Study of Sultan's Barbershop)

    A company needs planning to assist management in estimating the level of profit to be obtained; Cost-Volume-Profit (CVP) analysis focuses on the various factors that influence changes in the components of earnings. This study aims to determine the application of CVP analysis as a basis for planning the expected profit for the second quarter of 2020. The method used is a descriptive method with a case study approach: the researchers gathered company information and then analyzed the data. The CVP analysis comprises break-even point (BEP) analysis, contribution margin, and margin of safety. The results showed that in the first quarter the contribution margin was IDR 32,424,125, the minimum sales were IDR 19,330,018, and the break-even point was IDR 39,838,182. The company set a profit target 20% above the first quarter's. To achieve the expected profit, sales of IDR 62,775,909 are targeted for the second quarter. Management can apply CVP analysis to assist in planning earnings in the following quarters.
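    For reference, the textbook CVP formulas the study applies can be sketched in a few lines; the figures passed in below are illustrative placeholders, not the study's data.

```python
# Textbook CVP formulas (a sketch; the inputs are placeholders, not
# figures from the study).
def cvp_summary(sales, variable_costs, fixed_costs):
    cm = sales - variable_costs          # contribution margin
    cm_ratio = cm / sales                # contribution margin ratio
    bep = fixed_costs / cm_ratio         # break-even point (in sales value)
    mos = sales - bep                    # margin of safety
    return cm, cm_ratio, bep, mos

def target_sales(fixed_costs, target_profit, cm_ratio):
    # Sales needed to earn target_profit: (fixed costs + target) / CM ratio.
    return (fixed_costs + target_profit) / cm_ratio

cm, ratio, bep, mos = cvp_summary(sales=60_000_000,
                                  variable_costs=28_000_000,
                                  fixed_costs=17_000_000)
print(f"CM={cm:,.0f}  ratio={ratio:.2f}  BEP={bep:,.0f}  MoS={mos:,.0f}")
print(f"target sales={target_sales(17_000_000, 15_000_000, ratio):,.0f}")
```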

    Nondestructive photon detection using a single rare-earth ion coupled to a photonic cavity

    We study the possibility of using single rare-earth ions coupled to a photonic cavity with high cooperativity to perform nondestructive measurements of photons, which would be useful for global quantum networks and photonic quantum computing. We calculate the achievable fidelity as a function of the parameters of the rare-earth ion and the photonic cavity, which include the ion's optical and spin dephasing rates, the cavity linewidth, the single-photon coupling to the cavity, and the detection efficiency. We suggest a promising experimental realization using current state-of-the-art technology in Nd:YVO_4.
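    For orientation, a back-of-envelope sketch of why high cooperativity enables a nondestructive measurement (written in one common cavity-QED convention, not necessarily the paper's): on resonance, a single-sided cavity reflects a photon with a phase that depends on whether the ion is coupled.

```latex
% Back-of-envelope sketch (one common convention; not the paper's notation):
% g is the single-photon coupling, kappa the cavity linewidth, gamma the
% ion's optical linewidth.
\[
  C = \frac{4g^{2}}{\kappa\gamma}, \qquad
  r_{\text{coupled}} \approx \frac{C-1}{C+1}, \qquad
  r_{\text{uncoupled}} \approx -1 ,
\]
% so for C >> 1 the reflected photon acquires a pi phase flip conditioned
% on the ion's state; reading out the ion then heralds the photon without
% absorbing it, with an intrinsic infidelity scaling as O(1/C).
```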

    Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer

    The Transformer architecture has shown impressive performance in multiple research domains and has become the backbone of many neural network models. However, there is limited understanding of how it works. In particular, with a simple predictive loss, how the representation emerges from the gradient training dynamics remains a mystery. In this paper, for a 1-layer transformer with one self-attention layer plus one decoder layer, we analyze its SGD training dynamics for the task of next-token prediction in a mathematically rigorous manner. We open the black box of the dynamic process by which the self-attention layer combines input tokens and reveal the nature of the underlying inductive bias. More specifically, under the assumptions that (a) there is no positional encoding, (b) the input sequence is long, and (c) the decoder layer learns faster than the self-attention layer, we prove that self-attention acts as a discriminative scanning algorithm: starting from uniform attention, it gradually attends more to distinct key tokens for a specific next token to be predicted and pays less attention to common key tokens that occur across different next tokens. Among distinct tokens, it progressively drops attention weights in order of low to high co-occurrence between the key and the query token in the training set. Interestingly, this procedure does not lead to winner-takes-all, but decelerates due to a phase transition controllable by the learning rates of the two layers, leaving an (almost) fixed token combination. We verify this scan-and-snap dynamics on synthetic and real-world data (WikiText).
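    To make the claimed dynamics concrete, here is a small, hypothetical PyTorch sketch (not the authors' code) of the paper's setting: a single attention layer plus a linear decoder, no positional encoding, trained with SGD on synthetic data where one planted "distinct" key token determines the next token. The attention entropy printed during training should fall as attention concentrates on that token, illustrating the "scan" phase.

```python
# Hypothetical toy sketch (not the paper's code): one self-attention layer
# plus one linear decoder, no positional encoding, trained with SGD on a
# synthetic next-token task.
import torch
import torch.nn as nn

torch.manual_seed(0)
V, d, T = 20, 32, 16                         # vocab size, embed dim, seq len
emb = nn.Embedding(V, d)
Wq = nn.Linear(d, d, bias=False)             # query projection
Wk = nn.Linear(d, d, bias=False)             # key projection
dec = nn.Linear(d, V, bias=False)            # decoder layer

def batch(n=64):
    # Common filler tokens (ids 10..19) plus one planted distinct key token
    # (ids 0..9) that determines the label, i.e. the next token.
    x = torch.randint(10, V, (n, T))
    key = torch.randint(0, 10, (n,))
    pos = torch.randint(0, T, (n,))
    x[torch.arange(n), pos] = key
    return x, key

params = [*emb.parameters(), *Wq.parameters(), *Wk.parameters(),
          *dec.parameters()]
opt = torch.optim.SGD(params, lr=0.5)
loss_fn = nn.CrossEntropyLoss()

for step in range(501):
    x, y = batch()
    h = emb(x)                               # (n, T, d)
    q = Wq(h[:, -1])                         # query from the last token
    logits = (Wk(h) @ q.unsqueeze(-1)).squeeze(-1) / d ** 0.5
    att = torch.softmax(logits, dim=-1)      # (n, T) attention weights
    ctx = (att.unsqueeze(-1) * h).sum(dim=1) # attention-weighted context
    loss = loss_fn(dec(ctx), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 100 == 0:
        ent = -(att * att.clamp_min(1e-9).log()).sum(-1).mean()
        print(f"step {step:3d}  loss {loss.item():.3f}  "
              f"attention entropy {ent.item():.3f}")
```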