
    Magnetic miniband and magnetotransport property of a graphene superlattice

    The eigenenergies and the conductivity of a graphene sheet subject to a one-dimensional cosinusoidal potential in the presence of a magnetic field are calculated. Such a graphene superlattice presents three distinct magnetic miniband structures as the magnetic field increases: a triply degenerate Landau level spectrum, nondegenerate minibands with finite dispersion, and the same Landau level spectrum as that of pristine graphene. The ratio of the magnetic length to the period of the potential is the characteristic quantity that determines the electronic structure of the superlattice. Corresponding to these distinct electronic structures, the diagonal conductivity is strongly anisotropic in the weak and moderate magnetic field regimes, but the predominant magnetotransport orientation changes from the transverse to the longitudinal direction of the superlattice. More interestingly, in the weak magnetic field regime the superlattice exhibits the half-integer quantum Hall effect, but with large jumps between the Hall plateaux, and thus differs from that of pristine graphene. Comment: 7 pages, 5 figures
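
    For orientation, the two length scales compared in this abstract can be written out explicitly; the potential amplitude V_0 and period L are generic symbols chosen here for illustration and are not fixed by the abstract:

        V(x) = V_0 \cos\!\left(\frac{2\pi x}{L}\right), \qquad
        \ell_B = \sqrt{\frac{\hbar}{eB}}, \qquad
        \frac{\ell_B}{L} \propto \frac{1}{L\sqrt{B}}

    Since the magnetic length \ell_B shrinks as the field grows, sweeping B drives the ratio \ell_B / L through the three miniband regimes described above.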

    Multi-resolution Tensor Learning for Large-Scale Spatial Data

    High-dimensional tensor models are notoriously computationally expensive to train. We present a meta-learning algorithm, MMT, that can significantly speed up the process for spatial tensor models. MMT leverages the property that spatial data can be viewed at multiple resolutions, which are related by coarsening and fine-graining from one resolution to another. Using this property, MMT learns a tensor model by starting from a coarse resolution and iteratively increasing the model complexity. To avoid "over-training" on coarse-resolution models, we investigate an information-theoretic fine-graining criterion to decide when to transition to higher-resolution models. We provide both theoretical and empirical evidence for the advantages of this approach. When applied to two real-world large-scale spatial datasets for basketball player and animal behavior modeling, our approach demonstrates three key benefits: 1) it efficiently captures higher-order interactions (i.e., tensor latent factors), 2) it is orders of magnitude faster than fixed-resolution learning and scales to very fine-grained spatial resolutions, and 3) it reliably yields accurate and interpretable models.
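
    As a rough illustration of the coarse-to-fine loop described above, here is a minimal NumPy sketch, not the authors' MMT implementation: the average-pooling coarsening, nearest-neighbour fine-graining, quadratic toy loss, and stall-based transition test are stand-ins for the paper's model and information-theoretic criterion.

    import numpy as np

    def coarsen(grid, factor):
        """Average-pool a 2-D spatial grid by `factor` along each axis."""
        h, w = grid.shape
        return grid.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def finegrain(params, factor=2):
        """Upsample learned spatial parameters to the next resolution (nearest neighbour)."""
        return np.repeat(np.repeat(params, factor, axis=0), factor, axis=1)

    def train_at_resolution(data, params, max_steps=200, lr=0.1, tol=1e-6):
        """Gradient descent on a toy quadratic loss; stop when the improvement stalls."""
        prev_loss = np.inf
        for _ in range(max_steps):
            params = params - lr * (params - data)      # gradient of 0.5 * ||params - data||^2
            loss = float(np.mean((params - data) ** 2))
            if prev_loss - loss < tol:                  # improvement stalled: time to go finer
                break
            prev_loss = loss
        return params

    data_fine = np.random.default_rng(0).random((32, 32))   # finest-resolution "spatial data"
    params = np.zeros((8, 8))                               # start with the coarsest model
    for res in (8, 16, 32):
        data = coarsen(data_fine, 32 // res) if res < 32 else data_fine
        params = train_at_resolution(data, params)
        if res < 32:
            params = finegrain(params)                      # lift the model to the next resolution
    print(params.shape)                                     # (32, 32)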

    Generating Long-term Trajectories Using Deep Hierarchical Networks

    We study the problem of modeling spatiotemporal trajectories over long time horizons using expert demonstrations. For instance, in sports, agents often choose action sequences with long-term goals in mind, such as achieving a certain strategic position. Conventional policy learning approaches, such as those based on Markov decision processes, generally fail at learning cohesive long-term behavior in such high-dimensional state spaces, and are only effective when myopic modeling leads to the desired behavior. The key difficulty is that conventional approaches are "shallow" models that only learn a single state-action policy. We instead propose a hierarchical policy class that automatically reasons about both long-term and short-term goals, which we instantiate as a hierarchical neural network. We showcase our approach in a case study on learning to imitate demonstrated basketball trajectories, and show that it generates significantly more realistic trajectories compared to non-hierarchical baselines as judged by professional sports analysts. Comment: Published in NIPS 2016
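
    A toy sketch of the two-level structure described above: a macro policy picks a long-term goal, and a micro policy picks the next step conditioned on that goal. The names macro_policy and micro_policy and the hand-coded rules are illustrative placeholders, not the paper's learned hierarchical network.

    import numpy as np

    rng = np.random.default_rng(0)
    GOALS = np.array([[5.0, 5.0], [47.0, 25.0], [88.0, 45.0]])   # hypothetical court locations

    def macro_policy(state):
        """Pick a long-term goal (here simply the farthest location, as a stand-in)."""
        return GOALS[np.argmax(np.linalg.norm(GOALS - state, axis=1))]

    def micro_policy(state, goal, step=1.0):
        """Take one noisy step toward the current goal."""
        direction = goal - state
        direction = direction / (np.linalg.norm(direction) + 1e-8)
        return state + step * direction + 0.1 * rng.normal(size=2)

    state = np.array([10.0, 10.0])
    trajectory = [state]
    for _ in range(100):
        goal = macro_policy(state)            # long-term intent, re-evaluated every step
        state = micro_policy(state, goal)     # short-term action conditioned on that intent
        trajectory.append(state)
    print(np.round(trajectory[-1], 1))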

    Tunable pure spin currents in a triple-quantum-dot ring

    Electron transport through a triple-quantum-dot ring with three terminals is studied theoretically. By introducing a local Rashba spin-orbit interaction on an individual quantum dot, we calculate the charge and spin currents in one lead. We find that a pure spin current appears in the absence of a magnetic field. The polarization direction of the spin current can be inverted by altering the bias voltage. In addition, by tuning the magnetic field strength, the charge and spin currents reach their respective peaks alternately. Comment: 5 pages, 2 figures
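
    For reference, writing the spin-resolved currents in the lead as I_\uparrow and I_\downarrow, the charge and spin currents discussed above are conventionally defined as (prefactor conventions vary in the literature):

        I_c = I_\uparrow + I_\downarrow, \qquad
        I_s = \frac{\hbar}{2e}\,\bigl(I_\uparrow - I_\downarrow\bigr)

    A pure spin current then corresponds to I_\uparrow = -I_\downarrow, i.e. I_c = 0 while I_s \neq 0.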

    Generalized transfer matrix theory on electronic transport through graphene waveguide

    In the effective-mass approximation, the electronic properties of graphene can be described by the relativistic Dirac equation. Within such a continuum model we investigate electronic transport through graphene waveguides formed by connecting multiple segments of armchair-edged graphene nanoribbons of different widths. By using appropriate wavefunction matching conditions at the junction interfaces, we generalize the conventional transfer matrix approach to formulate the linear conductance of the graphene waveguide in terms of the structure parameters and the incident electron energy. In comparison with tight-binding calculations, we find that the generalized transfer matrix method works well in calculating the conductance spectrum of a graphene waveguide, even for complicated structures of relatively large size. The calculated conductance spectrum indicates that the graphene waveguide exhibits a well-defined insulating band around the Dirac point, even though all the constituent ribbon segments are gapless. We attribute the occurrence of the insulating band to an antiresonance effect that is intimately associated with the edge states localized at the shoulder regions of the junctions. Furthermore, such an insulating band can be shifted sensitively by a gate voltage, which suggests a device application of the graphene waveguide as an electric nanoswitch. Comment: 11 pages, 5 figures
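
    To illustrate the generic structure of a transfer-matrix calculation (one matrix per interface, cascaded by matrix multiplication, with the transmission read off and inserted into a Landauer conductance), here is a minimal sketch for 1-D Schrodinger plane waves. The Dirac-equation matching conditions actually used for graphene waveguides differ, and the wavevectors and interface positions below are toy values.

    import numpy as np

    def interface_matrix(k_left, k_right, x0):
        """Transfer matrix relating plane-wave amplitudes (A, B) left of x0 to (C, D) right of it."""
        m_left = np.array([[np.exp(1j * k_left * x0), np.exp(-1j * k_left * x0)],
                           [k_left * np.exp(1j * k_left * x0), -k_left * np.exp(-1j * k_left * x0)]])
        m_right = np.array([[np.exp(1j * k_right * x0), np.exp(-1j * k_right * x0)],
                            [k_right * np.exp(1j * k_right * x0), -k_right * np.exp(-1j * k_right * x0)]])
        return np.linalg.solve(m_right, m_left)   # continuity of the wavefunction and its derivative

    def transmission(ks, xs):
        """Cascade the interface matrices for segment wavevectors ks across interfaces xs."""
        M = np.eye(2, dtype=complex)
        for (k_l, k_r), x0 in zip(zip(ks[:-1], ks[1:]), xs):
            M = interface_matrix(k_l, k_r, x0) @ M
        t = np.linalg.det(M) / M[1, 1]            # transmission amplitude for a unit incoming wave
        return (ks[-1] / ks[0]) * abs(t) ** 2     # flux-normalised transmission probability

    ks = [1.0, 0.6, 1.0]      # wavevectors in three consecutive segments (toy values)
    xs = [0.0, 2.0]           # interface positions
    T = transmission(ks, xs)
    G = 2.0 * T               # Landauer conductance, in units of e^2/h, for one spin-degenerate mode
    print(round(T, 3), round(G, 3))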

    Long-term Forecasting using Tensor-Train RNNs

    We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations, and sensitivity to error propagation. Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher-order moments and higher-order state transition functions. Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance. We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs; such guarantees are not available for standard RNNs. We also demonstrate significant long-term prediction improvements over standard RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well as on real-world climate and traffic data.
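
    A small NumPy sketch of the tensor-train (TT) format that the abstract relies on for compressing the higher-order transition structure: a d-way tensor of shape (n_1, ..., n_d) is stored as d three-way cores of shapes (r_{k-1}, n_k, r_k) with r_0 = r_d = 1. The shape and ranks below are illustrative, not the paper's configuration.

    import numpy as np

    def tt_reconstruct(cores):
        """Contract the TT cores back into the full tensor (for checking/illustration only)."""
        full = cores[0]                                   # shape (1, n1, r1)
        for core in cores[1:]:
            full = np.tensordot(full, core, axes=([-1], [0]))
        return full[0, ..., 0]                            # drop the boundary ranks r0 = rd = 1

    shape, ranks = (8, 8, 8, 8), (1, 4, 4, 4, 1)
    rng = np.random.default_rng(0)
    cores = [rng.normal(size=(ranks[k], shape[k], ranks[k + 1])) for k in range(len(shape))]

    full = tt_reconstruct(cores)
    print(full.shape)                                     # (8, 8, 8, 8): 4096 entries in the full tensor
    print(sum(core.size for core in cores))               # 320 parameters in the TT cores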

    Generative Multi-Agent Behavioral Cloning

    We propose and study the problem of generative multi-agent behavioral cloning, where the goal is to learn a generative, i.e., non-deterministic, multi-agent policy from pre-collected demonstration data. Building upon advances in deep generative models, we present a hierarchical policy framework that can tractably learn complex mappings from input states to distributions over multi-agent action spaces by introducing a hierarchy with macro-intent variables that encode long-term intent. In addition to synthetic settings, we show how to instantiate our framework to effectively model complex interactions between basketball players and generate realistic multi-agent trajectories of basketball gameplay over long time periods. We validate our approach using both quantitative and qualitative evaluations, including a user study conducted with professional sports analysts.
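
    A toy sketch of the generative, hierarchical sampling structure described above: draw a macro-intent per agent, then repeatedly sample noisy (non-deterministic) actions conditioned on state and intent. The Gaussian placeholders stand in for the paper's learned deep generative model.

    import numpy as np

    rng = np.random.default_rng(1)
    n_agents, horizon = 5, 20
    intents = rng.uniform(0, 50, size=(n_agents, 2))      # sampled macro-intents (long-term targets)
    states = rng.uniform(0, 50, size=(n_agents, 2))       # initial agent positions

    rollout = [states.copy()]
    for _ in range(horizon):
        # Non-deterministic micro step: drift toward each agent's intent plus Gaussian noise.
        actions = 0.1 * (intents - states) + rng.normal(scale=0.3, size=states.shape)
        states = states + actions
        rollout.append(states.copy())
    print(np.round(rollout[-1], 1))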