10,948 research outputs found

    Resilient Modulus Characterization of Alaskan Granular Base Materials

    Get PDF
    INE/AUC 11.0

    Generation of Two-Flavor Vortex Atom Laser from a Five-State Medium

    Full text link
    A two-flavor atom laser in a vortex state is obtained and analyzed via the electromagnetically induced transparency (EIT) technique in a five-level M-type system, using two probe lights carrying ±z-directional orbital angular momentum ±lℏ, respectively. Combined with the original technique of transferring quantum states from light to matter waves, the present result can be extended to generate a continuous two-flavor vortex atom laser with non-classical atoms. Comment: 5 pages, 1 figure; the previous version (v2) is wrong; this is the published version
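    As a reminder of the notation (a generic sketch, not taken from the paper, which only quotes the orbital angular momentum values), a probe beam with a helical phase front can be written as

    \[
      E_\pm(r,\phi,z) \;\propto\; A(r,z)\, e^{\pm i l \phi}\, e^{i k z},
    \]

    and carries orbital angular momentum ±lℏ per photon along the propagation (z) axis; the radial mode profile A(r,z) used in the paper is not stated in this abstract.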

    Financial Impact of Fines in the Unbound Pavement Layers

    Get PDF
    INE/AUTC 14.1

    Dynamics of Domain Wall in a Biaxial Ferromagnet With Spin-torque

    Full text link
    In this paper, the dynamics of a domain wall (DW) in a biaxial ferromagnet interacting with a spin-polarized current are described by a sine-Gordon (SG) equation coupled with a Gilbert damping term. Within the framework of this model, we obtain a threshold current for the motion of a single DW by applying perturbation theory to the kink-soliton solution of the corresponding ferromagnetic system, and the threshold is shown to depend on the Gilbert damping. The motion of the DW is also discussed for the zero- and nonzero-damping cases, which shows that our theory for describing the dynamics of the DW is self-consistent. Comment: 7 pages, 3 figures
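    As a rough guide to the kind of model meant here (the exact form is not given in the abstract, so the dimensionless units, the damping coefficient α, and the drive term f_j below are assumptions rather than the paper's equation), a damped, current-driven sine-Gordon equation for the wall angle φ(x,t) and its unperturbed kink solution read

    \[
      \varphi_{tt} - \varphi_{xx} + \sin\varphi = -\,\alpha\,\varphi_t + f_j,
      \qquad
      \varphi_K(x,t) = 4\arctan\exp\!\Big(\frac{x - vt}{\sqrt{1 - v^2}}\Big),
    \]

    where α plays the role of the Gilbert damping and f_j stands for the spin-torque drive from the spin-polarized current; the perturbation theory mentioned above is carried out about the kink (domain-wall) solution φ_K.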

    Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks

    Full text link
    Taking a photo outdoors, can we predict the immediate future, e.g., how the clouds will move across the sky? We address this problem with a generative adversarial network (GAN) based two-stage approach to generating realistic, high-resolution time-lapse videos. Given the first frame, our model learns to generate long-term future frames. The first stage generates videos with realistic content for each frame. The second stage refines the video from the first stage by enforcing it to be closer to real videos with regard to motion dynamics. To further encourage vivid motion in the final generated video, a Gram matrix is employed to model the motion more precisely. We build a large-scale time-lapse dataset and test our approach on it. Using our model, we are able to generate realistic videos of up to 128×128 resolution for 32 frames. Quantitative and qualitative experimental results demonstrate the superiority of our model over state-of-the-art models. Comment: To appear in Proceedings of CVPR 2018
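    The Gram matrix mentioned above is a standard way of summarizing second-order feature statistics; a minimal, generic sketch of a Gram-matrix loss between feature maps (function names, shapes, and normalization are illustrative assumptions, not the paper's implementation) is:

import numpy as np

def gram_matrix(features):
    # features: (C, H, W) channel-first activations of one frame.
    # Returns a (C, C) matrix of channel-wise inner products,
    # normalized so the scale does not depend on the map size.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def gram_loss(generated_feats, real_feats):
    # Mean squared difference between the Gram matrices of generated
    # and real feature maps; matching these statistics is one way to
    # push generated motion/texture toward that of real videos.
    diff = gram_matrix(generated_feats) - gram_matrix(real_feats)
    return float(np.mean(diff ** 2))

# Usage sketch with random stand-in features (64 channels, 32x32 maps).
fake = np.random.randn(64, 32, 32).astype(np.float32)
real = np.random.randn(64, 32, 32).astype(np.float32)
print(gram_loss(fake, real))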