
    Magnetization Oscillation of a Spinor Condensate Induced by Magnetic Field Gradient

    We study the spin-mixing dynamics of ultracold spin-1 atoms in a weak non-uniform magnetic field with field gradient $G$, which can flip the spin from +1 to -1 so that the magnetization $m=\rho_{+}-\rho_{-}$ is no longer a constant of motion. The dynamics of the $m_F=0$ Zeeman component $\rho_{0}$, as well as of the system magnetization $m$, are illustrated for both ferromagnetic and polar interactions in mean-field theory. We find that the magnetization dynamics can be tuned between a Josephson-like oscillation, similar to that of a double well, and an interesting self-trapping regime, in which the spin-mixing dynamics sustains a spontaneous magnetization. Meanwhile, the dynamics of $\rho_0$ may be strongly suppressed for an initially imbalanced number distribution in the polar interaction case. A "beat-frequency" oscillation of the magnetization emerges for a balanced initial distribution with polar interaction, and vanishes for ferromagnetic interaction. Comment: 6 pages, 5 figures, accepted by Phys. Rev. A
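The Josephson-like oscillation vs. self-trapping dichotomy can be illustrated with the standard two-mode (double-well) equations from which the abstract draws its analogy; this is a minimal numerical sketch, with the imbalance z standing in for the magnetization and all parameter values chosen purely for illustration, not taken from the paper:

```python
import numpy as np

def simulate(z0, phi0, Lambda, dt=1e-3, steps=20000):
    """Euler integration of the two-mode Josephson equations
        dz/dt   = -sqrt(1 - z^2) * sin(phi)
        dphi/dt = Lambda*z + z/sqrt(1 - z^2) * cos(phi)
    where z is the population imbalance (analogue of the magnetization)
    and Lambda the dimensionless interaction strength."""
    z, phi = z0, phi0
    zs = []
    for _ in range(steps):
        dz = -np.sqrt(max(1.0 - z**2, 0.0)) * np.sin(phi)
        dphi = Lambda * z + z / np.sqrt(max(1.0 - z**2, 1e-12)) * np.cos(phi)
        z += dt * dz
        phi += dt * dphi
        zs.append(z)
    return np.array(zs)

# Weak interaction: Josephson-like oscillation, z swings through zero.
osc = simulate(z0=0.6, phi0=0.0, Lambda=1.0)
# Strong interaction: self-trapped regime, z keeps its initial sign
# (the analogue of a sustained spontaneous magnetization).
trapped = simulate(z0=0.6, phi0=0.0, Lambda=20.0)
print(osc.min() < 0, trapped.min() > 0)
```

The crossover between the two regimes is set by whether the conserved energy at the initial condition exceeds the maximum potential energy at z = 0.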

    Quantum tunneling of magnetization in dipolar spin-1 condensates under external fields

    We study the macroscopic quantum tunneling of the magnetization of an F=1 spinor condensate interacting through the dipole-dipole interaction, with an external magnetic field applied along the longitudinal or transverse direction. We show that the ground-state energy and the effective magnetic moment of the system exhibit an interesting macroscopic quantum oscillation phenomenon, originating from the oscillating dependence of the thermodynamic properties of the system on the vacuum angle. Tunneling between the two degenerate minima is analyzed by means of an effective-potential method and the periodic-instanton method. Comment: 2 figures, accepted PR

    Vid2Act: Activate Offline Videos for Visual RL

    Pretraining RL models on offline video datasets is a promising way to improve their training efficiency in online tasks, but it is challenging due to the inherent mismatch in tasks, dynamics, and behaviors across domains. A recent model, APV, sidesteps the action records that accompany offline datasets and instead pretrains a task-irrelevant, action-free world model within the source domains. We present Vid2Act, a model-based RL method that learns to transfer valuable action-conditioned dynamics and potentially useful action demonstrations from offline to online settings. The main idea is to use world models not only as simulators for behavior learning but also as tools to measure domain relevance for both dynamics representation transfer and policy transfer. Specifically, we train the world models to generate a set of time-varying task similarities using a domain-selective knowledge distillation loss. These similarities serve two purposes: (i) adaptively transferring the most useful source knowledge to facilitate dynamics learning, and (ii) learning to replay the most relevant source actions to guide the target policy. We demonstrate the advantages of Vid2Act over the action-free visual RL pretraining method on both Meta-World and the DeepMind Control Suite.
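As a rough illustration of the time-varying task similarities described above, the sketch below turns per-source-domain distillation losses into normalized weights; the softmax form, function names, and numbers are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def domain_similarity_weights(distill_losses, temperature=1.0):
    """Map per-source-domain distillation losses to similarity weights:
    a lower loss (better match to the target dynamics) yields a higher
    weight. Softmax over negative losses, recomputed at each time step
    to make the weights time-varying."""
    x = -np.asarray(distill_losses, dtype=float) / temperature
    x -= x.max()                      # subtract max for numerical stability
    w = np.exp(x)
    return w / w.sum()

# Three hypothetical source domains; the second matches the target best.
w = domain_similarity_weights([2.0, 0.5, 3.0])
print(w.argmax())  # -> 1
```

Such weights could then scale both the dynamics-distillation terms and the replay probability of source actions, matching the two purposes listed in the abstract.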

    (E)-2-[(4-Chloro-1,3-dimethyl-1H-pyrazol-5-yl)methyleneamino]benzamide

    In the title compound, C13H13ClN4O, the dihedral angle between the aromatic rings is 33.47 (9)° and an intramolecular N—H⋯N hydrogen bond generates an S(6) ring. In the crystal, inversion dimers linked by pairs of N—H⋯O hydrogen bonds occur, resulting in R₂²(8) loops.

    Unsupervised Object-Centric Voxelization for Dynamic Scene Understanding

    Understanding the compositional dynamics of multiple objects in unsupervised visual environments is challenging, and existing object-centric representation learning methods often ignore 3D consistency in scene decomposition. We propose DynaVol, an inverse-graphics approach that learns object-centric volumetric representations in a neural rendering framework. DynaVol maintains time-varying 3D voxel grids that explicitly represent the probability of each spatial location belonging to different objects, and decouples temporal dynamics from spatial information by learning a canonical-space deformation field. To optimize the volumetric features, we embed them into a fully differentiable neural network, binding them to object-centric global features and then driving a compositional NeRF for scene reconstruction. DynaVol outperforms existing methods in novel view synthesis and unsupervised scene decomposition, and allows for the editing of dynamic scenes, such as adding, deleting, or replacing objects and modifying their trajectories.
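The per-location object probabilities of the voxel grids can be caricatured as follows; this is a minimal sketch assuming a softmax parameterization over K hypothetical object slots, which may differ from DynaVol's actual parameterization:

```python
import numpy as np

def object_occupancy(logits):
    """Normalize a (X, Y, Z, K) array of logits so that each spatial
    location holds a probability distribution over K object slots,
    i.e. the probability of that voxel belonging to each object."""
    x = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(x)
    return p / p.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
probs = object_occupancy(rng.normal(size=(4, 4, 4, 3)))
print(np.allclose(probs.sum(axis=-1), 1.0))  # each voxel sums to 1
```

A time index over such grids, warped by a canonical-space deformation field, would give the time-varying representation the abstract describes.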

    Model-Based Reinforcement Learning with Isolated Imaginations

    World models learn the consequences of actions in vision-based interactive systems. However, in practical scenarios such as autonomous driving, noncontrollable dynamics that are independent of, or only sparsely dependent on, the action signals often exist, making it challenging to learn effective world models. To address this issue, we propose Iso-Dream++, a model-based reinforcement learning approach with two main contributions. First, we optimize the inverse dynamics to encourage the world model to isolate controllable state transitions from the mixed spatiotemporal variations of the environment. Second, we perform policy optimization based on the decoupled latent imaginations, where we roll out noncontrollable states into the future and adaptively associate them with the current controllable state. This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild, such as self-driving cars that can anticipate the movement of other vehicles and thereby avoid potential risks. On top of our previous work, we further consider the sparse dependencies between controllable and noncontrollable states, address the training collapse problem of state decoupling, and validate our approach in transfer learning setups. Our empirical study demonstrates that Iso-Dream++ significantly outperforms existing reinforcement learning models on CARLA and DeepMind Control. Comment: arXiv admin note: substantial text overlap with arXiv:2205.1381
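The decoupled rollout of controllable and noncontrollable branches can be sketched with a toy linear model; the matrices, the sparse-coupling term, and all names here are assumptions for illustration, not Iso-Dream++'s architecture:

```python
import numpy as np

def rollout(s_ctrl, s_free, actions, A, B, C):
    """Decoupled latent imagination: the noncontrollable branch s_free
    evolves action-free (s_free' = C @ s_free), while the controllable
    branch is conditioned on actions and, sparsely, on the current
    noncontrollable state."""
    traj = []
    for a in actions:
        s_free = C @ s_free                          # action-free rollout
        s_ctrl = A @ s_ctrl + B @ a + 0.1 * s_free   # sparse dependence
        traj.append((s_ctrl.copy(), s_free.copy()))
    return traj

d = 2
traj = rollout(np.zeros(d), np.ones(d),
               actions=[np.ones(d)] * 3,
               A=0.5 * np.eye(d), B=np.eye(d), C=0.9 * np.eye(d))
print(len(traj))  # 3 imagined steps
```

The point of the separation is that the noncontrollable future (e.g. other vehicles) can be imagined once, independently of the actions being optimized.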

    Collaborative World Models: An Online-Offline Transfer RL Approach

    Training visual reinforcement learning (RL) models on offline datasets is challenging due to overfitting issues in representation learning and overestimation problems in the value function. In this paper, we propose a transfer learning method called Collaborative World Models (CoWorld) to improve the performance of visual RL under offline conditions. The core idea is to use an easy-to-interact, off-the-shelf simulator to train an auxiliary RL model as an online "test bed" for the offline policy learned in the target domain, which provides a flexible constraint for the value function: intuitively, we want to mitigate the overestimation problem of value functions outside the offline data distribution without impeding the exploration of actions with potential advantages. Specifically, CoWorld performs domain-collaborative representation learning to bridge the gap between the online and offline hidden state distributions. Furthermore, it performs domain-collaborative behavior learning that enables the source RL agent to provide target-aware value estimation, allowing for effective offline policy regularization. Experiments show that CoWorld significantly outperforms existing methods in offline visual control tasks on DeepMind Control and Meta-World.
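The flexible value-function constraint described above can be caricatured in a few lines; the blending form, names, and numbers below are assumptions for illustration, not CoWorld's exact objective:

```python
import numpy as np

def regularized_target(q_target, q_source, reward, gamma=0.99, alpha=0.5):
    """Blend the naive Bellman target with a cap given by the source
    ("test bed") agent's target-aware value estimate, so that inflated
    offline values are pulled down without being clamped outright."""
    bellman = reward + gamma * q_target
    return (1 - alpha) * bellman + alpha * np.minimum(bellman, q_source)

# An inflated offline estimate (q_target=10) is softened by the source value.
t = regularized_target(q_target=10.0, q_source=3.0, reward=1.0)
print(t < 1.0 + 0.99 * 10.0)  # regularized below the naive Bellman target
```

Using a soft blend rather than a hard minimum reflects the abstract's aim of curbing overestimation without fully suppressing actions with potential advantages.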