
    Curvature and torsion in growing actin networks

    Intracellular pathogens such as Listeria monocytogenes and Rickettsia rickettsii move within a host cell by polymerizing a comet-tail of actin fibers that ultimately pushes the cell forward. This dense network of cross-linked actin polymers typically exhibits a striking curvature that causes bacteria to move in gently looping paths. Theoretically, tail curvature has been linked to details of motility by considering force and torque balances from a finite number of polymerizing filaments. Here we track beads coated with a prokaryotic activator of actin polymerization in three dimensions to directly quantify the curvature and torsion of bead motility paths. We find that bead paths are more likely to have low rather than high curvature at any given time. Furthermore, path curvature changes very slowly in time, with an autocorrelation decay time of 200 seconds. Paths with a small radius of curvature therefore remain so for an extended period, resulting in loops when confined to two dimensions. When allowed to explore a 3D space, path loops are less evident. Finally, we quantify the torsion in the bead paths and show that beads do not exhibit a significant left- or right-handed bias to their motion in 3D. These results suggest that paths of actin-propelled objects may be attributed to slow changes in curvature rather than a fixed torque.
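
    The path descriptors used here, curvature and torsion, can be estimated from a sampled 3D trajectory with the discrete Frenet-Serret formulas, kappa = |r' x r''| / |r'|^3 and tau = (r' x r'') . r''' / |r' x r''|^2. The sketch below is a generic illustration of that computation (not the authors' analysis code), assuming evenly sampled bead positions; the helix at the end is a hypothetical test case whose curvature and torsion are known in closed form.

        import numpy as np

        def curvature_torsion(r, dt):
            """Discrete Frenet-Serret curvature and torsion for a 3D path.

            r  : (T, 3) array of positions sampled at a fixed interval dt.
            Returns (kappa, tau), one value per sample; the sign of tau
            distinguishes right-handed (positive) from left-handed motion.
            """
            d1 = np.gradient(r, dt, axis=0)           # r'
            d2 = np.gradient(d1, dt, axis=0)          # r''
            d3 = np.gradient(d2, dt, axis=0)          # r'''
            cross = np.cross(d1, d2)
            speed = np.linalg.norm(d1, axis=1)
            cross_norm = np.linalg.norm(cross, axis=1)
            kappa = cross_norm / speed**3
            tau = np.einsum('ij,ij->i', cross, d3) / cross_norm**2
            return kappa, tau

        # Sanity check on a right-handed helix: kappa = R/(R^2 + c^2) and
        # tau = c/(R^2 + c^2), both constant along the path.
        t = np.linspace(0, 10 * np.pi, 2000)
        R, c = 2.0, 0.5
        helix = np.column_stack([R * np.cos(t), R * np.sin(t), c * t])
        kappa, tau = curvature_torsion(helix, t[1] - t[0])
        print(kappa[1000], R / (R**2 + c**2))   # ~0.471
        print(tau[1000],   c / (R**2 + c**2))   # ~0.118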

    Practical application of pseudospectral optimization to robot path planning

    To obtain minimum time or minimum energy trajectories for robots it is necessary to employ planning methods which adequately consider the platform's dynamic properties. A variety of sampling, graph-based or local receding-horizon optimisation methods have previously been proposed. These typically use simplified kino-dynamic models to avoid the significant computational burden of solving this problem in a high-dimensional state-space. In this paper we investigate solutions from the class of pseudospectral optimisation methods which have grown in favour amongst the optimal control community in recent years. These methods have high computational efficiency and rapid convergence properties. We present a practical application of such an approach to the robot path planning problem to provide a trajectory considering the robot's dynamic properties. We extend the existing literature by augmenting the path constraints with sensed obstacles rather than predefined analytical functions to enable real-world application.
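
    In a pseudospectral transcription, states and controls are collocated at Chebyshev-Gauss-Lobatto (or Legendre) nodes on the mapped interval [-1, 1], and the time derivative is replaced by a dense differentiation matrix D, so the dynamics x' = f(x, u) over [0, tf] become the algebraic constraints D X = (tf / 2) f(X, U) handed to an NLP solver along with the path (e.g. obstacle) constraints. The sketch below only illustrates that standard building block and its spectral accuracy; it is not the planner from the paper, and the test function is a hypothetical example.

        import numpy as np

        def cheb(N):
            """Chebyshev-Gauss-Lobatto nodes and differentiation matrix.

            Returns nodes x_k = cos(pi*k/N), k = 0..N (ordered from +1 down
            to -1), and the (N+1)x(N+1) matrix D with (D @ f) ~ f' at the nodes.
            """
            if N == 0:
                return np.array([1.0]), np.zeros((1, 1))
            k = np.arange(N + 1)
            x = np.cos(np.pi * k / N)
            c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** k
            X = np.tile(x, (N + 1, 1)).T
            dX = X - X.T
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
            D -= np.diag(D.sum(axis=1))       # negative-sum trick for the diagonal
            return x, D

        # Spectral accuracy on a smooth function: with only 24 nodes the
        # collocation derivative of exp(x)*cos(5x) matches the exact derivative
        # to near machine precision.
        x, D = cheb(24)
        f = np.exp(x) * np.cos(5 * x)
        df_exact = np.exp(x) * (np.cos(5 * x) - 5 * np.sin(5 * x))
        print(np.max(np.abs(D @ f - df_exact)))

        # In the optimal-control setting, D applied to the stacked state values
        # enforces x' = f(x, u) at every node, turning the trajectory problem
        # into a finite-dimensional nonlinear program over node values and tf.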

    Embodied Question Answering

    We present a new AI task -- Embodied Question Answering (EmbodiedQA) -- where an agent is spawned at a random location in a 3D environment and asked a question ("What color is the car?"). In order to answer, the agent must first intelligently navigate to explore the environment, gather information through first-person (egocentric) vision, and then answer the question ("orange"). This challenging task requires a range of AI skills -- active perception, language understanding, goal-driven navigation, commonsense reasoning, and grounding of language into actions. In this work, we develop the environments, end-to-end-trained reinforcement learning agents, and evaluation protocols for EmbodiedQA. Comment: 20 pages, 13 figures. Webpage: https://embodiedqa.org
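
    The task protocol is simple to state even though solving it is hard: the agent is dropped into an environment with a question, navigates under egocentric observations until it decides to stop, and then answers from what it has seen. The sketch below is a hypothetical, self-contained mock of that loop; none of the class or method names (DummyEnv, RandomNavigator, and so on) come from the released EmbodiedQA code, and trained navigation and answering modules would replace the stand-ins.

        import random

        ACTIONS = ["FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP"]

        class DummyEnv:
            """Stand-in 3D environment returning fake egocentric frames."""
            def reset(self, question):
                self.t = 0                      # agent spawned at a random location
                return {"rgb": f"frame-{self.t}"}
            def step(self, action):
                self.t += 1
                return {"rgb": f"frame-{self.t}"}

        class RandomNavigator:
            """Stand-in navigation policy (an end-to-end trained agent goes here)."""
            def act(self, frame, question):
                return random.choice(ACTIONS)

        class DummyAnswerer:
            """Stand-in question-answering head over the gathered frames."""
            def answer(self, frames, question):
                return "orange"

        def run_episode(env, nav, vqa, question, max_steps=100):
            obs = env.reset(question)
            frames = [obs["rgb"]]
            for _ in range(max_steps):
                action = nav.act(obs["rgb"], question)
                if action == "STOP":            # the agent decides it has seen enough
                    break
                obs = env.step(action)
                frames.append(obs["rgb"])
            return vqa.answer(frames, question)

        print(run_episode(DummyEnv(), RandomNavigator(), DummyAnswerer(),
                          "What color is the car?"))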

    PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation

    Vision-and-Language Navigation (VLN) requires the agent to follow language instructions to navigate through 3D environments. One main challenge in VLN is the limited availability of photorealistic training environments, which makes it hard to generalize to new and unseen environments. To address this problem, we propose PanoGen, a generation method that can potentially create an infinite number of diverse panoramic environments conditioned on text. Specifically, we collect room descriptions by captioning the room images in existing Matterport3D environments, and leverage a state-of-the-art text-to-image diffusion model to generate the new panoramic environments. We use recursive outpainting over the generated images to create consistent 360-degree panorama views. Our new panoramic environments share similar semantic information with the original environments by conditioning on text descriptions, which ensures the co-occurrence of objects in the panorama follows human intuition, and creates enough diversity in room appearance and layout with image outpainting. Lastly, we explore two ways of utilizing PanoGen in VLN pre-training and fine-tuning. We generate instructions for paths in our PanoGen environments with a speaker built on a pre-trained vision-and-language model for VLN pre-training, and augment the visual observation with our panoramic environments during agents' fine-tuning to avoid overfitting to seen environments. Empirically, learning with our PanoGen environments achieves the new state-of-the-art on the Room-to-Room, Room-for-Room, and CVDN datasets. Pre-training with our PanoGen speaker data is especially effective for CVDN, which has under-specified instructions and needs commonsense knowledge. Finally, we show that the agent can benefit from training with more generated panoramic environments, suggesting promising results for scaling up the PanoGen environments. Comment: Project Webpage: https://pano-gen.github.io
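
    The central generation step, growing a panorama from a captioned seed view by repeated text-conditioned outpainting, can be sketched with an off-the-shelf diffusion inpainting pipeline. The code below is a rough, hypothetical illustration rather than the PanoGen release: the checkpoint name, tile size, overlap, and flat (non-equirectangular) stitching are all assumptions, and a faithful reproduction should follow the project webpage above.

        # Hypothetical recursive-outpainting sketch using a generic Stable
        # Diffusion inpainting checkpoint (not the PanoGen codebase).
        import torch
        from PIL import Image
        from diffusers import StableDiffusionInpaintPipeline

        pipe = StableDiffusionInpaintPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
        ).to("cuda")

        def outpaint_strip(seed_tile, caption, n_steps=6, overlap=256, size=512):
            """Grow a panoramic strip to the right of a 512x512 seed view.

            Each step keeps `overlap` pixels of already-generated content on the
            left of a fresh canvas and asks the model to fill the masked right
            part, conditioned on the room caption.
            """
            panorama = seed_tile.copy()
            for _ in range(n_steps):
                canvas = Image.new("RGB", (size, size))
                canvas.paste(panorama.crop((panorama.width - overlap, 0,
                                            panorama.width, size)), (0, 0))
                mask = Image.new("L", (size, size), 255)      # white = regenerate
                mask.paste(0, (0, 0, overlap, size))          # black = keep overlap
                tile = pipe(prompt=caption, image=canvas, mask_image=mask).images[0]
                # Append only the newly generated part to the growing strip.
                grown = Image.new("RGB", (panorama.width + size - overlap, size))
                grown.paste(panorama, (0, 0))
                grown.paste(tile.crop((overlap, 0, size, size)), (panorama.width, 0))
                panorama = grown
            return panorama

        # seed_tile: a 512x512 room view; caption: its text description, e.g.
        # strip = outpaint_strip(seed_tile, "a bright living room with a gray sofa")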