
    Number of Forts in Iterated Logistic Mapping

    Using the theory of complete discrimination systems and the computer algebra system MAPLE V.17, we compute the number of forts of the logistic mapping f_λ(x) = λx(1-x) on [0,1], parameterized by λ ∈ (0,4]. We prove that if 0 < λ ≤ 2 then the number of forts does not increase under iteration, and that if λ > 2 then the number of forts is unbounded under iteration. Furthermore, we focus on the case λ > 2 and give, for each k = 1,…,7, some critical values of λ at which the number of forts changes.
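
    As a rough numerical illustration (distinct from the paper's exact symbolic computation with complete discrimination systems in MAPLE), the number of forts of an iterate f_λ^n can be estimated by counting sign changes of a finite-difference derivative on a fine grid. The function names, tolerance, and grid size below are assumptions of this sketch, not the authors' code.

        import numpy as np

        def logistic(x, lam):
            """Logistic map f_lambda(x) = lambda * x * (1 - x)."""
            return lam * x * (1.0 - x)

        def iterate(x, lam, n):
            """n-th iterate f_lambda^n evaluated elementwise on x."""
            for _ in range(n):
                x = logistic(x, lam)
            return x

        def count_forts(lam, n, grid=200001, tol=1e-12):
            """Estimate the number of forts (interior strict extrema) of f_lambda^n
            on [0, 1] by counting sign changes of the discrete derivative.
            This is a numerical approximation, not the exact symbolic count."""
            x = np.linspace(0.0, 1.0, grid)
            dy = np.diff(iterate(x, lam, n))
            dy = np.where(np.abs(dy) < tol, 0.0, dy)   # suppress flat-top rounding noise
            signs = np.sign(dy)
            signs = signs[signs != 0]
            return int(np.sum(signs[1:] != signs[:-1]))

        # For lam <= 2 the count should not increase under iteration;
        # for lam = 4 it grows like 2**n - 1.
        for lam in (1.5, 2.0, 3.2, 4.0):
            print(lam, [count_forts(lam, n) for n in range(1, 5)])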

    Motion-to-Matching: A Mixed Paradigm for 3D Single Object Tracking

    3D single object tracking with LiDAR points is an important task in computer vision. Previous methods usually adopt either matching-based or motion-centric paradigms to estimate the current target status. However, the former is sensitive to similar distractors and to the sparsity of point clouds because it relies on appearance matching, while the latter usually focuses on short-term motion cues (e.g., two frames) and ignores the long-term motion pattern of the target. To address these issues, we propose a two-stage mixed paradigm, named MTM-Tracker, which combines motion modeling with feature matching in a single network. Specifically, in the first stage, we exploit continuous historical boxes as a motion prior and propose an encoder-decoder structure to coarsely locate the target. Then, in the second stage, we introduce a feature interaction module to extract motion-aware features from consecutive point clouds and match them to refine the target movement as well as regress other target states. Extensive experiments validate that our paradigm achieves competitive performance on large-scale datasets (70.9% on KITTI and 51.70% on NuScenes). The code will be open-sourced at https://github.com/LeoZhiheng/MTM-Tracker.git. Comment: Accepted for publication in IEEE Robotics and Automation Letters (RAL).
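
    The two-stage design can be pictured with a minimal structural sketch. The module choices, feature dimensions, and box parameterization below are illustrative assumptions, not the released MTM-Tracker architecture.

        import torch
        import torch.nn as nn

        class TwoStageTrackerSketch(nn.Module):
            """Hypothetical motion-to-matching skeleton: stage 1 decodes a coarse
            box from the history of target boxes, stage 2 refines it from features
            of consecutive point clouds."""

            def __init__(self, box_dim=4, hidden=128):
                super().__init__()
                # Stage 1: motion prior from historical boxes.
                self.motion_encoder = nn.GRU(box_dim, hidden, batch_first=True)
                self.motion_decoder = nn.Linear(hidden, box_dim)
                # Stage 2: per-point features pooled per frame, then a refinement head.
                self.point_encoder = nn.Sequential(
                    nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
                self.refine_head = nn.Linear(2 * hidden, box_dim)

            def forward(self, history_boxes, prev_points, curr_points):
                # history_boxes: (B, T, box_dim); *_points: (B, N, 3)
                _, h = self.motion_encoder(history_boxes)
                coarse_box = history_boxes[:, -1] + self.motion_decoder(h[-1])
                prev_feat = self.point_encoder(prev_points).max(dim=1).values
                curr_feat = self.point_encoder(curr_points).max(dim=1).values
                refinement = self.refine_head(torch.cat([prev_feat, curr_feat], dim=-1))
                return coarse_box + refinement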

    Relations of blood lead levels to echocardiographic left ventricular structure and function in preschool children

    Lead (Pb) has been shown to exert adverse effects on the human cardiovascular system. However, the cardiotoxicity of Pb in children remains unclear. The aim of this study was to evaluate left ventricular (LV) structure and function using echocardiographic indices, in order to elucidate the effect of Pb on low-grade inflammation related to the left ventricle in healthy preschool children. We recruited a total of 486 preschool children, 310 from Guiyu (an e-waste-exposed area) and 176 from Haojiang (a reference area). Blood Pb levels, complete blood counts, and LV parameters were evaluated. Associations of blood Pb levels with LV parameters and peripheral leukocyte counts were analyzed using linear regression models. The median blood Pb level and the counts of white blood cells (WBCs), monocytes, and neutrophils were higher in the exposed group. In addition, the exposed group showed a smaller left ventricle (including the interventricular septum, LV posterior wall, and LV mass index) and impaired LV systolic function (including LV fractional shortening and LV ejection fraction) regardless of gender. After adjustment for confounding factors, elevated blood Pb levels were significantly associated with higher counts of WBCs and neutrophils and with lower levels of LV parameters. Furthermore, counts of WBCs, monocytes, and neutrophils were negatively correlated with LV parameters. Taken together, a smaller left ventricle and impaired systolic function were found in e-waste-exposed children and were associated with chronic low-grade inflammation and elevated blood Pb levels. This indicates that the heart health of e-waste-exposed children is at risk due to long-term environmental chemical insults.
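
    For illustration only, the adjusted association analysis described above can be sketched as an ordinary least squares model; the variable names and the confounder set (age, sex, BMI) are assumptions, not the study's actual covariates.

        import statsmodels.formula.api as smf

        def adjusted_association(df, outcome):
            """Regress one LV parameter (e.g. a hypothetical 'lv_mass_index' column)
            on blood Pb while adjusting for assumed confounders. The data frame df
            is expected to hold one row per child with the named columns."""
            model = smf.ols(f"{outcome} ~ blood_pb + age + C(sex) + bmi", data=df)
            result = model.fit()
            # Coefficient and p-value for the blood Pb term.
            return result.params["blood_pb"], result.pvalues["blood_pb"]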

    Distance-rank Aware Sequential Reward Learning for Inverse Reinforcement Learning with Sub-optimal Demonstrations

    Inverse reinforcement learning (IRL) aims to explicitly infer an underlying reward function from collected expert demonstrations. Because obtaining expert demonstrations can be costly, current IRL techniques focus on learning a better-than-demonstrator policy using a reward function derived from sub-optimal demonstrations. However, existing IRL algorithms primarily tackle the challenge of trajectory ranking ambiguity when learning the reward function. They overlook the degree of difference between trajectories in terms of their returns, which is essential for further removing reward ambiguity. Additionally, the reward of a single transition is heavily influenced by the context information within its trajectory. To address these issues, we introduce the Distance-rank Aware Sequential Reward Learning (DRASRL) framework. Unlike existing approaches, DRASRL takes into account both the ranking of trajectories and the degree of dissimilarity between them to collaboratively eliminate reward ambiguity when learning a sequence of contextually informed reward signals. Specifically, we leverage the distance between the policies from which the trajectories are generated as a measure of the degree of difference between trajectories. This distance-aware information is then used to infer embeddings in the representation space for reward learning, employing a contrastive learning technique. Meanwhile, we integrate a pairwise ranking loss function to incorporate ranking information into the latent features. Moreover, we resort to the Transformer architecture to capture the contextual dependencies within the trajectories in the latent space, leading to more accurate reward estimation. Through extensive experimentation, our DRASRL framework demonstrates significant performance improvements over previous state-of-the-art methods.
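
    A minimal sketch of the two loss ingredients named above, pairwise trajectory ranking and a distance-aware contrastive term, is given below; the exact formulations in DRASRL may differ, and thresholding the policy distance into positive/negative pairs is an assumption of this sketch.

        import torch
        import torch.nn.functional as F

        def pairwise_ranking_loss(returns_i, returns_j):
            """Bradley-Terry style ranking loss assuming trajectory j is ranked above i.
            returns_*: predicted returns (summed learned rewards) for a batch of
            trajectory pairs, shape (B,)."""
            logits = torch.stack([returns_i, returns_j], dim=1)        # (B, 2)
            labels = torch.ones(logits.size(0), dtype=torch.long,
                                device=logits.device)                  # index 1 preferred
            return F.cross_entropy(logits, labels)

        def distance_aware_contrastive_loss(emb_a, emb_b, policy_distance, margin=1.0):
            """Pull trajectory embeddings together when the generating policies are
            close and push them apart otherwise (a standard contrastive form)."""
            d = F.pairwise_distance(emb_a, emb_b)                      # (B,)
            positive = (policy_distance < margin).float()
            return (positive * d.pow(2) +
                    (1.0 - positive) * F.relu(margin - d).pow(2)).mean()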

    LivePhoto: Real Image Animation with Text-guided Motion Control

    Despite the recent progress in text-to-video generation, existing studies usually overlook the issue that only the spatial contents, but not the temporal motions, of synthesized videos are under the control of text. To address this challenge, this work presents a practical system, named LivePhoto, which allows users to animate an image of their interest with text descriptions. We first establish a strong baseline that helps a well-learned text-to-image generator (i.e., Stable Diffusion) take an image as a further input. We then equip the improved generator with a motion module for temporal modeling and propose a carefully designed training pipeline to better link texts and motions. In particular, considering that (1) text can only describe motions roughly (e.g., regardless of the moving speed) and (2) text may include both content and motion descriptions, we introduce a motion intensity estimation module as well as a text re-weighting module to reduce the ambiguity of the text-to-motion mapping. Empirical evidence suggests that our approach is capable of faithfully decoding motion-related textual instructions into videos, such as actions, camera movements, or even conjuring new contents from thin air (e.g., pouring water into an empty glass). Interestingly, thanks to the proposed intensity learning mechanism, our system offers users an additional control signal (i.e., the motion intensity) besides text for video customization. Comment: Project page: https://xavierchen34.github.io/LivePhoto-Page
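
    As a loose illustration of the intensity control idea, intermediate video features could be conditioned on a discretized motion-intensity level; the embedding scheme and injection point below are assumptions, not LivePhoto's actual design.

        import torch
        import torch.nn as nn

        class MotionIntensityConditioner(nn.Module):
            """Hypothetical module: embed a discrete motion-intensity level and add it
            to latent video features as an extra control signal besides text."""

            def __init__(self, num_levels=10, channels=320):
                super().__init__()
                self.embed = nn.Embedding(num_levels, channels)

            def forward(self, features, intensity_level):
                # features: (B, C, T, H, W); intensity_level: (B,) integer levels
                scale = self.embed(intensity_level)                  # (B, C)
                return features + scale[:, :, None, None, None]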

    Dual Contrastive Network for Sequential Recommendation with User and Item-Centric Perspectives

    With the explosion of streaming data today, sequential recommendation is a promising approach to time-aware personalized modeling. It aims to infer the next item a given user will interact with based on their history item sequence. Some recent works improve sequential recommendation by randomly masking items in the history sequence so as to generate self-supervised signals, but such an approach results in sparser item sequences and unreliable signals. Besides, existing sequential recommendation is only user-centric, i.e., it uses the historical items in chronological order to predict the probability of candidate items, ignoring whether the items from a provider can be successfully recommended. Such user-centric recommendation makes it impossible for providers to expose their new items and results in popularity bias. In this paper, we propose a novel Dual Contrastive Network (DCN) to generate ground-truth self-supervised signals for sequential recommendation via an auxiliary user sequence from an item-centric perspective. Specifically, we propose dual representation contrastive learning to refine representation learning by minimizing the Euclidean distance between the representation of a given user/item and the representations of their history items/users. Before the second contrastive learning module, we perform next-user prediction to capture the trends of items preferred by certain types of users and to provide personalized exploration opportunities for item providers. Finally, we further propose dual interest contrastive learning to self-supervise the dynamic interest from next item/user prediction and the static interest of matching probability. Experiments on four benchmark datasets verify the effectiveness of our proposed method. A further ablation study also illustrates the boosting effect of the proposed components on different sequential models. Comment: 23 pages
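
    The dual representation contrastive objective can be sketched as two symmetric Euclidean pulls; mean-pooling the history and using a plain squared distance are simplifying assumptions, not necessarily DCN's exact loss.

        import torch
        import torch.nn.functional as F

        def dual_representation_contrastive_loss(user_emb, hist_item_emb,
                                                 item_emb, hist_user_emb):
            """user_emb/item_emb: (B, D) embeddings of a user and an item;
            hist_item_emb: (B, L, D) items the user interacted with;
            hist_user_emb: (B, L, D) users who interacted with the item.
            Pull each entity toward the mean of its interaction history."""
            user_side = F.pairwise_distance(user_emb, hist_item_emb.mean(dim=1)).pow(2)
            item_side = F.pairwise_distance(item_emb, hist_user_emb.mean(dim=1)).pow(2)
            return (user_side + item_side).mean()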

    Cryo-EM Structure of Dodecameric Vps4p and Its 2:1 Complex with Vta1p

    The type I AAA (ATPase associated with a variety of cellular activities) ATPase Vps4 and its co-factor Vta1p/LIP5 function in membrane remodeling events that accompany cytokinesis, multivesicular body biogenesis, and retrovirus budding, apparently by driving disassembly and recycling of membrane-associated ESCRT (endosomal sorting complex required for transport)-III complexes. Here, we present electron cryomicroscopy reconstructions of dodecameric yeast Vps4p complexes with and without their microtubule interacting and transport (MIT) N-terminal domains and Vta1p co-factors. The ATPase domains of Vps4p form a bowl-like structure composed of stacked hexameric rings. The two rings adopt dramatically different conformations, with the “upper” ring forming an open assembly that defines the sides of the bowl and the lower ring forming a closed assembly that constitutes the bottom of the bowl. The N-terminal MIT domains of the upper ring localize on the symmetry axis above the cavity of the bowl, and the binding of six extended Vta1p monomers causes additional density to appear both above and below the bowl. The structures suggest models in which Vps4p MIT and Vta1p domains engage ESCRT-III substrates above the bowl and help transfer them into the bowl to be pumped through the center of the dodecameric assembly.

    CCM: Adding Conditional Controls to Text-to-Image Consistency Models

    Consistency Models (CMs) have shown promise in creating visual content efficiently and with high quality. However, how to add new conditional controls to pretrained CMs has not been explored. In this technical report, we consider alternative strategies for adding ControlNet-like conditional control to CMs and present three significant findings. 1) A ControlNet trained for diffusion models (DMs) can be directly applied to CMs for high-level semantic controls, but it struggles with low-level detail and realism control. 2) CMs serve as an independent class of generative models, on which a ControlNet can be trained from scratch using the Consistency Training proposed by Song et al. 3) A lightweight adapter can be jointly optimized under multiple conditions through Consistency Training, allowing for the swift transfer of a DM-based ControlNet to CMs. We study these three solutions across various conditional controls, including edge, depth, human pose, low-resolution image, and masked image, with text-to-image latent consistency models. Comment: Project Page: https://swiftforce.github.io/CC
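
    Finding (3) can be pictured with a minimal adapter module; the zero-initialized 1x1 convolution applied to ControlNet residual features is a common adapter choice assumed here for illustration, not the report's specified design.

        import torch
        import torch.nn as nn

        class LightweightControlAdapter(nn.Module):
            """Hypothetical adapter that re-projects a diffusion-model ControlNet's
            residual features before they are added into a consistency model.
            Zero initialization makes the adapter contribute nothing at the start
            of training, so optimization begins from the unmodified CM."""

            def __init__(self, channels):
                super().__init__()
                self.proj = nn.Conv2d(channels, channels, kernel_size=1)
                nn.init.zeros_(self.proj.weight)
                nn.init.zeros_(self.proj.bias)

            def forward(self, controlnet_residual):
                # controlnet_residual: (B, C, H, W) feature map from the ControlNet
                return self.proj(controlnet_residual)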