205 research outputs found

    Distributed human 3D pose estimation and action recognition.

    Get PDF
    In this paper, we propose a distributed solution for 3D human pose estimation using an RGBD camera network. The key feature of our method is a dynamic hybrid consensus filter (DHCF) introduced to fuse the multi-view information from the cameras. In contrast to a centralized fusion solution, the DHCF algorithm can be used in a distributed network and requires no central information fusion center, so the DHCF-based fusion algorithm benefits from the many advantages of a distributed network. We also show that the proposed fusion algorithm handles occlusion problems effectively and achieves a higher action recognition rate than approaches that use only single-view information.
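
    The abstract does not give the DHCF update equations, but the general idea of fusing per-camera estimates without a central node can be illustrated with a generic information-weighted average-consensus step over a camera graph. The sketch below is such a generic scheme, not the authors' exact filter; the consensus_fuse helper, the array shapes, and the step-size rule are illustrative assumptions.

        import numpy as np

        def consensus_fuse(estimates, infos, adjacency, iters=20, rate=0.5):
            # estimates: (N_cams, J, 3) local 3D joint estimates from each camera
            # infos:     (N_cams, J) confidence / information weights; occluded joints
            #            get ~0 so neighbouring cameras dominate the fused result
            # adjacency: (N_cams, N_cams) 0/1 communication graph (no fusion centre)
            y = infos[..., None] * estimates        # information-weighted states
            w = infos.astype(float).copy()
            deg = adjacency.sum(axis=1, keepdims=True).astype(float)
            eps = rate / (deg.max() + 1.0)          # step size below 1 / max degree
            for _ in range(iters):
                # each camera exchanges (y, w) with its graph neighbours only:
                # x_i <- x_i + eps * sum_j a_ij (x_j - x_i)
                y = y + eps * (np.einsum("ij,jkl->ikl", adjacency, y) - deg[..., None] * y)
                w = w + eps * (adjacency @ w - deg * w)
            # every camera converges towards the same information-weighted average,
            # i.e. the fused estimate, without any central node
            return y / np.maximum(w[..., None], 1e-9)

        # toy usage: 4 cameras in a ring, 15 joints, camera 2 heavily occluded
        rng = np.random.default_rng(0)
        A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
        est = rng.normal(size=(4, 15, 3))
        conf = np.ones((4, 15)); conf[2] = 1e-3
        fused = consensus_fuse(est, conf, A)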

    Simulating the Range Expansion of Spartina alterniflora

    Get PDF
    Environmental factors play an important role in the range expansion of Spartina alterniflora in estuarine salt marshes. CA models that focus only on the neighbor effect often fail to account for the influence of environmental factors. This paper proposes a CCA model that enhances the CA model by integrating constraint factors of tidal elevation, vegetation density, vegetation classification, and tidal channels in the Chongming Dongtan wetland, China. A positive feedback loop between vegetation and sedimentation is also considered in the CCA model by altering the tidal accretion rate in different vegetation communities. After validation and calibration, the CCA model is more accurate than a CA model that accounts only for the neighbor effect. By overlaying the remote sensing classification and the simulation results, the average accuracy increases to 80.75% compared with the previous CA model. Through scenario simulations, the future expansion of Spartina alterniflora was analyzed. The CCA model provides a new technical approach for research on salt marsh species expansion and control strategies.
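
    As an illustration of how environmental constraint factors can be layered on top of a neighbor-effect CA rule, here is a minimal sketch. The cca_step function, the per-neighbor spread probability, and the way the constraint layers are collapsed into a single suitability map are assumptions, and the vegetation-sedimentation feedback of the actual model is omitted.

        import numpy as np

        def cca_step(occupied, suitability, p_spread=0.3, rng=None):
            # occupied:    (H, W) boolean grid, True where Spartina is established
            # suitability: (H, W) values in [0, 1], the product of normalised constraint
            #              layers (tidal elevation, vegetation class, tidal channels, ...)
            rng = np.random.default_rng() if rng is None else rng
            # neighbour effect: count occupied cells in the Moore neighbourhood
            padded = np.pad(occupied.astype(float), 1)
            neigh = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))[1:-1, 1:-1]
            # colonisation probability: each occupied neighbour contributes an
            # independent chance of spreading, scaled by the local constraints
            p = 1.0 - (1.0 - p_spread * suitability) ** neigh
            newly = (~occupied) & (rng.random(occupied.shape) < p)
            return occupied | newly

        # toy usage: a small patch expanding along a gradient of habitat suitability
        rng = np.random.default_rng(1)
        occ = np.zeros((100, 100), dtype=bool); occ[45:55, 45:55] = True
        suit = np.linspace(0.1, 1.0, 100)[None, :].repeat(100, axis=0)
        for _ in range(20):
            occ = cca_step(occ, suit, rng=rng)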

    Visual SLAM based on dynamic object removal

    Get PDF
    Visual simultaneous localization and mapping (SLAM) is the core of intelligent robot navigation systems. Many traditional SLAM algorithms assume that the scene is static. When dynamic objects appear in the environment, the accuracy of visual SLAM can degrade due to interference from the dynamic features of moving objects. This strong assumption limits SLAM applications for service robots or driverless cars in real dynamic environments. In this paper, a dynamic object removal algorithm that combines object recognition and optical flow techniques is proposed within a visual SLAM framework for dynamic scenes. The experimental results show that our new method can detect moving objects effectively and improve SLAM performance compared to state-of-the-art methods.
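
    A minimal sketch of the general recipe (mask out features on detected movable objects, then reject remaining features whose optical flow is inconsistent with the dominant camera-induced flow) is given below using OpenCV's sparse Lucas-Kanade tracker. The static_features helper, the box format, the median-flow consistency test, and the thresholds are illustrative assumptions rather than the paper's actual pipeline.

        import cv2
        import numpy as np

        def static_features(prev_gray, cur_gray, dynamic_boxes, flow_thresh=2.0):
            # dynamic_boxes: list of (x1, y1, x2, y2) boxes from any object detector,
            # for classes treated as movable (people, vehicles, ...)
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=800,
                                          qualityLevel=0.01, minDistance=7)
            if pts is None:
                return np.empty((0, 2), np.float32)
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
            good = status.ravel() == 1
            pts, nxt = pts.reshape(-1, 2)[good], nxt.reshape(-1, 2)[good]
            if len(pts) == 0:
                return pts

            # (a) drop points that fall inside any detected movable-object box
            in_box = np.zeros(len(pts), dtype=bool)
            for x1, y1, x2, y2 in dynamic_boxes:
                in_box |= ((pts[:, 0] >= x1) & (pts[:, 0] <= x2) &
                           (pts[:, 1] >= y1) & (pts[:, 1] <= y2))

            # (b) drop points whose flow deviates strongly from the median flow,
            #     a crude stand-in for independent motion the detector missed
            flow = nxt - pts
            residual = np.linalg.norm(flow - np.median(flow, axis=0), axis=1)
            keep = (~in_box) & (residual < flow_thresh)
            return pts[keep]      # static keypoints handed to the SLAM front end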

    Simultaneous monocular visual odometry and depth reconstruction with scale recovery

    Get PDF
    In this paper, we propose a deep neural network that can estimate camera poses and reconstruct the full-resolution depths of the environment simultaneously using only consecutive monocular images. In contrast to traditional monocular visual odometry methods, which cannot estimate scaled depths, we demonstrate the recovery of the scale information using a sparse depth image as a supervision signal in the training step. In addition, based on the scaled depth, the relative poses between consecutive images can be estimated using the proposed deep neural network. Another novelty lies in the deployment of view synthesis, which can synthesize a new image of the scene from a different view (camera pose) given an input image. View synthesis is the core technique used for constructing a loss function for the proposed neural network, which requires knowledge of the predicted depths and relative poses, so that the proposed method couples visual odometry and depth prediction together. In this way, both the estimated poses and the predicted depths from the neural network are scaled using the sparse depth image as the supervision signal during training. The experimental results on the KITTI dataset show the competitive performance of our method in handling challenging environments.
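
    The coupling of pose and depth through view synthesis can be sketched as a differentiable warping loss: back-project target pixels with the predicted depth, transform them with the predicted relative pose, sample the source image at the re-projected locations, and add a sparse-depth term to inject metric scale. The PyTorch sketch below follows that generic recipe under assumed conventions (pinhole intrinsics K, a 4x4 pose T_tgt_to_src, an L1 photometric term, and a 0.1 weight on the sparse-depth term); it is not the paper's exact loss.

        import torch
        import torch.nn.functional as F

        def backproject(depth, K_inv):
            # Lift every pixel of the target view to a 3D point using the predicted
            # depth map (B, 1, H, W) and the inverse intrinsics K_inv (3, 3).
            b, _, h, w = depth.shape
            ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
            pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()   # (3, H, W)
            pix = pix.view(1, 3, -1).expand(b, 3, h * w).to(depth.device)
            return depth.view(b, 1, -1) * (K_inv @ pix)                       # (B, 3, HW)

        def warp(src_img, depth_tgt, T_tgt_to_src, K, K_inv):
            # Synthesise the target view by sampling the source image where the
            # back-projected points land after the predicted rigid motion.
            b, _, h, w = src_img.shape
            pts = backproject(depth_tgt, K_inv)                               # (B, 3, HW)
            pts = T_tgt_to_src[:, :3, :3] @ pts + T_tgt_to_src[:, :3, 3:4]
            proj = K @ pts
            uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
            u = 2.0 * uv[:, 0] / (w - 1) - 1.0                                # to [-1, 1]
            v = 2.0 * uv[:, 1] / (h - 1) - 1.0
            grid = torch.stack([u, v], dim=-1).view(b, h, w, 2)
            return F.grid_sample(src_img, grid, align_corners=True)

        def training_loss(tgt_img, src_img, depth_pred, pose_pred, sparse_depth, K, K_inv):
            # Photometric view-synthesis loss plus a sparse-depth term that injects
            # metric scale into the depth and, through the warp, into the pose.
            synthesized = warp(src_img, depth_pred, pose_pred, K, K_inv)
            photometric = (synthesized - tgt_img).abs().mean()
            valid = sparse_depth > 0                                          # e.g. LiDAR hits
            depth_term = (depth_pred[valid] - sparse_depth[valid]).abs().mean()
            return photometric + 0.1 * depth_term                             # assumed weighting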

    Effects and Mechanisms of Surface Topography on the Antiwear Properties of Molluscan Shells (Scapharca subcrenata)

    Get PDF
    The surface topography (surface morphology and structure) of the left Scapharca subcrenata shell differs from that of its right shell, and this difference is closely related to antiwear capability. The objective of this study is to investigate the effects and mechanisms of surface topography on the antiwear properties of Scapharca subcrenata shells. Two models, a rib morphology model (RMM) and a coupled structure model (CSM), are constructed to mimic the topographies of the right and left shells, respectively. The antiwear performance and mechanisms of the two models are studied using the fluid-solid interaction (FSI) method. The simulation results show that the antiwear capabilities of the CSM are superior to those of the RMM. The CSM is also more conducive to decreasing the impact velocity and energy of abrasive particles, reducing the probability of microcrack generation, extension, and desquamation. It can be deduced that, in the real-world environment, the left shell of Scapharca subcrenata sustains more friction than the right shell; the coupled structure of the left shell is thus the result of long-term evolution.
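
    The erosion comparison itself comes from an FSI solver and is beyond a short example, but the kind of post-processing behind statements like "lower impact velocity and energy" can be sketched. The snippet below computes mean impact speed and kinetic energy per impact from particle impact velocities; the impact_statistics helper and the synthetic stand-in data for the RMM and CSM are purely hypothetical.

        import numpy as np

        def impact_statistics(impact_velocities, particle_mass):
            # impact_velocities: (N, 3) particle velocity vectors at wall contact,
            # as exported from the FSI solver; energy is 0.5 * m * |v|^2 per impact
            speed = np.linalg.norm(impact_velocities, axis=1)
            energy = 0.5 * particle_mass * speed ** 2
            return speed.mean(), energy.mean()

        # purely synthetic stand-in data for the two surface models, used only
        # to show the shape of the comparison reported in the study
        rng = np.random.default_rng(0)
        v_rmm = rng.normal(0.0, 1.2, size=(5000, 3))
        v_csm = rng.normal(0.0, 0.9, size=(5000, 3))
        for name, v in (("RMM", v_rmm), ("CSM", v_csm)):
            s, e = impact_statistics(v, particle_mass=1e-6)
            print(f"{name}: mean impact speed {s:.2f} m/s, mean impact energy {e:.2e} J")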

    Rethinking the Number of Shots in Robust Model-Agnostic Meta-Learning

    Full text link
    Robust Model-Agnostic Meta-Learning (MAML) is usually adopted to train a meta-model that can quickly adapt to novel classes with only a few exemplars while remaining robust to adversarial attacks. The conventional solution for robust MAML is to introduce robustness-promoting regularization during the meta-training stage. With such regularization, previous robust MAML methods simply follow the typical MAML practice that the number of training shots should match the number of test shots to achieve optimal adaptation performance. However, although robustness can be largely improved, previous methods sacrifice a great deal of clean accuracy. In this paper, we observe that introducing robustness-promoting regularization into MAML reduces the intrinsic dimension of clean sample features, which results in a lower capacity of clean representations. This may explain why the clean accuracy of previous robust MAML methods drops severely. Based on this observation, we propose a simple strategy, i.e., increasing the number of training shots, to mitigate the loss of intrinsic dimension caused by robustness-promoting regularization. Though simple, our method remarkably improves the clean accuracy of MAML without much loss of robustness, producing a robust yet accurate model. Extensive experiments demonstrate that our method outperforms prior art in achieving a better trade-off between accuracy and robustness. In addition, we observe that our method is less sensitive to the number of fine-tuning steps during meta-training, which allows the number of fine-tuning steps to be reduced to improve training efficiency.
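
    A compact way to see where the "number of training shots" knob enters is a functional MAML loop with an adversarial term in the inner loss. The sketch below is a toy PyTorch version on synthetic Gaussian tasks: the two-layer model, the FGSM perturbation standing in for the robustness-promoting regularizer, the learning rates, and the choice of 10 training shots are all illustrative assumptions, not the paper's setup.

        import torch
        import torch.nn.functional as F

        def forward(params, x):
            # tiny two-layer classifier written functionally so the inner-loop
            # updates stay differentiable (needed for second-order MAML)
            w1, b1, w2, b2 = params
            return F.linear(torch.relu(F.linear(x, w1, b1)), w2, b2)

        def fgsm(params, x, y, eps):
            # one-step adversarial perturbation of the support inputs; this plays
            # the role of a robustness-promoting regularizer during meta-training
            frozen = [p.detach() for p in params]          # attack against fixed weights
            x_adv = x.clone().requires_grad_(True)
            loss = F.cross_entropy(forward(frozen, x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            return (x + eps * grad.sign()).detach()

        def adapt(params, x_s, y_s, inner_lr=0.05, steps=3, eps=0.1):
            # inner loop: a few gradient steps on the support set using a clean
            # loss plus an adversarial loss; create_graph keeps the path back
            # to the meta-parameters
            for _ in range(steps):
                loss = (F.cross_entropy(forward(params, x_s), y_s)
                        + F.cross_entropy(forward(params, fgsm(params, x_s, y_s, eps)), y_s))
                grads = torch.autograd.grad(loss, params, create_graph=True)
                params = [p - inner_lr * g for p, g in zip(params, grads)]
            return params

        def sample_task(n_way=5, n_shot=10, n_query=15, dim=32):
            # synthetic stand-in for an episodic sampler: Gaussian clusters
            centers = torch.randn(n_way, dim) * 3.0
            def draw(k):
                x = (centers[:, None, :] + torch.randn(n_way, k, dim)).reshape(-1, dim)
                y = torch.arange(n_way).repeat_interleave(k)
                return x, y
            return draw(n_shot), draw(n_query)

        # meta-parameters and outer loop; the knob highlighted by the paper is that
        # n_shot used during meta-training is larger than the test-time shot count
        dim, n_way = 32, 5
        meta = [torch.nn.Parameter(t) for t in (torch.randn(64, dim) * 0.1, torch.zeros(64),
                                                torch.randn(n_way, 64) * 0.1, torch.zeros(n_way))]
        opt = torch.optim.Adam(meta, lr=1e-3)
        for it in range(200):
            (x_s, y_s), (x_q, y_q) = sample_task(n_way=n_way, n_shot=10)  # 10 > test shots
            fast = adapt(list(meta), x_s, y_s)
            query_loss = F.cross_entropy(forward(fast, x_q), y_q)
            opt.zero_grad()
            query_loss.backward()
            opt.step()
            if (it + 1) % 50 == 0:
                print(f"iter {it + 1}: query loss {query_loss.item():.3f}")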

    Efficient multi-view multi-target tracking using a distributed camera network

    Get PDF
    In this paper, we propose a multi-target tracking method using a distributed camera network, which can effectively handle the occlusion and re-identification problems by combining advanced deep learning with distributed information fusion. The targets are first detected using a fast deep-learning-based object detection method. We then combine deep visual feature information and spatial trajectory information in the Hungarian algorithm for robust target association. The deep visual feature information is extracted from a convolutional neural network that is pre-trained on a large-scale person re-identification dataset. The spatial trajectories of multiple targets in our framework are derived from a multi-view information fusion method, which employs an information-weighted consensus filter for fusion and tracking. In addition, we propose an efficient track processing method for ID assignment using multi-view information. Experiments on public datasets show that the proposed method robustly handles the occlusion and re-identification problems and achieves superior performance compared to state-of-the-art methods.
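
    The association step described here (deep appearance features plus spatial information combined in the Hungarian algorithm) can be sketched independently of the consensus-filter fusion. The function below builds a cost matrix from re-ID embedding cosine distance and box overlap and solves it with SciPy's Hungarian solver; the associate helper, the weights, and the gating threshold are assumed values, and the spatial term here uses IoU rather than the paper's fused 3D trajectories.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def associate(track_feats, track_boxes, det_feats, det_boxes,
                      w_app=0.7, w_spa=0.3, max_cost=0.7):
            # track_feats, det_feats: (T, D) and (N, D) L2-normalised re-ID embeddings
            # track_boxes, det_boxes: (T, 4) and (N, 4) boxes as [x1, y1, x2, y2]
            app_cost = 1.0 - track_feats @ det_feats.T          # cosine distance

            def iou(a, b):
                x1 = np.maximum(a[:, None, 0], b[None, :, 0])
                y1 = np.maximum(a[:, None, 1], b[None, :, 1])
                x2 = np.minimum(a[:, None, 2], b[None, :, 2])
                y2 = np.minimum(a[:, None, 3], b[None, :, 3])
                inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
                area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
                area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
                return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

            cost = w_app * app_cost + w_spa * (1.0 - iou(track_boxes, det_boxes))
            rows, cols = linear_sum_assignment(cost)             # Hungarian algorithm
            matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
            unmatched = sorted(set(range(len(det_boxes))) - {c for _, c in matches})
            return matches, unmatched                            # unmatched -> new track IDs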