
    The Photonic Band theory and the negative refraction experiment of metallic helix metamaterials

    We develop a theory to compute and interpret the photonic band structure of a periodic array of metallic helices for the first time. Interesting features of the band structure include the longitudinal and circularly polarized eigenmodes and the wide polarization gap [Science 325, 1513 (2009)]; the helical symmetry guarantees the existence of negative-group-velocity bands on both sides of the polarization gap, as well as band crossings pinned at the zone boundary at fixed frequencies. A direct proof of negative refraction via a chiral route [Science 306, 1353 (2004)] is achieved for the first time by measuring the Goos-Hänchen shift through a slab of a three-dimensional bona fide helix metamaterial.
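
    As background for readers unfamiliar with the chiral route to negative refraction cited above: in a homogeneous chiral medium the two circular polarizations see different refractive indices, and a sufficiently strong chirality drives one of them negative. The relation below is the standard textbook form, shown only as context and not a result derived in the paper:

    \[
        n_{\pm} = \sqrt{\varepsilon_r \mu_r} \pm \kappa ,
        \qquad
        n_{-} < 0 \quad \text{when} \quad \kappa > \sqrt{\varepsilon_r \mu_r},
    \]

    where \varepsilon_r, \mu_r and \kappa are the relative permittivity, permeability and chirality parameter. Measuring the Goos-Hänchen shift through the slab can then reveal the sign of refraction for each circular polarization.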

    Numerical assessment of bond-slip relationships for naturally corroded plain reinforcement bars in concrete beams

    Reinforced Concrete (RC) heritage structures are often affected by corrosion. Consequently, knowledge about the effect of corrosion on the bond between reinforcing bars and the surrounding concrete is critical when assessing the structural performance of these structures. In earlier work, structural tests were carried out on segments of edge beams taken from a decommissioned RC bridge. The specimens had naturally corroded plain reinforcement bars, and three-point bending tests were conducted to investigate their anchorage capacity. In this study, non-linear finite element analyses (NLFEA) were carried out to gain further insight into the bond behaviour of the tested specimens, including the effect of corrosion on the bond-slip relationship. Two different one-dimensional (1D) bond-slip relationships were calibrated for each tested bar to account for the loss of bond upon yielding. The calibration process was based on a comparison of significant numerical and experimental results, including the load–deflection curve, the crack pattern, and the asymmetrical distribution of yield penetration along the length of the bar. Good agreement between the FE analyses and the experimental tests was observed. Finally, the calibrated bond-slip relationships for nine beams with different corrosion levels, casting positions, and visual damage are presented and discussed. The loss of bond at yielding and the asymmetry of yield penetration are both shown to be crucial factors for adequately describing the structural behaviour.
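
    To make the notion of a 1D bond-slip relationship concrete, the sketch below tabulates a generic piecewise bond stress-slip curve of the kind used as interface input in such NLFEA. The functional form loosely follows the fib Model Code shape, and every parameter value is an illustrative placeholder, not one of the calibrated relationships reported in this study.

    import numpy as np

    def bond_slip(s, tau_max=2.5, s1=0.1, s2=0.4, s3=1.0, tau_f=0.8, alpha=0.4):
        """Generic piecewise bond stress-slip curve tau(s) in MPa (slip s in mm).

        Ascending branch: tau_max * (s/s1)**alpha   for s <= s1
        Plateau:          tau_max                   for s1 < s <= s2
        Linear softening: down to residual tau_f    for s2 < s <= s3
        Residual:         tau_f                     for s > s3
        All parameter values are illustrative placeholders.
        """
        s = np.asarray(s, dtype=float)
        tau = np.empty_like(s)
        up = s <= s1
        plateau = (s > s1) & (s <= s2)
        soft = (s > s2) & (s <= s3)
        res = s > s3
        tau[up] = tau_max * (s[up] / s1) ** alpha
        tau[plateau] = tau_max
        tau[soft] = tau_max - (tau_max - tau_f) * (s[soft] - s2) / (s3 - s2)
        tau[res] = tau_f
        return tau

    # Example: tabulate the curve for use as 1D interface input in an FE model
    slips = np.linspace(0.0, 2.0, 21)
    print(np.column_stack([slips, bond_slip(slips)]))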

    ProgressLabeller: Visual Data Stream Annotation for Training Object-Centric 3D Perception

    Visual perception tasks often require vast amounts of labelled data, including 3D poses and image-space segmentation masks. The process of creating such training data sets can be difficult or time-intensive to scale up for general use. Consider the task of pose estimation for rigid objects. Deep neural network based approaches have shown good performance when trained on large, public datasets. However, adapting these networks to novel objects, or fine-tuning existing models for different environments, requires significant time investment to generate newly labelled instances. Towards this end, we propose ProgressLabeller as a method for more efficiently generating large amounts of 6D pose training data from color image sequences for custom scenes in a scalable manner. ProgressLabeller is also intended to support transparent or translucent objects, for which previous methods based on dense depth reconstruction fail. We demonstrate the effectiveness of ProgressLabeller by rapidly creating a dataset of over 1M samples, with which we fine-tune a state-of-the-art pose estimation network to markedly improve downstream robotic grasp success rates. ProgressLabeller is open-source at https://github.com/huijieZH/ProgressLabeller.
    Comment: IROS 2022 accepted paper; project page: https://progress.eecs.umich.edu/projects/progress-labeller
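
    The label-generation step underlying tools of this kind boils down to projecting a known object model through a camera pose so that 2D keypoints and masks can be derived automatically. The sketch below illustrates only that generic projection; the function name, pose convention and camera intrinsics are assumptions for illustration and not part of the ProgressLabeller API.

    import numpy as np

    def project_object(points_3d, R, t, K):
        """Project 3D object-model points (N, 3) into the image given a 6D pose.

        R: (3, 3) rotation, t: (3,) translation (object -> camera frame),
        K: (3, 3) pinhole camera intrinsics. Returns (N, 2) pixel coordinates.
        """
        cam = points_3d @ R.T + t          # transform into the camera frame
        uvw = cam @ K.T                    # apply intrinsics
        return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

    # Toy example: a unit cube's corners seen by a simple camera
    cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    R = np.eye(3)
    t = np.array([0.0, 0.0, 3.0])
    print(project_object(cube, R, t, K))   # 2D keypoints usable as pose/mask labels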

    Exploring Model Transferability through the Lens of Potential Energy

    Transfer learning has become crucial in computer vision tasks due to the vast availability of pre-trained deep learning models. However, selecting the optimal pre-trained model from a diverse pool for a specific downstream task remains a challenge. Existing methods for measuring the transferability of pre-trained models rely on statistical correlations between encoded static features and task labels, but they overlook the impact of the underlying representation dynamics during fine-tuning, leading to unreliable results, especially for self-supervised models. In this paper, we present an insightful physics-inspired approach named PED to address these challenges. We reframe the challenge of model selection through the lens of potential energy and directly model the interaction forces that influence fine-tuning dynamics. By capturing the motion of the dynamic representations as they lower the potential energy within a force-driven physical model, we can acquire an enhanced and more stable observation for estimating transferability. The experimental results on 10 downstream tasks and 12 self-supervised models demonstrate that our approach can seamlessly integrate into existing ranking techniques and enhance their performance, revealing its effectiveness for the model selection task and its potential for understanding the mechanisms of transfer learning. Code will be available at https://github.com/lixiaotong97/PED.
    Comment: Accepted by ICCV 202
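
    To make the potential-energy framing above a little more concrete, here is one minimal way such an idea could be instantiated: pre-trained features are nudged toward their class centroids by an attractive force (lowering a quadratic potential) before a simple separability score is computed. The force law, step size and scoring function are all illustrative assumptions, not the actual PED algorithm.

    import numpy as np

    def force_step(feats, labels, step=0.1):
        """Move each feature toward its class centroid (a simple attractive force),
        lowering a quadratic potential energy 0.5 * ||f - c_y||^2."""
        out = feats.copy()
        for c in np.unique(labels):
            idx = labels == c
            centroid = feats[idx].mean(axis=0)
            out[idx] += step * (centroid - feats[idx])   # gradient descent on the potential
        return out

    def separability_score(feats, labels):
        """Crude transferability proxy: between-class over within-class scatter."""
        mean = feats.mean(axis=0)
        between, within = 0.0, 0.0
        for c in np.unique(labels):
            cls = feats[labels == c]
            between += len(cls) * np.sum((cls.mean(axis=0) - mean) ** 2)
            within += np.sum((cls - cls.mean(axis=0)) ** 2)
        return between / (within + 1e-12)

    # Toy usage: score features before and after the force-driven relaxation
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 16))
    labels = rng.integers(0, 5, size=200)
    relaxed = force_step(feats, labels, step=0.2)
    print(separability_score(feats, labels), separability_score(relaxed, labels))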

    TransNet: Transparent Object Manipulation Through Category-Level Pose Estimation

    Transparent objects present multiple distinct challenges to visual perception systems. First, their lack of distinguishing visual features makes transparent objects harder to detect and localize than opaque objects. Even humans find certain transparent surfaces with little specular reflection or refraction, like glass doors, difficult to perceive. A second challenge is that depth sensors typically used for opaque object perception cannot obtain accurate depth measurements on transparent surfaces due to their unique reflective properties. Stemming from these challenges, we observe that transparent object instances within the same category, such as cups, look more similar to each other than to ordinary opaque objects of that same category. Given this observation, the present paper explores the possibility of category-level transparent object pose estimation rather than instance-level pose estimation. We propose TransNet, a two-stage pipeline that estimates category-level transparent object pose using localized depth completion and surface normal estimation. TransNet is evaluated in terms of pose estimation accuracy on a large-scale transparent object dataset and compared to a state-of-the-art category-level pose estimation approach. Results from this comparison demonstrate that TransNet achieves improved pose estimation accuracy on transparent objects. Moreover, we use TransNet to build an autonomous transparent object manipulation system for robotic pick-and-place and pouring tasks.
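
    A structural sketch of the kind of two-stage pipeline described above is given below; the stage boundaries, placeholder heuristics and function names are assumptions made for illustration and do not reflect TransNet's actual architecture.

    import numpy as np

    class TwoStagePoseEstimator:
        """Illustrative two-stage category-level pose pipeline (not TransNet itself):
        stage 1 recovers geometry (completed depth + surface normals) in the object
        crop, stage 2 regresses rotation, translation and scale from those cues."""

        def stage1_geometry(self, rgb, raw_depth, mask):
            # Placeholder "depth completion": fill missing depth inside the mask with
            # the median of valid readings; a real system uses a learned model.
            depth = raw_depth.copy()
            valid = (raw_depth > 0) & mask
            fill = np.median(raw_depth[valid]) if valid.any() else 1.0
            depth[mask & (raw_depth <= 0)] = fill
            # Placeholder surface normals from depth gradients.
            gy, gx = np.gradient(depth)
            normals = np.dstack([-gx, -gy, np.ones_like(depth)])
            normals /= np.linalg.norm(normals, axis=2, keepdims=True)
            return depth, normals

        def stage2_pose(self, depth, normals, mask):
            # Placeholder pose head (normals unused in this toy version): centroid as
            # translation, identity rotation, depth spread as scale; a real system
            # learns this regression.
            ys, xs = np.nonzero(mask)
            t = np.array([xs.mean(), ys.mean(), depth[mask].mean()])
            scale = float(depth[mask].std() + 1e-6)
            return np.eye(3), t, scale

    # Toy usage with synthetic inputs
    rgb = np.zeros((48, 48, 3))
    depth = np.ones((48, 48))
    mask = np.zeros((48, 48), bool)
    mask[16:32, 16:32] = True
    depth[20:24, 20:24] = 0.0            # simulate missing depth on a transparent patch
    est = TwoStagePoseEstimator()
    d, n = est.stage1_geometry(rgb, depth, mask)
    print(est.stage2_pose(d, n, mask))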

    mc-BEiT: Multi-choice Discretization for Image BERT Pre-training

    Image BERT pre-training with masked image modeling (MIM) has become a popular practice for self-supervised representation learning. A seminal work, BEiT, casts MIM as a classification task with a visual vocabulary, tokenizing the continuous visual signals into discrete vision tokens using a pre-learned dVAE. Despite being a feasible solution, this improper discretization hinders further improvements of image pre-training. Since image discretization has no ground-truth answer, we believe that a masked patch should not be assigned a unique token id even if a better tokenizer can be obtained. In this work, we introduce an improved BERT-style image pre-training method, namely mc-BEiT, which performs MIM proxy tasks towards eased and refined multi-choice training objectives. Specifically, the multi-choice supervision for the masked image patches is formed by the soft probability vectors over the discrete token ids, which are predicted by an off-the-shelf image tokenizer and further refined by high-level inter-patch perceptions, drawing on the observation that similar patches should share their choices. Extensive experiments on classification, segmentation, and detection tasks demonstrate the superiority of our method; e.g., the pre-trained ViT-B achieves 84.1% top-1 fine-tuning accuracy on ImageNet-1K classification, 50.8% mIoU on ADE20K semantic segmentation, and 51.2% AP^b and 44.3% AP^m for object detection and instance segmentation on COCO, outperforming competitive counterparts.
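
    The core objective described above, soft multi-choice targets for masked patches instead of a single token id, can be sketched as follows; the tokenizer outputs, the refinement weighting and all tensor shapes are illustrative assumptions rather than the exact mc-BEiT formulation.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def multi_choice_targets(tokenizer_logits, patch_feats, mix=0.5):
        """Soft targets for masked patches: the tokenizer's probability over token ids,
        refined by averaging the distributions of perceptually similar patches."""
        probs = softmax(tokenizer_logits)                       # (N, V) per-patch token probs
        feats = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
        sim = softmax(feats @ feats.T / 0.1)                    # (N, N) inter-patch affinity
        refined = sim @ probs                                   # similar patches share choices
        return (1 - mix) * probs + mix * refined

    def soft_cross_entropy(pred_logits, soft_targets, masked):
        """MIM loss on masked positions only, against the soft multi-choice targets."""
        logp = np.log(softmax(pred_logits) + 1e-12)
        loss = -(soft_targets * logp).sum(axis=1)
        return loss[masked].mean()

    # Toy usage: 16 patches, vocabulary of 32 visual tokens
    rng = np.random.default_rng(0)
    tok_logits = rng.normal(size=(16, 32))     # from an off-the-shelf image tokenizer
    feats = rng.normal(size=(16, 8))           # high-level patch features
    pred = rng.normal(size=(16, 32))           # MIM head predictions
    masked = rng.random(16) < 0.4
    targets = multi_choice_targets(tok_logits, feats)
    print(soft_cross_entropy(pred, targets, masked))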