
    Statistical inference for function-on-function linear regression

    We propose a reproducing kernel Hilbert space approach to estimate the slope in a function-on-function linear regression via penalised least squares, regularized by the thin-plate spline smoothness penalty. In contrast to most of the work on functional linear regression, our main focus is on statistical inference with respect to the sup-norm. This point of view is motivated by the fact that slope surfaces with rather different shapes may still be identified as similar when the difference is measured by an L2-type norm. However, in applications it is often desirable to use metrics reflecting the visualization of the objects in the statistical analysis. We prove the weak convergence of the slope surface estimator as a process in the space of all continuous functions. This allows us to construct simultaneous confidence regions for the slope surface and simultaneous prediction bands. As a further consequence, we derive new tests for the hypothesis that the maximum deviation between the “true” slope surface and a given surface is less than or equal to a given threshold. In other words: we are not trying to test for exact equality (because in many applications this hypothesis is hard to justify), but rather for pre-specified deviations under the null hypothesis. To ensure practicability, non-standard bootstrap procedures are developed addressing particular features that arise in these testing problems. As a by-product, we also derive several new results and statistical inference tools for the function-on-function linear regression model, such as minimax optimal convergence rates and likelihood-ratio tests. We also demonstrate that the new methods have good finite sample properties by means of a simulation study and illustrate their practicability by analyzing a data example.
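
    As a rough illustration of the estimation step only (not of the paper's RKHS machinery or its sup-norm inference), the sketch below fits a discretized function-on-function linear regression by penalised least squares on a grid. The second-difference roughness penalty and the smoothing parameter lam are assumptions standing in for the thin-plate spline penalty used in the paper.

```python
import numpy as np

def fit_fof_slope(X, Y, s_grid, lam=1e-2):
    """Penalised least-squares estimate of the slope surface beta(s, t) in
    Y_i(t) = integral beta(s, t) X_i(s) ds + error, on fixed grids.

    X: (n, p) covariate curves sampled on s_grid (p equally spaced points)
    Y: (n, q) response curves sampled on a t-grid with q points
    Returns B of shape (p, q) with B[j, k] approximating beta(s_j, t_k).
    """
    n, p = X.shape
    q = Y.shape[1]
    ds = s_grid[1] - s_grid[0]              # assumes an equally spaced s-grid

    # Second-difference roughness penalties in the s- and t-directions
    # (an illustrative stand-in for the thin-plate spline smoothness penalty).
    Ds = np.diff(np.eye(p), n=2, axis=0)    # (p-2, p)
    Dt = np.diff(np.eye(q), n=2, axis=0)    # (q-2, q)

    # Vectorise the model: vec(X B ds) = (I_q kron ds*X) vec(B), column-major.
    A = np.kron(np.eye(q), ds * X)          # (n*q, p*q)
    y = Y.flatten(order="F")                # stack the columns of Y

    # Penalty lam * (||Ds B||_F^2 + ||B Dt^T||_F^2) in vectorised form.
    P = np.kron(np.eye(q), Ds.T @ Ds) + np.kron(Dt.T @ Dt, np.eye(p))

    b = np.linalg.solve(A.T @ A + lam * P, A.T @ y)
    return b.reshape(p, q, order="F")
```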

    Crosstalk Impacts on Homogeneous Weakly-Coupled Multicore Fiber Based IM/DD System

    We numerically discuss crosstalk impacts on homogeneous weakly-coupled multicore fiber based intensity modulation/direct-detection (IM/DD) systems, taking into account mean crosstalk power fluctuation, walk-off between cores, laser frequency offset, and laser linewidth.
    Comment: 3 pages, 11 figures

    EBVCR: An Energy Balanced Virtual Coordinate Routing in Wireless Sensor Networks

    Geographic routing can provide efficient routing at a fixed overhead. However, its performance is impacted by physical voids and localization errors. Accordingly, virtual coordinate systems (VCS) were proposed as an alternative approach that is resilient to localization errors and naturally routes around physical voids. However, because VCS suffers from virtual anomalies, existing geographic routing cannot balance energy efficiently, and no effective complementary routing algorithm is available to address energy balance. In this paper we present EBVCR, an energy-balanced virtual coordinate routing scheme for wireless sensor networks that combines distance- and direction-based strategies in a flexible manner to balance the energy consumption of geographic routing in a VCS. Our simulation results show that the proposed algorithm outperforms the best existing solution over a variety of network densities and scenarios.
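
    The abstract does not spell out the algorithm, so the following is only a hypothetical sketch of energy-aware greedy forwarding in a virtual coordinate system, not EBVCR itself: the next hop is scored by a weighted combination of routing progress in virtual coordinates and residual energy. The weight alpha and the data layout are invented for illustration.

```python
import math

def next_hop(current, neighbors, dest, alpha=0.5):
    """Pick the next hop in a virtual coordinate system (VCS), trading off
    routing progress against residual energy (illustrative sketch only).

    current, dest: dicts with 'vc' (tuple of hop counts to anchor nodes)
    neighbors: list of dicts with 'id', 'vc', and 'energy' (residual fraction)
    """
    def vc_dist(a, b):
        # Euclidean distance in virtual (anchor hop-count) coordinates
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    d_cur = vc_dist(current["vc"], dest["vc"])
    best, best_score = None, -float("inf")
    for nb in neighbors:
        progress = d_cur - vc_dist(nb["vc"], dest["vc"])  # distance-based term
        if progress <= 0:
            continue                                      # do not route backwards
        score = alpha * progress + (1 - alpha) * nb["energy"]
        if score > best_score:
            best, best_score = nb, score
    return best  # None signals a local minimum (virtual anomaly); use a fallback
```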

    Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers

    Although vision transformers (ViTs) have shown promising results in various computer vision tasks recently, their high computational cost limits their practical applications. Previous approaches that prune redundant tokens have demonstrated a good trade-off between performance and computation costs. Nevertheless, errors caused by pruning strategies can lead to significant information loss. Our quantitative experiments reveal that the impact of pruned tokens on performance is noticeable. To address this issue, we propose a novel joint Token Pruning & Squeezing (TPS) module for compressing vision transformers with higher efficiency. First, TPS adopts pruning to obtain the reserved and pruned subsets. Second, TPS squeezes the information of the pruned tokens into part of the reserved tokens via unidirectional nearest-neighbor matching and similarity-based fusing. Our approach outperforms state-of-the-art methods under all token pruning intensities. In particular, when shrinking the computational budgets of DeiT-tiny and DeiT-small to 35%, it improves accuracy by 1%-6% over the baselines on ImageNet classification. The proposed method can accelerate the throughput of DeiT-small beyond that of DeiT-tiny, while its accuracy surpasses DeiT-tiny by 4.78%. Experiments on various transformers demonstrate the effectiveness of our method, and analysis experiments confirm its higher robustness to errors of the token pruning policy. Code is available at https://github.com/megvii-research/TPS-CVPR2023.
    Comment: Accepted to CVPR 2023
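
    A minimal sketch of the prune-then-squeeze idea described above (not the official TPS implementation): tokens are split into reserved and pruned subsets by an importance score, each pruned token is matched to its most similar reserved token (unidirectional nearest-neighbor matching), and matched tokens are fused by similarity-weighted averaging. The keep ratio, scoring, and fusing weights are illustrative assumptions.

```python
import numpy as np

def token_prune_and_squeeze(tokens, scores, keep_ratio=0.35):
    """Prune-then-squeeze step for one layer (illustrative sketch).

    tokens: (N, D) token features, scores: (N,) importance scores.
    """
    N, D = tokens.shape
    r = max(1, int(N * keep_ratio))
    order = np.argsort(-scores)
    keep_idx, drop_idx = order[:r], order[r:]
    reserved, pruned = tokens[keep_idx], tokens[drop_idx]

    # Cosine similarity between pruned and reserved tokens.
    a = pruned / (np.linalg.norm(pruned, axis=1, keepdims=True) + 1e-6)
    b = reserved / (np.linalg.norm(reserved, axis=1, keepdims=True) + 1e-6)
    sim = a @ b.T                          # (N - r, r)
    match = sim.argmax(axis=1)             # unidirectional nearest-neighbor match

    # Similarity-based fusing: average each reserved token with the pruned
    # tokens assigned to it, weighting by similarity (host weight fixed to 1).
    fused = reserved.copy()
    for j in range(r):
        assigned = np.where(match == j)[0]
        if assigned.size == 0:
            continue
        w = np.concatenate(([1.0], sim[assigned, j]))
        feats = np.vstack([reserved[j:j + 1], pruned[assigned]])
        fused[j] = (w[:, None] * feats).sum(0) / w.sum()
    return fused, keep_idx
```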

    Dynamic Token Pruning in Plain Vision Transformers for Semantic Segmentation

    Vision transformers have achieved leading performance on various visual tasks yet still suffer from high computational complexity. The situation deteriorates in dense prediction tasks like semantic segmentation, as high-resolution inputs and outputs usually imply more tokens involved in computations. Directly removing the less attentive tokens has been discussed for the image classification task but cannot be extended to semantic segmentation, since a dense prediction is required for every patch. To this end, this work introduces a Dynamic Token Pruning (DToP) method based on the early exit of tokens for semantic segmentation. Motivated by the coarse-to-fine segmentation process of humans, we naturally split the widely adopted auxiliary-loss-based network architecture into several stages, where each auxiliary block grades every token's difficulty level. We can finalize the prediction of easy tokens in advance without completing the entire forward pass. Moreover, we keep the k highest-confidence tokens for each semantic category to uphold the representative context information. Thus, the computational complexity changes with the difficulty of the input, akin to the way humans do segmentation. Experiments suggest that the proposed DToP architecture reduces the computational cost of current semantic segmentation methods based on plain vision transformers by 20%-35% on average without accuracy degradation.
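
    The early-exit step can be pictured with the following sketch (an illustration of the idea, not the authors' code): an auxiliary head's per-token confidence decides which easy tokens are finalised ahead of time, while the k most confident tokens of each predicted class are retained as representative context for later stages. The threshold and k below are hypothetical values.

```python
import numpy as np

def dtop_early_exit(tokens, logits, conf_thresh=0.95, k=5):
    """One DToP-style early-exit step at an auxiliary head (illustrative sketch).

    tokens: (N, D) patch tokens, logits: (N, C) auxiliary-head class logits.
    Returns the tokens kept for later stages and the early-finalised predictions.
    """
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)                  # per-token confidence
    pred = probs.argmax(axis=1)               # per-token predicted class

    exit_mask = conf > conf_thresh            # easy tokens: candidates for early exit
    for c in np.unique(pred):
        cls_idx = np.where(pred == c)[0]
        keep = cls_idx[np.argsort(-conf[cls_idx])[:k]]
        exit_mask[keep] = False               # retain top-k per class as context

    remaining = tokens[~exit_mask]            # tokens passed to later stages
    finalised = {int(i): int(pred[i]) for i in np.where(exit_mask)[0]}
    return remaining, finalised
```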

    Boosting Generalization with Adaptive Style Techniques for Fingerprint Liveness Detection

    We introduce a high-performance fingerprint liveness feature extraction technique that secured first place in the LivDet 2023 Fingerprint Representation Challenge. Additionally, we developed a practical fingerprint recognition system with 94.68% accuracy, earning second place in LivDet 2023 Liveness Detection in Action. By investigating various methods, particularly style transfer, we demonstrate improvements in accuracy and generalization when faced with limited training data. As a result, our approach achieved state-of-the-art performance in the LivDet 2023 challenges.
    Comment: 1st Place in LivDet 2023 Fingerprint Representation Challenge
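
    The abstract names style transfer as the key ingredient without detailing it; as a hedged illustration of one common way style techniques are used for generalization (a generic stand-in, not the authors' pipeline), the sketch below performs an AdaIN-style statistics swap, re-styling one fingerprint's feature maps with the channel-wise statistics of another to simulate a new sensor or acquisition style.

```python
import numpy as np

def mix_feature_styles(content_feat, style_feat, eps=1e-6):
    """Feature-level style swap (AdaIN-type), used here as a data-augmentation
    illustration; all names and shapes are hypothetical.

    content_feat, style_feat: (C, H, W) feature maps from two fingerprint images.
    The content features are re-normalised to carry the channel-wise mean and
    standard deviation of the style features.
    """
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean
```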