306 research outputs found

    DVGaze: Dual-View Gaze Estimation

    Gaze estimation methods typically estimate gaze from facial appearance captured by a single camera. However, the limited view of a single camera cannot provide complete facial information, which complicates the gaze estimation problem. Camera hardware has also advanced rapidly: dual cameras are now affordable and have been integrated into many devices. This development suggests that we can further improve gaze estimation performance with dual-view gaze estimation. In this paper, we propose a dual-view gaze estimation network (DV-Gaze) that estimates dual-view gaze directions from a pair of images. We first propose a dual-view interactive convolution (DIC) block for DV-Gaze. DIC blocks exchange dual-view information during convolution at multiple feature scales; each block fuses dual-view features along epipolar lines and compensates the original features with the fused features. We further propose a dual-view transformer to estimate gaze from dual-view features, in which camera poses are encoded to provide position information. We also exploit the geometric relation between dual-view gaze directions and propose a dual-view gaze consistency loss for DV-Gaze. DV-Gaze achieves state-of-the-art performance on the ETH-XGaze and EVE datasets, and our experiments further demonstrate the potential of dual-view gaze estimation. We release code at https://github.com/yihuacheng/DVGaze.
    Comment: ICCV 2023
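    The geometric idea behind the consistency loss can be sketched in a few lines: given the relative rotation between the two cameras, a gaze direction predicted in one view can be rotated into the other view's coordinate frame and compared against the prediction there. The sketch below is illustrative only, assuming unit gaze vectors and a known camera-to-camera rotation; the function name and loss form are placeholders, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def dual_view_consistency_loss(gaze_a: torch.Tensor,
                               gaze_b: torch.Tensor,
                               R_ab: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between gaze predictions from two views (sketch).

    gaze_a, gaze_b: (B, 3) gaze direction vectors in each camera's frame.
    R_ab: (3, 3) rotation mapping camera-A coordinates into camera-B coordinates.
    Returns the mean angular difference (radians) between the rotated view-A
    gaze and the view-B gaze.
    """
    gaze_a = F.normalize(gaze_a, dim=-1)
    gaze_b = F.normalize(gaze_b, dim=-1)
    gaze_a_in_b = gaze_a @ R_ab.T  # rotate view-A predictions into view B
    cos = (gaze_a_in_b * gaze_b).sum(dim=-1).clamp(-1 + 1e-7, 1 - 1e-7)
    return torch.acos(cos).mean()
```

    Added to the usual per-view regression loss, a term of this shape couples the two predictions through the known camera geometry.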

    All Current Sensors Survivable IPMSM Drive with Reconfigurable Inverter


    A Coarse-to-Fine Adaptive Network for Appearance-Based Gaze Estimation

    Human gaze is essential for various appealing applications. Aiming at more accurate gaze estimation, a series of recent works propose to utilize face and eye images simultaneously. Nevertheless, in those works the face and eye images serve only as independent or parallel feature sources, and the intrinsic correlation between their features is overlooked. In this paper we make the following contributions: 1) We propose a coarse-to-fine strategy that estimates a basic gaze direction from the face image and refines it with a corresponding residual predicted from the eye images. 2) Guided by this strategy, we design a framework that introduces a bi-gram model to bridge the gaze residual and the basic gaze direction, and an attention component to adaptively acquire suitable fine-grained features. 3) Integrating the above innovations, we construct a coarse-to-fine adaptive network named CA-Net and achieve state-of-the-art performance on MPIIGaze and EyeDiap.
    Comment: 9 pages, 7 figures, AAAI-20
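    The coarse-to-fine decomposition itself reduces to simple arithmetic: a face branch predicts a basic gaze direction and an eye branch predicts a residual that is added to it. The skeleton below shows only that structure, with placeholder backbones; the paper's bi-gram model and attention component are omitted.

```python
import torch.nn as nn

class CoarseToFineGaze(nn.Module):
    """Coarse-to-fine gaze skeleton: face -> basic gaze, eyes -> residual."""

    def __init__(self, face_backbone: nn.Module, eye_backbone: nn.Module):
        super().__init__()
        self.face_backbone = face_backbone  # face image -> (B, 2) basic gaze (pitch, yaw)
        self.eye_backbone = eye_backbone    # eye images -> (B, 2) gaze residual

    def forward(self, face_img, eye_imgs):
        basic_gaze = self.face_backbone(face_img)  # coarse estimate from the face
        residual = self.eye_backbone(eye_imgs)     # fine correction from the eyes
        return basic_gaze + residual               # refined gaze direction
```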

    A Practical Response Adaptive Block Randomization Design with Analytic Type I Error Protection

    Response adaptive randomization is appealing in confirmatory adaptive clinical trials from statistical, ethical, and pragmatic perspectives, in the sense that subjects are more likely to be randomized to better-performing treatment groups based on accumulating data. The Doubly Adaptive Biased Coin Design (DBCD) is a popular solution due to the asymptotic normality of its final allocations, which further justifies its asymptotic type I error rate control. As an alternative, we propose a Response Adaptive Block Randomization (RABR) design with pre-specified randomization ratios for the control and high-performing groups, which robustly achieves the desired final sample size per group under different underlying responses, as is usually required in industry-sponsored clinical studies. We show that the usual test statistic has a controlled type I error rate. Our simulations further highlight the advantages of the proposed design over the DBCD in consistently achieving the targeted final sample allocations and in power performance. We further apply this design to a Phase III study evaluating the efficacy of two dosing regimens of adjunctive everolimus in treating tuberous sclerosis complex, an indication with no previous dose-finding studies.
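    As a rough illustration of the block mechanism (not the paper's exact algorithm), each block can carry a pre-specified allocation vector whose favorable slots are reassigned, block by block, to whichever treatment arms currently show the best observed responses, while the control arm keeps a fixed share. All names and the ranking rule below are assumptions for the sketch.

```python
import numpy as np

def rabr_block(arm_means, control_arm, block_ratios, rng):
    """Allocate one block under response-adaptive block randomization (sketch).

    arm_means:    observed mean response per arm so far (higher is better).
    control_arm:  index of the control arm, which keeps a fixed slot count.
    block_ratios: {"control": k, "ranked": [n1, n2, ...]} -- pre-specified
                  slots per performance rank, reassigned to arms each block.
    Returns a shuffled list of arm indices for this block's subjects.
    """
    treatment_arms = [a for a in range(len(arm_means)) if a != control_arm]
    # Rank treatment arms best-first on the accumulating response data.
    ranked = sorted(treatment_arms, key=lambda a: arm_means[a], reverse=True)
    block = [control_arm] * block_ratios["control"]
    for arm, n_slots in zip(ranked, block_ratios["ranked"]):
        block += [arm] * n_slots
    rng.shuffle(block)  # randomize assignment order within the block
    return block

# Example: control plus three dose arms, blocks of 8 subjects.
rng = np.random.default_rng(0)
print(rabr_block(arm_means=[0.0, 0.4, 0.7, 0.5], control_arm=0,
                 block_ratios={"control": 2, "ranked": [3, 2, 1]}, rng=rng))
```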