
    Discrete Fourier analysis with lattices on planar domains

    A discrete Fourier analysis associated with translation lattices was recently developed by the authors. It permits two lattices, one determining the integral domain and the other determining the family of exponential functions. Possible choices of lattices are discussed in the case of lattices that tile $\mathbb{R}^2$, and several new results on cubature and interpolation by trigonometric, as well as algebraic, polynomials are obtained.
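
    For orientation, the orthogonality relation underlying such lattice-based Fourier analysis can be sketched as follows (a generic statement in our own notation; the paper's setting pairs two lattices rather than a lattice with its dual):

        % Let L = A\mathbb{Z}^2 be a lattice whose fundamental domain \Omega
        % tiles \mathbb{R}^2, and let L^\perp = A^{-\mathsf{T}}\mathbb{Z}^2 be
        % its dual lattice. The exponentials indexed by the dual lattice are
        % orthonormal over \Omega:
        \[
          \frac{1}{|\Omega|} \int_{\Omega} e^{2\pi i\, k \cdot x}\,
            \overline{e^{2\pi i\, j \cdot x}} \, dx = \delta_{k,j},
          \qquad k, j \in L^{\perp}.
        \]
        % Letting one lattice fix \Omega and a second lattice index the
        % exponentials is what yields the cubature and interpolation results.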

    Discrete Fourier analysis, Cubature and Interpolation on a Hexagon and a Triangle

    Several problems of trigonometric approximation on a hexagon and a triangle are studied using the discrete Fourier transform and orthogonal polynomials of two variables. A discrete Fourier analysis on the regular hexagon is developed in detail, from which the analysis on the triangle is deduced. The results include cubature formulas and interpolation on these domains. In particular, a trigonometric Lagrange interpolation on a triangle is shown to satisfy an explicit compact formula, which is equivalent to polynomial interpolation on a planar region bounded by Steiner's hypocycloid. The Lebesgue constant of the interpolation is shown to be of order $(\log n)^2$. Furthermore, a Gauss cubature is established on the hypocycloid.
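
    For context, the general shape of such an interpolation result can be written as follows (generic notation, ours, not the paper's compact formula):

        % Lagrange interpolation on nodes \{x_j\}: the fundamental functions
        % \ell_j satisfy \ell_j(x_k) = \delta_{j,k}, so that
        \[
          \mathcal{L}_n f(x) = \sum_{j} f(x_j)\, \ell_j(x),
          \qquad \mathcal{L}_n f(x_k) = f(x_k).
        \]
        % The Lebesgue constant bounds the interpolation error in terms of the
        % best approximation error E_n(f) from the same space:
        \[
          \|\mathcal{L}_n\| = \max_{x} \sum_{j} |\ell_j(x)| = O\big((\log n)^2\big),
          \qquad
          \|f - \mathcal{L}_n f\|_\infty \le \big(1 + \|\mathcal{L}_n\|\big)\, E_n(f).
        \]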

    Discrete Fourier Analysis and Chebyshev Polynomials with $G_2$ Group

    The discrete Fourier analysis on the $30^{\circ}$-$60^{\circ}$-$90^{\circ}$ triangle is deduced from the corresponding results on the regular hexagon by considering functions invariant under the group $G_2$, which leads to the definition of four families of generalized Chebyshev polynomials. The study of these polynomials leads to a Sturm-Liouville eigenvalue problem that contains two parameters, whose solutions are analogues of the Jacobi polynomials. Under a concept of $m$-degree and by introducing a new ordering among monomials, these polynomials are shown to share properties of the ordinary orthogonal polynomials. In particular, their common zeros generate cubature rules of Gauss type.
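
    For comparison, the classical one-variable instance of a two-parameter Sturm-Liouville problem is the Jacobi differential equation (standard material; the paper's two-variable operator is an analogue of this, not reproduced here):

        % Jacobi differential equation with parameters (\alpha, \beta):
        \[
          (1 - x^2)\, y'' + \big(\beta - \alpha - (\alpha + \beta + 2)\, x\big)\, y'
            + n\,(n + \alpha + \beta + 1)\, y = 0,
        \]
        % whose polynomial solutions are the Jacobi polynomials
        % P_n^{(\alpha,\beta)}(x), orthogonal on [-1, 1] with respect to the
        % weight (1 - x)^\alpha (1 + x)^\beta.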

    Adaptive Multimodal Fusion For Facial Action Units Recognition

    Multimodal facial action unit (AU) recognition aims to build models that are capable of processing, correlating, and integrating information from multiple modalities (i.e., 2D images from a visual sensor, 3D geometry from 3D imaging, and thermal images from an infrared sensor). Although multimodal data can provide rich information, two challenges have to be addressed when learning from it: 1) the model must capture the complex cross-modal interactions in order to exploit the additional and mutual information effectively; 2) the model must be robust to unexpected data corruption at test time, such as a modality being missing or noisy. In this paper, we propose a novel Adaptive Multimodal Fusion method (AMF) for AU detection, which learns to select the most relevant feature representations from different modalities via a re-sampling procedure conditioned on a feature scoring module. The feature scoring module is designed to evaluate the quality of features learned from multiple modalities. As a result, AMF is able to adaptively select more discriminative features, thus increasing robustness to missing or corrupted modalities. In addition, to alleviate over-fitting and improve generalization to the test data, a cut-switch multimodal data augmentation method is designed, in which a random block is cut and switched across multiple modalities. We have conducted a thorough investigation on two public multimodal AU datasets, BP4D and BP4D+, and the results demonstrate the effectiveness of the proposed method. Ablation studies under various conditions also show that our method remains robust to missing or noisy modalities during testing.
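
    A minimal sketch of the fusion idea in PyTorch follows (our own illustration, not the authors' implementation: the module names, the soft score-weighted fusion, which stands in for the paper's re-sampling procedure, and the cut-switch routine are assumptions):

        import torch
        import torch.nn as nn

        class ScoredFusion(nn.Module):
            """Hypothetical feature-scoring fusion: each modality's feature
            vector receives a scalar quality score, and modalities are fused
            with score-derived weights."""
            def __init__(self, feat_dim: int):
                super().__init__()
                self.scorer = nn.Linear(feat_dim, 1)  # one score per modality

            def forward(self, feats: torch.Tensor) -> torch.Tensor:
                # feats: (batch, num_modalities, feat_dim)
                scores = self.scorer(feats).squeeze(-1)        # (batch, M)
                weights = torch.softmax(scores, dim=-1)        # favor reliable modalities
                return (weights.unsqueeze(-1) * feats).sum(1)  # (batch, feat_dim)

        def cut_switch(a: torch.Tensor, b: torch.Tensor, size: int = 8):
            """Hypothetical cut-switch augmentation: swap one random spatial
            block between two modality tensors of shape (batch, C, H, W)."""
            _, _, h, w = a.shape
            y = torch.randint(0, h - size + 1, (1,)).item()
            x = torch.randint(0, w - size + 1, (1,)).item()
            a, b = a.clone(), b.clone()
            patch = a[:, :, y:y + size, x:x + size].clone()
            a[:, :, y:y + size, x:x + size] = b[:, :, y:y + size, x:x + size]
            b[:, :, y:y + size, x:x + size] = patch
            return a, b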

    Enhancing Transformers without Self-supervised Learning: A Loss Landscape Perspective in Sequential Recommendation

    Transformers and their variants are a powerful class of architectures for sequential recommendation, owing to their ability to capture a user's dynamic interests from past interactions. Despite their success, Transformer-based models often require the optimization of a large number of parameters, making them difficult to train from the sparse data of sequential recommendation. To address data sparsity, previous studies have used self-supervised learning to enhance Transformers, such as pre-training embeddings from item attributes or contrastive data augmentations. However, these approaches suffer from several training issues, including initialization sensitivity, manual data augmentations, and large-batch memory bottlenecks. In this work, we investigate Transformers from the perspective of loss geometry, aiming to improve the models' data efficiency and generalization in sequential recommendation. We observe that Transformers (e.g., SASRec) can converge to extremely sharp local minima if not adequately regularized. Inspired by the recent Sharpness-Aware Minimization (SAM), we propose SAMRec, which significantly improves the accuracy and robustness of sequential recommendation. SAMRec performs comparably to state-of-the-art self-supervised Transformers, such as $S^3$Rec and CL4SRec, without the need for pre-training or strong data augmentations.
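
    A minimal sketch of a SAM-style update in PyTorch (our own illustration of the generic two-step SAM procedure, not the SAMRec code; model, loss_fn, inputs, targets, and rho are assumed placeholders):

        import torch

        def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho: float = 0.05):
            """One generic Sharpness-Aware Minimization step: ascend to the
            worst-case nearby weights, take the gradient there, and apply it
            at the original weights."""
            # First pass: gradient at the current weights.
            loss_fn(model(inputs), targets).backward()

            # Perturb each parameter along the normalized gradient direction.
            grads = [p.grad for p in model.parameters() if p.grad is not None]
            grad_norm = torch.norm(torch.stack([g.norm(2) for g in grads]), 2)
            eps = []
            with torch.no_grad():
                for p in model.parameters():
                    if p.grad is None:
                        eps.append(None)
                        continue
                    e = rho * p.grad / (grad_norm + 1e-12)
                    p.add_(e)
                    eps.append(e)
            model.zero_grad()

            # Second pass: gradient at the perturbed weights.
            loss_fn(model(inputs), targets).backward()

            # Restore the original weights, then step with the
            # sharpness-aware gradient.
            with torch.no_grad():
                for p, e in zip(model.parameters(), eps):
                    if e is not None:
                        p.sub_(e)
            base_optimizer.step()
            base_optimizer.zero_grad()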