
    Classification of derivation-simple color algebras related to locally finite derivations

    We classify the pairs (A, D) consisting of an (ε, Γ)-color-commutative associative algebra A with an identity element over an algebraically closed field F of characteristic zero and a finite-dimensional subspace D of (ε, Γ)-color-commutative locally finite color-derivations of A such that A is Γ-graded D-simple and the eigenspaces for elements of D are Γ-graded. Such pairs are the essential ingredients in constructing certain simple Lie color algebras that are, in general, not finitely graded. As applications, using such pairs, we construct new explicit simple Lie color algebras of generalized Witt type and of Weyl type. Comment: 15 pages
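
    As a reading aid (not part of the abstract itself), the color-commutativity and graded D-simplicity conditions can be spelled out as follows, under the standard conventions for color algebras, where ε: Γ × Γ → F^× is the bicharacter attached to the grading group Γ:

```latex
% Reading aid, assuming the standard conventions for color algebras:
% \Gamma an abelian group, \epsilon\colon\Gamma\times\Gamma\to F^{\times} a bicharacter.
\[
  A=\bigoplus_{g\in\Gamma}A_g,\qquad
  ab=\epsilon(g,h)\,ba\quad\text{for all }a\in A_g,\ b\in A_h
  \quad\text{(color-commutativity)},
\]
\[
  A\ \text{is }\Gamma\text{-graded }D\text{-simple}\iff
  \text{the only }\Gamma\text{-graded ideals }I\subseteq A\text{ with }D(I)\subseteq I
  \text{ are }I=0\text{ and }I=A.
\]
```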

    An Unified Search and Recommendation Foundation Model for Cold-Start Scenario

    In modern commercial search engines and recommendation systems, data from multiple domains is available to jointly train a multi-domain model. Traditional methods train multi-domain models in a multi-task setting, with shared parameters learning the similarity among tasks and task-specific parameters learning the divergence of features, labels, and sample distributions of individual tasks. With the development of large language models, an LLM can extract global, domain-invariant text features that serve both search and recommendation tasks. We propose a novel framework called S&R Multi-Domain Foundation, which uses an LLM to extract domain-invariant features, and Aspect Gating Fusion to merge the ID feature, the domain-invariant text features, and the task-specific heterogeneous sparse features to obtain the representations of queries and items. Additionally, samples from multiple search and recommendation scenarios are trained jointly with a Domain Adaptive Multi-Task module to obtain the multi-domain foundation model. We apply the S&R Multi-Domain Foundation model to cold-start scenarios in a pretrain-finetune manner and achieve better performance than other SOTA transfer-learning methods. The S&R Multi-Domain Foundation model has been successfully deployed in the Alipay mobile application's online services, such as content query recommendation and service card recommendation. Comment: CIKM 2023, 6 pages
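
    To make the fusion step concrete, the following is a minimal, hedged sketch of the Aspect Gating Fusion idea described above: a learned gate weighs the ID embedding, the LLM-derived domain-invariant text feature, and the task-specific sparse feature before combining them. Module and argument names are illustrative assumptions, not the paper's released code.

```python
# Illustrative sketch only; names and shapes are assumptions, not the authors' code.
import torch
import torch.nn as nn

class AspectGatingFusion(nn.Module):
    def __init__(self, dim: int, num_aspects: int = 3):
        super().__init__()
        # Gate that assigns one weight per aspect (ID, text, sparse).
        self.gate = nn.Sequential(
            nn.Linear(num_aspects * dim, num_aspects),
            nn.Softmax(dim=-1),
        )

    def forward(self, id_feat, text_feat, sparse_feat):
        # aspects: [batch, num_aspects, dim]
        aspects = torch.stack([id_feat, text_feat, sparse_feat], dim=1)
        weights = self.gate(aspects.flatten(1))           # [batch, num_aspects]
        return (weights.unsqueeze(-1) * aspects).sum(1)   # fused: [batch, dim]
```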

    Simple algebras of Weyl type

    Over a field F of any characteristic, for a commutative associative algebra A with an identity element and for the polynomial algebra F[D] of a commutative derivation subalgebra D of A, the associative and Lie algebras of Weyl type on the same vector space A[D] = A ⊗ F[D] are defined. It is proved that A[D], as a Lie algebra (modulo its center) or as an associative algebra, is simple if and only if A is D-simple and A[D] acts faithfully on A. Thus a large family of simple algebras is obtained. Comment: 9 pages, LaTeX
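
    For orientation, here is an illustrative special case (not taken verbatim from the abstract): when D is spanned by a single derivation ∂, the associative product of Weyl type and the accompanying Lie bracket on A[D] take the familiar form

```latex
% Rank-one illustration (D = F\partial); the paper treats a general commutative
% derivation subalgebra D in the same spirit.
\[
  (a\,\partial^{m})\cdot(b\,\partial^{n})
  =\sum_{i\ge 0}\binom{m}{i}\,a\,\partial^{i}(b)\,\partial^{\,m+n-i},
  \qquad a,b\in A,\ m,n\ge 0,
\]
\[
  [x,y]=x\cdot y-y\cdot x,\qquad x,y\in A[D].
\]
```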

    OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding

    We introduce OpenShape, a method for learning multi-modal joint representations of text, image, and point clouds. We adopt the commonly used multi-modal contrastive learning framework for representation alignment, but with a specific focus on scaling up 3D representations to enable open-world 3D shape understanding. To achieve this, we scale up training data by ensembling multiple 3D datasets and propose several strategies to automatically filter and enrich noisy text descriptions. We also explore and compare strategies for scaling 3D backbone networks and introduce a novel hard negative mining module for more efficient training. We evaluate OpenShape on zero-shot 3D classification benchmarks and demonstrate its superior capabilities for open-world recognition. Specifically, OpenShape achieves a zero-shot accuracy of 46.8% on the 1,156-category Objaverse-LVIS benchmark, compared to less than 10% for existing methods. OpenShape also achieves an accuracy of 85.3% on ModelNet40, outperforming previous zero-shot baseline methods by 20% and performing on par with some fully-supervised methods. Furthermore, we show that our learned embeddings encode a wide range of visual and semantic concepts (e.g., subcategories, color, shape, style) and facilitate fine-grained text-3D and image-3D interactions. Due to their alignment with CLIP embeddings, our learned shape representations can also be integrated with off-the-shelf CLIP-based models for various applications, such as point cloud captioning and point cloud-conditioned image generation. Comment: Project Website: https://colin97.github.io/OpenShape
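
    To make the alignment objective concrete, the following is a minimal sketch of a CLIP-style symmetric contrastive loss between 3D shape embeddings and frozen CLIP text or image embeddings. Function and variable names are illustrative; details such as the temperature and the hard negative mining module follow the paper rather than this snippet.

```python
# Minimal sketch of CLIP-style contrastive alignment; illustrative, not the
# authors' implementation.
import torch
import torch.nn.functional as F

def clip_style_alignment_loss(shape_emb, clip_emb, temperature=0.07):
    """Symmetric InfoNCE between a batch of 3D-shape embeddings and the
    corresponding CLIP (text or image) embeddings of the same objects."""
    shape_emb = F.normalize(shape_emb, dim=-1)
    clip_emb = F.normalize(clip_emb, dim=-1)
    logits = shape_emb @ clip_emb.t() / temperature            # [B, B]
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```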

    Characterization of multi-wavelength polarized light transmission in the real sea spray environment

    Sea spray particles form a non-uniform, non-spherical, anisotropic, and complex medium, and studying the transmission characteristics of polarized light in a real sea spray environment provides reference value for many fields, such as polarization imaging, marine target detection, and LiDAR, and helps fill the gap in studies of polarized light transmission in complex sea spray environments. In this paper, a real sea fog test is carried out in the Qingdao sea area of China along horizontal and oblique paths, and a platform for generating and detecting polarized light at multiple tilt angles is constructed using an active test method; this realizes a test scheme for measuring the energy and polarization-state changes of linearly and circularly polarized light at different visibility levels in sea fog environments. The results show that circularly polarized light is harder to depolarize than linearly polarized light at the same sea spray visibility level. As the tilt angle increases, the degree of polarization decreases. The degree of polarization of near-infrared light is always larger than that of visible light. Together, these results indicate that circularly polarized light preserves polarization better than linearly polarized light, and that near-infrared light preserves polarization better than visible light.
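
    As background, the degree-of-polarization quantities discussed above are conventionally computed from Stokes parameters. The snippet below states that standard formalism for reference; it is an assumption about common practice, not a description of the authors' measurement pipeline.

```python
# Standard Stokes-parameter formulas for degrees of polarization; background only.
import numpy as np

def degrees_of_polarization(S):
    """S is a Stokes vector [S0, S1, S2, S3]."""
    S0, S1, S2, S3 = np.asarray(S, dtype=float)
    dop  = np.sqrt(S1**2 + S2**2 + S3**2) / S0   # total degree of polarization
    dolp = np.sqrt(S1**2 + S2**2) / S0           # degree of linear polarization
    docp = abs(S3) / S0                          # degree of circular polarization
    return dop, dolp, docp
```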

    Dynamic Context-guided Capsule Network for Multimodal Machine Translation

    Multimodal machine translation (MMT), which mainly focuses on enhancing text-only translation with visual features, has attracted considerable attention from both the computer vision and natural language processing communities. Most current MMT models resort to attention mechanisms, global context modeling, or multimodal joint representation learning to utilize visual features. However, the attention mechanism lacks sufficient semantic interaction between modalities, while the other two provide a fixed visual context, which is unsuitable for modeling the variability observed when generating translations. To address these issues, in this paper we propose a novel Dynamic Context-guided Capsule Network (DCCN) for MMT. Specifically, at each decoding timestep, we first employ conventional source-target attention to produce a timestep-specific source-side context vector. Next, DCCN takes this vector as input and uses it to guide the iterative extraction of related visual features via a context-guided dynamic routing mechanism. In particular, we represent the input image with global and regional visual features and introduce two parallel DCCNs to model multimodal context vectors with visual features at different granularities. Finally, we obtain two multimodal context vectors, which are fused and incorporated into the decoder for the prediction of the target word. Experimental results on the Multi30K dataset for English-to-German and English-to-French translation demonstrate the superiority of DCCN. Our code is available at https://github.com/DeepLearnXMU/MM-DCCN.
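
    As an illustration of how a source-side context vector can steer capsule routing, here is a simplified sketch. The tensor shapes, squash function, and agreement update are standard dynamic-routing choices assumed for exposition, not the authors' released implementation (see the repository linked above for that).

```python
# Simplified, illustrative context-guided dynamic routing; not the authors' code.
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    norm2 = (s * s).sum(dim, keepdim=True)
    return norm2 / (1.0 + norm2) * s / (norm2.sqrt() + eps)

def context_guided_routing(u_hat, context, num_iters=3):
    # u_hat:   [B, n_in, n_out, d]  prediction vectors from visual features
    # context: [B, d]               source-side context at this decoding step
    B, n_in, n_out, d = u_hat.shape
    b = torch.zeros(B, n_in, n_out, device=u_hat.device)
    for _ in range(num_iters):
        c = F.softmax(b, dim=2)                         # routing weights
        v = squash((c.unsqueeze(-1) * u_hat).sum(1))    # output capsules [B, n_out, d]
        # Agreement with the output capsules plus agreement with the decoding
        # context, so context-consistent visual capsules get larger weights.
        b = b + (u_hat * v.unsqueeze(1)).sum(-1) \
              + (u_hat * context.view(B, 1, 1, d)).sum(-1)
    return v
```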