
    A miniature short stroke tubular linear actuator and its control

    Full text link
    Miniature actuators are critical components in robotic applications that demand high intelligence, high mobility and small scale. Among the various types of actuators, linear actuators show advantages in many respects. This paper presents a miniature short-stroke permanent-magnet (PM) tubular linear actuator for micro-robotic applications. The actuator is designed for optimal force capability, and a suitable sensorless control scheme is developed to drive it. Experiments on both the actuator prototype and the drive system validate the design.

    Binomial coefficients, Catalan numbers and Lucas quotients

    Full text link
    Let $p$ be an odd prime and let $a,m$ be integers with $a>0$ and $m\not\equiv 0\pmod p$. In this paper we determine $\sum_{k=0}^{p^a-1}\binom{2k}{k+d}/m^k \bmod p^2$ for $d=0,1$; for example,
    $$\sum_{k=0}^{p^a-1}\frac{\binom{2k}{k}}{m^k}\equiv\left(\frac{m^2-4m}{p^a}\right)+\left(\frac{m^2-4m}{p^{a-1}}\right)u_{p-\left(\frac{m^2-4m}{p}\right)}\pmod{p^2},$$
    where $(-)$ is the Jacobi symbol and $\{u_n\}_{n\geqslant 0}$ is the Lucas sequence given by $u_0=0$, $u_1=1$ and $u_{n+1}=(m-2)u_n-u_{n-1}$ for $n=1,2,3,\ldots$. As an application, we determine $\sum_{0<k<p^a,\ k\equiv r\pmod{p-1}}C_k$ modulo $p^2$ for any integer $r$, where $C_k$ denotes the Catalan number $\binom{2k}{k}/(k+1)$. We also pose some related conjectures.
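    As a quick illustration (not part of the paper), the displayed congruence can be checked numerically for small parameters. The sketch below computes both sides modulo $p^2$ in Python, using SymPy's Jacobi symbol and the Lucas sequence $u_n$ exactly as defined above; the function and variable names are ours.

        # Hedged numerical check of the congruence above; p, a, m and u_n follow the
        # definitions in the abstract, everything else is illustrative.
        from sympy import jacobi_symbol

        def lucas_u(n, m, mod):
            # u_0 = 0, u_1 = 1, u_{k+1} = (m - 2) u_k - u_{k-1}
            if n == 0:
                return 0
            prev, cur = 0, 1
            for _ in range(n - 1):
                prev, cur = cur, ((m - 2) * cur - prev) % mod
            return cur % mod

        def check(p, a, m):
            mod = p * p
            inv_m = pow(m, -1, mod)          # m is invertible mod p^2 since p does not divide m
            lhs, term, binom = 0, 1, 1       # term = m^{-k} mod p^2, binom = C(2k, k)
            for k in range(p ** a):
                lhs = (lhs + binom * term) % mod
                # C(2(k+1), k+1) = C(2k, k) * (2k+1)(2k+2) / (k+1)^2
                binom = binom * (2 * k + 1) * (2 * k + 2) // ((k + 1) ** 2)
                term = term * inv_m % mod
            D = m * m - 4 * m
            rhs = (jacobi_symbol(D, p ** a)
                   + jacobi_symbol(D, p ** (a - 1)) * lucas_u(p - jacobi_symbol(D, p), m, mod)) % mod
            return lhs == rhs

        # example: print(check(7, 1, 6))  # expected True by the congruence above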

    Multi-view Contrastive Learning for Entity Typing over Knowledge Graphs

    Full text link
    Knowledge graph entity typing (KGET) aims at inferring plausible types of entities in knowledge graphs. Existing approaches to KGET focus on how to better encode the knowledge provided by the neighbors and types of an entity into its representation. However, they ignore the semantic knowledge provided by the way in which types can be clustered together. In this paper, we propose a novel method called Multi-view Contrastive Learning for knowledge graph Entity Typing (MCLET), which effectively encodes the coarse-grained knowledge provided by clusters into entity and type embeddings. MCLET is composed of three modules: i) Multi-view Generation and Encoder module, which encodes structured information from entity-type, entity-cluster and cluster-type views; ii) Cross-view Contrastive Learning module, which encourages different views to collaboratively improve view-specific representations of entities and types; iii) Entity Typing Prediction module, which integrates multi-head attention and a Mixture-of-Experts strategy to infer missing entity types. Extensive experiments show the strong performance of MCLET compared to the state-of-the-art.
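    A minimal sketch of what a cross-view contrastive objective of this kind can look like is given below; it is an InfoNCE-style illustration in PyTorch, not MCLET's exact loss, and the tensor names and temperature value are our assumptions.

        # Illustrative cross-view contrastive loss (InfoNCE-style), not the paper's exact formulation.
        import torch
        import torch.nn.functional as F

        def cross_view_contrastive(view_a, view_b, tau=0.1):
            """view_a, view_b: [n_entities, dim] embeddings of the same entities produced
            by two different views (e.g. entity-type and entity-cluster)."""
            a = F.normalize(view_a, dim=-1)
            b = F.normalize(view_b, dim=-1)
            logits = a @ b.t() / tau                  # pairwise cosine similarities
            targets = torch.arange(a.size(0))         # the i-th entity should match itself across views
            # symmetric loss over both matching directions
            return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))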

    HyperFormer: Enhancing entity and relation interaction for hyper-relational knowledge graph completion

    Get PDF
    Hyper-relational knowledge graphs (HKGs) extend standard knowledge graphs by associating attribute-value qualifiers with triples, which effectively represent additional fine-grained information about their associated triple. Hyper-relational knowledge graph completion (HKGC) aims at inferring unknown triples while considering their qualifiers. Most existing approaches to HKGC exploit a global-level graph structure to encode hyper-relational knowledge into the graph convolution message passing process. However, the addition of multi-hop information might bring noise into the triple prediction process. To address this problem, we propose HyperFormer, a model that considers local-level sequential information and encodes the content of the entities, relations and qualifiers of a triple. More precisely, HyperFormer is composed of three different modules: an entity neighbor aggregator module that integrates the information of an entity's neighbors to capture different perspectives of it; a relation qualifier aggregator module that integrates hyper-relational knowledge into the corresponding relation to refine the representation of relational content; and a bidirectional interaction module based on a convolutional operation, capturing pairwise bidirectional interactions of entity-relation, entity-qualifier, and relation-qualifier pairs. Furthermore, we introduce a Mixture-of-Experts strategy into the feed-forward layers of HyperFormer to strengthen its representation capabilities while reducing the number of model parameters and the computation. Extensive experiments on three well-known datasets under four different conditions demonstrate HyperFormer's effectiveness.

    HyperFormer: Enhancing Entity and Relation Interaction for Hyper-Relational Knowledge Graph Completion

    Full text link
    Hyper-relational knowledge graphs (HKGs) extend standard knowledge graphs by associating attribute-value qualifiers with triples, which effectively represent additional fine-grained information about their associated triple. Hyper-relational knowledge graph completion (HKGC) aims at inferring unknown triples while considering their qualifiers. Most existing approaches to HKGC exploit a global-level graph structure to encode hyper-relational knowledge into the graph convolution message passing process. However, the addition of multi-hop information might bring noise into the triple prediction process. To address this problem, we propose HyperFormer, a model that considers local-level sequential information and encodes the content of the entities, relations and qualifiers of a triple. More precisely, HyperFormer is composed of three different modules: an entity neighbor aggregator module that integrates the information of an entity's neighbors to capture different perspectives of it; a relation qualifier aggregator module that integrates hyper-relational knowledge into the corresponding relation to refine the representation of relational content; and a bidirectional interaction module based on a convolutional operation, capturing pairwise bidirectional interactions of entity-relation, entity-qualifier, and relation-qualifier pairs, so as to obtain a deeper perception of the content related to the current statement. Furthermore, we introduce a Mixture-of-Experts strategy into the feed-forward layers of HyperFormer to strengthen its representation capabilities while reducing the number of model parameters and the computation. Extensive experiments on three well-known datasets under four different conditions demonstrate HyperFormer's effectiveness. Datasets and code are available at https://github.com/zhiweihu1103/HKGC-HyperFormer.
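    To make the Mixture-of-Experts idea concrete, here is a minimal PyTorch sketch of a MoE feed-forward layer of the kind that can be plugged into a Transformer block; the expert count, hidden size and top-1 routing are illustrative assumptions, not HyperFormer's actual configuration.

        # Illustrative Mixture-of-Experts feed-forward layer (sketch, assumed sizes and routing).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MoEFeedForward(nn.Module):
            def __init__(self, dim=256, hidden=512, n_experts=4, k=1):
                super().__init__()
                self.k = k
                self.gate = nn.Linear(dim, n_experts)
                self.experts = nn.ModuleList(
                    nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
                    for _ in range(n_experts)
                )

            def forward(self, x):                                    # x: [batch, seq, dim]
                scores = F.softmax(self.gate(x), dim=-1)             # routing probabilities per token
                topv, topi = scores.topk(self.k, dim=-1)             # keep the k best experts per token
                out = torch.zeros_like(x)
                for e, expert in enumerate(self.experts):
                    mask = (topi == e).any(dim=-1)                   # tokens routed to expert e
                    if mask.any():
                        weight = (topv * (topi == e)).sum(dim=-1)[mask].unsqueeze(-1)
                        out[mask] = out[mask] + weight * expert(x[mask])
                return out

    With top-1 routing, each token only activates one expert, which is how such layers keep the per-token computation low while the total parameter pool grows with the number of experts.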

    Type-aware Embeddings for Multi-Hop Reasoning over Knowledge Graphs

    Get PDF

    Special Issue on “Advances in Condition Monitoring, Optimization and Control for Complex Industrial Processes”

    Get PDF
    Complex industrial automation systems and processes, such as chemical processes, manufacturing systems, wireless network systems, power and energy systems, smart grids and so forth, have greatly contributed to our daily life. Complex engineering systems are rather expensive, with high requirements for system reliability, control and production performance…

    Face liveness detection by exploring multiple scenic clues

    Full text link
    Liveness detection is an indispensable guarantee for reliable face recognition and has recently received enormous attention. In this paper we propose three scenic clues, namely non-rigid motion, face-background consistency and image banding effect, to conduct accurate and efficient face liveness detection. The non-rigid motion clue captures facial motions that a genuine face can exhibit, such as blinking, and a low-rank matrix decomposition based image alignment approach is designed to extract this non-rigid motion. The face-background consistency clue builds on the observation that the motion of face and background is highly consistent for fake facial photos but not for genuine faces; this consistency serves as an efficient liveness cue and is measured with a GMM-based motion detection method. The image banding effect reflects the imaging quality defects introduced when a fake face is reproduced, and it can be detected by wavelet decomposition. By fusing these three clues, we obtain sufficient evidence for liveness detection. The proposed face liveness detection method achieves 100% accuracy on the Idiap print-attack database and the best performance on a self-collected face anti-spoofing database.
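    As a rough illustration of the face-background consistency clue, the sketch below uses OpenCV's GMM-based background subtractor to compare motion inside and outside the face region; the correlation measure and the face_box input are our assumptions, not the authors' exact pipeline.

        # Sketch: face-background motion consistency via a GMM background model (illustrative only).
        import cv2
        import numpy as np

        def face_background_consistency(frames, face_box):
            """frames: list of grayscale images; face_box: (x, y, w, h) of the detected face."""
            x, y, w, h = face_box
            subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
            face_motion, bg_motion = [], []
            for frame in frames:
                mask = subtractor.apply(frame)                 # per-pixel foreground mask from the GMM
                face = mask[y:y + h, x:x + w]
                bg = mask.copy()
                bg[y:y + h, x:x + w] = 0                       # zero out the face region
                face_motion.append(face.mean())
                bg_motion.append(bg.mean())
            # For a printed photo the whole plane moves together, so the two motion signals
            # are strongly correlated; a live face moves largely independently of the background.
            return float(np.corrcoef(face_motion, bg_motion)[0, 1])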

    Transformer-based entity typing in knowledge graphs

    Get PDF
    We investigate the knowledge graph entity typing task, which aims at inferring plausible entity types. In this paper, we propose a novel Transformer-based Entity Typing (TET) approach that effectively encodes the content of an entity's neighbours by means of a transformer mechanism. More precisely, TET is composed of three different mechanisms: a local transformer that infers missing entity types by independently encoding the information provided by each neighbour; a global transformer that aggregates the information of all neighbours of an entity into a single long sequence to reason about more complex entity types; and a context transformer that integrates neighbour content in a differentiated way through information exchange between neighbour pairs, while preserving the graph structure. Furthermore, TET uses information about the class membership of types to semantically strengthen the representation of an entity. Experiments on two real-world datasets demonstrate the superior performance of TET compared to the state-of-the-art.
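    The sketch below gives the flavour of the global transformer: all neighbour tokens of an entity are flattened into one sequence and encoded jointly, with a pooling token feeding a type classifier. The embedding sizes, pooling token and classifier head are our assumptions, not TET's exact architecture.

        # Illustrative global neighbour encoder (PyTorch sketch, assumed shapes and heads).
        import torch
        import torch.nn as nn

        class GlobalNeighbourEncoder(nn.Module):
            def __init__(self, n_tokens, n_types, dim=128, heads=4, layers=2):
                super().__init__()
                self.embed = nn.Embedding(n_tokens, dim)          # shared vocabulary of entities/relations
                self.cls = nn.Parameter(torch.randn(1, 1, dim))   # pooling token prepended to the sequence
                layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
                self.type_head = nn.Linear(dim, n_types)

            def forward(self, neighbour_ids):                     # [batch, seq] token ids of all neighbours
                x = self.embed(neighbour_ids)
                x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1)
                h = self.encoder(x)[:, 0]                         # representation of the pooling token
                return self.type_head(h)                          # one logit per candidate type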

    Leveraging intra-modal and inter-modal interaction for multi-modal entity alignment

    Get PDF
    Multi-modal entity alignment (MMEA) aims to identify equivalent entity pairs across different multi-modal knowledge graphs (MMKGs). Existing approaches focus on how to better encode and aggregate information from different modalities. However, it is not trivial to leverage multi-modal knowledge in entity alignment due to the modal heterogeneity. In this paper, we propose a Multi-Grained Interaction framework for Multi-Modal Entity Alignment (MIMEA), which effectively realizes multi-granular interaction within the same modality or between different modalities. MIMEA is composed of four modules: i) a Multi-modal Knowledge Embedding module, which extracts modality-specific representations with multiple individual encoders; ii) a Probability-guided Modal Fusion module, which employs a probability-guided approach to integrate uni-modal representations into joint-modal embeddings, while considering the interaction between uni-modal representations; iii) an Optimal Transport Modal Alignment module, which introduces an optimal transport mechanism to encourage the interaction between uni-modal and joint-modal embeddings; iv) a Modal-adaptive Contrastive Learning module, which distinguishes the embeddings of equivalent entities from those of non-equivalent ones, for each modality. Extensive experiments conducted on two real-world datasets demonstrate the strong performance of MIMEA compared to the SoTA. Datasets and code have been submitted as supplementary materials.
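    The sketch below shows one common way to realize an optimal transport coupling between uni-modal and joint-modal embeddings, via entropy-regularized Sinkhorn iterations; the cost function, regularization strength and iteration count are assumptions rather than MIMEA's exact formulation.

        # Illustrative Sinkhorn coupling between two embedding sets (sketch, assumed hyper-parameters).
        import torch

        def sinkhorn_coupling(uni, joint, epsilon=0.05, n_iters=50):
            """uni: [n, d] uni-modal embeddings; joint: [m, d] joint-modal embeddings."""
            cost = torch.cdist(uni, joint, p=2)             # pairwise transport cost
            K = torch.exp(-cost / epsilon)                   # Gibbs kernel
            a = torch.full((uni.size(0),), 1.0 / uni.size(0))
            b = torch.full((joint.size(0),), 1.0 / joint.size(0))
            u = torch.ones_like(a)
            for _ in range(n_iters):                         # alternating marginal scaling
                v = b / (K.t() @ u)
                u = a / (K @ v)
            # transport plan; mass placed on matching (entity, entity) pairs can be encouraged
            # by an alignment loss to pull uni-modal and joint-modal embeddings together
            return u.unsqueeze(1) * K * v.unsqueeze(0)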