
    RulE: Neural-Symbolic Knowledge Graph Reasoning with Rule Embedding

    Knowledge graph (KG) reasoning predicts missing links by reasoning over existing facts. Knowledge graph embedding (KGE) is one of the most popular approaches to this problem: it embeds entities and relations into low-dimensional vectors and uses the learned embeddings to predict missing facts. However, KGE only uses zeroth-order (propositional) logic to encode existing triplets (e.g., "Alice is Bob's wife"); it cannot leverage first-order (predicate) logic to represent generally applicable logical rules (e.g., $\forall x, y\colon x \text{ is } y\text{'s wife} \rightarrow y \text{ is } x\text{'s husband}$). On the other hand, traditional rule-based KG reasoning methods usually rely on hard logical rule inference, which makes them brittle and hardly competitive with KGE. In this paper, we propose RulE, a novel and principled framework to represent and model logical rules and triplets. RulE jointly represents entities, relations and logical rules in a unified embedding space. By learning an embedding for each logical rule, RulE can perform logical rule inference in a soft way and assign a confidence score to each grounded rule, similar to how KGE assigns each triplet a confidence score. Compared to KGE alone, RulE allows injecting prior logical rule information into the embedding space, which improves the generalization of knowledge graph embedding. In addition, the learned rule confidence scores improve logical rule inference by softly controlling the contribution of each rule, which alleviates the brittleness of hard logic. We evaluate our method on link prediction tasks. Experimental results on multiple benchmark KGs demonstrate the effectiveness of RulE.
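
    The following minimal sketch (not the authors' implementation; the embeddings, dimensions, and scoring functions are illustrative assumptions) shows the core idea: entities, relations, and rules share one embedding space, each triplet gets a KGE-style plausibility score, and each grounded rule gets a soft confidence instead of being applied as a hard constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

# Entities, relations and logical rules all receive embeddings in one space.
entity_emb   = {e: rng.normal(size=DIM) for e in ["Alice", "Bob"]}
relation_emb = {r: rng.normal(size=DIM) for r in ["wife_of", "husband_of"]}
rule_emb     = {"wife->husband": rng.normal(size=DIM)}   # wife_of(x, y) -> husband_of(y, x)

def triplet_score(h, r, t):
    """TransE-style plausibility score of a triplet (higher = more plausible)."""
    return -np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

def rule_confidence(rule, body_relations, head_relation):
    """Soft confidence of a grounded rule: how well the composed body relations,
    minus the head relation, agree with the rule's own embedding."""
    composed = sum(relation_emb[r] for r in body_relations) - relation_emb[head_relation]
    return 1.0 / (1.0 + np.linalg.norm(composed - rule_emb[rule]))

# A grounded rule contributes its soft confidence to inferred triplets rather
# than being enforced as a hard logical constraint.
print(triplet_score("Alice", "wife_of", "Bob"))
print(rule_confidence("wife->husband", ["wife_of"], "husband_of"))
```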

    Text-to-3D with Classifier Score Distillation

    Text-to-3D generation has made remarkable progress recently, particularly with methods based on Score Distillation Sampling (SDS) that leverage pre-trained 2D diffusion models. While classifier-free guidance is widely acknowledged to be crucial for successful optimization, it is usually treated as an auxiliary trick rather than the most essential component. In this paper, we re-evaluate the role of classifier-free guidance in score distillation and arrive at a surprising finding: the guidance term alone is enough for effective text-to-3D generation. We name this method Classifier Score Distillation (CSD), which can be interpreted as using an implicit classification model for generation. This new perspective offers insights for understanding existing techniques. We validate the effectiveness of CSD across a variety of text-to-3D tasks, including shape generation, texture synthesis, and shape editing, achieving results superior to those of state-of-the-art methods. Our project page is https://xinyu-andy.github.io/Classifier-Score-Distillation
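
    As a rough illustration of the distinction (not the authors' code; the noise predictor below is a stand-in and the diffusion step is simplified), a standard SDS update uses the guided noise prediction minus the injected noise, whereas a CSD-style update keeps only the classifier-free-guidance difference between conditional and unconditional predictions.

```python
import torch

def unet(noisy_img, t, cond):
    # Placeholder for a pretrained 2D diffusion model's noise predictor
    # (an assumption; a real pipeline would call the actual U-Net here).
    torch.manual_seed(int(t) + (0 if cond is None else 1))
    return torch.randn_like(noisy_img)

def sds_and_csd_grads(rendered, text_emb, t, guidance_scale=7.5):
    noise = torch.randn_like(rendered)
    noisy = rendered + noise                      # simplified forward diffusion step
    eps_text = unet(noisy, t, text_emb)           # text-conditional prediction
    eps_uncond = unet(noisy, t, None)             # unconditional prediction

    # Standard SDS: guided prediction minus the injected noise.
    eps_guided = eps_uncond + guidance_scale * (eps_text - eps_uncond)
    sds_grad = eps_guided - noise

    # CSD: the classifier-free-guidance term alone supplies the update direction.
    csd_grad = eps_text - eps_uncond
    return sds_grad, csd_grad

rendered = torch.zeros(1, 3, 64, 64)              # image rendered from the 3D representation
text_emb = torch.zeros(1, 77, 768)                # dummy text embedding (assumption)
sds_grad, csd_grad = sds_and_csd_grads(rendered, text_emb, t=500)
print(sds_grad.shape, csd_grad.shape)
```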

    Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners

    The emergent few-shot reasoning capabilities of Large Language Models (LLMs) have excited the natural language processing and machine learning communities in recent years. Despite numerous successful applications, the underlying mechanism of these in-context capabilities remains unclear. In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process. Unlike the human symbolic reasoning process, the semantic representations of LLMs can create strong connections among tokens, composing a superficial logical chain. To test our hypothesis, we decouple semantics from the language reasoning process and evaluate three kinds of reasoning abilities: deduction, induction and abduction. Our findings reveal that semantics play a vital role in LLMs' in-context reasoning: LLMs perform significantly better when semantics are consistent with commonsense, but struggle to solve symbolic or counter-commonsense reasoning tasks by leveraging new in-context knowledge. These surprising observations call into question whether modern LLMs have mastered inductive, deductive and abductive reasoning in the way humans have, and motivate research on uncovering what happens inside black-box LLMs. Overall, our analysis provides a novel perspective on the role of semantics in developing and evaluating language models' reasoning abilities. Code is available at https://github.com/XiaojuanTang/ICSR
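
    One simple way to picture the decoupling (an illustration only; the paper's actual evaluation protocol lives in the linked repository, and the prompt and symbol map below are assumptions) is to take a deduction query and replace its meaningful entities and predicates with arbitrary symbols, so the logical structure survives but the commonsense semantics do not.

```python
# Build a semantically meaningful deduction prompt and a de-semanticized twin.
SEMANTIC_PROMPT = (
    "Rule: if x is y's wife then y is x's husband.\n"
    "Fact: Alice is Bob's wife.\n"
    "Question: what relation holds from Bob to Alice?"
)

def desemanticize(prompt, mapping):
    """Replace natural-language predicates and entities with abstract symbols,
    keeping the logical structure of the query intact."""
    for word, symbol in mapping.items():
        prompt = prompt.replace(word, symbol)
    return prompt

SYMBOL_MAP = {"wife": "rel_A", "husband": "rel_B", "Alice": "ent_1", "Bob": "ent_2"}
print(desemanticize(SEMANTIC_PROMPT, SYMBOL_MAP))
# Both prompt variants would then be sent to the LLM; the finding above is that
# accuracy drops sharply once the commonsense semantics are stripped away.
```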

    Localization of CO₂ gas leakages through acoustic emission multi-sensor fusion based on wavelet-RBFN modeling

    CO₂ leakage from transmission pipelines in carbon capture and storage systems may seriously endanger the ecological environment and human health. Therefore, there is a pressing need for an accurate and reliable leak localization method for CO₂ pipelines. In this study, a novel method combining a wavelet packet algorithm with a radial basis function network (RBFN) is proposed to localize leaks. Multiple acoustic emission (AE) sensors are first deployed to collect leakage signals from CO₂ pipelines. The characteristics of the leakage signals from the AE sensors under different pressures are then analyzed in both the time and frequency domains. The leakage signals are further decomposed into three levels using wavelet packet decomposition. The wavelet packet energy, the maximum value, and the time difference calculated by cross-correlation are selected as the input feature vector of the RBFN. Experiments were carried out on a laboratory-scale test rig to verify the validity and correctness of the proposed method, with leakage signals acquired at different positions and under different pressures on the CO₂ pipeline leakage test bench. Compared with the time-difference-of-arrival method, the relative error of the proposed method is less than 2%, indicating promising prospects for engineering application.
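
    A minimal sketch of this feature-extraction stage is shown below (illustrative only; the wavelet family, sampling rate, and signal lengths are assumptions, and PyWavelets is used for the packet decomposition). It computes the per-node wavelet packet energy and maximum value for one sensor signal, plus the inter-sensor time difference from cross-correlation, and concatenates them into the vector that would feed the RBFN.

```python
import numpy as np
import pywt

FS = 100_000  # assumed sampling rate of the AE sensors (Hz)

def wavelet_packet_features(signal, wavelet="db4", level=3):
    """Energy and maximum absolute value of each terminal wavelet-packet node."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        coeffs = np.asarray(node.data)
        feats.extend([np.sum(coeffs ** 2), np.max(np.abs(coeffs))])
    return np.array(feats)

def time_difference(sig_a, sig_b, fs=FS):
    """Arrival-time difference between two sensors via cross-correlation."""
    xcorr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(xcorr) - (len(sig_b) - 1)
    return lag / fs

# Dummy signals stand in for recorded AE waveforms; the concatenated vector
# would be fed to the RBFN to regress the leak position.
sig_a, sig_b = np.random.randn(4096), np.random.randn(4096)
feature_vec = np.concatenate([wavelet_packet_features(sig_a),
                              [time_difference(sig_a, sig_b)]])
print(feature_vec.shape)
```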

    Improved field emission performance of carbon nanotube by introducing copper metallic particles

    To improve the field emission performance of carbon nanotubes (CNTs), a simple and low-cost method was adopted in this article. We introduced copper particles to decorate the CNTs and form copper particle-CNT composites. The composites were fabricated by an electrophoretic deposition technique, which localized copper metallic particles on the outer walls of the CNTs and deposited them onto an indium tin oxide (ITO) electrode. The results showed that the conductivity increased from 10⁻⁵ S to 4 × 10⁻⁵ S, while the turn-on field was reduced from 3.4 to 2.2 V/μm. Moreover, the field emission current remained essentially undiminished after continuous emission for 24 h. These improvements are attributed to two effects: decorating the CNTs with copper particles increases their surface roughness, which is beneficial to field emission and restrains the emission current from saturating when the applied electric field exceeds the critical field; and the particles improve the electrical contact by increasing the contact area between the CNTs and the ITO electrode, which facilitates electron transport and avoids unstable electron emission caused by thermal damage to the CNTs.

    Is synthetic data from generative models ready for image recognition?

    Recent text-to-image generation models have shown promising results in generating high-fidelity, photo-realistic images. Though the results are astonishing to human eyes, how applicable these generated images are to recognition tasks remains under-explored. In this work, we extensively study whether and how synthetic images generated by state-of-the-art text-to-image generation models can be used for image recognition tasks, focusing on two perspectives: synthetic data for improving classification models in data-scarce settings (i.e., zero-shot and few-shot), and synthetic data for large-scale model pre-training for transfer learning. We showcase the strengths and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks. Code: https://github.com/CVMI-Lab/SyntheticData (ICLR 2023, spotlight)
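
    A minimal sketch of the basic recipe is given below (not the paper's pipeline; Stable Diffusion via the diffusers library, the label set, and the per-class image counts are assumptions chosen purely for illustration, and a CUDA GPU is assumed): prompt a text-to-image model once per class name and save the results as a folder-per-class synthetic training set.

```python
from pathlib import Path
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image model (assumed checkpoint, for illustration).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

CLASS_NAMES = ["goldfish", "tabby cat", "container ship"]   # illustrative label set
OUT_DIR = Path("synthetic_train")
IMAGES_PER_CLASS = 8

for label in CLASS_NAMES:
    (OUT_DIR / label).mkdir(parents=True, exist_ok=True)
    for i in range(IMAGES_PER_CLASS):
        # One prompt per class name; real studies vary prompts and filter outputs.
        image = pipe(f"a photo of a {label}").images[0]
        image.save(OUT_DIR / label / f"{i}.png")

# The resulting folder can be consumed like any ImageFolder-style dataset to
# train or fine-tune a classifier in the zero-shot / few-shot settings above.
```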

    AdaptivePose++: A Powerful Single-Stage Network for Multi-Person Pose Regression

    Multi-person pose estimation generally follows top-down or bottom-up paradigms. Both use an extra stage (e.g., human detection in the top-down paradigm or a grouping process in the bottom-up paradigm) to build the relationship between each human instance and its keypoints, leading to high computation cost and a redundant two-stage pipeline. To address this issue, we propose to represent human parts as adaptive points and introduce a fine-grained body representation method. The novel body representation is able to sufficiently encode diverse pose information and effectively model the relationship between each human instance and its keypoints in a single forward pass. With the proposed body representation, we further deliver a compact single-stage multi-person pose regression network, termed AdaptivePose. During inference, the network needs only a single-step decode operation to form multi-person poses, without complex post-processing or refinement. We employ AdaptivePose for both 2D and 3D multi-person pose estimation tasks to verify its effectiveness. Without any bells and whistles, we achieve the most competitive performance on MS COCO and CrowdPose in terms of accuracy and speed. Furthermore, the strong performance on MuCo-3DHP and MuPoTS-3D further demonstrates its effectiveness and generalizability in 3D scenes. Code is available at https://github.com/buptxyb666/AdaptivePose (submitted to IEEE TCSVT).
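
    To make the single-step decode concrete, here is a simplified centre-plus-offset sketch (not AdaptivePose itself; the adaptive-point details are omitted, and the tensor shapes, top-k value, and threshold are assumptions): pick candidate person centres from a heatmap and read off the regressed keypoint offsets at each centre in one pass, with no grouping or detection stage.

```python
import torch

def decode_poses(center_heatmap, keypoint_offsets, k=20, threshold=0.3):
    """center_heatmap:   (H, W) instance-centre confidences
       keypoint_offsets: (num_kpts, 2, H, W) per-pixel offsets to each keypoint
       Returns a list of (num_kpts, 2) keypoint tensors, one per detected person."""
    H, W = center_heatmap.shape
    scores, idx = center_heatmap.flatten().topk(k)        # candidate centres
    poses = []
    for score, flat in zip(scores, idx):
        if score < threshold:
            continue
        cy, cx = divmod(flat.item(), W)
        offsets = keypoint_offsets[:, :, cy, cx]           # (num_kpts, 2)
        keypoints = offsets + torch.tensor([cx, cy], dtype=offsets.dtype)
        poses.append(keypoints)
    return poses

# Dummy network outputs stand in for the regression head's predictions.
heat = torch.rand(128, 128)
offs = torch.randn(17, 2, 128, 128) * 10
print(len(decode_poses(heat, offs)))
```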

    Metal‐Organic Framework Thin Films as Ideal Matrices for Azide Photolysis in Vacuum

    Studies of reactions in solution are often hampered by solvent effects. In addition, detailed investigation of kinetics is limited to the narrow temperature regime in which the solvent is liquid. Here, we report the in situ spectroscopic observation of UV-induced photochemical reactions of aryl azides within a crystalline matrix in vacuum. The matrices are formed by attaching the reactive moieties to ditopic linkers, which are then assembled to yield metal–organic frameworks (MOFs) and surface-mounted MOFs (SURMOFs). These porous, crystalline frameworks are used as model systems to study azide-related chemical processes under ultrahigh vacuum (UHV) conditions, where solvent effects can be safely excluded, and over a wide temperature range. Infrared reflection absorption spectroscopy (IRRAS) allowed us to monitor the photoreaction of the azide in SURMOFs precisely. The in situ IRRAS data, in conjunction with XRD, MS, and XPS, reveal that illumination with UV light first leads to the formation of a nitrene intermediate. In a second step, an intramolecular rearrangement occurs, yielding an indoloindole derivative. These findings unveil a novel pathway for precisely studying azide-related chemical transformations. Reference experiments carried out on solvent-loaded SURMOFs reveal a huge diversity of other reaction schemes, highlighting the need for model systems studied under UHV conditions.