A Knowledge-Driven Approach to Classifying Object and Attribute Coreferences in Opinion Mining
Classifying and resolving coreferences of objects (e.g., product names) and
attributes (e.g., product aspects) in opinionated reviews is crucial for
improving the opinion mining performance. However, the task is challenging as
one often needs to consider domain-specific knowledge (e.g., that an iPad is a
tablet and has the aspect resolution) to identify coreferences in opinionated reviews.
Also, compiling a handcrafted and curated domain-specific knowledge base for
each domain is very time consuming and arduous. This paper proposes an approach
to automatically mine and leverage domain-specific knowledge for classifying
objects and attribute coreferences. The approach extracts domain-specific
knowledge from unlabeled review data and trains a knowledge-aware neural
coreference classification model to leverage (useful) domain knowledge together
with general commonsense knowledge for the task. Experimental evaluation on
real-world datasets involving five domains (product types) shows the
effectiveness of the approach.
Comment: Accepted to Proceedings of EMNLP 2020 (Findings).
Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs
Graph neural networks (GNNs), as the de-facto model class for representation
learning on graphs, are built upon the multi-layer perceptrons (MLP)
architecture with additional message passing layers to allow features to flow
across nodes. While conventional wisdom commonly attributes the success of GNNs
to their advanced expressivity, we conjecture that this is not the main cause
of GNNs' superiority in node-level prediction tasks. This paper pinpoints the
major source of GNNs' performance gain to their intrinsic generalization
capability, by introducing an intermediate model class dubbed
P(ropagational)MLP, which is identical to a standard MLP in training, but then
adopts GNN's architecture in testing. Intriguingly, we observe that PMLPs
consistently perform on par with (or even exceed) their GNN counterparts, while
being much more efficient in training. This finding sheds new insights into
understanding the learning behavior of GNNs, and can be used as an analytic
tool for dissecting various GNN-related research problems. As an initial step
to analyze the inherent generalizability of GNNs, we show the essential
difference between MLP and PMLP in the infinite-width limit lies in the NTK feature
map in the post-training stage. Moreover, by examining their extrapolation
behavior, we find that though many GNNs and their PMLP counterparts cannot
extrapolate non-linear functions for extremely out-of-distribution samples,
they have greater potential to generalize to test samples near the training
data range, a natural advantage of GNN architectures.
Comment: Accepted to ICLR 2023. Code at https://github.com/chr26195/PML
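The train-as-MLP, test-as-GNN recipe described above can be sketched in a few lines. This is a minimal NumPy illustration assuming GCN-style symmetrically normalized propagation; the function and variable names are illustrative, not taken from the paper or the linked repository.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def normalized_adjacency(A):
    # GCN-style symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def mlp_forward(X, weights):
    # Plain MLP used at *training* time: graph structure is ignored.
    h = X
    for W in weights[:-1]:
        h = relu(h @ W)
    return h @ weights[-1]

def pmlp_forward(X, A, weights):
    # PMLP at *test* time: the same trained weights, but a message-passing
    # step (multiplication by the normalized adjacency) after each layer.
    P = normalized_adjacency(A)
    h = X
    for W in weights[:-1]:
        h = P @ relu(h @ W)
    return P @ (h @ weights[-1])
```

With an empty graph the propagation matrix reduces to the identity, so PMLP collapses back to the plain MLP, which makes the two model classes directly comparable.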
Design and Evaluation of Approximate Logarithmic Multipliers for Low Power Error-Tolerant Applications
In this work, the designs of both non-iterative and iterative approximate logarithmic multipliers (LMs) are studied to further reduce power consumption and improve performance. Non-iterative approximate LMs (ALMs) that use three inexact mantissa adders are presented. The proposed iterative approximate logarithmic multipliers (IALMs) use a set-one adder in both mantissa adders during an iteration; they also use lower-part-or adders and approximate mirror adders for the final addition. Error analysis and simulation results are also provided; the proposed approximate LMs with an appropriate number of inexact bits achieve higher accuracy and lower power consumption than conventional LMs using exact units. Compared with conventional LMs with exact units, the normalized mean error distance (NMED) of 16-bit approximate LMs is decreased by up to 18% and the power-delay product (PDP) is reduced by up to 37%. The proposed approximate LMs are also compared with previous approximate multipliers; they are best suited to applications that tolerate larger errors but require low power and energy consumption, whereas approximate Booth multipliers fit applications with less stringent power requirements but smaller error tolerances. Case studies of error-tolerant computing applications are provided.
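For context, a logarithmic multiplier approximates a product by adding the operands' approximate base-2 logarithms; the abstract's adder designs refine this principle in hardware. The following is a software sketch of the classic Mitchell approximation underlying such multipliers; it illustrates the principle only, not the proposed circuit designs.

```python
def mitchell_log2(x: int) -> float:
    # Mitchell's piecewise-linear log2: write x = 2**k * (1 + m), 0 <= m < 1,
    # and approximate log2(x) by k + m (exact at powers of two).
    k = x.bit_length() - 1
    m = (x - (1 << k)) / (1 << k)
    return k + m

def mitchell_multiply(a: int, b: int) -> float:
    # Approximate a*b by adding in the log domain, then applying the inverse
    # piecewise-linear approximation: 2**(k + m) ~= 2**k * (1 + m).
    if a == 0 or b == 0:
        return 0.0
    s = mitchell_log2(a) + mitchell_log2(b)
    k, m = int(s), s - int(s)
    return (1 << k) * (1 + m)
```

Mitchell's approximation always underestimates the true product, with a worst-case relative error of about 11.1%; iterative designs like those studied in the abstract spend extra hardware to reduce this residual error.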
I3DOL: Incremental 3D Object Learning without Catastrophic Forgetting
3D object classification has attracted considerable attention in academic
research and industrial applications. However, most existing methods need to
access the training data of past 3D object classes when facing the common
real-world scenario: new classes of 3D objects arrive in a sequence. Moreover,
the performance of advanced approaches degrades dramatically for past learned
classes (i.e., catastrophic forgetting), due to the irregular and redundant
geometric structures of 3D point cloud data. To address these challenges, we
propose a new Incremental 3D Object Learning (I3DOL) model, which is the
first exploration of learning new classes of 3D objects continually. Specifically,
an adaptive-geometric centroid module is designed to construct discriminative
local geometric structures, which can better characterize the irregular point
cloud representation for 3D object. Afterwards, to prevent the catastrophic
forgetting brought by redundant geometric information, a geometric-aware
attention mechanism is developed to quantify the contributions of local
geometric structures, and explore unique 3D geometric characteristics with high
contributions for classes incremental learning. Meanwhile, a score fairness
compensation strategy is proposed to further alleviate the catastrophic
forgetting caused by unbalanced data between past and new classes of 3D objects,
by compensating biased prediction for new classes in the validation phase.
Experiments on 3D representative datasets validate the superiority of our I3DOL
framework.
Comment: Accepted by the Association for the Advancement of Artificial
Intelligence 2021 (AAAI 2021).
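The abstract does not give the exact form of the score fairness compensation, but bias-correction schemes of this kind are often realized as an affine correction of the new-class logits, fitted on held-out validation data. The sketch below assumes that style; the function name and the affine form are our assumptions, not the paper's definition.

```python
import numpy as np

def compensate_new_class_scores(logits, new_class_ids, alpha, beta):
    # Hypothetical affine score compensation (an assumption, not I3DOL's exact
    # rule): rescale and shift logits of newly added classes so that they are
    # not systematically favored over under-represented past classes.
    corrected = logits.copy()
    corrected[:, new_class_ids] = alpha * corrected[:, new_class_ids] + beta
    return corrected
```

In such schemes, the scalars `alpha` and `beta` would be fitted in the validation phase while all other parameters stay frozen, so the correction only rebalances scores between old and new classes.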