Transforming Graph Representations for Statistical Relational Learning
Relational data representations have become an increasingly important topic
due to the recent proliferation of network datasets (e.g., social, biological,
information networks) and a corresponding increase in the application of
statistical relational learning (SRL) algorithms to these domains. In this
article, we examine a range of representation issues for graph-based relational
data. Since the choice of relational data representation for the nodes, links,
and features can dramatically affect the capabilities of SRL algorithms, we
survey approaches and opportunities for relational representation
transformation designed to improve the performance of these algorithms. This
leads us to introduce an intuitive taxonomy for data representation
transformations in relational domains that incorporates link transformation and
node transformation as symmetric representation tasks. In particular, the
transformation tasks for both nodes and links include (i) predicting their
existence, (ii) predicting their label or type, (iii) estimating their weight
or importance, and (iv) systematically constructing their relevant features. We
motivate our taxonomy through detailed examples and use it to survey and
compare competing approaches for each of these tasks. We also discuss general
conditions for transforming links, nodes, and features. Finally, we highlight
challenges that remain to be addressed.
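One of the transformation tasks the taxonomy names, predicting link existence, can be illustrated with a minimal sketch. The common-neighbor heuristic below is an assumption for illustration, not a method proposed in the article: it scores each non-adjacent node pair by the number of neighbors the two nodes share.

```python
# Illustrative sketch (not from the article): scoring candidate links by
# common-neighbor counts, one simple instance of the "predicting link
# existence" transformation task for relational data.
from itertools import combinations

def common_neighbor_scores(adjacency):
    """Score every non-adjacent node pair by its number of shared neighbors."""
    scores = {}
    nodes = sorted(adjacency)
    for u, v in combinations(nodes, 2):
        if v in adjacency[u]:
            continue  # link already exists; nothing to predict
        scores[(u, v)] = len(adjacency[u] & adjacency[v])
    return scores

# Toy undirected graph: path a-b-c-d plus the chord b-d.
graph = {
    "a": {"b"},
    "b": {"a", "c", "d"},
    "c": {"b", "d"},
    "d": {"b", "c"},
}
scores = common_neighbor_scores(graph)
# Both (a, c) and (a, d) share the single neighbor b.
```

Higher scores mark node pairs that are more likely to be linked; the scores can then feed a threshold or a downstream SRL model as a transformed link representation.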
Transformation Techniques for OCL Constraints
Constraints play a key role in the definition of conceptual schemas. In the UML, constraints are usually specified by means of invariants written in the OCL. However, due to the high expressiveness of the OCL, the designer has different syntactic alternatives for expressing each constraint. The techniques presented in this paper assist the designer during the definition of constraints by generating equivalent alternatives to the initially defined ones. Moreover, in the context of the MDA, transformations between these alternatives are required as part of the PIM-to-PIM, PIM-to-PSM, or PIM-to-code transformations of the original conceptual schema.
Graph Regularized Tensor Sparse Coding for Image Representation
Sparse coding (SC) is an unsupervised learning scheme that has received
increasing interest in recent years. However, conventional SC
vectorizes the input images, which destroys the intrinsic spatial structure
of the images. In this paper, we propose a novel graph regularized tensor
sparse coding (GTSC) scheme for image representation. GTSC preserves the local
proximity of elementary structures in the image by adopting the newly proposed
tubal-tensor representation. Simultaneously, it accounts for intrinsic
geometric properties by imposing graph regularization, which has been
successfully applied to uncover the geometric distribution of image data.
Moreover, the sparse representations returned by GTSC have a clearer physical
interpretation, as the key operation (i.e., circular convolution) in the
tubal-tensor model preserves shift invariance. Experimental
results on image clustering demonstrate the effectiveness of the proposed
scheme.
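The conventional vectorized sparse coding that GTSC contrasts with can be sketched briefly. The following is an illustrative baseline, not the paper's GTSC algorithm: it solves the standard lasso-style coding problem min_a 0.5·||x − Da||² + λ·||a||₁ for a fixed dictionary D via iterative soft-thresholding (ISTA); the dictionary, signal, and parameter values are assumptions for the demo.

```python
# Illustrative sketch (assumed setup, not the paper's GTSC method): plain
# vectorized sparse coding via ISTA, the conventional baseline that GTSC
# improves on by preserving spatial structure.
import numpy as np

def ista_sparse_code(D, x, lam=0.01, n_iter=200):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(32)
a_true[[3, 17]] = [1.5, -2.0]              # sparse ground-truth code
x = D @ a_true                             # synthetic signal
a = ista_sparse_code(D, x)                 # recovered sparse code
```

Because the signal here is a flat vector, any 2-D pixel neighborhood information in x is invisible to the coder; this is exactly the limitation that the tubal-tensor representation and graph regularizer in GTSC are designed to address.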