253,532 research outputs found
Probabilistic Relational Model Benchmark Generation
The validation of any database mining methodology requires an evaluation
process in which the availability of benchmarks is essential. In this paper, we
aim to randomly generate relational database benchmarks that make it possible
to check probabilistic dependencies among the attributes. We are particularly
interested in Probabilistic Relational Models (PRMs), which extend Bayesian
Networks (BNs) to a relational data mining context and enable effective and
robust reasoning over relational data. Even though a panoply of works have
focused, separately, on the generation of random Bayesian networks and of
random relational databases, no work has addressed this task for PRMs. This
paper fills that gap with an algorithmic approach for generating random PRMs
from scratch. The proposed method generates PRMs, as well as synthetic
relational data, from a randomly generated relational schema and a random set
of probabilistic dependencies. This can be of interest not only to machine
learning researchers, who can evaluate their proposals in a common framework,
but also to database designers, who can evaluate the effectiveness of the
components of a database management system.
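The two-stage pipeline the abstract describes — first draw a random relational schema, then draw a random acyclic set of probabilistic dependencies over its attributes — can be sketched as follows. This is an illustrative toy, not the paper's actual algorithm; all names (`random_schema`, `random_dependencies`, the `T0.a0` attribute convention) are hypothetical. Acyclicity, which a BN-style dependency structure requires, is guaranteed by only drawing edges that go forward in a fixed attribute order:

```python
import random

def random_schema(n_tables=3, max_attrs=4, seed=0):
    """Generate a toy relational schema: table name -> list of attributes."""
    rng = random.Random(seed)
    return {
        f"T{i}": [f"T{i}.a{j}" for j in range(rng.randint(2, max_attrs))]
        for i in range(n_tables)
    }

def random_dependencies(schema, p=0.3, seed=0):
    """Draw a random acyclic set of probabilistic dependencies (a DAG)
    over all attributes. Edges only go 'forward' in a fixed topological
    order, which guarantees the result is acyclic."""
    rng = random.Random(seed)
    cols = [a for attrs in schema.values() for a in attrs]
    edges = [(cols[i], cols[j])
             for i in range(len(cols))
             for j in range(i + 1, len(cols))
             if rng.random() < p]
    return cols, edges

schema = random_schema()
cols, edges = random_dependencies(schema)

# Every edge respects the topological order, so the dependency graph is a DAG.
order = {a: k for k, a in enumerate(cols)}
assert all(order[u] < order[v] for u, v in edges)
```

A real generator would additionally attach random conditional probability distributions to each attribute and sample tuples from them; this sketch only shows the structural part.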
Relational Boosted Bandits
Contextual bandit algorithms have become essential in real-world user
interaction problems in recent years. However, these algorithms rely on an
attribute-value representation of context, which makes them ill-suited to
real-world domains, such as social networks, that are inherently relational. We
propose Relational Boosted Bandits (RB2), a contextual bandit algorithm for
relational domains based on (relational) boosted trees. RB2 enables us to learn
interpretable and explainable models due to the more descriptive nature of the
relational representation. We empirically demonstrate the effectiveness and
interpretability of RB2 on tasks such as link prediction, relational
classification, and recommendation.
Comment: 8 pages, 3 figures
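The contextual-bandit loop underlying an approach like RB2 can be sketched in a few lines. This is a generic epsilon-greedy illustration under stated simplifications, not RB2 itself: where RB2 would fit relational boosted trees per arm, this toy keeps a running mean reward per (context, arm) pair, and contexts are plain labels rather than relational facts. All names (`eps_greedy_bandit`, `reward_fn`) are hypothetical:

```python
import random

def eps_greedy_bandit(contexts, reward_fn, n_arms=2, eps=0.1, seed=0):
    """Minimal epsilon-greedy contextual bandit loop. A relational method
    would replace the (context, arm) reward table with a learned model
    such as boosted relational trees."""
    rng = random.Random(seed)
    sums, counts = {}, {}
    total = 0.0
    for ctx in contexts:
        if rng.random() < eps:
            arm = rng.randrange(n_arms)          # explore
        else:                                    # exploit best mean so far
            arm = max(range(n_arms),
                      key=lambda a: sums.get((ctx, a), 0.0)
                                    / max(counts.get((ctx, a), 1), 1))
        r = reward_fn(ctx, arm)
        sums[(ctx, arm)] = sums.get((ctx, arm), 0.0) + r
        counts[(ctx, arm)] = counts.get((ctx, arm), 0) + 1
        total += r
    return total

# Toy task: arm 1 is right for context "friend", arm 0 otherwise.
reward = lambda ctx, arm: 1.0 if (ctx == "friend") == (arm == 1) else 0.0
total = eps_greedy_bandit(["friend", "stranger"] * 500, reward)
```

After a short exploration phase the policy picks the correct arm for each context, so cumulative reward approaches the 1000-round optimum.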
Microsimulation - A Survey of Methods and Applications for Analyzing Economic and Social Policy
The essential dimensions of microsimulation as an instrument to analyze and forecast the individual impacts of alternative economic and social policy measures are surveyed in this study. The basic principles of microsimulation, which is a tool for practical policy advising as well as for research and teaching, are pointed out, and the static and dynamic (cross-section and life-cycle) approaches are compared to one another. Present and past developments of microsimulation models and their areas of application are reviewed, focusing on the US, Europe and Australia. Based on general requirements and components of microsimulation models, the actual working mechanism of a microsimulation model is discussed using a concrete example: the concept and realization of MICSIM, a PC microsimulation model based on a relational database system and an offspring of the Sfb 3 Static Microsimulation Model. Common issues of microsimulation modeling are considered: the micro/macro link, behavioural response, and the important question of evaluating microsimulation results. The concluding remarks accentuate the increasing use of microcomputers for microsimulation models, including for teaching purposes.
Keywords: Microsimulation, Microanalytic Simulation Models, Microanalysis, Economic and Social Policy Analysis
Relational contexts and conceptual model clustering
In recent years, there has been a growing interest in the use of reference conceptual models to capture information about complex and sensitive business domains (e.g., finance, healthcare, space). These models play a fundamental role in different types of critical semantic interoperability tasks. Therefore, it is essential that domain experts are able to understand and reason with their content. In other words, it is important for these reference conceptual models to be cognitively tractable. This paper contributes to this goal by proposing a model clustering technique that leverages the rich semantics of ontology-driven conceptual models (ODCM). In particular, the technique employs the notion of Relational Context to guide automated model breakdown. Such Relational Contexts capture all the information needed for understanding entities “qua players of roles” in the scope of an objectified (reified) relationship (relator).
A Symbolic Execution Algorithm for Constraint-Based Testing of Database Programs
In so-called constraint-based testing, symbolic execution is a common
technique used as part of the process of generating test data for imperative
programs. Databases are ubiquitous in software, and testing of programs that
manipulate databases is thus essential to enhance the reliability of software.
This work proposes, and evaluates experimentally, a symbolic execution
algorithm for constraint-based testing of database programs. First, we describe
SimpleDB, a formal language which offers a minimal and well-defined syntax and
semantics to model common interaction scenarios between programs and databases.
Second, we detail the proposed algorithm for symbolic execution of SimpleDB
models. This algorithm considers a SimpleDB program as a sequence of operations
over a set of relational variables, modeling both the database tables and the
program variables. By integrating this relational model of the program with
classical static symbolic execution, the algorithm can generate a set of path
constraints for any finite path to test in the control-flow graph of the
program. Solutions of these constraints are test inputs for the program,
including an initial content for the database. When the program is executed
with respect to these inputs, it is guaranteed to follow the path with respect
to which the constraints were generated. Finally, the algorithm is evaluated
experimentally using representative SimpleDB models.
Comment: 12 pages - preliminary work
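The core guarantee described above — solve the constraints collected along a path, and executing the program on the solution follows exactly that path — can be illustrated with a deliberately tiny example. This is not the paper's algorithm: SimpleDB also tracks relational variables for the database tables, and a real implementation would use an SMT solver rather than brute force. All names here are hypothetical:

```python
# A path through a toy program over one symbolic integer x, recorded as
# the branch conditions taken along the way.
path_constraints = [
    lambda x: x > 0,        # took the 'then' branch of `if x > 0`
    lambda x: x % 2 == 0,   # took the 'then' branch of `if x % 2 == 0`
]

def solve(constraints, domain=range(-10, 11)):
    """Brute-force stand-in for a constraint solver: return a test input
    satisfying every constraint on the chosen path, or None."""
    for x in domain:
        if all(c(x) for c in constraints):
            return x
    return None

test_input = solve(path_constraints)
# Executing the program on test_input is guaranteed to follow this path.
assert test_input is not None and test_input > 0 and test_input % 2 == 0
```

For database programs, the same idea applies, except that a "test input" also includes an initial database content satisfying the relational constraints along the path.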
Brasilia’s Database Administrators
Database administration has gained an essential role in the management of new database technologies. New data models, beyond the traditional relational database, are being created to support enormous data volumes. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and
procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the capital of Brazil. The results point to which new
database management technologies are currently the most relevant, as well as the central issues in this area.
Shrinking Embeddings for Hyper-Relational Knowledge Graphs
Link prediction on knowledge graphs (KGs) has been extensively studied on
binary relational KGs, wherein each fact is represented by a triple. A
significant amount of important knowledge, however, is represented by
hyper-relational facts, where each fact is composed of a primal triple and a
set of qualifiers, each a key-value pair, allowing for the expression of more
complicated semantics. Although some recent works have proposed to embed
hyper-relational KGs, these methods fail to capture essential inference
patterns of hyper-relational facts such as qualifier monotonicity, qualifier
implication, and qualifier mutual exclusion, limiting their generalization
capability. To address these limitations, we present \emph{ShrinkE}, a geometric
hyper-relational KG embedding method aiming to explicitly model these patterns.
ShrinkE models the primal triple as a spatial-functional transformation from
the head into a relation-specific box. Each qualifier ``shrinks'' the box to
narrow down the possible answer set and, thus, realizes qualifier monotonicity.
The spatial relationships between the qualifier boxes allow for modeling core
inference patterns of qualifiers such as implication and mutual exclusion.
Experimental results demonstrate ShrinkE's superiority on three benchmarks of
hyper-relational KGs.
Comment: To appear in ACL 2023
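The "shrinking" operation the abstract describes is, geometrically, a box intersection, which is why qualifier monotonicity falls out for free: intersecting can never enlarge a box, so adding a qualifier can never grow the candidate answer set. A minimal sketch of that property (an illustrative toy with hypothetical axis-aligned boxes, not ShrinkE's learned spatial-functional transformations):

```python
def shrink(box, qualifier_box):
    """Intersect the current answer box with a qualifier's box.
    Boxes are dicts dim -> (lo, hi); intersection can only shrink."""
    return {d: (max(box[d][0], qualifier_box[d][0]),
                min(box[d][1], qualifier_box[d][1]))
            for d in box}

def contains(box, point):
    return all(lo <= point[d] <= hi for d, (lo, hi) in box.items())

# Relation-specific box for the primal triple, then two qualifier boxes.
box = {"d0": (0.0, 1.0), "d1": (0.0, 1.0)}
q1 = {"d0": (0.2, 0.8), "d1": (0.0, 1.0)}   # hypothetical qualifier box
q2 = {"d0": (0.0, 1.0), "d1": (0.5, 0.9)}   # hypothetical qualifier box

shrunk = shrink(shrink(box, q1), q2)

# Qualifier monotonicity: any point inside the shrunk box was already
# inside the original box, so qualifiers only narrow the answer set.
assert contains(shrunk, {"d0": 0.5, "d1": 0.6})
assert not contains(shrunk, {"d0": 0.1, "d1": 0.6})
```

Inference patterns then follow from the spatial relationships between qualifier boxes: one qualifier implies another when its box is contained in the other's, and two are mutually exclusive when their boxes are disjoint.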