Comparing fbeta-optimal with distance based merge functions
Merge functions informally combine information from a certain universe into a
solution over that same universe, typically resulting in a, preferably
optimal, summarization. In previous research, merge functions over sets have
been studied extensively. A specific case concerns sets that allow elements
to appear more than once: multisets. In this paper we compare two types of
merge functions over multisets against each other. We examine both their
general properties and their practical usability in a real-world application.
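As an illustration of the distance-based flavor (a sketch under our own
assumptions, not the paper's construction): one simple merge function takes,
for each element of the universe, the median of its multiplicities across the
input multisets; the median minimizes the summed L1 distance between
multiplicity vectors, so the result is distance-optimal in that specific
sense. All names below are ours.

    from collections import Counter
    from statistics import median

    def median_merge(multisets):
        """Merge multisets by taking, per element, the median multiplicity.

        Illustrative distance-based merge: the per-element median minimizes
        the summed L1 distance to the input multiplicity vectors.
        """
        universe = set().union(*multisets)
        merged = Counter()
        for x in universe:
            m = int(median(ms[x] for ms in multisets))  # Counter gives 0 if absent
            if m > 0:
                merged[x] = m
        return merged

    # Example: three noisy observations of the same bag of items.
    print(median_merge([Counter("aab"), Counter("abb"), Counter("aabb")]))
    # Counter({'a': 2, 'b': 2})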
Test Set Diameter: Quantifying the Diversity of Sets of Test Cases
A common and natural intuition among software testers is that test cases need
to differ if a software system is to be tested properly and its quality
ensured. Consequently, much research has gone into formulating distance
measures for how test cases, their inputs and/or their outputs differ. However,
common to these proposals is that they are data type specific and/or calculate
the diversity only between pairs of test inputs, traces or outputs.
We propose a new metric to measure the diversity of sets of tests: the test
set diameter (TSDm). It extends our earlier, pairwise test diversity metrics
based on recent advances in information theory regarding the calculation of the
normalized compression distance (NCD) for multisets. An advantage is that TSDm
can be applied regardless of data type and on any test-related information, not
only the test inputs. A downside is the increased computational time compared
to competing approaches.
Our experiments on four different systems show that the test set diameter can
help select test sets with higher structural and fault coverage than random
selection even when only applied to test inputs. This can enable early test
design and selection, prior to even having a software system to test, and
complement other types of test automation and analysis. We argue that this
quantification of test set diversity creates a number of opportunities to
better understand software quality and provides practical ways to increase it.
Comment: In submission
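To make the underlying quantity concrete, here is a minimal sketch of an
NCD-style diversity score over a multiset of test inputs, with zlib standing
in for the compressor. The normalization follows one common formulation of
the multiset NCD (whole-set compressed size minus the smallest single-element
size, divided by the largest leave-one-out size); treat it, and the name
tsdm, as our assumptions rather than the paper's exact definition.

    import zlib

    def Z(parts):
        """Compressed size in bytes of the concatenated byte strings."""
        return len(zlib.compress(b"".join(parts), 9))

    def tsdm(tests):
        """NCD-style diversity of a multiset of test inputs (byte strings):
        (Z(X) - min over x of Z({x})) / (max over x of Z(X minus {x}))."""
        whole = Z(tests)
        min_single = min(Z([t]) for t in tests)
        max_rest = max(Z(tests[:i] + tests[i + 1:]) for i in range(len(tests)))
        return (whole - min_single) / max_rest

    diverse = [b"alpha", b"zq9!~", b"BBBBBB"]
    similar = [b"aaaaaa", b"aaaaab", b"aaaaaa"]
    print(tsdm(diverse) > tsdm(similar))  # expect True: diverse scores higher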
On Formal Consistency between Value and Coordination Models
In information systems (IS) engineering, different techniques for modeling
inter-organizational collaborations are applied. In particular, value models
estimate the profitability for involved stakeholders, whereas coordination models
are used to agree upon the inter-organizational processes before implementing
them. In addition, during the execution of the inter-organizational
collaboration, event logs are collected by the individual organizations,
representing another view of the IS. Together, the two models and the event
log represent the IS and should therefore be consistent, i.e., not contradict
each other. Since the models are provided by different user groups at design
time and the event log is collected at run-time, consistency is not
straightforward. Inconsistency occurs
when models contain a conflicting description of the same information, i.e.,
there exists a conflicting overlap between the models. In this paper we introduce
an abstraction of value models, coordination models and event logs which allows
ensuring and maintaining alignment between models and event log. We demonstrate
its use by outlining a proof of an inconsistency resolution result based on
this abstraction. Thus, the introduction of abstractions makes it possible to
explore formal inter-model relations based on consistency.
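As a purely hypothetical illustration of a conflicting overlap (the paper's
abstraction is more general), one can project a model and an event log onto
the information they share, here the multiset of activity labels per case,
and flag a conflict whenever the projections disagree. Every name below is
invented for the sketch.

    from collections import Counter

    def conflicting_overlap(model_activities, log_traces):
        """Return, per case, the activities on which model and log disagree.

        model_activities: activities the model prescribes for every case.
        log_traces: mapping from case id to the sequence of logged activities.
        """
        expected = Counter(model_activities)
        conflicts = {}
        for case, trace in log_traces.items():
            observed = Counter(trace)
            if observed != expected:  # the shared view disagrees: conflict
                conflicts[case] = (expected - observed) | (observed - expected)
        return conflicts

    model = ["order", "ship", "pay"]
    log = {"c1": ["order", "ship", "pay"],
           "c2": ["order", "pay"]}          # "ship" never logged for c2
    print(conflicting_overlap(model, log))  # {'c2': Counter({'ship': 1})}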
Collection analysis for Horn clause programs
We consider approximating data structures with collections of the items that
they contain. For example, lists, binary trees, tuples, etc., can be
approximated by sets or multisets of the items within them. Such approximations
can be used to provide partial correctness properties of logic programs. For
example, one might wish to specify that whenever the atom sort(t, s) is proved
then the two lists t and s contain the same multiset of items (that is, s is
a permutation of t). If sorting removes duplicates, then one would like to
infer that the sets of items underlying t and s are the same. Such results
could be useful to have if they can be determined statically and automatically.
We present a scheme by which such collection analysis can be structured and
automated. Central to this scheme is the use of linear logic as a
computational logic underlying the logic of Horn clauses.
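The multiset approximation in the running example is easy to state
concretely: approximate each list by the multiset of its items, so that a
proved sort(t, s) relates lists with equal multisets, and a
duplicate-removing sort relates lists with equal underlying sets. A minimal
dynamic check of these two properties (the paper's analysis establishes them
statically, via linear logic):

    from collections import Counter

    def same_multiset(t, s):
        """s is a permutation of t iff both map to the same multiset."""
        return Counter(t) == Counter(s)

    def same_set(t, s):
        """For duplicate-removing sorts: same underlying set of items."""
        return set(t) == set(s)

    print(same_multiset([3, 1, 2, 1], [1, 1, 2, 3]))  # True: a permutation
    print(same_set([3, 1, 1], [1, 3]))  # True although the multisets differ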
A Comparison of Well-Quasi Orders on Trees
Well-quasi orders such as homeomorphic embedding are commonly used to ensure
termination of program analysis and program transformation, in particular
supercompilation.
We compare eight well-quasi orders on how discriminative they are and their
computational complexity. The studied well-quasi orders comprise two very
simple examples, two examples from the literature on supercompilation, and
four new ones proposed by the author.
We also discuss combining several well-quasi orders to get well-quasi orders
of higher discriminative power. This adds 19 more well-quasi orders to the
list.
Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
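For reference, a minimal sketch of the best known of these orders,
homeomorphic embedding, on first-order terms encoded as (label, children)
tuples: s embeds in t either by diving into some subterm of t or by coupling
equal root labels and embedding childwise. The encoding and names are ours.

    def embedded(s, t):
        """Homeomorphic embedding: does term s embed in term t?"""
        s_lab, s_kids = s
        t_lab, t_kids = t
        if any(embedded(s, u) for u in t_kids):  # diving into a subterm of t
            return True
        return (s_lab == t_lab and len(s_kids) == len(t_kids)
                and all(embedded(a, b)  # coupling: childwise embedding
                        for a, b in zip(s_kids, t_kids)))

    leaf = lambda x: (x, ())
    small = ("f", (leaf("a"),))
    big = ("g", (("f", (("h", (leaf("a"),)),)),))
    print(embedded(small, big))  # True: f(a) embeds in g(f(h(a)))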
Combinatorial optimization over two random point sets
We analyze combinatorial optimization problems over a pair of random point
sets of equal cardinality. Typical examples include the matching of minimal
length, the traveling salesperson tour constrained to alternate between points
of each set, or the connected bipartite r-regular graph of minimal length. As
the cardinality of the sets goes to infinity, we investigate the convergence
of such bipartite functionals.
Comment: 34 pages
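The first of these functionals, the minimal-length matching, is easy to
experiment with numerically; a sketch using scipy's assignment solver on two
uniform samples in the unit square (parameter choices are ours):

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(0)
    n = 200
    red = rng.random((n, 2))   # first random point set in the unit square
    blue = rng.random((n, 2))  # second point set of equal cardinality

    cost = cdist(red, blue)                   # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)  # minimal-length perfect matching
    print(cost[rows, cols].sum())             # total length of the matching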