Test Set Diameter: Quantifying the Diversity of Sets of Test Cases
A common and natural intuition among software testers is that test cases need
to differ if a software system is to be tested properly and its quality
ensured. Consequently, much research has gone into formulating distance
measures for how test cases, their inputs and/or their outputs differ. However,
common to these proposals is that they are data type specific and/or calculate
the diversity only between pairs of test inputs, traces or outputs.
We propose a new metric to measure the diversity of sets of tests: the test
set diameter (TSDm). It extends our earlier, pairwise test diversity metrics,
building on recent advances in information theory on calculating the
normalized compression distance (NCD) for multisets. An advantage is that TSDm
can be applied regardless of data type and on any test-related information, not
only the test inputs. A downside is the increased computational time compared
to competing approaches.
Our experiments on four different systems show that the test set diameter can
help select test sets with higher structural and fault coverage than random
selection even when only applied to test inputs. This can enable early test
design and selection, prior to even having a software system to test, and
complement other types of test automation and analysis. We argue that this
quantification of test set diversity creates a number of opportunities to
better understand software quality and provides practical ways to increase it.Comment: In submissio
Group Analysis of Self-organizing Maps based on Functional MRI using Restricted Frechet Means
Studies of functional MRI data are increasingly concerned with the estimation
of differences in spatio-temporal networks across groups of subjects or
experimental conditions. Unsupervised clustering and independent component
analysis (ICA) have been used to identify such spatio-temporal networks. While
these approaches have been useful for estimating these networks at the
subject-level, comparisons over groups or experimental conditions require
further methodological development. In this paper, we tackle this problem by
showing how self-organizing maps (SOMs) can be compared within a Fréchean
inferential framework. Here, we summarize the mean SOM in each group as a
Fréchet mean with respect to a metric on the space of SOMs. We consider the use
of different metrics, and introduce two extensions of the classical sum of
minimum distance (SMD) between two SOMs, which take into account the
spatio-temporal pattern of the fMRI data. The validity of these methods is
illustrated on synthetic data. Through these simulations, we show that the
three metrics of interest behave as expected, in the sense that the ones
capturing temporal, spatial and spatio-temporal aspects of the SOMs are more
likely to reach significance under simulated scenarios characterized by
temporal, spatial and spatio-temporal differences, respectively. In addition, a
re-analysis of a classical experiment on visually-triggered emotions
demonstrates the usefulness of this methodology. In this study, the
multivariate functional patterns typical of the subjects exposed to pleasant
and unpleasant stimuli are found to be more similar than the ones of the
subjects exposed to emotionally neutral stimuli. Taken together, these results
indicate that our proposed methods can cast new light on existing data by
adopting a global analytical perspective on functional MRI paradigms.
Comment: 23 pages, 5 figures, 4 tables. Submitted to NeuroImage
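The classical sum of minimum distances (SMD) that the paper extends can be sketched in a few lines: each codebook unit of one SOM is matched to its nearest unit in the other, and the matching is symmetrized. The version below is a minimal illustration with Euclidean distance on codebook vectors; the paper's spatio-temporal extensions and the Fréchet-mean machinery are not shown.

```python
import numpy as np

def smd(som_a: np.ndarray, som_b: np.ndarray) -> float:
    """Classical sum of minimum distances between two SOM codebooks,
    each an (n_units, n_features) array of codebook vectors.
    Symmetrized: averages A->B and B->A nearest-unit distances."""
    # Pairwise Euclidean distances between every unit of A and every unit of B.
    d = np.linalg.norm(som_a[:, None, :] - som_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.array([[0.0, 0.0], [1.0, 1.0]])
b = np.array([[0.0, 0.0], [1.0, 1.0]])  # identical codebook: distance 0
c_far = np.array([[5.0, 5.0], [6.0, 6.0]])  # displaced codebook
```

With a metric like this on the space of SOMs, the group mean SOM in the paper's framework is the Fréchet mean: the SOM minimizing the sum of squared distances to the SOMs of the subjects in the group.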
Approximating the Spectrum of a Graph
The spectrum of a network or graph G = (V, E) with adjacency matrix A
consists of the eigenvalues of the normalized Laplacian
L = I - D^{-1/2} A D^{-1/2}. This set of eigenvalues encapsulates many aspects
of the structure of the graph, including the extent to which the graph
possesses community structure at multiple scales. We study the problem of
approximating the spectrum λ = (λ_1, ..., λ_{|V|}) of G in the regime where the
graph is too large to explicitly calculate the spectrum. We present a
sublinear time algorithm that, given the ability to query a random node in the
graph and select a random neighbor of a given node, computes a succinct
representation of an approximation λ~ = (λ~_1, ..., λ~_{|V|}), such that
||λ~ - λ||_1 ≤ ε|V|. Our algorithm has query complexity and running time
exp(O(1/ε)), independent of the size of the graph, |V|. We demonstrate the practical
viability of our algorithm on 15 different real-world graphs from the Stanford
Large Network Dataset Collection, including social networks, academic
collaboration graphs, and road networks. For the smallest of these graphs, we
are able to validate the accuracy of our algorithm by explicitly calculating
the true spectrum; for the larger graphs, such a calculation is computationally
prohibitive.
In addition, we study the implications of our algorithm for property testing
in the bounded-degree graph model.
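For the smallest graphs, the ground truth the authors validate against is the exact spectrum of the normalized Laplacian, which is direct to compute when the adjacency matrix fits in memory. A minimal sketch of that exact computation (the baseline, not the paper's sublinear algorithm):

```python
import numpy as np

def normalized_laplacian_spectrum(adj: np.ndarray) -> np.ndarray:
    """Exact spectrum of L = I - D^{-1/2} A D^{-1/2}.
    Feasible only for small graphs; this is precisely the computation
    the sublinear algorithm avoids on large graphs."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(lap))  # eigvalsh: symmetric matrix

# Triangle graph: normalized Laplacian eigenvalues are 0, 3/2, 3/2.
triangle = np.array([[0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]], dtype=float)
spec = normalized_laplacian_spectrum(triangle)
```

All eigenvalues of the normalized Laplacian lie in [0, 2], which is why the L1 approximation error is naturally measured on the scale ε|V|.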
Anomaly free U(1) chiral gauge theories on a two dimensional torus
We consider anomaly free combinations of chiral fermions coupled to
gauge fields on a 2D torus first in the continuum and then on the lattice in
the overlap formulation. Both in the continuum and on the lattice, when the
background consists of sufficiently large constant gauge potentials the action
induced by the fermions varies significantly under certain singular gauge
transformations. "Ruling away" such discontinuities cannot be justified in
the continuum framework and does not naturally fit on the lattice. Complete
gauge invariance in the continuum can be restored in some models by choosing
special boundary conditions for the fermions. Evidence is presented that gauge
averaging the overlap phases in these models produces correct continuum
results.
Comment: 30 pages
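For a two-dimensional U(1) chiral gauge theory, anomaly freedom requires the sums of squared charges of the left- and right-handed fermions to agree; the familiar "3450" combination (3² + 4² = 5², plus a neutral spectator) is a standard example. The check below is a generic illustration of that textbook condition, not a construction from this paper.

```python
def anomaly_free(left_charges, right_charges) -> bool:
    """2D U(1) gauge anomaly cancellation condition: the sum of squared
    charges of left-handed fermions must equal that of right-handed ones."""
    return (sum(q * q for q in left_charges)
            == sum(q * q for q in right_charges))

# The "3450" model: left charges 3 and 4, right charges 5 and 0.
ok = anomaly_free([3, 4], [5, 0])      # 9 + 16 == 25 + 0
bad = anomaly_free([1, 2], [3])        # 1 + 4 != 9
```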
Sketching for Large-Scale Learning of Mixture Models
Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over 10^8 training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
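The sketch the abstract describes, a collection of generalized moments computable in one pass, can be illustrated with random Fourier features: average complex exponentials exp(i w·x) over the data, one entry per random frequency w. This is a minimal sketch of the idea only; the frequency distribution and sketch size here are arbitrary placeholders, not the paper's tuned sketching procedure.

```python
import numpy as np

def sketch(data: np.ndarray, freqs: np.ndarray) -> np.ndarray:
    """Empirical generalized moments of the data distribution:
    the average of exp(i w^T x) over samples x, one entry per
    frequency row w of `freqs`. One pass over the data."""
    return np.exp(1j * data @ freqs.T).mean(axis=0)

rng = np.random.default_rng(0)
freqs = rng.normal(size=(64, 2))          # 64 random frequencies in R^2
x1 = rng.normal(size=(500, 2))            # one data shard
x2 = rng.normal(size=(500, 2))            # another shard

# Streaming/distributed property: the sketch of the union is the
# (size-weighted) average of the per-shard sketches.
merged = 0.5 * (sketch(x1, freqs) + sketch(x2, freqs))
full = sketch(np.vstack([x1, x2]), freqs)
```

Because the sketch is a fixed-size average, shards can be sketched independently and merged, which is what makes the approach suitable for streams and distributed datasets; model fitting then works from the sketch alone rather than from the raw samples.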