
    Feature-Aware Verification

    A software product line is a set of software products that are distinguished in terms of features (i.e., end-user-visible units of behavior). Feature interactions (situations in which the combination of features leads to emergent and possibly critical behavior) are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLverifier for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only feature-local knowledge, and that variability encoding significantly improves the verification performance when proving the absence of interactions.
    Comment: 12 pages, 9 figures, 1 table
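
    The following toy Python sketch (ours, not SPLverifier's actual encoding; the features, property, and message format are hypothetical) illustrates the variability-encoding idea: all products are folded into one program whose feature flags are free variables, and a single pass over those variables checks a feature-local property for every feature combination.

```python
from itertools import product

# Hypothetical toy product line: an e-mail client with two optional
# features, ENCRYPT and FORWARD (the real study targets an AT&T
# e-mail model; this only illustrates the encoding idea).
# Variability encoding folds all products into one program whose
# feature flags are free variables; a model checker explores them
# nondeterministically, which the loop below merely emulates on the
# single combined program (no per-product builds are generated).

def deliver(msg, encrypt, forward):
    if encrypt:                                   # feature ENCRYPT
        msg = dict(msg, encrypted=True)
    if forward:                                   # feature FORWARD
        # FORWARD re-sends the plaintext body, dropping encryption.
        msg = {"body": msg["body"], "encrypted": False}
    return msg

# Feature-local property of ENCRYPT: whenever ENCRYPT is selected,
# every delivered message must be encrypted.
violations = [
    (encrypt, forward)
    for encrypt, forward in product([False, True], repeat=2)
    for out in [deliver({"body": "secret", "encrypted": False},
                        encrypt, forward)]
    if encrypt and not out["encrypted"]
]
print("interacting feature combinations:", violations)
# -> [(True, True)]: ENCRYPT and FORWARD interact.
```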

    Regression-aware decompositions

    Linear least-squares regression with a "design" matrix A approximates a given matrix B via minimization of the spectral- or Frobenius-norm discrepancy ||AX - B|| over every conformingly sized matrix X. Another popular approximation is low-rank approximation via principal component analysis (PCA), which is essentially singular value decomposition (SVD), or via interpolative decomposition (ID). Classically, PCA/SVD and ID operate solely on the matrix B being approximated, without supervision from any auxiliary matrix A. However, linear least-squares regression models can inform the ID, yielding a regression-aware ID. As a bonus, this provides an interpretation as regression-aware PCA for a kind of canonical correlation analysis between A and B. The regression-aware decompositions thus let supervision inform classical dimensionality reduction, which has traditionally been entirely unsupervised. They reveal the structure inherent in B that is relevant to regression against A.
    Comment: 19 pages, 9 figures, 2 tables
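
    To make the regression-aware idea concrete, here is a minimal NumPy sketch (our illustration; the function name and interface are not from the paper) of one such problem: the best rank-k coefficient matrix X minimizing ||AX - B|| in the Frobenius norm, computed by a standard QR-plus-SVD construction under the assumption that A has full column rank.

```python
import numpy as np

def regression_aware_lowrank(A, B, k):
    """Minimize ||A @ X - B||_F over matrices X of rank at most k,
    assuming the design matrix A has full column rank."""
    # Reduced QR factorization: A = Q @ R with R invertible.
    Q, R = np.linalg.qr(A)
    # Only the component of B in range(A) is reducible, so take the
    # best rank-k approximation (Eckart-Young) of Q.T @ B.
    U, s, Vt = np.linalg.svd(Q.T @ B, full_matrices=False)
    Ck = (U[:, :k] * s[:k]) @ Vt[:k]
    # Undo the change of variables Y = R @ X.
    return np.linalg.solve(R, Ck)

# Usage: only the directions in B that matter for regression
# against A survive the rank truncation.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))
B = rng.standard_normal((100, 5))
X2 = regression_aware_lowrank(A, B, k=2)
print(np.linalg.matrix_rank(X2))  # 2
```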

    Community-aware network sparsification

    Network sparsification aims to reduce the number of edges of a network while maintaining its structural properties; such properties include shortest paths, cuts, spectral measures, and network modularity. Sparsification has multiple applications, such as speeding up graph-mining algorithms, aiding graph visualization, and identifying the important network edges. In this paper we consider a novel formulation of the network-sparsification problem. In addition to the network, we also consider as input a set of communities. The goal is to sparsify the network so as to preserve the network structure with respect to the given communities. We introduce two variants of the community-aware sparsification problem, leading to sparsifiers that satisfy different connectedness properties over the communities. From the technical point of view, we prove hardness results and devise effective approximation algorithms. Our experimental results on a large collection of datasets demonstrate the effectiveness of our algorithms.
    https://epubs.siam.org/doi/10.1137/1.9781611974973.48
    Accepted manuscript
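
    As a naive point of reference (not the paper's approximation algorithms; names and interface are ours), a Python/NetworkX sketch of the weakest plausible connectedness requirement, keeping every input community exactly as internally connected as it was, might look like this:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def community_connected_sparsifier(G, communities):
    """Naive baseline: for each input community, retain a spanning
    forest of the subgraph it induces, so every community keeps the
    same internal connected components with few retained edges."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))
    for community in communities:
        induced = G.subgraph(community)
        # minimum_spanning_tree returns a spanning forest when the
        # induced subgraph is disconnected, preserving its components.
        forest = nx.minimum_spanning_tree(induced)
        H.add_edges_from(forest.edges(data=True))
    return H

# Usage on a classic example graph with detected communities.
G = nx.karate_club_graph()
communities = greedy_modularity_communities(G)
H = community_connected_sparsifier(G, communities)
print(G.number_of_edges(), "->", H.number_of_edges())
```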

    Privacy-Aware MMSE Estimation

    We investigate the problem of the predictability of a random variable Y under a privacy constraint dictated by a random variable X, correlated with Y, where both predictability and privacy are assessed in terms of the minimum mean-squared error (MMSE). Given that X and Y are connected via a binary-input symmetric-output (BISO) channel, we derive the optimal random mapping P_{Z|Y} such that the MMSE of Y given Z is minimized while the MMSE of X given Z is greater than (1-ε)var(X) for a given ε ≥ 0. We also consider the case where (X, Y) are continuous and P_{Z|Y} is restricted to be an additive-noise channel.
    Comment: 9 pages, 3 figures
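
    In symbols, the trade-off described above is the following optimization over privacy mappings (our transcription of the abstract, using the standard definition of the MMSE):

```latex
% Privacy-aware MMSE trade-off: choose the release channel P_{Z|Y}
% so that Y stays predictable from Z while X remains nearly
% unpredictable from Z.
\[
  \min_{P_{Z|Y}} \; \mathsf{mmse}(Y \mid Z)
  \quad \text{subject to} \quad
  \mathsf{mmse}(X \mid Z) \;\ge\; (1-\epsilon)\,\mathsf{var}(X),
  \qquad \epsilon \ge 0,
\]
\[
  \text{where } \mathsf{mmse}(U \mid V)
  = \mathbb{E}\big[(U - \mathbb{E}[U \mid V])^{2}\big].
\]
```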