
    Parameterized Inapproximability of Target Set Selection and Generalizations

    In this paper, we consider the Target Set Selection problem: given a graph and a threshold value $thr(v)$ for each vertex $v$ of the graph, find a minimum-size vertex subset to "activate" such that all vertices of the graph are activated at the end of the propagation process. A vertex $v$ is activated during the propagation process if at least $thr(v)$ of its neighbors are activated. This problem models several practical issues, such as faults in distributed networks or word-of-mouth recommendations in social networks. We show that for any functions $f$ and $\rho$ this problem cannot be approximated within a factor of $\rho(k)$ in $f(k) \cdot n^{O(1)}$ time, unless FPT = W[P], even for restricted thresholds (namely constant and majority thresholds). We also study the cardinality-constrained maximization and minimization versions of the problem, for which we prove similar hardness results.
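    To make the propagation rule concrete, here is a minimal Python sketch of the activation process; the example graph, thresholds, and seed set are illustrative assumptions, not taken from the paper.

    # Sketch of threshold propagation: a vertex activates once at least
    # thr[v] of its neighbors are active. Graph and thresholds are invented.
    def propagate(neighbors, thr, seeds):
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in neighbors:
                if v not in active and sum(1 for u in neighbors[v] if u in active) >= thr[v]:
                    active.add(v)
                    changed = True
        return active

    # Example: a path a-b-c-d with threshold 1 everywhere; seeding {a}
    # activates the whole graph in a chain reaction.
    neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    thr = {"a": 1, "b": 1, "c": 1, "d": 1}
    print(propagate(neighbors, thr, {"a"}))  # {'a', 'b', 'c', 'd'}

    The Target Set Selection question is then: what is the smallest seed set for which propagate returns all vertices?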

    The Complexity of Finding Effectors

    The NP-hard EFFECTORS problem on directed graphs is motivated by applications in network mining, particularly the analysis of probabilistic information-propagation processes in social networks. In the corresponding model, the arcs carry probabilities, and a probabilistic diffusion process activates nodes from neighboring activated nodes with the probabilities specified by the arcs. The goal is to explain a given network activation state as well as possible using a minimum number of "effector nodes", which are selected before the activation process starts. We correct, complement, and extend previous work from the data mining community through a more thorough computational complexity analysis of EFFECTORS, identifying both tractable and intractable cases. To this end, we also exploit a parameterization measuring the "degree of randomness" (the number of "really" probabilistic arcs), which might prove useful for analyzing other probabilistic network diffusion problems as well.
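    The diffusion model can be simulated directly. Below is a minimal Python sketch of one independent-cascade-style run from a set of effector nodes; the graph, arc probabilities, and effector choice are illustrative assumptions, not the paper's exact model.

    import random

    # One random diffusion: when u becomes active, each outgoing arc (u, v)
    # with probability p fires once and may activate v. All data is invented.
    def diffuse(arcs, effectors, seed=0):
        rng = random.Random(seed)
        active = set(effectors)
        frontier = list(effectors)
        while frontier:
            u = frontier.pop()
            for v, p in arcs.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        return active

    arcs = {"s": [("a", 0.9), ("b", 0.3)], "a": [("c", 0.5)]}
    print(diffuse(arcs, {"s"}))  # one sampled activation state "explained" by effector s

    The EFFECTORS problem inverts this: given an observed activation state, pick a minimum set of effectors so that the diffusion process explains that state as well as possible.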

    Algorithm for Adapting Cases Represented in a Tractable Description Logic

    Case-based reasoning (CBR) based on description logics (DLs) has gained considerable attention lately. Adaptation is a basic task in CBR inference that can be modeled as the knowledge base revision problem and solved in propositional logic. In DLs, however, it remains a challenging problem, since existing revision operators work well only for strictly restricted DLs of the \emph{DL-Lite} family, and it is difficult to design a revision algorithm that is both syntax-independent and fine-grained. In this paper, we present a new method for adaptation based on the DL $\mathcal{EL}_{\bot}$. Following the idea of adaptation as revision, we first extend the logical basis for describing cases from propositional logic to $\mathcal{EL}_{\bot}$ and present a formalism for adaptation based on $\mathcal{EL}_{\bot}$. We then give an adaptation algorithm for this formalism and demonstrate that it is syntax-independent and fine-grained. Our work provides a logical basis for adaptation in CBR systems where cases and domain knowledge are described by the tractable DL $\mathcal{EL}_{\bot}$.
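    As rough intuition for "adaptation as revision" (in a toy propositional setting, far simpler than the paper's $\mathcal{EL}_{\bot}$ formalism), here is a hypothetical Python sketch: the retrieved case is revised by the new problem's facts, discarding case facts that conflict with them. The conflict relation and case data are invented for illustration.

    # Toy revision operator: keep the new facts, plus every retrieved-case
    # fact that does not conflict with anything kept so far.
    def revise(case_facts, new_facts, conflicts):
        kept = set(new_facts)
        for f in case_facts:
            if all((f, g) not in conflicts and (g, f) not in conflicts for g in kept):
                kept.add(f)
        return kept

    case = {"engine:diesel", "doors:5"}         # retrieved case description
    problem = {"engine:electric"}               # constraints of the new problem
    conflicts = {("engine:diesel", "engine:electric")}
    print(revise(case, problem, conflicts))     # {'engine:electric', 'doors:5'}

    The paper's contribution is, roughly, doing this kind of revision over $\mathcal{EL}_{\bot}$ knowledge bases in a way that depends on the meaning of the axioms rather than their syntax, while discarding as little of the case as possible.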

    Narrowing the Gap: Random Forests In Theory and In Practice

    Despite widespread interest and practical use, the theoretical properties of random forests are still not well understood. In this paper we contribute to this understanding in two ways. First, we present a new theoretically tractable variant of random regression forests and prove that our algorithm is consistent. Second, we provide an empirical evaluation comparing our algorithm and other theoretically tractable random forest models to the random forest algorithm used in practice. Our experiments provide insight into the relative importance of the different simplifications that theoreticians have made to obtain tractable models for analysis.
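    For intuition, here is a minimal Python sketch of the kind of "purely random" regression forest theoreticians often analyze: split features and cut points are chosen independently of the labels, which is what makes consistency arguments tractable. This is a generic illustration of that simplification, not the specific variant proposed in the paper.

    import random

    # A tree whose structure ignores the labels: at each node, pick a random
    # feature and a random cut point inside the current cell. Leaves predict
    # the mean label of the training points that fall in the cell.
    def build_tree(X, y, depth, rng):
        if depth == 0 or len(y) <= 1:
            return sum(y) / len(y) if y else 0.0
        j = rng.randrange(len(X[0]))                      # random feature
        lo, hi = min(x[j] for x in X), max(x[j] for x in X)
        if lo == hi:
            return sum(y) / len(y)
        t = rng.uniform(lo, hi)                           # label-independent cut
        left = [(x, v) for x, v in zip(X, y) if x[j] <= t]
        right = [(x, v) for x, v in zip(X, y) if x[j] > t]
        if not left or not right:
            return sum(y) / len(y)
        return (j, t,
                build_tree([x for x, _ in left], [v for _, v in left], depth - 1, rng),
                build_tree([x for x, _ in right], [v for _, v in right], depth - 1, rng))

    def predict(node, x):
        while isinstance(node, tuple):
            j, t, l, r = node
            node = l if x[j] <= t else r
        return node

    def forest_predict(X, y, x, n_trees=50, depth=4, seed=0):
        rng = random.Random(seed)
        trees = [build_tree(X, y, depth, rng) for _ in range(n_trees)]
        return sum(predict(tr, x) for tr in trees) / n_trees

    X = [[0.1], [0.4], [0.6], [0.9]]
    y = [0.0, 0.0, 1.0, 1.0]
    print(forest_predict(X, y, [0.8]))  # typically near 1.0 for this toy step function

    The gap the paper studies is between forests like this, whose randomized, data-independent splits admit consistency proofs, and the practical algorithm, whose splits greedily optimize over the labels.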