The Complexity of Selecting Maximal Solutions
Many important computational problems involve finding a maximal (with respect to set inclusion) solution in some combinatorial context. We study such maximality problems from the complexity point of view, and categorize their complexity precisely in terms of tight upper and lower bounds. Our results give characterizations of coNP, DP, Π^p_2, FP^NP_||, FNP//OptP[log n], and FP^{Σ^p_2}_|| in terms of subclasses of maximality problems. An important consequence of our results is that finding an X-minimal satisfying truth assignment for a given CNF boolean formula is complete for FNP//OptP[log n], solving an open question posed by Papadimitriou [Proceedings of the 32nd IEEE Symposium on the Foundations of Computer Science, 1991, pp. 163-169].
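The X-minimal satisfying assignment problem mentioned above can be illustrated with a small sketch. This is not the paper's construction (the paper studies the problem's complexity, not an algorithm); it simply brute-forces any satisfying assignment, then greedily sets true variables to false while the formula stays satisfied, which yields an assignment minimal with respect to set inclusion over the set of true variables (here X is taken to be the full variable set). The DIMACS-style clause encoding is an assumption for illustration.

```python
from itertools import product

def satisfies(cnf, assignment):
    # cnf: list of clauses; each clause is a list of signed ints,
    # e.g. 3 means "x3 is true", -3 means "x3 is false".
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in cnf)

def minimal_satisfying(cnf, n):
    # Brute-force some satisfying assignment over variables 1..n,
    # then greedily shrink its set of true variables. The result is
    # minimal w.r.t. set inclusion, not necessarily minimum-cardinality.
    for bits in product([False, True], repeat=n):
        assignment = dict(enumerate(bits, start=1))
        if satisfies(cnf, assignment):
            break
    else:
        return None  # formula is unsatisfiable
    for v in range(1, n + 1):
        if assignment[v]:
            assignment[v] = False
            if not satisfies(cnf, assignment):
                assignment[v] = True  # flipping v breaks a clause; keep it
    return assignment
```

Note that minimality under set inclusion is much cheaper to certify than minimum cardinality: each greedy flip only needs one satisfiability check, which is the structural distinction the paper's complexity results exploit.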
Duality between Feature Selection and Data Clustering
The feature-selection problem is formulated from an information-theoretic perspective. We show that the problem can be efficiently solved by an extension of the recently proposed info-clustering paradigm. This reveals the fundamental duality between feature selection and data clustering, which is a consequence of the more general duality between the principal partition and the principal lattice of partitions in combinatorial optimization.
Making Robust Decisions in Discrete Optimization Problems as a Game against Nature
In this paper a discrete optimization problem under uncertainty is discussed. Solving such a problem can be seen as a game against nature. In order to choose a solution, the minmax and minmax regret criteria can be applied. In this paper an extension of the known minmax (regret) approach is proposed. It is shown how different types of uncertainty can be simultaneously taken into account. Some exact and approximation algorithms for choosing a best solution are constructed.
Keywords: discrete optimization, minmax, minmax regret, game against nature.
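The minmax regret criterion named in this abstract can be sketched concretely (an illustrative toy, not the paper's extended approach): for each candidate solution, its regret in a scenario is the gap between its cost and the best cost achievable in that scenario, and the robust choice minimizes the worst-case regret over all scenarios.

```python
def min_max_regret(costs):
    # costs[s][k]: cost of solution s under scenario k (minimization problem).
    # Regret of s in scenario k = costs[s][k] - best achievable cost in k.
    scenarios = range(len(next(iter(costs.values()))))
    best = [min(c[k] for c in costs.values()) for k in scenarios]
    regret = {s: max(c[k] - best[k] for k in scenarios)
              for s, c in costs.items()}
    return min(regret, key=regret.get)

# With costs {"A": [10, 2], "B": [4, 8], "C": [6, 6]}, solutions A and B
# are each optimal in one scenario but poor in the other (regret 6), while
# C is never optimal yet has worst-case regret only 4, so C is selected.
```

This is exactly the "hedging" behavior that makes the regret criterion attractive under uncertainty: it can prefer a solution that is optimal in no single scenario.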
Scalable Exact Parent Sets Identification in Bayesian Networks Learning with Apache Spark
In Machine Learning, the parent set identification problem is to find a set of random variables that best explain a selected variable, given the data and some predefined scoring function. This problem is a critical component of structure learning of Bayesian networks and of Markov blanket discovery, and thus has many practical applications, ranging from fraud detection to clinical decision support. In this paper, we introduce a new distributed-memory approach to the exact parent set assignment problem. To achieve scalability, we derive theoretical bounds to constrain the search space when the MDL scoring function is used, and we reorganize the underlying dynamic programming such that computational density is increased and fine-grain synchronization is eliminated. We then design an efficient realization of our approach on the Apache Spark platform. Through experimental results, we demonstrate that the method maintains strong scalability on a 500-core standalone Spark cluster, and that it can be used to efficiently process data sets with 70 variables, far beyond the reach of currently available solutions.
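The exact parent set problem this abstract describes can be illustrated with a minimal sequential sketch, nothing like the paper's bounded, distributed dynamic program: score every candidate parent set up to a size limit with a simplified MDL-style score (negative log-likelihood plus a BIC-like parameter penalty, an assumption for illustration) and return the best one.

```python
from itertools import combinations
from math import log
from collections import Counter

def mdl_score(data, target, parents):
    # data: list of dicts mapping variable name -> discrete value.
    # Simplified MDL: negative log-likelihood of target given parents,
    # plus a penalty proportional to the number of free parameters.
    n = len(data)
    joint = Counter((tuple(row[p] for p in parents), row[target])
                    for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)
    ll = sum(cnt * log(cnt / marg[ctx]) for (ctx, _), cnt in joint.items())
    states = len({row[target] for row in data})
    params = len(marg) * (states - 1)
    return -ll + 0.5 * log(n) * params  # lower is better

def best_parent_set(data, target, candidates, max_size=2):
    # Exhaustively score all candidate parent sets up to max_size and
    # return the lowest-scoring one: exact, but exponential in max_size,
    # which is why pruning bounds and parallelism matter at scale.
    subsets = (set(c) for k in range(max_size + 1)
               for c in combinations(candidates, k))
    return min(subsets, key=lambda ps: mdl_score(data, target, sorted(ps)))
```

The penalty term is what keeps the search honest: adding an irrelevant parent multiplies the number of conditioning contexts without improving the likelihood, so larger parent sets only win when they genuinely explain the target.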