A common approach for learning Bayesian networks (BNs) from data is based on the use of a scoring metric to evaluate the fitness of any given candidate network to the data, together with a search method to explore the search space, which is usually the set of directed acyclic graphs (DAGs). The most efficient search methods used in this context are based on greedy hill climbing, either deterministic or stochastic. One such method that has been applied with some success is hill climbing with random restart. In this article we study a new method of this type that restarts the local search when it becomes trapped at a local optimum. It uses problem-specific knowledge about BNs and the information provided by the database itself (by testing the conditional independencies that hold in the current solution of the search process). We also study a new definition of neighborhood for the space of DAGs, using the classical operators of arc addition and arc deletion together with a new operator for arc reversal. The proposed methods are empirically tested on two different domains: ALARM and INSURANCE. © 2003 Wiley Periodicals, Inc.
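To make the search setting concrete, the following is a minimal sketch of greedy hill climbing with random restart over the space of DAGs, using the classical arc addition, arc deletion, and arc reversal operators. It is not the article's algorithm: the scoring function in the usage example is a toy placeholder (a real system would use a metric such as BIC or BDeu computed from data), and the restart scheme here is plain random restart rather than the knowledge-guided restart the article proposes.

```python
import random
from itertools import permutations

def creates_cycle(edges, u, v):
    """Would adding arc u->v create a directed cycle? True iff v reaches u."""
    stack, seen = [v], set()
    while stack:
        n = stack.pop()
        if n == u:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(w for (x, w) in edges if x == n)
    return False

def neighbors(nodes, edges):
    """Classical operators: arc addition, arc deletion, arc reversal."""
    for u, v in permutations(nodes, 2):          # additions
        if (u, v) not in edges and (v, u) not in edges:
            if not creates_cycle(edges, u, v):
                yield edges | {(u, v)}
    for u, v in list(edges):
        yield edges - {(u, v)}                   # deletions
        rest = edges - {(u, v)}                  # reversals
        if not creates_cycle(rest, v, u):
            yield rest | {(v, u)}

def hill_climb(nodes, score, start, max_iter=1000):
    """Best-improvement greedy ascent; stops at a local optimum."""
    current, s = start, score(start)
    for _ in range(max_iter):
        best, best_s = None, s
        for cand in neighbors(nodes, current):
            cs = score(cand)
            if cs > best_s:
                best, best_s = cand, cs
        if best is None:
            break  # no improving neighbor: local optimum
        current, s = best, best_s
    return current, s

def restart_hill_climb(nodes, score, restarts=5, seed=0):
    """Plain random restart: each restart begins from a random DAG
    built along a random node ordering (always acyclic)."""
    rng = random.Random(seed)
    best, best_s = frozenset(), score(frozenset())
    for _ in range(restarts):
        order = list(nodes)
        rng.shuffle(order)
        start = frozenset((order[i], order[j])
                          for i in range(len(order))
                          for j in range(i + 1, len(order))
                          if rng.random() < 0.3)
        sol, s = hill_climb(nodes, score, start)
        if s > best_s:
            best, best_s = sol, s
    return best, best_s
```

A usage example with a toy score that simply rewards agreement with a known target structure (standing in for a data-driven metric):

```python
nodes = ['A', 'B', 'C']
target = frozenset({('A', 'B'), ('B', 'C')})
score = lambda g: -len(g ^ target)  # negative symmetric difference
best, s = restart_hill_climb(nodes, score, restarts=3, seed=1)
# best recovers the target structure with score 0
```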