A Tighter Error Bound for Decision Tree Learning Using PAC Learnability
Error bounds for decision trees are generally based on the depth or breadth of the tree. In this paper, we propose a bound on the error rate that depends on both the depth and the breadth of the specific decision tree constructed from the training samples. This bound is derived from a sample complexity estimate based on PAC learnability. The proposed bound is compared with traditional error bounds on several machine learning benchmark data sets as well as on an image data set used in Content-Based Image Retrieval (CBIR). Experimental results demonstrate that the proposed bound gives a tighter estimate of the empirical error.
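The abstract does not give the bound's exact form, but the general shape of a PAC-style bound of this kind can be sketched. Below is a minimal illustration of the standard finite-hypothesis-class bound, true_error ≤ train_error + sqrt((ln|H| + ln(1/δ)) / (2m)), where ln|H| is crudely bounded using the tree's depth and breadth; the function name, parameters, and the node-counting heuristic are all assumptions for illustration, not the paper's proposed bound.

```python
import math

def pac_style_bound(train_error, n_samples, depth, breadth,
                    n_features, delta=0.05):
    """Illustrative PAC-style generalization bound for a decision tree.

    NOTE: a hypothetical sketch; the paper's actual bound is not
    specified in the abstract. Uses the finite-class bound
        true_error <= train_error + sqrt((ln|H| + ln(1/delta)) / (2m)),
    with ln|H| roughly bounded via the tree's depth and breadth.
    """
    # Maximum number of nodes in a tree of this depth and branching factor.
    n_nodes = sum(breadth ** level for level in range(depth + 1))
    # Rough log-count of trees: each node chooses among feature/branch options.
    ln_H = n_nodes * math.log(n_features * breadth + 1)
    slack = math.sqrt((ln_H + math.log(1.0 / delta)) / (2.0 * n_samples))
    return train_error + slack

# Example: a depth-4 binary tree over 20 features, 10,000 training samples.
print(pac_style_bound(0.08, 10_000, depth=4, breadth=2, n_features=20))
```

The slack term shrinks as the sample size grows and widens with depth and breadth, which is the qualitative behavior the abstract describes: bounds that account for the structure of the specific learned tree can be tighter than ones based on depth or breadth alone.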