The reductionist approach has proven a powerful guide for scientific advancement over the last 300 years; constructing the simplest models consistent with the data remains a goal across the sciences. Yet are there instances where the blind pursuit of "simple" models is doomed from the start? Can we construct tests of internal consistency relating to the minimal duration of data required by a given model? In short, if the aim is to model a phenomenon, should we just do it, or first ponder the possible outcomes? This question is addressed in the context of the datacomp.dat data set.