Establishing trust in automated reasoning
Since its beginnings in the 1940s, automated reasoning by computers has
become a tool of ever-growing importance in scientific research. So far, the
rules underlying automated reasoning have mainly been formulated by humans, in
the form of program source code. Rules derived from large amounts of data, via
machine learning techniques, are a complementary approach currently under
intense development. The question of why we should trust these systems, and the
results obtained with their help, has been discussed by philosophers of science
but has so far received little attention from practitioners. The present work
focuses on independent reviewing, an important source of trust in science, and
identifies the characteristics of automated reasoning systems that affect their
reviewability. It also discusses possible steps towards increasing
reviewability and trustworthiness through a combination of technical and
social measures.