Unsupervised Opinion Aggregation -- A Statistical Perspective
Complex decision-making systems rarely have direct access to the current
state of the world and they instead rely on opinions to form an understanding
of what the ground truth could be. Even in problems where experts provide
opinions without any intention to manipulate the decision maker, it is
challenging to decide which expert's opinion is more reliable -- a challenge
that is further amplified when the decision maker has limited, delayed, or no
access to the ground truth after the fact. This paper explores a statistical
approach to infer the competence of each expert based on their opinions without
any need for the ground truth. Echoing the logic behind what is commonly
referred to as \textit{the wisdom of crowds}, we propose measuring the
competence of each expert by their likelihood of agreeing with their peers. We
further show that the more reliable an expert is, the more likely they are to
agree with their peers. We leverage this fact to propose a completely
unsupervised version of the na\"{i}ve Bayes classifier and show that the
proposed technique is asymptotically optimal for a large class of problems. In
addition to aggregating a large block of opinions, we further apply our
technique to online opinion aggregation and to decision-making based on a
limited number of opinions.

Comment: This research was conducted during Noyan Sevuktekin's time at the
University of Illinois at Urbana-Champaign, and the results were first
presented in Chapter 3 of his dissertation, entitled "Learning From
Opinions". Permalink: https://hdl.handle.net/2142/11081