In a world where ideas flow freely between people across multiple platforms,
we often find ourselves relying on others' opinions without an objective
standard for judging their accuracy. The present study tests
an agreement-in-confidence hypothesis of advice perception, which holds that
internal metacognitive evaluations of decision confidence play an important
functional role in the perception and use of social information, such as peers'
advice. We propose that confidence can be used, computationally, to estimate
advisors' trustworthiness and advice reliability. Specifically, these processes
are hypothesized to be particularly important in situations where objective
feedback is absent or difficult to acquire. Here, we use a judge-advisor system
paradigm to precisely manipulate the profiles of virtual advisors whose
opinions are provided to participants performing a perceptual decision-making
task. We find that when advisors' and participants' judgments are independent,
people are able to discriminate subtle advice features, like confidence
calibration, whether or not objective feedback is available. However, when
observers' judgments (and judgment errors) are correlated, as is the case in
many social contexts, predictable distortions emerge between feedback and
feedback-free scenarios. A simple model of advice reliability estimation,
endowed with metacognitive insight, is able to explain key patterns of results
observed in the human data. We use agent-based modeling to explore implications
of these individual-level decision strategies for network-level patterns of
trust and belief formation.