This work addresses the problem of sharing partial information within social
learning strategies. In traditional social learning, agents solve a distributed
multiple hypothesis testing problem by performing two operations at each
instant: first, agents incorporate information from private observations to
form their beliefs over a set of hypotheses; second, agents exchange their
full beliefs and combine them locally with their neighbors. Within a sufficiently
informative environment and as long as the connectivity of the network allows
information to diffuse across agents, these algorithms enable agents to learn
the true hypothesis. Instead of sharing their full beliefs, this work
considers the case in which agents share only their belief about
one hypothesis of interest, with the purpose of evaluating its validity, and
derives conditions under which this policy does not affect truth learning. We
propose two approaches for sharing partial information, depending on whether
or not agents behave in a self-aware manner. The results show how different
learning regimes arise, depending on the approach employed and on the inherent
characteristics of the inference problem. Furthermore, the analysis reveals
the possibility of deceiving the network when the evaluated hypothesis of
interest is sufficiently close to the truth.
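The two-step procedure described above, restricted to partial sharing, can be sketched in code. The following is a minimal illustration, not the paper's exact algorithm: it assumes three agents with Gaussian likelihoods, a hypothetical doubly-stochastic combination matrix `A`, geometric (log-linear) belief averaging, and the non-self-aware variant in which receivers reconstruct a full belief from the single shared entry by spreading the remaining mass uniformly over the other hypotheses. All names, parameter values, and the reconstruction rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 agents, 3 hypotheses, unit-variance Gaussian likelihoods.
H = 3            # number of hypotheses
N = 3            # number of agents
theta0 = 0       # index of the true hypothesis
toi = 0          # hypothesis of interest whose belief entry is shared

means = np.array([0.0, 1.0, 2.0])   # likelihood mean under each hypothesis
A = np.array([[0.6, 0.2, 0.2],      # assumed combination (weight) matrix
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6]])

beliefs = np.full((N, H), 1.0 / H)  # uniform initial beliefs

def likelihood(x):
    # Gaussian likelihood of observation x under each hypothesis
    return np.exp(-0.5 * (x - means) ** 2)

for _ in range(50):
    # Step 1: local Bayesian update with a private observation
    psi = np.empty((N, H))
    for i in range(N):
        x = rng.normal(means[theta0], 1.0)
        psi[i] = beliefs[i] * likelihood(x)
        psi[i] /= psi[i].sum()

    # Partial sharing: each agent transmits only psi[j, toi].
    # Non-self-aware reconstruction: the remaining probability mass
    # is spread uniformly over the unshared hypotheses.
    shared = np.empty((N, H))
    for j in range(N):
        shared[j] = (1.0 - psi[j, toi]) / (H - 1)
        shared[j, toi] = psi[j, toi]
    shared = np.clip(shared, 1e-12, 1.0)  # avoid log(0)

    # Step 2: geometric (log-linear) combination over neighbors
    beliefs = np.exp(A @ np.log(shared))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs[:, toi])  # belief each agent places on the hypothesis of interest
```

Since the hypothesis of interest coincides with the truth in this toy run, the shared entry carries the informative component of each belief and the agents' mass on `toi` grows over the iterations; when `toi != theta0`, the reconstruction rule discards information about the unshared hypotheses, which is where the different learning regimes discussed in the paper arise.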