Objectives: To review user signal rating activity within the Canadian Network for Public Health Intelligence's (CNPHI's) Knowledge Integration using Web-based Intelligence (KIWI) technology by answering three questions: (1) who is rating, (2) how are users rating, and (3) how well are users rating?

Methods: KIWI rating data were extracted from the CNPHI platform. Zoonotic & Emerging program signals whose first rating occurred between January 1, 2016 and December 31, 2017 were included. Krippendorff's alpha was used to estimate inter-rater reliability between users. A z-test was used to identify whether users tended to rate within (versus outside) the 95% confidence interval of the average community rating.

Results: The 37 users who rated signals represented 20 organizations. Ten users (27.0%) rated ≥10% of all rated signals, and their inter-rater reliability estimate was 72.4% (95% CI: 66.5-77.9%). Five users tended to rate significantly outside the average community rating. On average, users rated within a signal's 95% CI 58.4% of the time. All users who rated significantly within the average community rating also rated outside the 95% CI at least once.

Discussion: A diverse community of raters participated in rating the signals. The Krippendorff's alpha estimate indicated moderate reliability among users who rated ≥10% of signals. Inter-rater reliability was observed to increase for users with more experience rating signals.

Conclusions: Diversity was observed between user ratings. It is hypothesized that rating diversity is influenced by differences in user expertise and experience, and that the number of times a user rates within and outside a signal's 95% CI can serve as a proxy for user expertise. Introducing a weighted rating algorithm within KIWI that takes this into account could be beneficial.
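The z-test described in the Methods can be illustrated with a one-proportion z-test: given how often a user's ratings fell inside the 95% CI of the average community rating, test whether that fraction differs from a nominal coverage rate. This is a minimal sketch under assumptions, not the paper's actual implementation; the function name and the null proportion `p0` are hypothetical choices for illustration.

```python
import math

def community_ci_ztest(n_inside, n_total, p0=0.95):
    """One-proportion z-test sketch (hypothetical helper).

    n_inside -- number of a user's ratings that fell inside the
                community average's 95% CI
    n_total  -- total number of signals the user rated
    p0       -- assumed null proportion of ratings inside the CI

    Returns (z statistic, two-sided p-value from the normal CDF).
    """
    p_hat = n_inside / n_total
    se = math.sqrt(p0 * (1 - p0) / n_total)   # SE under the null
    z = (p_hat - p0) / se
    # two-sided p-value via the standard normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: a user rated inside the community CI on 30 of 50 signals;
# a large negative z would flag this user as rating outside the
# community consensus more often than the null proportion predicts.
z, p = community_ci_ztest(30, 50)
```

A per-user statistic of this form is one way the abstract's proposed expertise proxy (counts of ratings inside versus outside a signal's 95% CI) could be quantified before feeding it into a weighted rating algorithm.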