
    Automatic Visual Tracking and Social Behaviour Analysis with Multiple Mice

    <div><p>Social interactions are made of complex behavioural actions found in all mammals, including humans and rodents. Mouse models are increasingly being used in preclinical research to understand the biological basis of social pathologies or abnormalities. However, reliable and flexible automatic systems able to precisely quantify the social behavioural interactions of multiple mice are still missing. Here, we present a system built on two components: a module able to accurately track the position of multiple interacting mice from videos, regardless of their fur colour or light settings, and a module that automatically characterises social and non-social behaviours. The behavioural analysis is obtained by deriving a new set of specialised spatio-temporal features from the tracker output. These features are then fed to a learning-by-example classifier, which predicts, for each frame and for each mouse in the cage, one of the behaviours learnt from examples given by the experimenters. The system is validated on an extensive set of experimental trials involving multiple mice in an open arena. In a first evaluation we compare the classifier output with the independent evaluations of two human graders, obtaining comparable results. We then show the applicability of our technique to settings with up to four interacting mice. The system is also compared with a solution recently proposed in the literature that, like ours, addresses the problem with a learning-by-example approach. Finally, we further validated our automatic system by differentiating between C57BL/6J (a commonly used reference inbred strain) and BTBR T+tf/J (a mouse model for autism spectrum disorders).
Overall, these data demonstrate the validity and effectiveness of this new machine learning system in detecting social and non-social behaviours in multiple (>2) interacting mice, and its versatility in dealing with different experimental settings and scenarios.</p></div>

    Fully automated analysis comparing the interactions of C57BL/6J mice in cages of two (N = 6) or four (N = 8) cage mates (Dataset B).

    <p>The top graphs show the occurrence of each social and non-social interaction. For the same reasons explained in <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0074557#pone-0074557-g005" target="_blank">Fig. 5</a>, the “StandTogether” behaviour is shown in a separate graph. The bottom graphs show the aggregate comparison of social and non-social interactions. The null hypothesis that mice show similar social/non-social behaviour, regardless of the number of mice interacting, is rejected in both cases by a two-sample, two-tailed t-test assuming equal variance. This shows an increase of social activity when C57BL/6J mice can interact with more littermates. The significance values of the t-test are (*) p&lt;0.05, (**) p&lt;0.005, (***) p&lt;0.0005.</p>
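The group comparison described above is a standard two-sample, two-tailed t-test with equal variances assumed. A minimal sketch of that test in Python follows; the per-cage interaction counts below are synthetic placeholders, not data from the paper:

```python
# Illustrative sketch (not the authors' code): two-sample, two-tailed
# t-test assuming equal variance, as used for the group comparisons.
# The occurrence counts are hypothetical placeholders.
from scipy import stats

two_mates = [12, 15, 11, 14, 13, 12]            # N = 6 cages of two (synthetic)
four_mates = [21, 19, 24, 22, 20, 23, 25, 18]   # N = 8 cages of four (synthetic)

# equal_var=True gives the pooled-variance (Student's) t-test
t_stat, p_value = stats.ttest_ind(two_mates, four_mates, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.6f}")
```

With clearly separated synthetic groups like these, the test rejects the null hypothesis, mirroring the significance thresholds (* / ** / ***) reported in the figure.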

    Fully automated analysis comparing the interactions of C57BL/6J (N = 10) and BTBR (N = 6) mice of all experiments with two animals per cage (Dataset B).

    <p>The top graphs show the overall occurrence of each social and non-social interaction. We generated a separate graph for the “StandTogether” behaviour since its classification as either social or non-social is arguable. The bottom graphs show the aggregate comparison of social and non-social interactions. The null hypothesis that C57BL/6J and BTBR mice show similar social/non-social behaviour is rejected in both cases by a two-sample, two-tailed t-test assuming equal variance. This shows impaired social activity in the BTBR case. The significance values of the t-test are (*) p&lt;0.05, (**) p&lt;0.005, (***) p&lt;0.0005.</p>

    Comparison of classification performance between the method presented in this paper and the classification approach of Burgos-Artizzu et al. [23].

    <p>The results are the average across all the mouse pairs of Dataset A, computed employing all the metrics of <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0074557#pone-0074557-t002" target="_blank">Tables 2</a> and <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0074557#pone-0074557-t003" target="_blank">3</a>. In all instances our approach outperforms <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0074557#pone.0074557-BurgosArtizzu1" target="_blank">[23]</a>. The full results are available as additional material (<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0074557#pone.0074557.s001" target="_blank">Table S1</a>).</p>

    Examples of challenging mouse interactions.

    <p>The red, blue and yellow lines represent the contours detected by the tracking algorithm. (a) The whole mouse arena; (b,d,f,h) details of the algorithm output; (c,e,g,i) unprocessed IR details.</p>

    Algorithm diagram summarizing the behaviour classification phases.

    <p>The “Position Tracking” stage is composed of a pipeline of three modules. The blob detection module initialises the system, estimates the foreground shapes (i.e. it locates possible mice), and filters out unfeasible structures; the temporal watershed module identifies mouse positions, shapes, and directionality; the mice matching module tracks the identity of each mouse. Then, a feature vector composed of 13 measurements describes the relative position, movement and attitude of the mice for all possible pairs. Finally, the continuous action description for the mice is generated by our Temporal Random Forest approach, which evaluates ensembles of decision trees through time.</p>
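The classification stage described above can be approximated in a few lines. The sketch below is NOT the authors' Temporal Random Forest implementation: it imitates the idea by stacking each frame's 13-dimensional pairwise feature vector with its temporal neighbours and feeding the windows to a standard random forest, so each per-frame prediction also sees nearby frames. All feature values and labels are random placeholders:

```python
# Hedged sketch of per-frame behaviour classification over temporal windows.
# Not the paper's algorithm; a conventional random forest on stacked frames.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_frames, n_features, half_window = 500, 13, 2   # 13 pairwise measurements/frame

frames = rng.normal(size=(n_frames, n_features))  # placeholder tracker features
labels = rng.integers(0, 2, size=n_frames)        # e.g. social vs non-social

# Stack each frame with its temporal neighbours into one window vector
pad = np.pad(frames, ((half_window, half_window), (0, 0)), mode="edge")
windows = np.stack([pad[i:i + 2 * half_window + 1].ravel()
                    for i in range(n_frames)])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(windows, labels)
per_frame = clf.predict(windows)                  # one behaviour label per frame
print(per_frame.shape)  # (500,)
```

The window half-width and forest size here are arbitrary; the point is only that temporal context enters through the stacked feature vector rather than through the classifier itself.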

    Behaviour agreement between the two graders (top section) and quality of the system compared to the two graders (middle and bottom sections) on Dataset A (higher values are better).

    <p>The agreement is computed on a frame-by-frame basis. Each value represents the percentage of frames with class agreement. The different columns show the results for different types of behaviours:</p><p>★ accuracy on all behaviours considered separately;</p><p>† accuracy on behaviours grouped into social and non-social meta-classes;</p><p>‡ precision on the social behaviours;</p><p>° precision on the non-social behaviours.</p><p>Accuracy = (TP+TN)/(TP+TN+FN+FP) and Precision = TP/(TP+FP), where TP = true positives, FP = false positives, FN = false negatives and TN = true negatives. The average grader/system agreement is comparable to the average grader/grader agreement.</p>
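The two metrics defined in the caption are direct ratios of confusion-matrix counts. A small helper mirroring those definitions, with hypothetical counts (not values from the paper):

```python
# Accuracy and precision exactly as defined in the table caption.
def accuracy(tp, tn, fp, fn):
    """(TP+TN) / (TP+TN+FN+FP): fraction of frames classified correctly."""
    return (tp + tn) / (tp + tn + fn + fp)

def precision(tp, fp):
    """TP / (TP+FP): fraction of predicted positives that are correct."""
    return tp / (tp + fp)

# Hypothetical example: 80 TP, 90 TN, 10 FP, 20 FN
print(accuracy(80, 90, 10, 20))  # 0.85
print(precision(80, 10))         # 0.888...
```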