
    Genetic analysis of milking ability in Lacaune dairy ewes

    The milking ability of Lacaune ewes was characterised by traits derived from milk flow patterns, recorded on an INRA experimental farm within a divergent selection experiment, in order to estimate the correlated effects of selection for protein and fat yields. The analysis of divergent line effects (involving 34 616 records from 1204 ewes) indicated an indirect improvement of milking traits (+17% for maximum milk flow and -10% for latency time) alongside a 25% increase in milk yield. Genetic parameters were estimated by multi-trait analysis with an animal model on 751 primiparous ewes. The heritabilities of the traits expressed on an annual basis were high, especially for maximum flow (0.54) and latency time (0.55). Heritabilities were intermediate for average flow (0.30), time at maximum flow (0.42) and the phase of increasing flow (0.43), and low for the phase of decreasing flow (0.16) and the plateau of high flow (0.07). When considering test-day data, the heritabilities of maximum flow and latency time remained intermediate and stable throughout lactation. Genetic correlations between milk yield and milking traits were all favourable, but latency time was less dependent on milk yield (-0.22) than maximum flow (+0.46). It is concluded that the current dairy ewe selection based on milk solids yield is not antagonistic to milking ability.
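    The heritabilities above are, in quantitative-genetics terms, the share of phenotypic variance attributable to additive genetic effects. The short Python sketch below illustrates that relationship with purely hypothetical variance components, not the study's actual estimates.

```python
# Hypothetical sketch: narrow-sense heritability as the ratio of additive
# genetic variance to total phenotypic variance. The numbers below are
# illustrative only, not the variance components estimated in the study.

def heritability(var_additive: float, var_residual: float) -> float:
    """h^2 = sigma_a^2 / (sigma_a^2 + sigma_e^2)."""
    return var_additive / (var_additive + var_residual)

# A trait whose additive variance is slightly larger than its residual
# variance has a high heritability, comparable to maximum milk flow (~0.54).
print(round(heritability(var_additive=1.2, var_residual=1.0), 2))  # 0.55
```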


    Cati Sicpa collaborations within INRA

    La volontĂ© des DĂ©partements GĂ©nĂ©tique Animale (GA) et Physiologie Animale et SystĂšmes d’Élevage (Phase) de mutualiser les moyens humains de dĂ©veloppements informatiques dĂ©diĂ©s au phĂ©notypage animal et de mettre en place des systĂšmes d’informations (SI) communs pour les UnitĂ©s et Installations ExpĂ©rimentales (UE/IE) des deux DĂ©partements s’est concrĂ©tisĂ©e par la crĂ©ation du Cati SystĂšmes d’Informations et Calcul pour le PhĂ©notypage Animal (Sicpa). Cette collaboration des informaticiens de deux DĂ©partements autour de projets communs s’est progressivement et tout naturellement ouverte sur l’idĂ©e d’échanger et de collaborer avec d’autres collectifs de l’Institut sur des projets de dĂ©veloppement et des aspects mĂ©thodologiques ou dans l’idĂ©e de rĂ©unir les conditions nĂ©cessaires Ă  la mise en oeuvre des outils de phĂ©notypage sur le terrain. Nous avons choisi d’illustrer ces collaborations au travers de quelques exemples

    Automated Monitoring of Livestock Behavior Using Frequency-Modulated Continuous-Wave Radars

    In animal production, behavioral selection is becoming increasingly important to improve the docility of livestock. Several behavioral traits, including motion, are recorded experimentally in order to characterize the reactivity of animals and investigate its genetic determinism. Behavioral analyses are often time consuming because large numbers of animals have to be compared. For this reason, automation is needed to develop high-throughput data recording and efficient phenotyping. Here we introduce a new method to monitor the position and motion of an individual sheep using a 24 GHz frequency-modulated continuous-wave radar in a classical experimental paradigm called the arena test. The measurement method is non-invasive, does not require equipping animals with electronic tags, and offers a depth measurement resolution of less than 10 cm. Parasitic echoes (or "clutter") that could alter the signal backscattered by the sheep are removed using singular value decomposition. To further improve clutter mitigation, the directions of arrival of the backscattered electromagnetic signals are estimated with the MUltiple SIgnal Classification (MUSIC) algorithm. We discuss how the proposed automated monitoring of individual sheep could be applied to a wider range of species and experimental contexts in animal behavior research.
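    Below is a minimal sketch of the clutter-removal step described above, assuming the radar data are arranged as a complex range-profile matrix (range bins x sweeps): static echoes from walls or the arena dominate the leading singular components, so subtracting a low-rank reconstruction keeps the moving sheep's echo. The matrix layout, rank threshold and synthetic data are assumptions for illustration, not the authors' processing chain (which also includes MUSIC-based direction-of-arrival estimation).

```python
import numpy as np

def remove_clutter_svd(range_profiles: np.ndarray, n_clutter: int = 1) -> np.ndarray:
    """Suppress static clutter in an FMCW range-profile matrix.

    range_profiles: complex matrix of shape (n_range_bins, n_sweeps).
    Static reflectors dominate the largest singular values; removing the
    leading rank-n_clutter component keeps the time-varying echo of the
    moving animal.
    """
    U, s, Vh = np.linalg.svd(range_profiles, full_matrices=False)
    clutter = (U[:, :n_clutter] * s[:n_clutter]) @ Vh[:n_clutter, :]
    return range_profiles - clutter

# Synthetic example: a strong static echo plus a weak, slowly moving target.
rng = np.random.default_rng(0)
n_bins, n_sweeps = 128, 200
data = np.zeros((n_bins, n_sweeps), dtype=complex)
data[40, :] = 10.0                                    # static clutter at range bin 40
moving_bin = (60 + np.arange(n_sweeps) // 20) % n_bins
data[moving_bin, np.arange(n_sweeps)] += 1.0          # target drifting across bins
data += 0.05 * (rng.standard_normal(data.shape) + 1j * rng.standard_normal(data.shape))

cleaned = remove_clutter_svd(data)
print(np.abs(cleaned[40, :]).mean() < np.abs(data[40, :]).mean())  # True: clutter suppressed
```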

    Predicting sow postures from video images: Comparison of convolutional neural networks and segmentation combined with support vector machines under various training and testing setups

    The use of convolutional neural networks (CNNs) and of image segmentation to extract image features for predicting four postures of sows kept in crates was examined. The extracted features were used as input variables of a support vector machine (SVM) classifier to estimate posture. The possibility of applying the posture prediction model to images not obtained under the same conditions as the training set was explored. As a reference case, training and testing datasets were built from the same pool of images; all models then produced satisfactory results, with a maximum f1-score of 97.7% for the CNNs and 93.3% for segmentation. To evaluate the impact of environmental variations, the models were next trained and tested on images from different monitoring days; the best f1-score dropped to 86.7%. Finally, the impact of applying the posture prediction model to animals absent from the training dataset was explored. When the models were trained on one animal and tested on 11 other animals, the best f1-score fell to 63.4%; conversely, when they were trained on the 11 other animals and tested on one, the best f1-score only decreased to 86%. On average, environmental and individual variations between training and testing reduced the f1-score by around 17%.
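    Below is a minimal sketch of the evaluation setup that caused the largest performance drop: an SVM posture classifier trained on CNN-extracted features and assessed with leave-one-animal-out splits, so the test sow never appears in training. The feature vectors, labels and group assignments are synthetic placeholders; in the study they would come from video frames of 12 sows and four postures.

```python
# Hypothetical sketch: SVM posture classification on CNN-extracted features,
# evaluated animal-wise (leave one sow out). All data below are synthetic.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_frames, n_features, n_sows, n_postures = 1200, 128, 12, 4

X = rng.standard_normal((n_frames, n_features))    # CNN features (synthetic)
y = rng.integers(0, n_postures, size=n_frames)     # posture labels (synthetic)
groups = rng.integers(0, n_sows, size=n_frames)    # which sow each frame shows

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average="macro"))

print(f"mean macro f1 over leave-one-sow-out folds: {np.mean(scores):.3f}")
```

    Splitting by animal rather than by frame is what exposes individual variation: frames of the same sow are highly correlated, so a frame-wise split overstates how well the model generalises to new animals.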