4 research outputs found

    Limbs detection and tracking of head-fixed mice for behavioral phenotyping using motion tubes and deep learning

    The broad accessibility of affordable, reliable recording equipment and its relative ease of use have enabled neuroscientists to record large amounts of neurophysiological and behavioral data. Because most of this raw data is unlabeled, great effort is required to adapt it for behavioral phenotyping or signal extraction, for behavioral and neurophysiological data respectively. Traditional labeling methods rely on human annotators, a resource- and time-intensive process that often produces data prone to reproducibility errors. Here, we propose a deep learning-based image segmentation framework to automatically extract and label limb movements from movies capturing frontal and lateral views of head-fixed mice. The method decomposes the image into elemental regions (superpixels) with similar appearance and concordant dynamics and stacks them along their partial temporal trajectory. These 3D descriptors (referred to as motion cues) are used to train a deep convolutional neural network (CNN). The features extracted at the last fully connected layer of the network are then used to train a Long Short-Term Memory (LSTM) network that introduces spatio-temporal coherence to the limb segmentation. We tested the pipeline in two video acquisition settings: in the first, the camera is installed on the right side of the mouse (lateral setting); in the second, the camera faces the mouse directly (frontal setting). We also investigated the effect of noise in the videos and the amount of training data needed, and found that reducing the number of training samples does not lower detection accuracy by more than 5%, even when as little as 10% of the available data is used for training.
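The CNN-then-LSTM stage described above can be illustrated with a toy example: a per-frame feature (here a single scalar standing in for the CNN's last fully connected layer output) is passed through an LSTM cell, so each frame's limb/background decision depends on the preceding frames rather than on each frame in isolation. This is a minimal pure-Python sketch with made-up weights, not the authors' network or its parameters.

```python
import math

def lstm_step(x, h, c, W):
    """One LSTM step on a scalar feature; W holds scalar gate weights."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    i = sig(W["wi"] * x + W["ui"] * h + W["bi"])        # input gate
    f = sig(W["wf"] * x + W["uf"] * h + W["bf"])        # forget gate
    o = sig(W["wo"] * x + W["uo"] * h + W["bo"])        # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h + W["bg"])  # candidate state
    c = f * c + i * g          # new cell state mixes old memory and new input
    h = o * math.tanh(c)       # new hidden state = temporally smoothed output
    return h, c

# Arbitrary illustrative weights (all hypothetical).
W = {k: 0.5 for k in ("wi", "ui", "wf", "uf", "wo", "uo", "wg", "ug")}
W.update({"bi": 0.0, "bf": 1.0, "bo": 0.0, "bg": 0.0})

# Noisy per-frame "limb evidence" scores; the LSTM output varies more
# smoothly over frames than the raw input does.
scores = [0.9, 0.8, 0.1, 0.85, 0.9]
h, c, hs = 0.0, 0.0, []
for x in scores:
    h, c = lstm_step(x, h, c, W)
    hs.append(h)
```

The point of the recurrence is that the frame with the spurious 0.1 score is pulled toward its neighbours through the carried cell state, which is the "spatio-temporal coherence" role the abstract assigns to the LSTM.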

    An Implantable Peripheral Nerve Recording and Stimulation System for Experiments on Freely Moving Animal Subjects

    A new study of peripheral nerve interfacing in a rat sciatic nerve model is presented, using a fully implanted, inductively powered recording and stimulation system in a wirelessly powered standard homecage that allows animal subjects to move freely within the homecage. The Wireless Implantable Neural Recording and Stimulation (WINeRS) system offers 32-channel peripheral nerve recording and 4-channel current-controlled stimulation in a 3 × 1.5 × 0.5 cm³ package. A bi-directional data link is established by on-off keying pulse-position modulation (OOK-PPM) in the near field for the narrowband downlink and by 433 MHz OOK for the wideband uplink. An external wideband receiver is built around a commercial software-defined radio (SDR) for robust wideband data acquisition on a PC. The WINeRS-8 prototypes, in the two forms of a battery-powered headstage and a wirelessly powered implant, are validated in vivo and compared with a commercial system. In the animal study, evoked compound action potentials were recorded to verify the stimulation and recording capabilities of the WINeRS-8 system with 32-channel penetrating and 4-channel cuff electrodes on the sciatic nerve of awake, freely behaving rats. Compared to the conventional battery-powered system, WINeRS can be used in closed-loop recording and stimulation experiments over extended periods without burdening the animal subject with batteries or interrupting the experiment.
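The downlink modulation named in the abstract, OOK-PPM, encodes data in the position of an on-off-keyed pulse within a symbol frame. The sketch below is a generic 4-ary PPM encoder/decoder in Python to illustrate the principle only; the slot count, chip width, and framing are invented and are not the WINeRS modem's actual parameters.

```python
def ppm_encode(bits, slots_per_symbol=4, chips_per_slot=2):
    """Map each 2-bit group to a slot index; carrier is on (1) only in that slot."""
    assert len(bits) % 2 == 0
    signal = []
    for i in range(0, len(bits), 2):
        symbol = bits[i] * 2 + bits[i + 1]          # 2 bits -> slot 0..3
        for slot in range(slots_per_symbol):
            level = 1 if slot == symbol else 0      # on-off keying of the pulse
            signal.extend([level] * chips_per_slot)
    return signal

def ppm_decode(signal, slots_per_symbol=4, chips_per_slot=2):
    """Recover bits by finding the slot with the most energy in each frame."""
    bits = []
    frame = slots_per_symbol * chips_per_slot
    for start in range(0, len(signal), frame):
        sym = signal[start:start + frame]
        energies = [sum(sym[s * chips_per_slot:(s + 1) * chips_per_slot])
                    for s in range(slots_per_symbol)]
        symbol = energies.index(max(energies))
        bits += [symbol >> 1, symbol & 1]
    return bits

payload = [1, 0, 0, 1, 1, 1, 0, 0]
assert ppm_decode(ppm_encode(payload)) == payload
```

One design point PPM shares with the paper's setting: because only the pulse position carries information, the receiver needs only coarse amplitude detection, which suits a power-constrained implant on the near-field downlink.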

    Motion tracking system

    Small animal behavior is important to scientific and preclinical researchers, who want to know the effects of interventions on the animals' natural life. Rodents are used as models of human diseases, and studying rodent behavior helps identify and develop new drugs for psychiatric and neurological disorders. Animal monitoring can be automated, and processing large amounts of data can lead to better research results in a shorter time. This thesis introduces a rodent behavior tracking system based on computer vision techniques. In computer vision, object detection is scanning and searching for an object in an image or a video (which is just a sequence of images), whereas locating an object in successive frames of a video is called tracking. To find the position of an object in an image, we use object detection and object tracking together, because tracking can help when detection fails and vice versa. With this approach, we can track and detect any kind of object (a mouse, a headstage, or, for example, a ball), and there is no dependency on the camera type. To find an object in an image, we use the online AdaBoost algorithm, an object tracking algorithm, together with the Canny algorithm, an object detection algorithm, and then check the results: if online AdaBoost cannot find the object, we use the Canny algorithm to find it. Comparing the results of our approach with those of the online AdaBoost and Canny algorithms run separately, we found that our approach finds the object in the image better than either algorithm alone. In this thesis, we describe the implemented object detection and tracking algorithms.
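The tracker-with-detector-fallback control flow the abstract describes can be sketched independently of any vision library. Below, `tracker` and `detector` are hypothetical stand-ins for the thesis's online AdaBoost tracker and Canny-based detector (which in practice would wrap OpenCV routines); the sketch shows only the per-frame fallback logic.

```python
def track_with_fallback(frames, tracker, detector):
    """Per frame: try the tracker first; if it loses the subject (returns
    None), fall back to the detector, mirroring the thesis's combined scheme."""
    positions = []
    for frame in frames:
        pos = tracker(frame)      # stand-in for online AdaBoost tracking
        if pos is None:           # tracker lost the subject
            pos = detector(frame) # stand-in for Canny-based detection
        positions.append(pos)
    return positions

# Toy example: the "tracker" fails on odd-numbered frames, the "detector"
# always returns a (coarser) position, so no frame is left without an answer.
frames = [0, 1, 2, 3]
tracker = lambda f: (f, f) if f % 2 == 0 else None
detector = lambda f: (f, 0)
print(track_with_fallback(frames, tracker, detector))
# -> [(0, 0), (1, 0), (2, 2), (3, 0)]
```

The symmetric case (detector fails, tracker fills in) follows the same pattern with the roles swapped, which is why the thesis reports the combination outperforming either algorithm alone.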

    The use of wearable sensors for animal behaviour assessment

    PhD Thesis. The research outlined in this thesis presents novel applications of wearable sensors in the domain of animal behaviour assessment. The use of wearable sensing technology, and in particular accelerometry, has become a mainstay of behaviour assessment in humans, allowing for detailed analysis of movement-based behaviour and health monitoring. In this thesis we apply these methodologies to animals and identify approaches to monitoring their health and wellbeing. We investigate the use of the technology in the animal domain through a series of studies examining the problem across multiple species and in increasingly complex scenarios. A tightly constrained scenario is presented initially, in which horse behaviour was classified and assessed in the context of dressage performances. The assessment of lying behaviour in periparturient sows confined to gestation crates examines a scenario in which the movement of the subject was constrained, but not predetermined. Expanding this work to include sows housed in free-farrowing environments removed the movement constraints imposed by the gestation crates. We examine the implications of using multiple sensors and how this might affect the accuracy of the assessments. Finally, a system for behaviour recognition and assessment was developed for domestic cats; in the least constrained of the studies, the animals were free to move and behave at their own discretion while being monitored through wearable sensors. The scenarios outlined herein describe applications of increasing complexity through the removal of constraints. Through this work we demonstrate that these techniques are applicable across species and hold value for the wellbeing of both commercial and companion animals. 
    This work was funded by the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement number 613574 (PROHEALTH). This project has also received funding from the Biotechnology and Biological Sciences Research Council (BBSRC) in the form of a studentship.
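Accelerometry-based behaviour classification of the kind this thesis applies typically works by windowing the signal and computing summary features per window. The sketch below shows that common pattern in pure Python; the window size, the mean/standard-deviation features, and the activity threshold are all invented for illustration, whereas real systems (including, presumably, those in the thesis) use trained classifiers over richer feature sets.

```python
import math

def window_features(samples, win=4):
    """Split an accelerometer-magnitude stream into fixed-size windows and
    compute (mean, standard deviation) per window - typical features for
    accelerometry-based behaviour classification."""
    feats = []
    for start in range(0, len(samples) - win + 1, win):
        w = samples[start:start + win]
        mean = sum(w) / win
        var = sum((x - mean) ** 2 for x in w) / win
        feats.append((mean, math.sqrt(var)))
    return feats

def classify(feats, active_sd=0.5):
    """Label each window by movement variability: high within-window
    deviation suggests activity, low deviation suggests rest."""
    return ["active" if sd > active_sd else "resting" for _, sd in feats]

# Toy stream: four steady samples (rest) then four oscillating ones (movement).
samples = [1.0, 1.0, 1.0, 1.0, 0.2, 1.8, 0.1, 1.9]
print(classify(window_features(samples)))
# -> ['resting', 'active']
```

Scaling this idea up, from constrained dressage movements to free-ranging cats, is largely a matter of how many behaviour classes the windowed features must separate and how much sensor placement can be controlled, which is the complexity axis the thesis explores.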