
    Interactive extraction of diverse vocal units from a planar embedding without the need for prior sound segmentation

    Annotating and proofreading data sets of complex natural behaviors such as vocalizations are tedious tasks because instances of a given behavior need to be correctly segmented from background noise and must be classified with a minimal false positive error rate. Low-dimensional embeddings have proven very useful for this task because they can provide a visual overview of a data set in which distinct behaviors appear in different clusters. However, low-dimensional embeddings introduce errors because they fail to preserve distances, and they represent only objects of fixed dimensionality, which conflicts with vocalizations whose dimensions vary with their durations. To mitigate these issues, we introduce a semi-supervised, analytical method for simultaneous segmentation and clustering of vocalizations. We define a given vocalization type by specifying pairs of high-density regions in the embedding plane of sound spectrograms, one region associated with vocalization onsets and the other with offsets. We demonstrate our two-neighborhood (2N) extraction method on the task of clustering adult zebra finch vocalizations embedded with UMAP. We show that 2N extraction allows the identification of short and long vocal renditions from continuous data streams without initially committing to a particular segmentation of the data. 2N extraction also achieves a much lower false positive error rate than comparable approaches based on a single defining region. Along with our method, we present a graphical user interface (GUI) for visualizing and annotating data.
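    The core idea of pairing an onset region with an offset region can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the circular regions, the greedy onset-to-offset pairing, and all names and duration bounds are assumptions for the sketch.

```python
import numpy as np

def two_neighborhood_extract(embedding, onset_center, offset_center,
                             radius, min_len=1, max_len=200):
    # embedding: (T, 2) array, one 2-D point per spectrogram time window.
    # A window counts as an onset (offset) candidate when its embedded
    # point falls inside the user-defined onset (offset) region.
    d_on = np.linalg.norm(embedding - onset_center, axis=1)
    d_off = np.linalg.norm(embedding - offset_center, axis=1)
    onsets = np.flatnonzero(d_on < radius)
    offsets = np.flatnonzero(d_off < radius)
    segments = []
    for i in onsets:
        # pair each onset with the nearest later offset inside the
        # allowed duration range; unpaired onsets are discarded
        later = offsets[(offsets > i + min_len) & (offsets <= i + max_len)]
        if later.size:
            segments.append((int(i), int(later[0])))
    return segments
```

    Requiring both regions to be hit is what suppresses false positives relative to a single defining region: background windows that drift into one region rarely produce a matching partner in the other.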

    Benchmarking nearest neighbor retrieval of zebra finch vocalizations across development

    Vocalizations are highly specialized motor gestures that regulate social interactions. The reliable detection of vocalizations from raw streams of microphone data remains an open problem even in research on widely studied animals such as the zebra finch. A promising method for finding vocal samples from potentially few labelled examples (templates) is nearest neighbor retrieval, but this method has never been extensively tested on vocal segmentation tasks. We retrieve zebra finch vocalizations as neighbors of each other in the sound spectrogram space. Based on merely 50 templates, we find excellent retrieval performance in adults (F1 score of 0.93 ± 0.07) but not in juveniles (F1 score of 0.64 ± 0.18), presumably due to the larger vocal variability of the latter. The performance in juveniles improves when retrieval is based on fixed-size template slices (F1 score of 0.72 ± 0.10) instead of entire templates. Among the several distance metrics we tested, such as the cosine and Euclidean distances, we find that the Spearman distance largely outperforms all others. We release our expert-curated dataset of more than 50,000 zebra finch vocal segments, which will enable training of data-hungry machine-learning approaches.
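    The winning metric is standard: the Spearman distance between two spectrograms is one minus the Pearson correlation of their rank-transformed pixel values. A minimal numpy-only sketch of retrieval under that metric (function names and the no-ties rank transform are choices made for this sketch, not taken from the paper):

```python
import numpy as np

def _ranks(x):
    # rank transform; ties are ignored, which is harmless for
    # continuous-valued spectrogram bins
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(len(x))
    return ranks

def spearman_distance(a, b):
    # Spearman distance = 1 - Pearson correlation of the rank vectors
    ra, rb = _ranks(a.ravel()), _ranks(b.ravel())
    return 1.0 - np.corrcoef(ra, rb)[0, 1]

def retrieve_nearest(query, templates, k=1):
    # indices of the k templates closest to the query spectrogram
    d = np.array([spearman_distance(query, t) for t in templates])
    return np.argsort(d)[:k]
```

    Because ranks are invariant under any monotone rescaling of amplitude, this metric ignores overall loudness and compressive nonlinearities, which plausibly explains its robustness compared to cosine or Euclidean distances.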

    Bitopertin, a selective oral GLYT1 inhibitor, improves anemia in a mouse model of β-thalassemia

    Anemia of β-thalassemia is caused by ineffective erythropoiesis and reduced red cell survival. Several lines of evidence indicate that iron/heme restriction is a potential therapeutic strategy for the disease. Glycine is a key initial substrate for heme and globin synthesis. We provide evidence that bitopertin, a glycine transport inhibitor administered orally, improves anemia, reduces hemolysis, diminishes ineffective erythropoiesis, and increases red cell survival in a mouse model of β-thalassemia (Hbbth3/+ mice). Bitopertin ameliorates erythroid oxidant damage, as indicated by a reduction in membrane-associated free α-globin chain aggregates, in reactive oxygen species cellular content, in membrane-bound hemichromes, and in heme-regulated inhibitor activation and eIF2α phosphorylation. The improvement of β-thalassemic ineffective erythropoiesis is associated with diminished mTOR activation and reduced Rab5, Lamp1, and p62 accumulation, indicating improved autophagy. Bitopertin also upregulates liver hepcidin and diminishes liver iron overload. The hematologic improvements achieved by bitopertin are blunted by the concomitant administration of the iron chelator deferiprone, suggesting that an excessive restriction of iron availability might negate the beneficial effects of bitopertin. These data provide important and clinically relevant insights into glycine restriction and reduced heme synthesis strategies for the treatment of β-thalassemia.

    Travelling waves in somitogenesis: collective cellular properties emerge from time-delayed juxtacrine oscillation coupling

    The sculpting of the vertebrate body plan into segments begins with the sequential formation of somites in the presomitic mesoderm (PSM). The rhythmicity of this process is controlled by travelling waves of gene expression that sweep across the PSM. These kinetic waves emerge from coupled cellular oscillators and travel in the direction of an increasing gradient of oscillation period. The oscillations are driven by autorepression of HES/HER genes and are synchronized via Notch signalling. These emergent properties have been studied in various models of increasing complexity. We design a reduced mechanistic model of the zebrafish PSM oscillator that recapitulates oscillator entrainment and travelling wave formation in the presence of spatiotemporal time delay gradients. Our model shows that three key parameters, the autorepression delay, the juxtacrine coupling delay, and the coupling strength, are sufficient to understand the emergence of the collective period, the collective amplitude, and the synchronization of neighbouring HES/HER oscillators. Our theoretical framework allows us to integrate and dissect key collective properties emerging from coupled oscillators. These emergent properties likely represent a fundamental principle also governing other developmental processes such as neurogenesis and angiogenesis.
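    The ingredients named above (delayed Hill-type autorepression plus delayed juxtacrine coupling between neighbours) can be put into a toy simulation. This is not the authors' reduced model; the Hill form, parameter values, and the approximation of pre-simulation history by the earliest stored value are all assumptions chosen so the sketch oscillates.

```python
import numpy as np

def simulate_pair(beta=1.0, gamma=1.0, K=0.5, h=8,
                  tau_auto=3.0, tau_couple=1.0, coupling=0.3,
                  dt=0.01, T=100.0):
    # Euler integration of two delay-coupled HES/HER-like repressor levels:
    #   dx_i/dt = beta * Hill((1-c)*x_i(t - tau_auto) + c*x_j(t - tau_couple))
    #             - gamma * x_i,     Hill(u) = 1 / (1 + (u/K)^h)
    n = int(T / dt)
    da, dc = int(tau_auto / dt), int(tau_couple / dt)
    hill = lambda u: 1.0 / (1.0 + (u / K) ** h)
    x = np.zeros((n, 2))
    x[0] = [0.1, 0.4]                      # slightly desynchronized start
    for t in range(n - 1):
        for i in range(2):
            own = x[max(t - da, 0), i]     # delayed autorepression input
            nb = x[max(t - dc, 0), 1 - i]  # delayed neighbour (Notch) input
            u = (1 - coupling) * own + coupling * nb
            x[t + 1, i] = x[t, i] + dt * (beta * hill(u) - gamma * x[t, i])
    return x
```

    With the autorepression delay well above the Hopf threshold of the linearized feedback, both cells settle into sustained oscillations; varying `tau_couple` and `coupling` then shifts the collective period and the phase relation between the two cells, which is the kind of parameter dissection the abstract describes.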

    Positional information encoded in the dynamic differences between neighboring oscillators during vertebrate segmentation

    A central problem in developmental biology is to understand how cells interpret their positional information to give rise to spatial patterns, such as the periodic segmentation of the vertebrate embryo into somites. For decades, somite formation has been interpreted according to the clock-and-wavefront model. In this conceptual framework, molecular oscillators set the frequency of somite formation while the positional information is encoded in signaling gradients. Recent experiments using ex vivo explants have challenged this interpretation, suggesting that positional information is encoded in the properties of the oscillators themselves, independent of long-range modulations such as signaling gradients. Here, we propose that positional information is encoded in the difference between the levels of neighboring oscillators. This difference gradually increases because both the amplitude and the period of the oscillators increase with time. When the difference exceeds a certain threshold, the segmentation program starts. Using this framework, we quantitatively fit experimental data from in vivo and ex vivo mouse segmentation, and we propose mechanisms of somite scaling. Our results suggest a novel mechanism of spatial pattern formation based on local interactions between dynamic molecular oscillators.
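    The proposed readout can be sketched in a few lines: two phase-lagged neighbouring oscillators whose amplitude and period both grow over time, with the segmentation program firing once their instantaneous difference crosses a threshold. All functional forms and numbers below are illustrative, not the fitted values from the paper.

```python
import numpy as np

def first_segmentation_time(phase_lag=0.5, threshold=1.0, dt=0.01, T=50.0):
    t = np.arange(0.0, T, dt)
    A = 0.2 + 0.05 * t              # amplitude grows with time
    P = 2.0 + 0.04 * t              # period grows with time
    x1 = A * np.sin(2 * np.pi * t / P)              # anterior cell
    x2 = A * np.sin(2 * np.pi * t / P - phase_lag)  # posterior neighbour, lagging
    diff = np.abs(x1 - x2)
    idx = np.flatnonzero(diff > threshold)
    # first time the neighbour difference exceeds the threshold
    return float(t[idx[0]]) if idx.size else None
```

    The peak difference scales as 2·A(t)·sin(phase_lag/2), so a fixed threshold is crossed only after the amplitude has grown enough; this is how growing oscillator properties can act as a timer without any long-range gradient.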


    Multimodal system for recording individual-level behaviors in songbird groups

    In longitudinal observations of animal groups, the goal is to identify individuals and to reliably detect their interactive behaviors, including their vocalizations. However, reliably extracting individual vocalizations from sound mixtures and other environmental sounds remains a serious challenge. Promising approaches are multi-modal systems that make use of animal-borne wireless sensors and exploit the inherent signal redundancy. In this vein, we designed a modular recording system (BirdPark) that yields synchronized data streams and contains a custom software-defined radio receiver. We record pairs of songbirds with multiple cameras and microphones and record their body vibrations with custom low-power frequency-modulated (FM) radio transmitters. Our custom multi-antenna radio demodulation technique increases the signal-to-noise ratio of the received radio signals by 6 dB and reduces the signal loss rate by a factor of 87, to only 0.03% of the recording time, compared to standard single-antenna demodulation techniques. Nevertheless, neither a single vibration channel nor a single sound channel is sufficient by itself to capture the complete vocal output of an individual, with each sensor modality missing on average about 3.7% of vocalizations. Our work emphasizes the need for high-quality recording systems and for multi-modal analysis of social behavior.
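    The abstract does not describe the receiver's actual algorithm. A textbook way that multi-antenna reception raises SNR is maximum-ratio combining, sketched here purely to illustrate the principle; the gains, noise levels, and test signal below are invented for the demo.

```python
import numpy as np

def mrc_combine(received, gains):
    # Maximum-ratio combining: weight each antenna stream by the complex
    # conjugate of its channel gain, then normalize. For r_k = g_k*s + n_k
    # with independent noise, the combined SNR is the sum of per-antenna SNRs.
    weights = np.conj(gains)
    return (weights[:, None] * received).sum(axis=0) / np.sum(np.abs(gains) ** 2)

# toy demo: one transmitted carrier, three antennas with different gains
rng = np.random.default_rng(0)
s = np.exp(2j * np.pi * 0.05 * np.arange(4000))          # unit-power carrier
gains = np.array([1.0 + 0.0j, 0.8j, 0.5 + 0.0j])
noise = 0.5 * (rng.standard_normal((3, 4000)) +
               1j * rng.standard_normal((3, 4000)))
received = gains[:, None] * s + noise
combined = mrc_combine(received, gains)
```

    Because every antenna contributes its share of SNR, the combined stream is cleaner than even the best single antenna, and deep fades on one antenna no longer cause signal loss, which is consistent with the drop in loss rate reported above.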
