
    Born to sing! Song development in a singing primate

    In animal vocal communication, the development of adult-like vocalization is fundamental to interacting appropriately with conspecifics. However, the factors that guide ontogenetic changes in acoustic features remain poorly understood. In contrast with the historical view of nonhuman primate vocal production as substantially innate, recent research suggests that inheritance and physiological modification can explain only some of the developmental changes in call structure during growth. A particular case of acoustic communication is the indris’ singing behavior, unique among strepsirrhine primates. Thanks to a decade of intense data collection, this work provides the first long-term quantitative analysis of song development in a singing primate. To understand the ontogeny of such a complex vocal output, we investigated the vocal behavior of juvenile and sub-adult indris, and we found that young individuals started participating in the chorus years earlier than previously reported. Our results indicate that spectro-temporal song parameters underwent essential changes during growth. In particular, the age and sex of the emitter influenced the indris’ vocal activity. We found that frequency parameters showed consistent changes across the sexes, whereas temporal features showed different developmental trajectories for males and females. Given the low level of morphological sexual dimorphism and the marked differences in vocal behavior, we hypothesize that factors such as social influences and auditory feedback may affect song features, resulting in high vocal flexibility in juvenile indris. This trait may be pivotal in a species that engages in choruses with rapid vocal turn-taking.

    Parent-offspring turn-taking dynamics influence parents’ song structure and elaboration in a singing primate

    Parent-offspring interactions are essential to interpreting animal social evolution and behavior, but their role in mediating acoustic communication in animals that interact vocally is still unclear. Increasing evidence shows that primate vocal communication is far more flexible than previously assumed, and research on this topic can provide further information on how the social environment shaped vocal plasticity during the evolution of the Primate order. Indris communicate through elaborate vocal emissions, usually termed songs. Songs are interactive vocal displays in which all members of the family group alternate their emissions, taking turns during chorusing events. We aimed to understand whether specific rules regulate the turn-taking of different group members and to investigate the flexibility of the indris’ vocal behavior when co-singing with their offspring. We found that social factors can influence the turn-taking organization in a chorus, as offspring were more likely to drop out of the parents’ duet than to join in, and we speculate that overlap might signal competition between members of the same sex. The duet between the reproductive pair was the most common type of singing organization, followed by the duet between mothers and sons and the triadic interaction between mother, father, and son. Interestingly, parents’ solo singing seems to stimulate offspring to vocalize, and we also found that mothers and fathers simplify, at least in part, song elaboration when chorusing with offspring. Our results indicate that indris can perform short-term adjustments to the number and identity of co-emitters: our approach is valuable for highlighting the multilevel influences on primate vocal flexibility. Moreover, it provides evidence that some aspects of our vocal plasticity were already present in the lemur lineage.

    There You Are! Automated Detection of Indris’ Songs on Features Extracted from Passive Acoustic Recordings

    From MDPI via Jisc Publications Router
    History: received 2022-12-07, revision received 2022-12-21, accepted 2022-12-28, collection 2023-01, epub 2023-01-09
    Peer reviewed: True
    Article version: VoR
    Publication status: Published
    Funders: University of Torino; Parco Natura Viva—Garda Zoological Parks; UIZA—the Italian Association of Zoos and Aquaria
    Acknowledgements: We are grateful to the local field guides and the assistants who helped during the data collection. We also wish to thank the GERP (Groupe d’Étude et de Recherche sur les Primates de Madagascar) for their unfailing support during the research activities, and the Parco Natura Viva for the financial and technical assistance. Data collection was carried out under research permits No. 118/19/MEDD/SG/DGEF/DSAP/DGRNE, 284/19/MEDD/SG/DGEF/DSAP/DGRNE, and 338/19/MEDD/G/DGEF/DSAP/DGRNE issued by the Ministère de l’Environnement et du Développement Durable (MEDD). Data collection in 2021 did not require a permit because it was performed only by Malagasy citizens.
    Simple Summary: Identifying the vocalisations of a given species in passive acoustic recordings is a common step in bioacoustics. While manual labelling and identification are widespread, this approach is time-consuming, prone to errors, and unsustainable in the long term, given the vast amount of data collected through passive monitoring. We developed an automated classifier based on a convolutional neural network (CNN) for passive acoustic data collected via an in situ monitoring protocol. In particular, we aimed to detect the vocalisations of the only singing lemur, Indri indri. Our network achieved very high performance (accuracy >90% and recall >80%) in song detection. Our study contributes significantly to the field of automated wildlife detection because it represents a first attempt to combine a CNN with acoustic features based on a third-octave band system for song detection. Moreover, the automated detection provided insights that will improve field data collection and fine-tune conservation practices.
    Abstract: The growing concern over ongoing biodiversity loss drives researchers towards practical, large-scale automated systems for monitoring wild animal populations. Primates, with most species threatened by extinction, face substantial risks. We focused on the vocal activity of the indri (Indri indri) recorded in Maromizaha Forest (Madagascar) from 2019 to 2021 via passive acoustics, a method increasingly used for monitoring activities in different environments. We first used indris’ songs, loud distinctive vocal sequences, to detect the species’ presence. We processed the raw data (66,443 10-min recordings) and extracted acoustic features based on the third-octave band system. We then analysed the features extracted from three datasets, divided according to sampling year, site, and recorder type, with a convolutional neural network that was able to generalise to new recording sites and previously unsampled periods via data augmentation and transfer learning. For all three datasets, our network detected song presence with high accuracy (>90%) and recall (>80%). Once the model was provided with the time and day of each recording, its high performance ensured that the classification process could accurately depict both the daily and annual patterns of the indris’ singing activity, critical information for optimising field data collection. Overall, using this easy-to-implement, species-specific detection workflow as a preprocessing step allows researchers to reduce the time dedicated to manual classification.
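    The third-octave band features described in the abstract can be sketched as follows. This is a minimal NumPy illustration, assuming band edges spaced by the base-2 third-octave rule and an FFT power spectrum; the paper's exact pipeline (frame length, band limits, normalisation) is not given here, so the function name and the `f_min`/`f_max` defaults are illustrative assumptions.

    ```python
    import numpy as np

    def third_octave_band_energies(signal, sr, f_min=100.0, f_max=8000.0):
        """Sum spectral power into third-octave bands.

        Centre frequencies are spaced a factor of 2**(1/3) apart, and each
        band spans [fc * 2**(-1/6), fc * 2**(1/6)).
        """
        # Power spectrum of the real-valued signal
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)

        # Generate centre frequencies up to f_max
        centres = []
        fc = f_min
        while fc <= f_max:
            centres.append(fc)
            fc *= 2 ** (1 / 3)

        # Sum the power falling inside each band
        energies = []
        for fc in centres:
            lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)
            mask = (freqs >= lo) & (freqs < hi)
            energies.append(spectrum[mask].sum())
        return np.array(centres), np.array(energies)
    ```

    Feature vectors like these, computed per time frame, are far more compact than a full spectrogram, which is one reason they pair well with a lightweight CNN classifier for long-term passive acoustic data.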
