FoxP2 isoforms delineate spatiotemporal transcriptional networks for vocal learning in the zebra finch.
Human speech is one of the few examples of vocal learning among mammals, yet ~half of avian species exhibit this ability. Its neurogenetic basis is largely unknown beyond a shared requirement for FoxP2 in both humans and zebra finches. We manipulated FoxP2 isoforms in Area X, a song-specific region of the avian striatopallidum analogous to the human anterior striatum, during a critical period for song development. We delineate, for the first time, unique contributions of each isoform to vocal learning. Weighted gene coexpression network analysis of RNA-seq data revealed gene modules correlated to singing, learning, or vocal variability. Coexpression related to singing was found in juvenile and adult Area X, whereas coexpression correlated to learning was unique to juveniles. The confluence of learning and singing coexpression in juvenile Area X may underscore molecular processes that drive vocal learning in young zebra finches and, by analogy, humans.
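Weighted gene coexpression network analysis groups genes into modules from pairwise expression correlations. As an illustration only (not the authors' WGCNA pipeline), the sketch below soft-thresholds a gene-by-gene correlation matrix and groups genes by hierarchical clustering; the function, parameters, and toy data are all invented for the example:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coexpression_modules(expr, n_modules=2, beta=6):
    """Group genes into coexpression modules from an expression matrix.

    expr: (n_genes, n_samples) array of expression values.
    beta: soft-thresholding power, as in WGCNA-style analyses (assumed value).
    """
    corr = np.corrcoef(expr)              # gene-by-gene Pearson correlation
    adjacency = np.abs(corr) ** beta      # soft-threshold the correlations
    dist = 1.0 - adjacency                # strongly coexpressed -> small distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)   # condensed form for linkage
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=n_modules, criterion="maxclust")

# Toy data: two blocks of tightly correlated "genes" across 10 samples
rng = np.random.default_rng(0)
base1, base2 = rng.normal(size=10), rng.normal(size=10)
expr = np.vstack([base1 + 0.05 * rng.normal(size=10) for _ in range(4)] +
                 [base2 + 0.05 * rng.normal(size=10) for _ in range(4)])
modules = coexpression_modules(expr, n_modules=2)
```

With the two synthetic blocks, the clustering recovers one module per block; real analyses would additionally correlate module eigengenes with traits such as singing or learning.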
Robust sound event detection in bioacoustic sensor networks
Bioacoustic sensors, sometimes known as autonomous recording units (ARUs),
can record sounds of wildlife over long periods of time in scalable and
minimally invasive ways. Deriving per-species abundance estimates from these
sensors requires detection, classification, and quantification of animal
vocalizations as individual acoustic events. Yet, variability in ambient noise,
both over time and across sensors, hinders the reliability of current automated
systems for sound event detection (SED), such as convolutional neural networks
(CNN) in the time-frequency domain. In this article, we develop, benchmark, and
combine several machine listening techniques to improve the generalizability of
SED models across heterogeneous acoustic environments. As a case study, we
consider the problem of detecting avian flight calls from a ten-hour recording
of nocturnal bird migration, recorded by a network of six ARUs in the presence
of heterogeneous background noise. Starting from a CNN yielding
state-of-the-art accuracy on this task, we introduce two noise adaptation
techniques, respectively integrating short-term (60 milliseconds) and long-term
(30 minutes) context. First, we apply per-channel energy normalization (PCEN)
in the time-frequency domain, which applies short-term automatic gain control
to every subband in the mel-frequency spectrogram. Secondly, we replace the
last dense layer in the network by a context-adaptive neural network (CA-NN)
layer. Combining them yields state-of-the-art results that are unmatched by
artificial data augmentation alone. We release a pre-trained version of our
best performing system under the name of BirdVoxDetect, a ready-to-use detector
of avian flight calls in field recordings.
Comment: 32 pages, in English. Submitted to PLOS ONE journal in February 2019; revised August 2019; published October 201
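Per-channel energy normalization is simple to state: a first-order smoother tracks each subband's energy over time, and each time-frequency bin is normalized by that running estimate, acting as short-term automatic gain control. The sketch below is a minimal numpy implementation of the standard PCEN recipe with typical default constants; it is not the exact configuration used in BirdVoxDetect:

```python
import numpy as np

def pcen(S, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a (mel) spectrogram.

    S: (n_bands, n_frames) nonnegative energy spectrogram.
    A first-order IIR filter tracks the smoothed energy M per subband;
    dividing by M**alpha implements per-band automatic gain control,
    and the (x + delta)**r - delta**r step is a root compression.
    """
    M = np.empty_like(S)
    M[:, 0] = S[:, 0]
    for t in range(1, S.shape[1]):
        M[:, t] = (1 - s) * M[:, t - 1] + s * S[:, t]
    return (S / (eps + M) ** alpha + delta) ** r - delta ** r

# A loud transient on a steady noise floor: PCEN flattens the stationary
# background while keeping the sudden onset prominent.
S = np.full((4, 100), 1.0)      # constant background energy in every subband
S[1, 50] = 50.0                 # brief flight-call-like event in one subband
P = pcen(S)
```

In practice one would compute S with a mel filterbank front end (e.g. `librosa.feature.melspectrogram`, with `librosa.pcen` offering a tuned implementation of the same transform).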
Automatic Recognition of Non-Verbal Acoustic Communication Events With Neural Networks
Non-verbal acoustic communication is of high importance to humans and animals: Infants use the voice as a primary communication tool. Animals of all kinds employ acoustic communication; chimpanzees, for example, use pant-hoot vocalizations for long-distance communication.
Many applications require the assessment of such communication for a variety of analysis goals. Computational systems can support these areas through automatization of the assessment process. This is of particular importance in monitoring scenarios over large spatial and time scales, which are infeasible to perform manually.
Algorithms for sound recognition have traditionally been based on conventional machine learning approaches. In recent years, so-called representation learning approaches have gained increasing popularity. This particularly includes deep learning approaches that feed raw data to deep neural networks. However, there remain open challenges in applying these approaches to automatic recognition of non-verbal acoustic communication events, such as compensating for small data set sizes.
The leading question of this thesis is: How can we apply deep learning more effectively to automatic recognition of non-verbal acoustic communication events? The target communication types were specifically (1) infant vocalizations and (2) chimpanzee long-distance calls.
This thesis comprises four studies that investigated aspects of this question:
Study (A) investigated the assessment of infant vocalizations by laypersons. The central goal was to derive an infant vocalization classification scheme based on the laypersons' perception. The study method was based on the Nijmegen Protocol, where participants rated vocalization recordings through various items, such as affective ratings and class labels. Results showed a strong association between valence ratings and class labels, which was used to derive a classification scheme.
Study (B) was a comparative study on various neural network types for the automatic classification of infant vocalizations. The goal was to determine the best-performing network type among those currently most prevalent, while considering the influence of their architectural configuration. Results showed that convolutional neural networks outperformed recurrent neural networks and that the choice of the frequency and time aggregation layer inside the network is the most important architectural choice.
Study (C) was a detailed investigation on computer vision-like convolutional neural networks for infant vocalization classification. The goal was to determine the most important architectural properties for increasing classification performance. Results confirmed the importance of the aggregation layer and additionally identified the input size of the fully-connected layers and the accumulated receptive field to be of major importance.
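The accumulated receptive field highlighted in Study (C) can be computed directly from a convolutional stack's hyperparameters. As a generic illustration (a standard formula, not code from the thesis): each layer with kernel size k and stride s widens the field by (k − 1) times the product of the preceding strides:

```python
def receptive_field(layers):
    """Accumulated receptive field of a stack of conv/pool layers.

    layers: list of (kernel_size, stride) tuples, applied in order.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # each layer widens the field by (k-1) * current jump
        jump *= s              # jump = product of strides so far
    return rf

# Three 3x3 convs (stride 1) interleaved with two stride-2 poolings
rf = receptive_field([(3, 1), (2, 2), (3, 1), (2, 2), (3, 1)])
```

Computed per axis (frequency or time), this makes it easy to check how much spectrogram context each unit of the final feature map actually sees.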
Study (D) was an investigation on compensating class imbalance for chimpanzee call detection in naturalistic long-term recordings. The goal was to determine which compensation method among a selected group improved performance the most for a deep learning system. Results showed that spectrogram denoising was most effective, while methods for compensating relative imbalance either retained or decreased performance.

1. Introduction
2. Foundations in Automatic Recognition of Acoustic Communication
3. State of Research
4. Study (A): Investigation of the Assessment of Infant Vocalizations by Laypersons
5. Study (B): Comparison of Neural Network Types for Automatic Classification of Infant Vocalizations
6. Study (C): Detailed Investigation of CNNs for Automatic Classification of Infant Vocalizations
7. Study (D): Compensating Class Imbalance for Acoustic Chimpanzee Detection With Convolutional Recurrent Neural Networks
8. Conclusion and Collected Discussion
9. Appendix
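Study (D) found spectrogram denoising to be the most effective compensation method. The exact procedure is not given in this abstract, so the sketch below shows one common variant: estimating a stationary noise profile as each frequency band's median over time and subtracting it:

```python
import numpy as np

def denoise_spectrogram(S):
    """Subtract each frequency band's median over time.

    A common stationary-noise reduction step for long field recordings:
    steady background energy cancels out, transient calls remain.
    """
    noise_profile = np.median(S, axis=1, keepdims=True)  # (n_bands, 1)
    return np.clip(S - noise_profile, 0.0, None)         # keep result nonnegative

# Steady band-limited noise plus a brief call-like burst in band 1
S = np.ones((3, 50)) * np.array([[5.0], [1.0], [0.2]])
S[1, 20:25] += 10.0                                      # the "call"
D = denoise_spectrogram(S)
```

The median (rather than the mean) keeps sparse, loud events from inflating the noise estimate, which matters when calls are rare relative to background, as in long-term monitoring data.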
Using Self-Organizing Maps to Recognize Acoustic Units Associated with Information Content in Animal Vocalizations
Kohonen self-organizing neural networks, also called self-organizing maps (SOMs), have been used successfully to recognize human phonemes and in this way to aid in human speech recognition. This paper describes how SOMs can also be used to associate specific information content with animal vocalizations. A SOM was used to identify acoustic units in Gunnison's prairie dog alarm calls that were vocalized in the presence of three different predator species. Some of these acoustic units and their combinations were found exclusively in the alarm calls associated with a particular predator species and were used to associate predator species information with individual alarm calls. This methodology allowed individual alarm calls to be classified by predator species with an average of 91% accuracy. Furthermore, the topological structure of the SOM used in these experiments provided additional insights about the acoustic units and their combinations that were used to classify the target alarm calls. An important benefit of the methodology developed in this paper is that it could be used to search for groups of sounds associated with information content for any animal whose vocalizations are composed of multiple simultaneous frequency components.
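To make the SOM mechanism concrete, here is a minimal self-organizing map in plain numpy, trained on two synthetic clusters standing in for acoustic-unit feature vectors. It illustrates the core steps (best-matching-unit search plus neighbourhood-weighted updates), not the paper's actual configuration or features:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Kohonen self-organizing map on feature vectors.

    data: (n_samples, n_features); returns weights of shape (rows, cols, n_features).
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.normal(size=(rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)   # node grid positions
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1 - frac)                 # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5     # shrinking neighbourhood radius
            d = np.linalg.norm(W - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)    # best-matching unit
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
            W += lr * g[..., None] * (x - W)      # pull BMU and neighbours toward x
            step += 1
    return W

def bmu_of(W, x):
    d = np.linalg.norm(W - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

# Two synthetic "acoustic unit" clusters in a 2-D feature space
rng = np.random.default_rng(1)
a = rng.normal(loc=[0, 0], scale=0.1, size=(50, 2))
b = rng.normal(loc=[3, 3], scale=0.1, size=(50, 2))
W = train_som(np.vstack([a, b]))
```

After training, distinct unit types map to distinct grid nodes, which is the property the paper exploits to associate acoustic units (and hence predator information) with map regions.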
Acoustic sequences in non-human animals: a tutorial review and prospectus.
Animal acoustic communication often takes the form of complex sequences, made up of multiple distinct acoustic units. Apart from the well-known example of birdsong, other animals such as insects, amphibians, and mammals (including bats, rodents, primates, and cetaceans) also generate complex acoustic sequences. Occasionally, such as with birdsong, the adaptive role of these sequences seems clear (e.g. mate attraction and territorial defence). More often however, researchers have only begun to characterise - let alone understand - the significance and meaning of acoustic sequences. Hypotheses abound, but there is little agreement as to how sequences should be defined and analysed. Our review aims to outline suitable methods for testing these hypotheses, and to describe the major limitations to our current and near-future knowledge on questions of acoustic sequences. This review and prospectus is the result of a collaborative effort between 43 scientists from the fields of animal behaviour, ecology and evolution, signal processing, machine learning, quantitative linguistics, and information theory, who gathered for a 2013 workshop entitled, 'Analysing vocal sequences in animals'. Our goal is to present not just a review of the state of the art, but to propose a methodological framework that summarises what we suggest are the best practices for research in this field, across taxa and across disciplines. We also provide a tutorial-style introduction to some of the most promising algorithmic approaches for analysing sequences. We divide our review into three sections: identifying the distinct units of an acoustic sequence, describing the different ways that information can be contained within a sequence, and analysing the structure of that sequence. Each of these sections is further subdivided to address the key questions and approaches in that area. 
We propose a uniform, systematic, and comprehensive approach to studying sequences, with the goal of clarifying research terms used in different fields, and facilitating collaboration and comparative studies. Allowing greater interdisciplinary collaboration will facilitate the investigation of many important questions in the evolution of communication and sociality.
This review was developed at an investigative workshop, 'Analyzing Animal Vocal Communication Sequences', that took place on October 21-23 2013 in Knoxville, Tennessee, sponsored by the National Institute for Mathematical and Biological Synthesis (NIMBioS). NIMBioS is an Institute sponsored by the National Science Foundation, the U.S. Department of Homeland Security, and the U.S. Department of Agriculture through NSF Awards #EF-0832858 and #DBI-1300426, with additional support from The University of Tennessee, Knoxville. In addition to the authors, Vincent Janik participated in the workshop. D.T.B.'s research is currently supported by NSF DEB-1119660. M.A.B.'s research is currently supported by NSF IOS-0842759 and NIH R01DC009582. M.A.R.'s research is supported by ONR N0001411IP20086 and NOPP (ONR/BOEM) N00014-11-1-0697. S.L.DeR.'s research is supported by the U.S. Office of Naval Research. R.F.-i-C.'s research was supported by the grant BASMATI (TIN2011-27479-C04-03) from the Spanish Ministry of Science and Innovation. E.C.G.'s research is currently supported by a National Research Council postdoctoral fellowship. E.E.V.'s research is supported by CONACYT, Mexico, award number I010/214/2012.
This is the accepted manuscript. The final version is available at http://dx.doi.org/10.1111/brv.1216
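Among the algorithmic approaches for analysing sequence structure, one of the simplest is a first-order Markov model of transitions between acoustic units. The sketch below is purely illustrative (the unit labels and sequences are invented); it estimates a transition-probability matrix from a set of labelled call sequences:

```python
import numpy as np

def transition_matrix(sequences, units):
    """First-order transition probabilities between acoustic units.

    sequences: iterable of unit-label sequences (lists or strings).
    units: ordered list of the unique unit labels.
    Returns P where P[i, j] = Pr(next unit is units[j] | current is units[i]).
    """
    index = {u: i for i, u in enumerate(units)}
    counts = np.zeros((len(units), len(units)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):       # every adjacent pair of units
            counts[index[a], index[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # normalize rows, leaving all-zero rows as zeros
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

songs = [list("ABABAB"), list("ABBA"), list("BABA")]
P = transition_matrix(songs, units=["A", "B"])
```

Richer analyses discussed in the review (e.g. higher-order models or information-theoretic measures) build on exactly these transition counts.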
A collection of best practices for the collection and analysis of bioacoustic data
The field of bioacoustics is rapidly developing and characterized by diverse methodologies, approaches and aims. For instance, bioacoustics encompasses studies on the perception of pure tones in meticulously controlled laboratory settings, documentation of species' presence and activities using recordings from the field, and analyses of circadian calling patterns in animal choruses. Newcomers to the field are confronted with a vast and fragmented literature, and a lack of accessible reference papers or textbooks. In this paper we contribute towards filling this gap. Instead of a classical list of 'dos' and 'don'ts', we review some key papers which, we believe, embody best practices in several bioacoustic subfields. In the first three case studies, we discuss how bioacoustics can help identify the 'who', 'where' and 'how many' of animals within a given ecosystem. Specifically, we review cases in which bioacoustic methods have been applied with success to draw inferences regarding species identification, population structure, and biodiversity. In the fourth and fifth case studies, we highlight how structural properties in signal evolution can emerge via ecological constraints or cultural transmission. Finally, in a sixth example, we discuss acoustic methods that have been used to infer predator-prey dynamics in cases where direct observation was not feasible. Across all these examples, we emphasize the importance of appropriate recording parameters and experimental design. We conclude by highlighting common best practices across studies as well as caveats about our own overview. We hope our efforts spur a more general effort in standardizing best practices across the subareas we've highlighted in order to increase compatibility among bioacoustic studies and inspire cross-pollination across the discipline.
Sperm whale foraging behaviour: a predicted model based on 3D movement and acoustic data from Dtags
High-resolution sound and movement recording tags (e.g. Dtags, Acousonde tags, Atags)
offer unprecedented views of the fine-scale foraging behaviour of cetaceans,
especially those that use sound to forage, such as the sperm whale (Physeter
macrocephalus). However, access to these tags is difficult and expensive, limiting
studies of sperm whale foraging behaviour to small sample sizes and short time periods,
preventing inferences at the population level. The development of accurate foraging
indices from relatively inexpensive time-depth recorder (TDR) data would allow
obtaining data from a larger number of individuals, and capitalizing on datasets already
available, providing long-term analyses of foraging activity. In this study, data from
high-resolution acoustic and movement recording tags from eight sperm whales were used to
build predictive models of the number of buzzes (i.e. indicative of prey capture attempts,
PCAs) for dive segments of different lengths, using dive metrics calculated from time-depth
data only. The number of buzzes per dive segment of 180 s and 300 s was best
predicted by the average depth, depth variance, vertical velocity variance and number of
wiggles. Model performance was best for 180s segments, accurately predicting the
number of buzzes in 63% of the segments used to construct the model and in 58% of the
segments for new individuals. Predictive accuracy reached 81% when only the presence or
absence of buzzes in segments was assessed. These results demonstrate the feasibility of
finding a reliable index of sperm whale foraging activity from time-depth data when
combining different dive metrics. This index estimates the number of buzzes over short
dive segments (of 180 s), enabling the investigation and quantification of PCAs at very
fine scales. Finally, this work contributes to leveraging the potential of time-depth data
for studying the foraging ecology of sperm whales and the applicability of this
approach to a wide range of cetacean species.
The sperm whale (Physeter macrocephalus) is one of the best-known marine predators,
spending more than half of its life below 500 m depth, where it feeds mainly on meso-
and benthopelagic squid, although it may also consume other cephalopods, deep-water
fish and invertebrates. It has a worldwide distribution and can be found in the Azores
archipelago year-round, close to shore, which is why the Azores were one of the most
important whaling regions.
Sound plays a fundamental role in the life of sperm whales. They produce sounds while
socializing and while searching for and capturing prey. At least four click types have
been identified (usual clicks, buzzes, codas and slow clicks), of which usual clicks and
buzzes are involved in foraging behaviour. Usual clicks have high sound levels and are
highly directional, serving as a biosonar to navigate the environment and echolocate
prey. Buzzes consist of high-frequency, low-amplitude clicks produced at rapid
intervals. For this reason they have a shorter range than usual clicks, providing higher
resolution and therefore more detailed information about the whale's immediate
surroundings and prey.
Direct observation is one of the most powerful tools for studying animal behaviour, but
in the case of sperm whales it is highly limited, a consequence of the long periods they
spend at depth. Studies of the behaviour of sperm whales, and of other deep-diving
marine predators, therefore depend on tools that provide information about their
underwater behaviour. Hydrophones and animal-borne tags are among the most important
tools for studies of cetacean behaviour, allowing the continuous recording of sounds
produced underwater and the equally continuous tracking of movement and other dive
variables.
The incorporation of hydrophones into animal-borne tags, such as digital acoustic tags
(Dtags), Acousonde tags or A-tags, revolutionized the study of cetacean behaviour. These
tags simultaneously provide high-resolution three-dimensional movement and acoustic
data while recording information on the animal's behaviour, making it possible, for
example, to understand how sperm whales use sound while foraging. Studies based on
Dtag data revealed that speed peaks in the deepest part of the dive, and rapid jaw
movements, were related to buzz production. It was consequently suggested that buzzes
are emitted during the terminal phase of prey capture, in order to obtain
high-resolution information about the target. Since then, click production has been
used as an indicator of foraging effort, and buzz production is considered the best
indicator of prey capture attempts.
Nevertheless, access to these high-resolution acoustic and movement tags is extremely
difficult and expensive, limiting the study of sperm whale foraging behaviour to small
samples and short time periods. The development of an accurate foraging-effort index
from 2D time-depth data, such as that recorded by time-depth recorders (TDRs), would
therefore make it possible to capitalize on dive datasets already available, analysing
time series of foraging activity and assessing changes linked to climatic or
anthropogenic pressures.
In the present study, high-resolution data with acoustic information from eight sperm
whales tagged with Dtags in the Azores archipelago were used to build a predictive
model of the number of buzzes, based exclusively on depth-time data with a maximum
resolution of 1 m of depth, thereby matching the recording capabilities of a TDR. The
total number of buzzes per segment was modelled from a set of variables describing the
mean and variability of depth, the time spent in the deep phase of the dive, vertical
velocity, vertical acceleration and the number of vertical excursions, using a
generalized linear mixed model (GLMM) with individual as a random effect.
Of a total of 816 buzzes analysed, 95% lasted between 2 and 14 seconds. Dives were
therefore initially divided into short segments, but the first analyses showed weak
predictive power, and segments of 180 s and 300 s were ultimately used.
The best models of the number of buzzes per 180 s and 300 s segment included mean
depth, depth variance, vertical velocity variance and the number of wiggles per
segment. Dive segments with buzzes showed greater mean depth, lower depth variance,
greater velocity variance and more wiggles, with mean depth being the most relevant
metric in the model. These results confirm that buzzes occur in the deep parts of the
dive and suggest that successive capture attempts may occur within a limited depth
range, as shown by the small depth variation, greater velocity variation and presence
of wiggles.
Model performance was best for 180 s segments, yielding correct predictions of the
number of buzzes in 63% of the segments used to build the model and in 58% of the
segments from new individuals used to test it. Likewise, the model achieved 81%
correct predictions when only the presence or absence of buzzes in a segment was
assessed. Although our model has some predictive shortcomings, its results are similar
to those obtained with models developed previously to predict prey capture attempts
from low-resolution 2D datasets in other species. Unlike those models, however, which
predicted capture attempts at the scale of the dive or, at best, at 30-minute and
one-hour scales, the model developed in this study predicted prey capture attempts
every 3 minutes.
This is the first study to develop a model that predicts the number of prey capture
attempts, and hence foraging effort, in sperm whales from 2D dive profiles. The present
method can be applied to time-depth datasets already available in order to conduct
retrospective analyses of foraging behaviour, although a larger sample and a more
detailed analysis of the data would yield more accurate predictions. Finally, the
present approach to estimating foraging is based on predicting the number of buzzes
and could therefore potentially be applied to a range of odontocete species,
potentially allowing more accurate estimates of foraging effort than the coarse,
general indices typically derived from 2D dive profiles.
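The predictors used in the buzz models (mean depth, depth variance, vertical velocity variance, number of wiggles) can all be derived from a plain 1 Hz time-depth series. The sketch below computes them per 180 s segment; the toy dive profile and the wiggle definition (a change in the sign of vertical velocity) are illustrative assumptions, not the study's exact processing:

```python
import numpy as np

def segment_metrics(depth, seg_len=180, fs=1.0):
    """Per-segment dive metrics from a time-depth profile.

    depth: 1-D array of depth samples (m), sampled at fs Hz.
    seg_len: segment length in seconds (e.g. 180 s, as in the study).
    Returns one dict of predictors per complete segment.
    """
    step = int(seg_len * fs)
    vv = np.diff(depth) * fs                       # vertical velocity (m/s)
    metrics = []
    for start in range(0, len(depth) - step + 1, step):
        d = depth[start:start + step]
        v = vv[start:start + step - 1]
        sign_changes = np.diff(np.sign(v[v != 0])) # wiggle = velocity sign flip
        metrics.append({
            "mean_depth": d.mean(),
            "depth_var": d.var(),
            "vv_var": v.var(),
            "n_wiggles": int(np.count_nonzero(sign_changes)),
        })
    return metrics

# Toy profile: steady descent to 600 m, then an undulating bottom phase, at 1 Hz
t = np.arange(720)
depth = np.where(t < 360, t * 600 / 360,
                 600 + 15 * np.sin(2 * np.pi * (t - 360) / 60))
m = segment_metrics(depth, seg_len=180)
```

Segments from the bottom phase show high mean depth, low depth variance and several wiggles, exactly the combination the study found associated with buzzes; these metrics would then feed a GLMM or similar count model.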
Anthropogenic Noise is Associated with Reductions in the Productivity of Breeding Eastern Bluebirds (Sialia sialis)
Although previous studies have related variations in environmental noise levels with alterations in communication behaviors of birds, little work has investigated the potential long-term implications of living or breeding in noisy habitats. However, noise has the potential to reduce fitness, both directly (because it is a physiological stressor) and indirectly (by masking important vocalizations and/or leading to behavioral changes). Here, we quantified acoustic conditions in active breeding territories of male Eastern Bluebirds (Sialia sialis). Simultaneously, we measured four fitness indicators: cuckoldry rates, brood growth rate and condition, and number of fledglings produced (i.e., productivity). Increases in environmental noise tended to be associated with smaller brood sizes and were more strongly related to reductions in productivity. Although the mechanism responsible for these patterns is not yet clear, the breeding depression experienced by this otherwise disturbance-tolerant species indicates that anthropogenic noise may have damaging effects on individual fitness and, by extension, the persistence of populations in noisy habitats. We suggest that managers might protect avian residents from potentially harmful noise by keeping acoustically dominant anthropogenic habitat features as far as possible from favored songbird breeding habitats, limiting noisy human activities, and/or altering habitat structure in order to minimize the propagation of noise pollution
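Quantifying acoustic conditions in a territory typically reduces to an equivalent continuous sound level over a measurement window. As a generic sketch (not necessarily the authors' metric, reference pressure, or frequency weighting), the RMS level in decibels can be computed as:

```python
import numpy as np

def leq_db(samples, ref=1.0):
    """Equivalent continuous sound level (dB re `ref`) of a signal window."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms / ref)

# Doubling a signal's amplitude raises its level by ~6 dB
quiet = 0.01 * np.sin(2 * np.pi * 440 * np.linspace(0, 1, 48000, endpoint=False))
loud = 2 * quiet
```

Field measurements would additionally calibrate `ref` against a known sound source and often apply A-weighting before averaging.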
Melanotan-II reverses autistic features in a maternal immune activation mouse model of autism
Autism spectrum disorder (ASD) is a complex neurodevelopmental disorder characterized by impaired social interactions, difficulty with communication, and repetitive behavior patterns. In humans affected by ASD, there is a male predisposition towards the condition, with a male to female ratio of 4:1. In part due to the complex etiology of ASD, including genetic and environmental interplay, there are currently no available medical therapies to improve the social deficits of ASD. Studies in rodent models and humans have shown promising therapeutic effects of oxytocin in modulating social adaptation. One pharmacological approach to stimulating oxytocinergic activity is the melanocortin receptor 4 agonist Melanotan-II (MT-II). Notably, the effects of oxytocin in environmental rodent autism models have not been investigated to date. We used a maternal immune activation (MIA) mouse model of autism to assess the therapeutic potential of MT-II on autism-like features in adult male mice. The male MIA mice exhibited autism-like features including impaired social behavioral metrics, diminished vocal communication, and increased repetitive behaviors. Continuous administration of MT-II to male MIA mice over a seven-day course resulted in rescue of social behavioral metrics. Normal background C57 male mice treated with MT-II showed no significant alteration in social behavioral metrics. Additionally, there was no change in anxiety-like or repetitive behaviors following MT-II treatment of normal C57 mice, though there was significant weight loss following subacute treatment. These data demonstrate MT-II as an effective agent for improving autism-like behavioral deficits in the adult male MIA mouse model of autism