Measuring context dependency in birdsong using artificial neural networks
Context dependency is a key feature of the sequential structure of human language, which requires reference between words far apart in the produced sequence. Assessing how far back the past context affects the current state provides crucial information for understanding the mechanisms behind complex sequential behaviors. Birdsong serves as a representative model for studying context dependency in sequential signals produced by non-human animals, but previous reports were upper-bounded by methodological limitations. Here, we estimated the context dependency in birdsong in a more scalable way, using a modern neural-network-based language model whose accessible context length is sufficiently long. The detected context dependency was beyond the order of traditional Markovian models of birdsong, but was consistent with previous experimental investigations. We also studied the relation between the assumed/auto-detected vocabulary size of birdsong (i.e., fine- vs. coarse-grained syllable classifications) and the context dependency. It turned out that the larger the assumed vocabulary (i.e., the more fine-grained the classification), the shorter the detected context dependency.
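The estimation idea described above can be sketched as follows: compute the model's mean per-syllable negative log-likelihood (NLL) at several context lengths, and take the shortest context beyond which prediction no longer meaningfully improves. This is a minimal, hypothetical sketch; the function name, tolerance, and NLL values are illustrative assumptions, not the paper's actual procedure or data.

```python
# Hypothetical sketch: estimate context dependency as the shortest context
# length beyond which a language model's prediction stops improving.
# `nll_by_context` maps context length -> mean per-syllable negative
# log-likelihood (illustrative values, not from the study).

def context_dependency(nll_by_context, tolerance=0.01):
    """Return the smallest context length L such that the mean NLL at L is
    within `tolerance` of the NLL obtained with the longest context."""
    lengths = sorted(nll_by_context)
    best = nll_by_context[lengths[-1]]  # NLL with the longest available context
    for L in lengths:
        if nll_by_context[L] - best < tolerance:
            return L
    return lengths[-1]

nll = {1: 2.10, 2: 1.80, 4: 1.55, 8: 1.42, 16: 1.41, 32: 1.405}
print(context_dependency(nll))  # -> 16
```

Under this toy data, contexts longer than 16 syllables no longer improve prediction, so 16 would be reported as the context dependency.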
Chick-computer interaction using sounds
The 11th International Symposium on Adaptive Motion of Animals and Machines. Kobe University, Japan. 2023-06-06/09. Adaptive Motion of Animals and Machines Organizing Committee. Poster Session P4.
Undirected singing rate as a non-invasive tool for welfare monitoring in isolated male zebra finches
Research on the songbird zebra finch (Taeniopygia guttata) has advanced our behavioral, hormonal, neuronal, and genetic understanding of vocal learning. However, little is known about the impact of typical experimental manipulations on the welfare of these birds. Here we explore whether the undirected singing rate can be used as an indicator of welfare. We tested this idea by performing a post hoc analysis of singing behavior in isolated male zebra finches subjected to interactive white noise, to surgery, or to tethering. We find that the latter two experimental manipulations transiently but reliably decreased singing rates. By contraposition, we infer that a high sustained singing rate is suggestive of successful coping or improved welfare in these experiments. Our analysis across more than 300 days of song data suggests that a singing rate above a threshold of several hundred song motifs per day implies the absence of an acute stressor or successful coping with stress. Because singing rate can be measured in a completely automatic fashion, its observation can help to reduce experimenter bias in welfare monitoring. Because singing rate measurements are non-invasive, we expect this study to contribute to the refinement of current welfare monitoring tools in zebra finches.
Authors and affiliations: Yamahachi, Homare (Universitat Zurich, Switzerland); Zai, Anja T. (Universitat Zurich, Switzerland); Tachibana, Ryosuke O. (Universitat Zurich, Switzerland); Stepien, Anna E. (Universitat Zurich, Switzerland); Rodrigues, Diana I. (Universitat Zurich, Switzerland); Cavé Lopez, Sophie (Universitat Zurich, Switzerland); Lorenz, Corinna (Universite Paris Saclay, France; Universitat Zurich, Switzerland); Arneodo, Ezequiel Matías (Universitat Zurich, Switzerland; Instituto de Física La Plata, CONICET, Universidad Nacional de La Plata, Argentina); Giret, Nicolas (Universite Paris Saclay, France); Hahnloser, Richard H. R. (Universitat Zurich, Switzerland).
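Because motif counting is fully automatic, the threshold idea above lends itself to a simple monitoring rule: flag any day whose motif count falls below the welfare threshold. The sketch below is hypothetical; the threshold value (300) and the daily counts are illustrative stand-ins for the study's "several hundred motifs per day", not its actual numbers.

```python
# Hypothetical sketch of threshold-based welfare monitoring: flag days whose
# automatically counted song motifs fall below a welfare threshold,
# suggesting a possible acute stressor. Values are illustrative only.

WELFARE_THRESHOLD = 300  # motifs per day; assumed value for illustration

def flag_low_singing_days(daily_motif_counts, threshold=WELFARE_THRESHOLD):
    """Return the days whose motif count falls below `threshold`,
    i.e., days that warrant closer welfare inspection."""
    return [day for day, count in sorted(daily_motif_counts.items())
            if count < threshold]

counts = {"day1": 520, "day2": 480, "day3": 90, "day4": 610}  # toy data
print(flag_low_singing_days(counts))  # -> ['day3']
```

In practice the flagged days would be cross-checked against the experimental log (e.g., surgery or tethering dates) rather than treated as a welfare verdict on their own.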
Thermal Infrared Imaging Experiments of C-Type Asteroid 162173 Ryugu on Hayabusa2
The thermal infrared imager TIR onboard Hayabusa2 has been developed to investigate the thermo-physical properties of the C-type near-Earth asteroid 162173 Ryugu. TIR is one of the remote science instruments on Hayabusa2 designed to understand the nature of a volatile-rich small solar system body, but it also has significant mission objectives: providing information on surface physical properties and conditions for sampling-site selection, as well as for the assessment of safe landing operations. TIR is based on a two-dimensional uncooled micro-bolometer array inherited from the Longwave Infrared Camera LIR on Akatsuki (Fukuhara et al., 2011). TIR takes images of thermal infrared emission at 8 to 12 μm with a field of view of 16° × 12° and a spatial resolution of 0.05° per pixel. TIR covers the temperature range from 150 to 460 K, including the well-calibrated range from 230 to 420 K. Temperature accuracy is within 2 K or better for summed images, and the relative accuracy, or noise-equivalent temperature difference (NETD), at each pixel is 0.4 K or lower within the well-calibrated temperature range. TIR takes a pair of images with the shutter open and closed, the latter providing the corresponding dark frame, and produces a true thermal image by dark-frame subtraction. Data processing involves summation of multiple images, image processing including StarPixel compression (Hihara et al., 2014), and transfer to the data recorder in the spacecraft digital electronics (DE). We report the scientific and mission objectives of TIR, the requirements and constraints on the instrument specifications, the instrument design, and the pre-flight and in-flight performance of TIR, as well as its observation plan during the Hayabusa2 mission.
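The two processing steps named in the abstract, dark-frame subtraction and summation of multiple images, can be sketched in a few lines. This is a generic illustration of those operations, assuming simple NumPy arrays; it is not the actual TIR pipeline, and the array shapes and values are toy data.

```python
import numpy as np

# Hypothetical sketch of the TIR processing steps described above:
# dark-frame subtraction followed by summation of multiple frames.
# Shapes and values are illustrative, not actual TIR data.

def true_thermal_image(open_frames, dark_frame):
    """Subtract the shutter-closed dark frame from each shutter-open frame,
    then sum the corrected frames (summation reduces relative noise)."""
    open_frames = np.asarray(open_frames, dtype=np.float64)
    corrected = open_frames - dark_frame  # remove detector/optics offset
    return corrected.sum(axis=0)          # summed image

dark = np.full((2, 2), 10.0)                           # toy 2x2 dark frame
frames = [np.full((2, 2), 15.0), np.full((2, 2), 17.0)]
print(true_thermal_image(frames, dark))  # each pixel: (15-10) + (17-10) = 12
```

Subtracting the dark frame removes the fixed offset contributed by the detector and optics, so only the scene's thermal emission is accumulated in the sum.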
Human Vocal Variability and Adaptability
Assessing the relationship between vocal pitch fluctuation and compensation rate against auditory feedback modification
Switching perception of musical meters by listening to different acoustic cues of biphasic sound stimulus
Meter is one of the core features of music perception. It is the cognitive grouping of regular sound sequences, typically into groups of 2, 3, or 4 beats. Previous studies have suggested that one can not only passively perceive the meter from acoustic cues such as loudness, pitch, and duration of sound elements, but also actively perceive it by paying attention to isochronous sound events without any acoustic cues. Studying the interaction of top-down and bottom-up processing in meter perception leads to understanding the cognitive system's ability to perceive the entire structure of music. The present study aimed to demonstrate that meter perception requires a top-down process (which maintains and switches attention between cues) as well as a bottom-up process for discriminating acoustic cues. We created a "biphasic" sound stimulus, consisting of successive tone sequences designed to provide cues for both the triple and quadruple meters in two different sound attributes, frequency and duration, and measured how participants perceived meters from the stimulus on a five-point scale (ranging from "strongly triple" to "strongly quadruple"). Participants were asked to focus on differences in frequency and duration. We found that well-trained participants perceived different meters by switching their attention to specific cues, while untrained participants did not. This result provides evidence for the idea that meter perception involves an interaction between top-down and bottom-up processes, which training can facilitate.
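The "biphasic" stimulus idea, one attribute cueing a triple meter while another simultaneously cues a quadruple meter, can be sketched as a tone list with conflicting periodic accents. All parameters below (frequencies, durations, which attribute carries which meter) are illustrative assumptions, not the study's actual stimulus specification.

```python
# Hypothetical sketch of a "biphasic" sequence in the spirit described above:
# frequency accents every 3rd tone (triple-meter cue) while duration accents
# every 4th tone (quadruple-meter cue). Parameter values are illustrative.

def biphasic_sequence(n_tones, base_hz=440.0, accent_hz=523.25,
                      base_ms=150, accent_ms=300):
    """Return (frequency_hz, duration_ms) tuples with conflicting meter cues."""
    tones = []
    for i in range(n_tones):
        freq = accent_hz if i % 3 == 0 else base_hz   # triple-meter cue
        dur = accent_ms if i % 4 == 0 else base_ms    # quadruple-meter cue
        tones.append((freq, dur))
    return tones

seq = biphasic_sequence(12)
print([t[0] for t in seq[:4]])  # -> [523.25, 440.0, 440.0, 523.25]
print([t[1] for t in seq[:4]])  # -> [300, 150, 150, 150]
```

Attending to the frequency accents groups the sequence in threes; attending to the duration accents groups it in fours, which is the attentional switch the study asked trained listeners to perform.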