
    Mean reaction times and SEMs for each condition.

    Sub-divisions in notes.

    (a) An example input spectrogram (upper) and the true note intervals (lower). Gray rectangles with letters indicate note intervals with note classes. (b) Example outputs of a DCNN without sub-division in notes (upper) and the recognized sequence (lower). The first three rows of the DCNN outputs correspond to the three note classes and the last row to the silence class. (c) Example outputs with notes divided into two parts. The first six rows of the DCNN outputs correspond to the three note classes with two sub-divisions, and the last row to silence. (d) Example outputs with notes divided into three parts. The first nine rows of the DCNN outputs correspond to the three note classes with three sub-divisions.
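
    A rough sketch of the output-row layout described above: each (note class, sub-division) pair occupies one output row, plus one row for silence, and a frame-wise sequence of row indices can be collapsed back to note classes. The three note classes, the argmax-style row indices, and the decoding step are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumed layout): with 3 note classes and k sub-divisions
# per note, the network emits 3 * k note rows plus one silence row per frame.
NOTE_CLASSES = ["A", "B", "C"]

def build_output_rows(num_subdivisions):
    """Row labels, e.g. ['A1', 'A2', 'B1', ..., 'silence'] for 2 sub-divisions."""
    rows = [f"{c}{i + 1}" for c in NOTE_CLASSES for i in range(num_subdivisions)]
    return rows + ["silence"]

def collapse_subdivisions(frame_rows, num_subdivisions):
    """Map a frame-wise sequence of row indices back to note-class labels."""
    labels = build_output_rows(num_subdivisions)
    collapsed = []
    for r in frame_rows:
        label = labels[r]
        collapsed.append("silence" if label == "silence" else label[0])
    return collapsed

# Example with 2 sub-divisions (7 rows): silence, A1, A2, silence, B1, B2, silence.
frame_rows = [6, 0, 1, 6, 2, 3, 6]
print(collapse_subdivisions(frame_rows, 2))
# -> ['silence', 'A', 'A', 'silence', 'B', 'B', 'silence']
```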

    Validation errors.

    (a) Note ERs obtained with two and eight minutes of training data. ERs for each bird are shown as open circles. (b) Timing ERs. (c) Note & timing ERs. ***: p < 0.001.

    Three arrangements of methods for birdsong recognition.

    Flow diagrams for the three arrangements compared in this study. (a) BD → LC → GS arrangement. The colored letters A, B, and C indicate the note classes, and the white regions indicate the detected inter-note silent intervals. (b) LC → BD & GS arrangement. The white letter S indicates the silent intervals. (c) LC & GS → BD & GS arrangement.

    Recognition results in the BD → LC → GS arrangement.

    (a) A recognition result in one bird. From top to bottom: an input spectrogram, amplitude, outputs of local classification, recognized note intervals, true note intervals, and correctly recognized intervals. Rows in the classification outputs correspond to the note classes. The black areas are putative silent intervals detected in the boundary detection step. Gray rectangles with letters indicate note intervals and classes. The correctly recognized intervals are indicated by black bars. (b) A result in another bird with poorer recognition accuracy.

    Syntax models used in HMMs.

    Schematic diagrams of song syntax modeled with a second-order Markov model. In this figure, examples with two note classes (A & B) are shown. (a) A transition diagram in the BD → LC → GS arrangement. The initial state is indicated by the letter "e". The transition probabilities of the orange arrows were computed from the training data sets; those of the black arrows were uniformly distributed (i.e., all transition probabilities from states "e", "A", and "B" are 0.5). Sequence generation is allowed to stop at any state. (b) In the LC → BD & GS and LC & GS → BD & GS arrangements, each state in (a) except the initial state was divided into four. The letters "X" and "Y" denote any note class or the initial state.
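
    To make the transition structure concrete, here is a minimal sketch of estimating second-order Markov transition probabilities from training label sequences, starting from an initial state "e" as in panel (a). The data layout and the relative-frequency estimator are illustrative assumptions; the uniform 0.5 probabilities for the black arrows and the four-way state division in (b) are not reproduced here.

```python
from collections import defaultdict

# Minimal sketch (assumed data layout): estimate second-order Markov transition
# probabilities P(next | (prev2, prev1)) by relative frequency, beginning in an
# initial state "e" as in panel (a). Not the authors' implementation.

def second_order_transitions(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        history = ("e", "e")              # begin in the initial state
        for note in seq:
            counts[history][note] += 1
            history = (history[1], note)  # slide the two-note history window
    probs = {}
    for history, nxt in counts.items():
        total = sum(nxt.values())
        probs[history] = {note: c / total for note, c in nxt.items()}
    return probs

# Example with two note classes, A and B.
training = [["A", "B", "A", "B"], ["A", "A", "B"]]
print(second_order_transitions(training)[("A", "B")])  # {'A': 1.0} for this toy data
```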

    Phase-Specific Vocalizations of Male Mice at the Initial Encounter during the Courtship Sequence

    Mice produce ultrasonic vocalizations featuring a variety of syllables. Vocalizations are observed during social interactions; in particular, males produce numerous syllables during courtship. Previous studies have shown that vocalizations change according to sexual behavior, suggesting that males vary their vocalizations depending on the phase of the courtship sequence. To examine this process, we recorded large sets of mouse vocalizations during male–female interactions and acoustically categorized these sounds into 12 vocal types. We found that males emitted predominantly short syllables during the first minute of interaction, more long syllables in the later phases, and mainly harmonic sounds during mounting. These context- and time-dependent changes in vocalization indicate that vocal communication during courtship in mice consists of at least three stages and imply that each vocalization type has a specific role in a phase of the courtship sequence. Our findings suggest that recording for a sufficiently long time and taking the phase of courtship into consideration could provide more insight into the role of vocalization in mouse courtship behavior in future studies.

    Average note ERs, timing ERs, and note and timing ERs.

    Mean amplitudes and SEMs of ERPs for each condition.

    ERP amplitudes for standards, emotionally congruent deviants, and emotionally incongruent deviants in negative and positive sequences. For all of the time windows, ERP amplitudes were averaged across three occipital electrodes (O1, O2, and Oz). In addition, for 100–200 ms and 200–260 ms, ERP amplitudes were averaged across two posterior electrodes over the right hemisphere (TP8 and P8), and two homologous electrodes over the left hemisphere (TP7 and P7). For 300–360 ms, ERP amplitudes were averaged across three centro-parietal electrodes (FCz, Cz, and CPz).
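
    As a small illustration of the electrode averaging described in this caption, the sketch below averages window-mean amplitudes over each named electrode cluster. The array layout (trials x channels) and the channel ordering are assumptions for illustration, not the original analysis code.

```python
import numpy as np

# Illustrative sketch (assumed data layout): average ERP amplitudes over the
# electrode clusters named in the caption. `amplitudes` holds window-mean
# values with shape (n_trials, n_channels).

CLUSTERS = {
    "occipital": ["O1", "O2", "Oz"],          # all time windows
    "right_posterior": ["TP8", "P8"],         # 100-200 ms and 200-260 ms
    "left_posterior": ["TP7", "P7"],          # 100-200 ms and 200-260 ms
    "centro_parietal": ["FCz", "Cz", "CPz"],  # 300-360 ms
}

def cluster_mean(amplitudes, channel_names, cluster):
    """Mean amplitude across the electrodes in `cluster`, per trial."""
    idx = [channel_names.index(ch) for ch in cluster]
    return amplitudes[:, idx].mean(axis=1)

# Example with random data for 10 trials and the 10 channels used above.
channels = ["O1", "O2", "Oz", "TP8", "P8", "TP7", "P7", "FCz", "Cz", "CPz"]
amps = np.random.randn(10, len(channels))
print(cluster_mean(amps, channels, CLUSTERS["occipital"]).shape)  # (10,)
```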

    Note & timing error rate.

    An example of an input, a true sequence, output sequences, and correctly recognized intervals that would be obtained in computing the note & timing error rate. (a) An example spectrogram with the true labels and boundaries. Note classes are indicated by letters. (b) An example recognition output (upper) and the correctly recognized intervals (lower). In the correctly recognized intervals, colored bars indicate the correctly recognized note intervals, i.e., those with the longest overlaps with the true label intervals. Gray bars indicate the correctly recognized silent intervals. The correctly recognized intervals appear to capture the performance properly even in cases where a single note is recognized as two (notes B) or two notes are recognized as one (notes C). In both cases, only one of the two overlapping intervals (the one with the longer overlap) was counted as a correctly recognized interval. There can also be cases in which no matched interval is assigned (note A). (c) Another example showing the recognition outputs and the correctly recognized intervals with a lower note & timing error rate.
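
    The longest-overlap matching described in this caption can be sketched as follows: each true interval is matched to at most one recognized interval of the same class, keeping only the pair with the longest overlap, so a split or merged note contributes a single correct interval. The interval representation and the greedy matching order are assumptions for illustration, not the exact scoring code.

```python
# Minimal sketch (assumed interval format and greedy matching): a recognized
# interval is counted as correct when it is the longest-overlap unused match
# for some true interval of the same class.

def overlap(a, b):
    """Length of the overlap between intervals a = (onset, offset) and b."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def correctly_recognized(true_intervals, recog_intervals):
    """Return recognized intervals counted as correct.

    Each interval is (onset, offset, label). Every true interval claims at
    most one recognized interval of the same label, preferring the one with
    the longest overlap, so split or merged notes yield a single match.
    """
    correct = []
    used = set()
    for t_on, t_off, t_label in true_intervals:
        best, best_ov = None, 0.0
        for i, (r_on, r_off, r_label) in enumerate(recog_intervals):
            if i in used or r_label != t_label:
                continue
            ov = overlap((t_on, t_off), (r_on, r_off))
            if ov > best_ov:
                best, best_ov = i, ov
        if best is not None:
            used.add(best)
            correct.append(recog_intervals[best])
    return correct

# Example: one true note "B" is recognized as two intervals; only the
# recognized interval with the longer overlap is counted as correct.
true_seq = [(0.0, 1.0, "B")]
recog = [(0.0, 0.3, "B"), (0.35, 1.0, "B")]
print(correctly_recognized(true_seq, recog))  # [(0.35, 1.0, 'B')]
```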