
    Transmission of Food Preference does Not Require Socially Relevant Cues in a Mouse Strain with Low Sociability

    The social transmission of food preference (STFP) task is based on the principle that dietary information can be communicated between rodents during social interaction (Galef and Kennett, 1987). Briefly, a demonstrator mouse consumes a novel flavor and then freely interacts with an observer mouse. The observer mouse is now "socially cued" toward that flavor and will prefer it in a choice paradigm over another novel "un-cued" flavor. Socially relevant cues are required for this transmission of food preference in adult rats (Galef and Kennett, 1987) and C57BL/6J mice (Ryan et al., 2008). This evidence indicates that the STFP task is an appropriate measure of social communication in rodents. Since impaired communication is a diagnostic criterion for autism (DSM-IV), several studies have utilized this protocol to investigate autistic-like behavior in mice (Boylan et al., 2007; McFarlane et al., 2008; Ryan et al., 2010). We performed the STFP task, as previously described (McFarlane et al., 2008), to evaluate social communication in mice with a mixed C57BL/6J × 129S3/SvImJ background (B6129S3) (Zaccaria et al., 2010). This mouse strain exhibits low social approach and a lack of preference for social novelty (Zaccaria et al., 2010). It was therefore surprising that B6129S3 mice consumed significantly more cued than non-cued food.
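
    To make the choice-paradigm measure concrete, here is a minimal sketch of how a preference index of this kind is commonly computed and tested against chance. The intake values are hypothetical, and this is not the authors' analysis code; it assumes NumPy and SciPy.

        import numpy as np
        from scipy import stats

        # Hypothetical grams consumed per observer mouse in the two-choice test.
        cued = np.array([1.8, 2.1, 1.5, 2.4, 1.9])
        uncued = np.array([0.9, 1.2, 0.7, 1.1, 1.0])

        # Preference index: fraction of total intake that was the socially cued flavor.
        pref = cued / (cued + uncued)

        # One-sample t-test against chance (0.5): is cued intake above chance?
        t, p = stats.ttest_1samp(pref, 0.5)
        print(f"mean preference = {pref.mean():.2f}, t = {t:.2f}, p = {p:.3g}")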

    Proof-Pattern Recognition and Lemma Discovery in ACL2

    We present a novel technique for combining statistical machine learning for proof-pattern recognition with symbolic methods for lemma discovery. The resulting tool, ACL2(ml), gathers proof statistics, uses statistical pattern recognition to pre-process data from libraries, and then suggests auxiliary lemmas in new proofs by analogy with previously seen examples. This paper presents the implementation of ACL2(ml) alongside theoretical descriptions of the proof-pattern recognition and lemma discovery methods involved.
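
    As an illustration of the general recipe (statistical pattern recognition over proof features, then suggestion by analogy), the toy sketch below indexes lemma "statements" as bags of function symbols and retrieves the nearest neighbors for a new goal. The lemma names, features, and use of scikit-learn are illustrative assumptions; ACL2(ml)'s actual features and algorithms are those described in the paper.

        from collections import Counter
        from sklearn.feature_extraction import DictVectorizer
        from sklearn.neighbors import NearestNeighbors

        # Toy library: lemma name -> function symbols appearing in its statement.
        library = {
            "app-assoc":  ["append", "append", "equal"],
            "rev-rev":    ["reverse", "reverse", "equal"],
            "len-append": ["len", "append", "plus", "equal"],
            "rev-append": ["reverse", "append", "equal"],
        }

        vec = DictVectorizer(sparse=False)
        X = vec.fit_transform([Counter(syms) for syms in library.values()])
        names = list(library)

        # Index the library; query with the symbols of a new proof goal.
        nn = NearestNeighbors(n_neighbors=2).fit(X)
        goal = Counter(["reverse", "len", "equal"])
        _, idx = nn.kneighbors(vec.transform([goal]))
        print("analogous lemmas:", [names[i] for i in idx[0]])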

    MATHsAiD: Automated Mathematical Theory Exploration

    The aim of the MATHsAiD project is to build a tool for automated theorem discovery: a tool that automatically conjectures and proves theorems (lemmas, corollaries, etc.) from a set of user-supplied axioms and definitions, with no other input required. Such a tool would, for instance, allow a mathematician to try several versions of a particular definition and, in a relatively small amount of time, see some of the consequences of each version in terms of the resulting theorems. Moreover, the automatically discovered theorems could help users discover and prove further theorems for themselves. The tool could also be used by educators (to generate exercise sets, for instance) and by students. Similarly, it might prove useful in enabling automated theorem provers to dispatch many of the more difficult proof obligations arising in software verification, by automatically generating the lemmas the prover needs in order to finish these proofs.
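
    The enumerate-and-test flavor of theory exploration can be sketched in a few lines: generate candidate identities over user-supplied operations, discard those falsified by random testing, and hand the survivors to a prover as conjectures. This toy loop over integer functions is an assumption-laden illustration of the idea, not MATHsAiD's actual method.

        import itertools, random

        # Toy signature: unary operations on integers (stand-ins for user definitions).
        ops = {
            "double": lambda x: 2 * x,
            "square": lambda x: x * x,
            "neg":    lambda x: -x,
        }

        # Conjecture candidate identities f(g(x)) = g(f(x)); keep those that
        # survive random testing. Survivors would then be passed to a prover.
        conjectures = []
        for (fn, f), (gn, g) in itertools.combinations(ops.items(), 2):
            if all(f(g(x)) == g(f(x)) for x in random.sample(range(-50, 50), 20)):
                conjectures.append(f"{fn}({gn}(x)) = {gn}({fn}(x))")
        print(conjectures)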

    Perception of clear fricatives by normal-hearing and simulated hearing-impaired listeners

    This is the publisher's version, also available electronically from http://scitation.aip.org/content/asa/journal/jasa/123/2/10.1121/1.2821966.

    Speakers may adapt the phonetic details of their productions when they anticipate perceptual difficulty or comprehension failure on the part of a listener. Previous research suggests that a speaking style known as clear speech is more intelligible overall than casual, conversational speech for a variety of listener populations. However, it is unknown whether clear speech improves the intelligibility of fricative consonants specifically, or how its effects on fricative perception might differ across listener populations. The primary goal of this study was to determine whether clear speech enhances fricative intelligibility for normal-hearing listeners and listeners with simulated impairment. Two experiments measured babble signal-to-noise ratio thresholds for fricative minimal-pair distinctions in 14 normal-hearing listeners and 14 listeners with simulated sloping, recruiting impairment. Results indicated that clear speech helped both groups overall. However, for impaired listeners, reliable clear-speech intelligibility advantages were not found for non-sibilant pairs. Correlation analyses comparing acoustic and perceptual data indicated that a shift of energy concentration toward higher-frequency regions and greater source strength contributed to the clear-speech effect for normal-hearing listeners. Correlations between acoustic and perceptual data were less consistent for listeners with simulated impairment and suggested that lower-frequency information may play a role.
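
    One standard way to quantify "a shift of energy concentration toward higher-frequency regions" is the spectral centroid. The sketch below, with synthetic band-limited noise standing in for casual versus clear /s/ tokens, illustrates the measure; the band edges and signals are assumptions, not the study's materials.

        import numpy as np

        def spectral_centroid(x, fs):
            """Amplitude-weighted mean frequency of the magnitude spectrum."""
            mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            return np.sum(freqs * mag) / np.sum(mag)

        def band_noise(lo_hz, hi_hz, n, fs, rng):
            """Gaussian noise whose energy is confined to [lo_hz, hi_hz)."""
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            spec = np.zeros(freqs.size, dtype=complex)
            band = (freqs >= lo_hz) & (freqs < hi_hz)
            spec[band] = (rng.standard_normal(band.sum())
                          + 1j * rng.standard_normal(band.sum()))
            return np.fft.irfft(spec, n)

        fs, n = 16000, 4096
        rng = np.random.default_rng(0)
        conversational = band_noise(2500, 6000, n, fs, rng)  # stand-in for casual /s/
        clear          = band_noise(4000, 8000, n, fs, rng)  # energy shifted upward
        print(spectral_centroid(conversational, fs), spectral_centroid(clear, fs))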

    A reafferent and feed-forward model of song syntax generation in the Bengalese finch

    Adult Bengalese finches generate a variable song that obeys a distinct, individual syntax. The syntax is gradually lost over a period of days after deafening and is recovered when hearing is restored. We present a spiking neuronal network model of song syntax generation and its loss, based on the assumption that the syntax is stored in reafferent connections from the auditory system to the motor control area. Propagating synfire activity in HVC codes for individual syllables of the song, and priming signals from the auditory network reduce the competition between syllables to allow only those transitions permitted by the syntax. Imprinting of the song syntax within HVC, together with the interaction of the reafferent signal with an efference copy of the motor command, is sufficient to explain the gradual loss of syntax in the absence of auditory feedback. The model also reproduces, for the first time, experimental findings on the influence of altered auditory feedback on song syntax generation, and predicts song- and species-specific low-frequency components in the local field potential (LFP). This study illustrates how sequential compositionality following a defined syntax can be realized in networks of spiking neurons.
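
    The core mechanism, auditory priming gating a noisy competition between syllable-coding motor units, can be caricatured without spiking neurons. In this assumed toy model, removing feedback leaves the winner-take-all competition unconstrained, so the transition structure (the syntax) disappears:

        import numpy as np

        rng = np.random.default_rng(1)
        syllables = ["a", "b", "c", "d"]

        # Syntax stored as allowed transitions (reafferent priming), not in the motor chain.
        allowed = np.array([
            [0, 1, 1, 0],   # a -> b or c
            [0, 0, 1, 0],   # b -> c
            [1, 0, 0, 1],   # c -> a or d
            [1, 0, 0, 0],   # d -> a
        ], dtype=float)

        def sing(n_steps, feedback=True):
            """Winner-take-all choice among syllables; priming gates the competition."""
            seq, cur = [], 0
            for _ in range(n_steps):
                drive = rng.random(4)                 # noisy motor competition
                if feedback:
                    drive *= allowed[cur]             # auditory priming gates transitions
                cur = int(np.argmax(drive))
                seq.append(syllables[cur])
            return "".join(seq)

        print("hearing: ", sing(20))                  # obeys the syntax
        print("deafened:", sing(20, feedback=False))  # transitions become unconstrained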

    Neural processing of natural sounds

    Natural sounds include animal vocalizations; environmental sounds such as wind, water, and fire noises; and non-vocal sounds made by animals and humans for communication. These natural sounds have characteristic statistical properties that make them perceptually salient and that drive auditory neurons in optimal regimes for information transmission. Recent advances in statistics and computer science have allowed neurophysiologists to extract the stimulus-response functions of complex auditory neurons from responses to natural sounds. These studies have revealed hierarchical processing that leads to the neural detection of progressively more complex natural sound features, and have demonstrated the importance of acoustical and behavioral context for neural responses. High-level auditory neurons have been shown to be exquisitely selective for conspecific calls. This fine selectivity could play an important role in species recognition, in vocal learning in songbirds, and, in the case of bats, in the processing of the sounds used in echolocation. Research that investigates how communication sounds are categorized into behaviorally meaningful groups (e.g., call types in animals, words in human speech) remains in its infancy. Animals and humans also excel at separating communication sounds from each other and from background noise. Neurons that detect communication calls in noise have been found, but the neural computations involved in sound-source separation and natural auditory scene analysis remain poorly understood overall. Thus, future auditory research will have to focus not only on how natural sounds are processed by the auditory system, but also on the computations that allow this processing to occur in natural listening situations. The complexity of the computations needed in natural hearing tasks may require the high-dimensional representation provided by ensembles of neurons, and the use of natural sounds might be the best approach for understanding the ensemble neural code.
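
    A common way to "extract the stimulus-response function" of an auditory neuron from natural-sound responses is to fit a spectro-temporal receptive field (STRF) by regularized regression. The self-contained sketch below, on synthetic data, shows the ridge-regression version; the dimensions, noise level, and regularizer are arbitrary assumptions, not a specific study's settings.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy spectrogram stimulus: time bins x frequency channels.
        T, F, lags = 2000, 16, 10
        spec = rng.standard_normal((T, F))

        # Ground-truth STRF (lags x freq) used to synthesize a noisy response.
        strf_true = rng.standard_normal((lags, F)) * np.hanning(lags)[:, None]

        # Lagged design matrix: each row holds the last `lags` spectrogram frames.
        X = np.stack([spec[t - lags:t].ravel() for t in range(lags, T)])
        y = X @ strf_true.ravel() + 0.5 * rng.standard_normal(X.shape[0])

        # Ridge-regularized reverse correlation: w = (X'X + aI)^-1 X'y.
        a = 10.0
        w = np.linalg.solve(X.T @ X + a * np.eye(X.shape[1]), X.T @ y)
        strf_est = w.reshape(lags, F)
        print("correlation with ground truth:",
              np.corrcoef(strf_est.ravel(), strf_true.ravel())[0, 1])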

    Interaction between auditory and motor activities in an avian song control nucleus.

    The discrete telencephalic nuclei HVc (hyperstriatum ventrale, pars caudale) and RA (nucleus robustus archistriatalis) have been implicated by lesion studies in the control of vocalization in songbirds. We demonstrate directly the role of HVc in vocalization by presenting neuronal recordings taken from HVc of singing birds. Intracellular recordings from anesthetized birds have shown that many neurons in HVc respond to auditory stimuli. We confirm this result in extracellular recordings from awake, behaving birds and further demonstrate responses of HVc neurons to playback of the bird's own song. The functional significance of these responses is not yet clear, but behavioral studies show that auditory feedback plays a crucial role in the development of normal song. We show that the song-correlated temporal pattern of neural activity persists even in the deaf bird. Furthermore, we show that in the normal bird, the activity pattern correlated with production of certain song elements can be clearly distinguished from the pattern of auditory responses to the same song elements. This result implies that an interaction occurs in HVc of the singing bird between motor and auditory activity. Through experiments involving playback of sound while the bird is singing, we show that this interaction consists of motor inhibition of auditory activity in HVc, and that this inhibition decays slowly over a period of seconds after the song terminates.
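
    The reported dynamics, suppression of auditory responses during singing followed by a slow decay over seconds, can be summarized with a simple phenomenological gating model. The decay constant below is an assumed placeholder, not a value from the paper:

        import numpy as np

        tau = 3.0        # assumed decay constant (s); the paper reports decay over seconds
        dt = 0.1
        t = np.arange(0, 10, dt)

        song_off = 2.0   # song ends at t = 2 s in this toy timeline
        # Inhibition is full during singing, then decays exponentially after song offset.
        inhibition = np.where(t < song_off, 1.0, np.exp(-(t - song_off) / tau))

        playback_drive = 1.0                            # constant drive from sound playback
        auditory_response = playback_drive * (1.0 - inhibition)

        for ti in (1.0, 3.0, 6.0, 9.0):
            i = int(ti / dt)
            print(f"t = {ti:.0f} s  response = {auditory_response[i]:.2f}")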
