8 research outputs found

    Stares at the experimenter differed according to trial outcomes and between piglet groups (n = 28).

(a) Mean proportions of stares (± S.E.M.) are presented according to trial outcomes. White bars represent successful trials, grey bars represent failed trials. (b) Mean proportions of stares (± S.E.M.) are presented according to previous signal exposure. Dotted bars represent signals+ piglets previously exposed to signals in test A; striped bars represent signals- piglets not exposed to test A. Bars with different letters are significantly different (p < 0.05). Types of signals: S. pointing & Voice: static pointing gesture and voice directed to the reward; D. pointing & Voice: dynamic pointing gesture and voice directed to the reward. Tests: test B: pointing and voice test 1, test C: dynamic pointing and voice test, test D: pointing and voice test 2.

    Dimensions and equipment of the test arena.

Circled numbers indicate experimenters; the suspended camera above the test area door is not represented.

    Correct choices according to the signal given by the experimenter in each test.

Bars represent mean success rates (± S.E.M.) for tests A, B, C and D, or median success rates (Q25-Q75) for test E. *: p < 0.05, corresponding to values above chance level at the group level. Bars with different letters are significantly different (p < 0.05). Numbers above bars indicate the number of individuals that reached the success criterion over the total number of subjects that participated; numbers with different letters are significantly different (p < 0.05). Types of signals: S. Pointing: static pointing gesture, Voice: voice directed to the reward, S. pointing & Voice: static pointing gesture and voice directed to the reward; D. pointing & Voice: dynamic pointing gesture and voice directed to the reward, Voice & Presence: voice directed to the reward and experimenter’s presence. Test A: “previous signals exposure test”, test B: pointing and voice test 1, test C: dynamic pointing and voice test, test D: pointing and voice test 2, test E: voice test.
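For readers who want to see what the chance-level comparison amounts to, below is a minimal sketch assuming a two-bowl task (chance level = 0.5) and purely illustrative trial counts; the numbers and variable names are not taken from the study.

```python
from scipy.stats import binomtest

# Hypothetical example: in a two-bowl object-choice task, chance level is 0.5.
# Suppose a group of piglets made 38 correct choices out of 56 trials in one test
# (illustrative numbers only, not the study's data).
correct, trials, chance = 38, 56, 0.5

# One-sided test: is the observed success rate above chance at the group level?
result = binomtest(correct, trials, p=chance, alternative="greater")
print(f"success rate = {correct / trials:.2f}, p = {result.pvalue:.3f}")
```

A one-sided alternative is used here because the question is whether the group performs above chance, which is what the asterisks in the figure mark.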

    Piglets Learn to Use Combined Human-Given Visual and Auditory Signals to Find a Hidden Reward in an Object Choice Task

Although animals rarely use only one sense to communicate, few studies have investigated the use of combinations of different signals between animals and humans. This study assessed for the first time the spontaneous reactions of piglets to human pointing gestures and voice in an object-choice task with a reward. Piglets (Sus scrofa domestica) mainly use auditory signals, individually or in combination with other signals, to communicate with their conspecifics. Their wide hearing range (42 Hz to 40.5 kHz) fits the range of human vocalisations (40 Hz to 1.5 kHz), which may induce sensitivity to the human voice. However, only their ability to use visual signals from humans, especially pointing gestures, has been assessed to date. The current study investigated the effects of signal type (visual, auditory and combined visual and auditory) and piglet experience on the piglets’ ability to locate a hidden food reward over successive tests. Piglets did not find the hidden reward at first presentation, regardless of the signal type given. However, they subsequently learned to use a combination of auditory and visual signals (human voice and static or dynamic pointing gestures) to successfully locate the reward in later tests. This learning process may result either from repeated presentations of the combination of static gestures and auditory signals over successive tests, or from transitioning from static to dynamic pointing gestures, again over successive tests. Furthermore, piglets increased their chance of locating the reward either if they did not go straight to a bowl after entering the test area or if they stared at the experimenter before visiting it. Piglets were not able to use the voice direction alone, indicating that a combination of signals (pointing and voice direction) is necessary. Improving our communication with animals requires adapting to their individual sensitivity to human-given signals.

Results for individual latencies (s) to the first visit to a bowl: main statistical effects and probabilities.

Results are presented either as mean ± S.E.M. or as median and interquartile range (Q25–Q75). Friedman statistics are presented for test A, the “previous signals exposure test”; Fisher’s statistics for test B (the pointing and voice test 1), test C (the dynamic pointing and voice test), test D (the pointing and voice test 2), and for the comparison between tests B, C and D; Wilcoxon and Mann-Whitney statistics for test E, the voice direction test. Values with different bold letters are significantly different within the same row, and between two rows for the groups of piglets.
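As a rough sketch of how the nonparametric tests named above could be run on latency data, the snippet below uses scipy with made-up arrays; the group sizes, session counts, and distributions are assumptions for illustration, not the paper's actual data or analysis.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon, mannwhitneyu

rng = np.random.default_rng(0)

# Placeholder latencies (s) to the first bowl visit; rows = piglets, columns = repeated sessions.
latencies_test_a = rng.gamma(shape=2.0, scale=5.0, size=(28, 3))

# Friedman test: do latencies change across the repeated sessions of one test?
chi2, p_friedman = friedmanchisquare(*latencies_test_a.T)

# Wilcoxon signed-rank test: paired comparison of the same piglets across two tests.
latencies_test_b = rng.gamma(shape=2.0, scale=4.0, size=28)
latencies_test_d = rng.gamma(shape=2.0, scale=3.5, size=28)
w_stat, p_wilcoxon = wilcoxon(latencies_test_b, latencies_test_d)

# Mann-Whitney U test: unpaired comparison between two piglet groups (e.g., signals+ vs signals-).
u_stat, p_mannwhitney = mannwhitneyu(latencies_test_b[:14], latencies_test_b[14:])

print(f"Friedman p = {p_friedman:.3f}, Wilcoxon p = {p_wilcoxon:.3f}, "
      f"Mann-Whitney p = {p_mannwhitney:.3f}")
```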

    Timeline of the familiarisation period: familiarisation steps from weaning day (D0) with their duration and frequency.


The general procedure was divided into five test steps: description of the principle of each step.

Numbers of animals involved are written in black circles and types of signals broadcast are specified inside test steps.