
    Revisiting the Kuleshov Effect with first-time viewers

    Researchers have recently suggested that historically mixed findings in studies of the Kuleshov effect (a classic film-editing phenomenon whereby meaning is extracted from the interaction of sequential camera shots) might reflect differences in the relative sophistication of early versus modern cinema audiences. Relative to experienced audiences, first-time film viewers might be less predisposed and/or less able to forge the conceptual and perceptual links between edited shots that the effect requires. This article recreates the conditions that traditionally elicit the effect (whereby a neutral face comes to be perceived as expressive after being juxtaposed with independent images: a bowl of soup, a gravestone, a child playing) to directly compare “continuity” perception in first-time and more experienced film viewers. Results confirm the presence of the Kuleshov effect for experienced viewers (explicitly only in the sadness condition) but not for first-time viewers, who failed to perceive continuity between the shots.

    Testing the developmental foundations of cinematic continuity

    To make sense of moving images, viewers need to perceive continuity across film cuts. In the early days of cinema, most films were filmed in a single run (no cuts) from a static camera. Shortly thereafter, filmmakers began combining multiple shots to create more compelling visual narratives. A suite of editing conventions that allowed viewers to effortlessly perceive continuity across film cuts emerged through trial and error. Most of these conventions were in common usage by 1918 (Bordwell, Staiger & Thompson, 1985), and they permeate much of visual media, including infant-directed media, today. One of these conventions is the eye-line match between two juxtaposed shots, which is based on the premise that an audience will want to see what the character on screen is seeing. A film sequence with an eye-line match begins with a character looking at something off-screen, followed by a cut to another object or person. From a developmental perspective, this convention maps onto gaze following, which typically emerges very early in infancy (D’Entremont et al., 1997; Farroni et al., 2004; Hood et al., 1998; Scaife & Bruner, 1975). In an eye-tracking study, gaze following emerged between 2 and 4 months and stabilized between 6 and 8 months of age (Gredebäck et al., 2010). A recent study examining gaze following over video chat showed a similar developmental trajectory for gaze following in video as in the real world (McClure, Chentsova-Dutton, Holochwost, Parrott & Barr, 2017). It is not known, however, whether infants would still be able to follow the gaze of others on screen if the videos were edited. Notably, adults who have never previously encountered moving images perceive film shots as individual images (Ildirar & Schwan, 2015). In the present study, we examined the effect of film editing on infants' ability to follow others' gaze. Twelve-month-old infants (N=20) and adult controls (N=20) watched videos depicting an actor turning her head toward one of two objects, either in a single long shot, as in traditional gaze-following studies, or in two shots edited together, one showing the actor turning her head and the other showing the gazed-at object, as in commercial infant-directed videos. Participants' eye movements were recorded using a Tobii TX300. Each video ended with a still long shot showing the actor and the two objects, one of which had previously been gazed at. Analysis of gaze behavior during this test shot showed clear gaze following in the adult control group (increased dwell time on the gazed-at object compared to the other object) for both edited and unedited versions. Data collection for the infant sample is ongoing, but preliminary results indicate that 12-month-olds successfully follow gaze in the unedited version but are less successful across edits.
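
    The dwell-time comparison described in this abstract can be sketched in a few lines of Python. The sketch below is illustrative only, not the study's analysis code: the (t_ms, x, y) gaze-sample format and the AOI pixel rectangles are assumptions; the 300 Hz figure is the Tobii TX300's nominal sampling rate.

        def in_aoi(x, y, aoi):
            """True if a gaze point (screen pixels) falls inside a rectangular AOI."""
            x0, y0, x1, y1 = aoi
            return x0 <= x <= x1 and y0 <= y <= y1

        def dwell_time_ms(samples, aoi, sample_interval_ms=1000 / 300):
            """Total time gaze stayed inside the AOI during the test shot.

            samples: iterable of (t_ms, x, y) gaze points. At the TX300's
            300 Hz sampling rate, each sample counts for ~3.33 ms.
            """
            return sum(sample_interval_ms for _, x, y in samples if in_aoi(x, y, aoi))

        # Hypothetical AOIs (pixels) for the gazed-at and the other object.
        GAZED_AT_AOI = (100, 300, 400, 600)
        OTHER_AOI = (1520, 300, 1820, 600)

        def gaze_following_score(test_shot_samples):
            """Positive scores mean longer dwell on the gazed-at object,
            i.e. successful gaze following."""
            return (dwell_time_ms(test_shot_samples, GAZED_AT_AOI)
                    - dwell_time_ms(test_shot_samples, OTHER_AOI))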

    How Infants Perceive Animated Films

    Today, many infants begin consistently viewing videos at 4 to 9 months of age. Due to their reduced mobility and linguistic immaturity, younger infants are good watchers, spending much of their time sitting and watching the actions and (emotional) reactions of both real and televised people as well as animated characters. Since babies can perceive the similarity between a 2-dimensional image and the real 3-dimensional entity it depicts, they respond to the video image of another person with smiles and increased activity, much as they would to the actual person. Furthermore, the emotional reactions of a televised person can influence their behaviour. Infant attention to films, as to natural scenes, begins as stimulus-driven and progresses to top-down control as the child matures cognitively and acquires general world knowledge. The producers of infant-directed animations, however, use low-level visual features to guide infants’ attention to semantic information, which might explain infants’ preference for them. In this chapter, we discuss the developmental foundations of (animated) film cognition, focusing mainly on the perception of emotional cues, based on recent empirical findings.

    How infants perceive animated films

    Book synopsis: Ranging from blockbuster movies to experimental shorts, and from documentaries to scientific research, computer animation shapes a great part of media communication today. Be it the portrayal of emotional characters in movies or the creation of controllable emotional stimuli in scientific contexts, computer animation’s characteristic artificiality makes it ideal for various areas connected to the emotional: with the ability to move beyond the constraints of the empirical "real world," animation allows for immense freedom. This book looks at international film productions that use animation techniques to display and/or to elicit emotions, with special attention to the aesthetics, characters and stories of these films, and to the challenges and benefits of using computer techniques for these purposes.

    Infants’ anticipation of others’ action in edited film sequences

    Adults (Flanagan & Johansson, 2003) as well as 12-month-old infants (Falck-Ytter, Gredebäck & von Hofsten, 2006) perform goal-directed, anticipatory eye movements when observing real and filmed actions performed by others. The study we will present aims to find out what happens when the observed action is presented in an edited film sequence. Segmenting events into units is critical for anticipating future actions. Infants could use these initial groupings to discover more abstract cues to event structure, such as the actor's intentions, which are known to play a role in adults' global event segmentation (e.g., Wilder, 1978; Zacks, 2004; Zacks & Tversky, 2001). Visual sequence learning is a primary mechanism for event segmentation, and research shows that eight-month-old infants are sensitive to the sequential statistics of actions performed by a human agent (Roseberry et al., 2011). Adults (Baldwin, Andersson, Saffran & Meyer, 2008) as well as infants in their first year of life (Stahl, Romberg, Roseberry, Golinkoff & Hirsh-Pasek, 2014) can segment a continuous action sequence based on sequential predictability alone, which suggests that before infants have top-down knowledge of intentions, they may begin to segment events based on sequential predictability. The stimuli used in the above-mentioned infant studies present actions recorded from one camera angle in a single run (no cut). In commercial films, however, even those produced for very young children, we see actions recorded from different angles and edited together. Given that many infants today begin consistently watching television at 4 months of age (Christakis, 2011), become regular viewers by the time they are two years old, and spend between 1 and 2 hours per day doing so (Zimmerman, Christakis & Meltzoff, 2007), it is important to understand how infants perceive televised actions and events as they are presented in popular media. For the present study, we produced two sets of film clips depicting two conditions. In the first set, an adult sitting at a table moved objects placed on one side of the table to the other side. In the second set, a child clapped her hands and stomped her feet in turn. In the Single Shot condition, the actions were shown in one long single shot. In the Multiple Shot condition, the actions were segmented into sub-actions through multiple close-up shots. All film clips ended with a long single shot paused after three repetitions of the actions (the test shot) to measure anticipatory saccades. Twelve-month-old infants (N=20) and adult controls (N=20) watched the videos. Participants' eye movements were recorded using a Tobii TX300. Analysis of gaze behaviour during the test shot showed clear anticipation in the adult control group in both conditions. Data collection for the infant sample is ongoing, but preliminary results indicate that 12-month-olds can successfully anticipate the actions in the unedited version but are less successful across edits.
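
    The anticipation measure can be sketched in the same style as the dwell-time measure above. The routine below is a hypothetical illustration, not the authors' code: the (t_ms, x, y) sample format, the goal AOI, and the 200 ms lead criterion (a common convention in infant anticipation studies, assumed here) are all illustrative.

        def in_aoi(x, y, aoi):
            """True if a gaze point (screen pixels) falls inside a rectangular AOI."""
            x0, y0, x1, y1 = aoi
            return x0 <= x <= x1 and y0 <= y <= y1

        def first_entry_ms(samples, aoi):
            """Timestamp of the first gaze sample inside the AOI, or None.

            samples: iterable of (t_ms, x, y) gaze points in time order.
            """
            for t_ms, x, y in samples:
                if in_aoi(x, y, aoi):
                    return t_ms
            return None

        def is_anticipatory(samples, goal_aoi, action_arrival_ms, min_lead_ms=200):
            """Score a trial as anticipatory if gaze reaches the goal area at
            least min_lead_ms before the hand/object arrives there."""
            entry = first_entry_ms(samples, goal_aoi)
            return entry is not None and entry <= action_arrival_ms - min_lead_ms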