
    What You See is What You Get

    This thesis statement describes and comments on the four videos that compose my thesis exhibition, What You See is What You Get. Together, they constitute a self-portrait. Making these videos required multiple media and software packages, each with its own purpose and aesthetic. The three key ingredients the four videos make use of are recorded performances, 2D animation and narrative. This statement details the visuals and purpose of each video, the technology and editing process, and the inspirations that led to the thesis exhibition. It describes the choices of color, movement and rhythm that carry the narrative, which is ordered not chronologically but emotionally. Finally, some attention is given to the tone of the videos and the progression between my past and present selves. The overall theme of these videos is self-exploration. A large part of who I am today comes from my early love of cartoons and video games; from consumer I have become creator. I have learned to apply my skills with After Effects, Photoshop, GIMP, Game Maker and iMovie to create content that reveals my progress through life.

    WYSIWYP: What You See Is What You Pick


    Is What You See What You Get? Representations, Metaphors and Tools in Mathematics Didactics

    This paper is exploratory in character. The aim is to investigate ways in which it is possible to use the theoretical concepts of representations, tools and metaphors to try to understand what learners of mathematics ‘see’ during classroom interactions (in their widest sense) and what they might get from such interactions. Through an analysis of a brief classroom episode, the suggestion is made that what learners see may not be the same as what they get. From each of the several theoretical perspectives utilised in this paper, what learners ‘get’ appears to be something extra. According to our analysis, this something ‘extra’ is likely to depend on the form of technology being used and the representations and metaphors that are available to both teacher and learner.

    What You See Is What You Feel: Top-Down Emotional Effects in Face Detection

    Face detection is an initial step of many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Furthermore, whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. In four studies and a computational model, we investigated how emotions affect mental representations of faces and how facial representations could be used to transmit and communicate people’s emotional states. To this end, we used an adapted reverse correlation technique suggested by Gill et al. (2019), which was based on an earlier idea, the ‘Superstitious Approach’ (Gosselin & Schyns, 2003). In Experiment 1 we measured how naturally occurring anxiety and depression, caused by external factors, affected people’s mental representations of faces. In two sessions, on separate days, participants (coders) were presented with ‘colourful’ visual noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that were identified by the coders as a face, we reconstructed the pictorial mental representation utilised by each participant in the identification process. Across coders, we found significant correlations between changes in the size of the mental representation of faces and changes in their level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants’ mental representations can reflect their emotional state, we conducted a validation study (Experiment 2) with a group of naïve participants (verifiers) who were asked to classify the reconstructed mental representations of faces by emotion. Thus, we assessed whether the mental representations communicate coders’ emotional states to others.
The analysis showed no significant correlation between coders’ emotional states, as depicted in their mental representations of faces, and verifiers’ evaluation scores. In Experiment 3, we investigated how different induced moods, negative and positive, affected mental representations of faces. Coders underwent two different mood induction conditions during two separate sessions. They were presented with the same ‘colourful’ noise stimuli used in Experiment 1 and asked to detect faces. We were able to reconstruct pictorial mental representations of faces based on the identified fragments. The analysis showed a significant negative correlation between changes in coders’ mood along the dimension of arousal and changes in the size of their mental representations of faces. As in Experiment 2, we conducted a validation study (Experiment 4) to investigate whether coders’ mood could have been communicated to others through their mental representations of faces. Again, we found no correlation between coders’ mood, as depicted in their mental representations of faces, and verifiers’ evaluations of the intensity of the transmitted emotional expression. Lastly, we tested a preliminary computational model (Experiment 5) to classify and predict coders’ emotional states based on their reconstructed mental representations of faces. In spite of the small number of training examples and the high dimensionality of the input, the model performed just above chance level. Future studies should look at improving the computational model by using a larger training set and testing other classifiers. Overall, the present work confirmed the presence of facial templates used during face detection. It provides an adapted version of a reverse correlation technique that can be used to access mental representations of faces with a significant reduction in the number of trials.
Lastly, it provides evidence of how emotions can influence the size of mental representations of faces.
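The reverse correlation logic described in this abstract, in which a mental template is reconstructed from the noise fragments a coder labels as faces, can be sketched in a few lines. This is a minimal illustration only, not the authors' pipeline: the stimulus count, image size, and the simulated response rule standing in for a human coder are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each trial presents a patch of random visual noise; the coder answers
# "face" or "no face". Here both stimuli and responses are simulated.
n_trials, size = 500, 16
stimuli = rng.normal(size=(n_trials, size, size))

# Hypothetical response rule standing in for a human coder: trials whose
# centre region is brighter than average are "seen" as containing a face.
centre_mean = stimuli[:, 6:10, 6:10].mean(axis=(1, 2))
face_trials = centre_mean > 0

# Classification image: average noise of "face" trials minus average
# noise of "no face" trials approximates the coder's internal template.
ci = stimuli[face_trials].mean(axis=0) - stimuli[~face_trials].mean(axis=0)
print(ci.shape)  # (16, 16)
```

With a real coder, the bright region of `ci` would reflect the features the coder expects a face to have; here it simply recovers the centre region built into the simulated rule.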

    Empirical market segmentation: what you see is what you get

    The aim of the chapter is to discuss and illustrate different approaches taken in the area of empirical market segmentation in tourism, and to raise conceptual, practical and methodological problems in this context. The chapter is limited to the discussion of empirical market segmentation, meaning that an empirical data set (typically resulting from a tourist survey) forms the basis; purely conceptual derivations of market segments or tourist typologies are not treated. Given this aim, the chapter provides the reader with an overview of empirical market segmentation in tourism and shows how much unexploited potential for improvement remains in this area.

    What You See Is Not What You Know: Studying Deception in Deepfake Video Manipulation

    Research indicates that deceitful videos tend to spread rapidly online and influence people’s opinions and ideas. Because of this, video misinformation via deepfake manipulation poses a significant online threat. This study aims to discover what factors influence viewers’ ability to distinguish deepfake videos from genuine footage. It explores the potential use of deepfake videos for deception and misinformation through a survey consisting of deepfake videos and original, unedited videos. Participants viewed a set of four videos and were asked to judge whether each video shown was a deepfake or an original. The survey varied the familiarity that the viewers had with the subjects of the videos, as well as the number of videos shown at one time. The survey showed that familiarity with the subjects has a statistically significant impact on how well people can identify a deepfake. Notably, however, almost two-thirds of the participants (102 out of 154, or 66.23%) were unable to correctly classify a sequence of just four videos as either genuine or deepfake. This study provides insights into possible countermeasures against disinformation and deception resulting from the misuse of deepfakes.
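The headline figure is easy to check: 102 of 154 is 66.23%. The abstract does not state which statistical analysis was used; as a purely hypothetical illustration, if each of the four judgments were an independent 50/50 guess, the chance of getting all four right would be (1/2)^4, and an exact binomial test would show that the 52 participants who did classify all four correctly still far exceed that chance rate.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 154                   # participants in the study
k = n - 102               # 52 participants classified all four videos correctly
p_chance = 0.5 ** 4       # assumed chance of four independent 50/50 guesses

p_value = binom_sf(k, n, p_chance)
print(f"{k}/{n} all-correct vs chance rate {p_chance:.4f}: p = {p_value:.3g}")
```

The independence and 50/50-guessing assumptions are mine, introduced only to make the arithmetic concrete; the study itself may have analysed the data quite differently.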

    What You See Is What You Get: Integrating Visual Performance Methodology Into Vocal Pedagogy

    In many ways, singing is an aural event. Since much of the instrument cannot be seen in a normal setting, voice teachers and voice admirers must rely on their ears to evaluate what they hear. However, singing is also a visual event. In the voice studio, teachers need to train students not only to achieve a healthy singing technique but also to convey a message to the audience. Each performer must ask herself, “What do I need to do as a performer to show the music?” In the article “Sight Over Sound in the Judgment of Music Performance,” Chia-Jung Tsay, a professor of organizational behavior at University College London, studied the influence of visual versus aural information in several experiments: “The results highlight our natural, automatic, and nonconscious dependence on visual cues. The dominance of visual information emerges to the degree that it is overweighted [sic] relative to auditory information, even when sound is consciously valued as the core domain content” (Tsay 2013). The visual aspect of performance dominates the aural aspect. Therefore, visual performance methodology must be habitually taught in the singing studio.

    What You See Is What You Detect: Towards Better Object Densification in 3D Detection

    Recent works have demonstrated the importance of object completion in 3D perception from lidar signals. Several methods have been proposed in which modules densify the point clouds produced by laser scanners, leading to better recall and more accurate results. Pursuing that direction, we present in this work a counter-intuitive perspective: the widely used full-shape completion approach actually leads to a higher error upper bound, especially for faraway objects and small objects such as pedestrians. Based on this observation, we introduce a visible-part completion method that requires only 11.3% of the prediction points that previous methods generate. To recover a dense representation, we propose a mesh-deformation-based method to augment the point set associated with visible foreground objects. Because our approach focuses only on the visible part of foreground objects to achieve accurate 3D detection, we name our method What You See Is What You Detect (WYSIWYD). The proposed method is a detector-independent model consisting of two parts: an Intra-Frustum Segmentation Transformer (IFST) and a Mesh Depth Completion Network (MDCNet) that predicts foreground depth from mesh deformation. As a result, our model does not require the time-consuming full-depth completion task used by most pseudo-lidar-based methods. Our experimental evaluation shows that our approach provides up to 12.2% performance improvement over most public baseline models on the KITTI and NuScenes datasets, bringing the state of the art to a new level. The code will be available at https://github.com/Orbis36/WYSIWYD.

    What You See Is Not What You Get: The Impact of Vision Impairment on Judo Performance

    Paralympic sports provide opportunities for athletes with impairment to participate and excel in sport competitions. To compete in Paralympic sports, all athletes must undergo classification to determine whether they are eligible to compete and to allocate eligible athletes to different sport classes. Classification criteria need to be based on scientific evidence showing the relationship between impairment and performance. Judo is a Paralympic sport for athletes with vision impairment (VI) which does not yet have an evidence-based classification system in place. The aim of this thesis was therefore to examine the impact of vision impairment on performance in judo. The results suggest that vision impairment impacts judo performance both when starting with and without a grip in place. On the basis of these results, new classification criteria for VI judo have been proposed and are being discussed by the International Blind Sports Federation in consultation with the wider VI judo community. The proposed changes include new minimum impairment criteria as well as a split of VI judo competition into separate sport classes for partially sighted and functionally blind judokas. Besides providing practical recommendations for the organisation of judo for individuals with vision impairment, the thesis exemplifies the remarkable capability of skilled performers to adapt functionally under suboptimal visual conditions.