
    Modelling the human perception of shape-from-shading

    Shading conveys information about 3-D shape, and the process of recovering this information is called shape-from-shading (SFS). This thesis divides human SFS into two functional sub-units (luminance disambiguation and shape computation) and studies them individually. Based on the results of a series of psychophysical experiments, it is proposed that the interaction between first- and second-order channels plays an important role in disambiguating luminance. Building on this idea, two versions of a biologically plausible model are developed to explain the human performance observed here and elsewhere. An algorithm sharing the same idea is also developed as a solution to the problem of intrinsic image decomposition in the field of image processing. With regard to the shape computation unit, a link between luminance variations and estimated surface normals is identified by testing participants on simple gratings with several different luminance profiles. This methodology is unconventional but can be justified in the light of past studies of human SFS. Finally, a computational algorithm for SFS containing two distinct operating modes is proposed. This algorithm is broadly consistent with the known psychophysics of human SFS.
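    The ambiguity that the luminance-disambiguation sub-unit must resolve is visible in the standard Lambertian forward model that SFS inverts. The sketch below is illustrative only (it is not the thesis's model, and all function and variable names are assumptions): shading depends solely on the dot product between surface normal and light direction, so mirroring both the surface and the light yields an identical image, meaning luminance alone cannot distinguish convex from concave.

```python
import numpy as np

def lambertian_shading(normals, light):
    """Render shading I = max(0, n . s) for unit surface normals.

    Illustrative forward model only; the thesis's actual model is a
    biologically plausible channel-interaction scheme, not this.
    """
    light = np.asarray(light, dtype=float)
    light = light / np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, None)

# Two tilted facets lit from above-right:
n = np.array([[0.0, 0.6, 0.8], [0.0, -0.6, 0.8]])
s = np.array([0.0, 0.6, 0.8])

# Mirror the surface AND the light about the x-z plane:
flipped_n = n * np.array([1.0, -1.0, 1.0])
flipped_s = s * np.array([1.0, -1.0, 1.0])

# The two scenes produce the same image (the convex/concave ambiguity):
assert np.allclose(lambertian_shading(n, s),
                   lambertian_shading(flipped_n, flipped_s))
```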

    Paralinguistic and Rhetorical Capabilities of Emojis in Marketing Communication

    Consumers and social media marketers have over 3,000 emojis at their fingertips. Despite the popularity of emojis on social media, marketing research on emojis remains limited. The marketing research that does exist focuses primarily on their emotional and reinforcement capabilities, a remnant of the limitations of the emoticon ancestor, and largely ignores the additional paralinguistic and rhetorical potential of emojis. In this dissertation, emojis are explored as a paralanguage, with a particular focus on the creation of meaning on social media (Essay 1), and as a full (Essay 2) and partial (Essay 3) substitute for text in marketing communication. Essay 1 is a conceptual piece that examines the perpetual evolution of emoji meaning on social media through the lens of symbolic interactionism and liquid consumption. Essay 2 looks at how consumers evaluate strings of emojis and shows that emoji-only communication has a negative (positive) effect on brand attitude via processing fluency (fun) when compared to the equivalent textual translation. Essay 3 focuses on emojis as partial substitutes for promotions on social media (e.g., “buy one get one” becomes “buy ☝ get ☝”). This essay demonstrates the positive effect of gesture emojis on promotion evaluation via heightened processing fluency, when compared to object emojis. However, when the message includes haptic imagery, processing fluency and promotion evaluation are similar for gesture and object emojis. Overall, this dissertation explores the paralinguistic and rhetorical potential of emojis in marketing communication and provides insights to marketers that use emojis on social media.

    7th Tübingen Perception Conference: TWK 2004


    Multimodal interactions in virtual environments using eye tracking and gesture control.

    Multimodal interactions provide users with more natural ways to interact with virtual environments than traditional input methods. An emerging approach is gaze-modulated pointing, which enables users to select and manipulate virtual content conveniently by combining gaze with other hand control techniques or pointing devices (in this thesis, mid-air gestures). To establish a synergy between the two modalities and evaluate the affordance of this novel multimodal interaction technique, it is important to understand their behavioural patterns and relationship, as well as any possible perceptual conflicts and interactive ambiguities. More specifically, evidence shows that eye movements lead hand movements, but it remains unclear whether a similar leading relationship holds when interacting with a pointing device. Moreover, because gaze-modulated pointing uses different sensors to track and detect user behaviour, its performance relies on users' perception of the exact spatial mapping between the virtual space and the physical space. This raises an underexplored issue: whether gaze can introduce misalignment of the spatial mapping and thereby lead to user misperception and interactive errors. Furthermore, the accuracy of eye tracking and mid-air gesture control is not yet comparable with that of traditional pointing techniques (e.g., the mouse). This can cause pointing ambiguity when fine-grained interactions are required, such as selecting in a dense virtual scene where proximity and occlusion are prone to occur. This thesis addresses these concerns through experimental studies and theoretical analysis involving paradigm design, the development of interactive prototypes, and user studies for verifying assumptions, making comparisons, and performing evaluations. Substantial data sets were obtained and analysed from each experiment.
The results conform to and extend previous empirical findings that gaze leads pointing-device movements in most cases, both spatially and temporally. The studies confirm that gaze does introduce spatial misperception, and three methods (Scaling, Magnet and Dual-gaze) were proposed and shown to reduce the impact of this perceptual conflict, with Magnet and Dual-gaze delivering better performance than Scaling. In addition, a coarse-to-fine solution is proposed and evaluated to compensate for the degradation introduced by eye-tracking inaccuracy: a gaze cone detects ambiguity, followed by a gaze probe for decluttering. The results show that this solution can enhance interaction accuracy but requires a compromise on efficiency. These findings can be used to inform a more robust multimodal interface design for interactions within virtual environments that are supported by both eye tracking and mid-air gesture control. This work also opens up a technical pathway for the design of future multimodal interaction techniques, which starts from a derivation from natural correlated behavioural patterns, and then considers whether the design of the interaction technique can maintain perceptual constancy and whether any ambiguity will be introduced among the integrated modalities.
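The coarse "gaze cone" stage of the coarse-to-fine solution can be sketched as a simple angular test: every target whose direction from the eye falls within a fixed half-angle of the gaze ray is a candidate, and more than one candidate flags ambiguity. This is a hedged sketch under assumed geometry; the function name, parameter names, and the 5-degree threshold are illustrative, not values from the thesis.

```python
import numpy as np

def targets_in_gaze_cone(gaze_origin, gaze_dir, targets, half_angle_deg):
    """Return indices of targets inside a cone around the gaze ray.

    Illustrative geometry only: a target is a hit when the angle
    between the gaze direction and the origin-to-target direction
    is at most half_angle_deg.
    """
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    rel = np.asarray(targets, dtype=float) - np.asarray(gaze_origin, dtype=float)
    rel = rel / np.linalg.norm(rel, axis=1, keepdims=True)
    cos_limit = np.cos(np.radians(half_angle_deg))
    return np.nonzero(rel @ gaze_dir >= cos_limit)[0]

origin = np.zeros(3)
gaze = np.array([0.0, 0.0, 1.0])
objects = np.array([[0.02, 0.0, 1.0],   # near the gaze ray
                    [0.05, 0.0, 1.0],   # also near: creates ambiguity
                    [1.00, 0.0, 1.0]])  # ~45 degrees off-axis
hits = targets_in_gaze_cone(origin, gaze, objects, half_angle_deg=5.0)
# More than one hit would trigger the finer 'gaze probe' stage
# for decluttering before a selection is committed.
```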