
    Analytical tools of strategic marketing management

    This thesis deals with three topics: Bayesian tracking, shape matching and visual servoing. These topics are bound together by the goal of visual control of robotic systems. The work leading to this thesis was conducted within two European projects, COSPAL and DIPLECS, both with the stated goal of developing artificial cognitive systems; the ultimate goal of my research is thus to contribute to the development of artificial cognitive systems. The contribution to the field of Bayesian tracking is a framework called Channel Based Tracking (CBT). CBT has been shown to perform competitively with particle-filter-based approaches, with the added advantage of not having to specify the observation or system models. CBT uses channel representation and correspondence-free learning to acquire the observation and system models from unordered sets of observations and states. We demonstrate how this has been used for tracking cars in the presence of clutter and noise. The shape matching part of this thesis presents a new way to match Fourier Descriptors (FDs). We show that it is possible to take rotation and index shift into account while matching FDs, without explicitly de-rotating the contours or neglecting the phase. We also propose using FDs to match locally extracted shapes, in contrast to the traditional use of FDs to match the global outline of an object. In this context we have evaluated our matching scheme against the popular Affine Invariant FDs and shown that our method is clearly superior. In the visual servoing part we present a visual servoing method based on an action-precedes-perception approach. By applying random actions to a system, e.g. a robotic arm, it is possible to learn a mapping between action space and percept space. In experiments we show that it is possible to achieve high-precision positioning of a robotic arm without knowing beforehand what the robotic arm looks like or how it is controlled.
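    The FD matching idea the abstract describes can be illustrated with standard Fourier properties: rotating a contour multiplies every descriptor by a common phase, and shifting the starting index multiplies coefficient m by a linear phase in m, so correlating two descriptor sets over all index shifts and taking the magnitude cancels both effects without discarding the phase. The sketch below is a minimal illustration of that principle, not the thesis's actual matching scheme; function names and the normalisation choices are illustrative assumptions.

```python
import numpy as np

def fourier_descriptors(contour):
    # contour: (N, 2) array of boundary points -> complex signal -> FFT.
    z = contour[:, 0] + 1j * contour[:, 1]
    c = np.fft.fft(z)
    c[0] = 0.0  # drop the DC term: translation invariance
    return c

def fd_match_score(c, d):
    # Rotation multiplies every FD by a common phase e^{i*theta}; a shift
    # of the contour start index multiplies coefficient m by a linear
    # phase in m. Correlating over all index shifts k (one inverse FFT)
    # and taking the magnitude removes both effects at once.
    c = c / np.linalg.norm(c)
    d = d / np.linalg.norm(d)
    corr = np.fft.ifft(c * np.conj(d)) * len(c)
    return float(np.max(np.abs(corr)))  # 1.0 for a perfect match
```

    A rotated, translated copy of a contour with a different starting index then scores near 1.0 against the original, while a genuinely different shape scores lower.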

    Language learning in context: an investigation of the processing and learning of new linguistic information.

    Naturalistic language learning is contextually grounded. When people learn their first (L1) and often their second (L2) language, they do so in various contexts. In this dissertation I examine the effect of various contexts on language development. Part 1 describes the effects of textual, linguistic context in reading. I employed an eye-tracking and a think-aloud experiment to examine how native and non-native speakers of English process new words presented in full sentences. The results from the mixed-methods approach indicate similar processes of semantic integration for both speaker groups, with the L2 group putting greater intentionality and effort into the task and engaging in deeper processing. Subsequently, I operationalized context as additional information present in the learning environment, linguistic or visual. In two sets of related studies, I used self-paced reading (Part 3) and eye-tracking (Part 4) to track the learning process of L2 morphosyntax, as well as a series of offline receptive and productive tasks to evaluate learning outcomes. The results suggest a facilitative role for contextual information, both linguistic (L1 translations) and visual (images depicting sentence content). When no additional support was offered, learning was significantly diminished. The multi-method approach allowed me to operationalize 'learning' both as a process and as a product and to measure the various nuances of the construct. Results show how reading/reaction times gradually decrease as a result of learning; subsequent receptive and productive tasks reveal high accuracy and confirm that the L2 morphosyntax had been learned. Taken together, the results of this dissertation project underscore the importance of context for language learning and show that when we manipulate contextual information, we alter both the learning process and its outcomes.

    Deep learning investigation for chess player attention prediction using eye-tracking and game data

    This article reports on an investigation of the use of convolutional neural networks to predict the visual attention of chess players. The visual attention model described in this article was created to generate saliency maps that capture hierarchical and spatial features of the chessboard, in order to predict the fixation probability for individual pixels. Using a skip-layer autoencoder architecture with a unified decoder, we are able to use multiscale features to predict the saliency of parts of the board at different scales, capturing multiple relations between pieces. We used scan-path and fixation data from players engaged in solving chess problems to compute 6600 saliency maps associated with the corresponding chess piece configurations. This corpus is complemented with synthetically generated data from actual games gathered from an online chess platform. Experiments using both scan-paths from chess players and the CAT2000 saliency dataset of natural images highlight several results. Deep features pretrained on natural images were found to be helpful in training visual attention prediction for chess. The proposed neural network architecture is able to generate meaningful saliency maps on unseen chess configurations, with good scores on standard metrics. This work provides a baseline for future work on visual attention prediction in similar contexts.
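    Ground-truth saliency maps of the kind described above are commonly built by accumulating fixation points on a pixel grid and smoothing with a Gaussian. The sketch below follows that common construction under the assumption that the article does something similar; the function name and parameters are illustrative, not the authors' code.

```python
import numpy as np

def fixation_saliency_map(fixations, shape, sigma=8.0):
    """Accumulate (row, col) fixation points on a grid and blur with a
    separable Gaussian to obtain a dense saliency map in [0, 1]."""
    h, w = shape
    grid = np.zeros((h, w))
    for y, x in fixations:
        grid[int(y), int(x)] += 1.0
    # Separable Gaussian blur: 1-D convolution along rows, then columns.
    radius = int(3 * sigma)
    k = np.exp(-np.arange(-radius, radius + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, grid)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred / blurred.max() if blurred.max() > 0 else blurred
```

    A predicted map can then be scored against such ground truth with the standard saliency metrics (e.g. AUC or correlation) that the article's "good scores on standard metrics" refers to.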