12 research outputs found

    Ranking algorithms for implicit feedback

    No full text
    This report presents novel algorithms that use eye movements as implicit relevance feedback in order to improve search performance. The algorithms are evaluated on the "Transport Rank Five" dataset, which was previously collected in Task 8.3. We demonstrate that a simple linear combination or tensor product of eye-movement and image features can improve retrieval accuracy.
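The two feature-combination schemes the abstract names can be sketched as follows. This is a minimal illustration with hypothetical toy vectors, not the report's actual implementation; feature dimensions and names are assumptions.

```python
import numpy as np

def combine_features(gaze_vec, image_vec, mode="tensor"):
    """Combine an eye-movement feature vector with an image feature
    vector: either stack them (input to a linear combination) or take
    their tensor (outer) product, flattened to a single vector."""
    if mode == "linear":
        return np.concatenate([gaze_vec, image_vec])
    if mode == "tensor":
        return np.outer(gaze_vec, image_vec).ravel()
    raise ValueError(f"unknown mode: {mode}")

# Hypothetical toy data: 3 gaze features, 4 image features per item.
rng = np.random.default_rng(0)
gaze = rng.random(3)
image = rng.random(4)

print(combine_features(gaze, image, "linear").shape)   # (7,)
print(combine_features(gaze, image, "tensor").shape)   # (12,)
```

The tensor product captures every pairwise interaction between a gaze feature and an image feature, at the cost of a quadratically larger representation than concatenation.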

    Morning-Evening Types in Kindergarten, Time-of-Day and Performance on Basic Learning Skills

    Get PDF
    Research on the combined effect of diurnal type and time of day on school/preschool performance is still scarce, probably because until recently there were no non-invasive questionnaires measuring diurnal type in younger children. To our knowledge, studies in the literature on the so-called synchrony effect exist only for adolescents and adults, and no work has been conducted on prepubertal children. This study investigated, in kindergarten, the relationship between morning-evening type, time of day, and performance on a battery of tests covering basic skills involved in preschool learning. The sample comprised 80 children between 5 and 6 years old (M = 5.42, SD = 0.495): 36 morning (45%) and 44 evening (55%) types, classified according to the Children’s Chronotype Questionnaire (Werner et al., 2009; PT version, Couto et al., 2014). The children completed a battery of tests related to kindergarten learning (Vitória de La Cruz, PT version, 2012) at four times in the kindergarten day (9:30-10:00; 11:30-12:00; 13:30-14:00; 15:00-15:30). Analyses indicated: an asynchrony effect on the Constancy of Form test, as M-E types performed better at their non-optimal moments, reaching significance in M-types; time-of-day effects in the Verbal (13:30-14:00 > 11:30-12:00), Quantitative Concepts (15:00-15:30 > 9:30-10:00/11:30-12:00/13:30-14:00) and Position in Space (11:30-12:00 > 13:30-14:00) tests. These results suggest the “synchrony effect” may be a simplistic hypothesis, and better performances are not necessarily associated with early times in the school day. Replication studies are necessary.

    Design and evaluation of advanced electronic cockpit displays for instrument approach information

    Get PDF
    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1991. Includes bibliographical references (leaves 67-68). By Mark G. Mykityshyn. M.S.

    Implicit image annotation by using gaze analysis

    Get PDF
    PhD thesis. Thanks to advances in technology, people are storing a massive amount of visual information in online databases. Today it is normal for a person to take a photo of an event with their smartphone and effortlessly upload it to a host domain. For later quick access, this enormous amount of data needs to be indexed by providing metadata for its content. The challenge is to provide suitable captions for the semantics of the visual content. This thesis investigates the possibility of extracting and using the valuable information contained in human eye movements during interaction with digital visual content, in order to provide information for image annotation implicitly. A non-intrusive framework is developed which is capable of inferring gaze movements to classify the images visited by a user into two classes while the user is searching for a Target Concept (TC) in the images. The first class, called TC+, is formed of the images that contain the TC; the second class, called TC-, is formed of the images that do not. By analysing eye movements only, the developed framework was able to identify over 65% of the images that the subject users were searching for, with an accuracy of over 75%. This thesis shows that the information present in gaze patterns can be employed to improve the machine's judgement of image content by assessing human attention to objects inside virtual environments. Funded by the European Commission Network of Excellence PetaMedia.
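The TC+/TC- decision described above amounts to a binary classifier over per-image gaze features. As a minimal sketch (the thesis does not specify its classifier here, and the feature names and toy values below are assumptions), a nearest-class-mean rule over simple fixation statistics looks like this:

```python
import numpy as np

# Hypothetical gaze features per viewed image:
# [fixation_count, total_dwell_ms, longest_fixation_ms]
# Labels: 1 = TC+ (image contains the target concept), 0 = TC-.
train_X = np.array([
    [12, 3400, 900],
    [10, 3100, 850],
    [ 4,  900, 200],
    [ 5, 1100, 260],
], dtype=float)
train_y = np.array([1, 1, 0, 0])

# Nearest-class-mean classifier: a stand-in for whatever model the
# framework actually uses, chosen here only for brevity.
class_means = {c: train_X[train_y == c].mean(axis=0) for c in (0, 1)}

def classify(x):
    """Assign an image's gaze-feature vector to TC+ (1) or TC- (0)."""
    return min(class_means, key=lambda c: np.linalg.norm(x - class_means[c]))

print(classify(np.array([11, 3300, 880])))  # 1 → TC+
print(classify(np.array([ 5, 1000, 240])))  # 0 → TC-
```

Long, repeated fixations on an image tend to indicate that it drew the searcher's attention, which is why dwell-time statistics are a natural feature choice for this task.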

    Interactive video retrieval using implicit user feedback.

    Get PDF
    PhD thesis. In recent years, the rapid development of digital technologies and the low cost of recording media have led to a great increase in the availability of multimedia content worldwide. This availability creates demand for the development of advanced search engines. Traditionally, manual annotation of video was one of the usual practices to support retrieval. However, the vast amounts of multimedia content make such practices very expensive in terms of human effort. At the same time, the availability of low-cost wearable sensors delivers a plethora of user-machine interaction data. There is therefore an important challenge in exploiting implicit user feedback (such as user navigation patterns and eye movements) gathered during interactive multimedia retrieval sessions, with a view to improving video search engines. In this thesis, we focus on automatically annotating video content by exploiting aggregated implicit feedback from past users, expressed as click-through data and gaze movements. Towards this goal, we conducted interactive video retrieval experiments in order to collect click-through and eye-movement data in not strictly controlled environments. First, we generate semantic relations between multimedia items by proposing a graph representation of aggregated past interaction data, and exploit them to generate recommendations as well as to improve content-based search. Then, we investigate the role of user gaze movements in interactive video retrieval and propose a methodology for inferring user interest by employing support vector machines and gaze-movement-based features. Finally, we propose an automatic video annotation framework, which combines query clustering into topics by constructing gaze-movement-driven random forests and temporally enhanced dominant sets, as well as video shot classification for predicting the relevance of viewed items with respect to a topic.
    The results show that exploiting heterogeneous implicit feedback from past users is of added value for future users of interactive video retrieval systems.
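The graph representation of aggregated click-through data described above can be sketched as a weighted co-click graph: items clicked within the same query session are linked, and edge weights count how often the pair co-occurred. The session log, item names, and weighting scheme below are illustrative assumptions, not the thesis's actual construction.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical click-through log: query session -> video shots clicked.
sessions = {
    "q1": ["v1", "v2", "v3"],
    "q2": ["v2", "v3"],
    "q3": ["v1", "v4"],
}

# Weighted co-click graph: edge weight = number of sessions in which
# two items were clicked together (one simple way to aggregate
# past implicit feedback).
graph = defaultdict(lambda: defaultdict(int))
for clicks in sessions.values():
    for a, b in combinations(clicks, 2):
        graph[a][b] += 1
        graph[b][a] += 1

def recommend(item, k=2):
    """Return the k items most strongly co-clicked with `item`."""
    neighbours = sorted(graph[item].items(), key=lambda kv: -kv[1])
    return [v for v, _ in neighbours[:k]]

print(recommend("v2"))  # ['v3', 'v1']: v3 co-clicked twice, v1 once
```

Recommendations then fall out of the graph directly: the strongest neighbours of a clicked item are candidates likely to be relevant to the same information need.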

    Spatiotemporal enabled Content-based Image Retrieval

    Full text link

    Deriving Locational Reference through Implicit Information Retrieval

    No full text