
    How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation

    The visual systems of animals have to provide information to guide behaviour, and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators, it may be that their vision is optimised for navigation. Here we take a computational approach, asking how the details of the optical array influence the informational content of scenes used in simple view-matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen in many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are treated as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit from processing information from their two eyes independently.
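
    To make the view-matching idea concrete, here is a minimal sketch of a rotational image difference function over a one-dimensional panoramic snapshot. It illustrates the general technique only, not the paper's code; the function name, the 72-pixel resolution, and the mean-squared-error measure are assumptions chosen for the example.

    import numpy as np

    def ridf_heading(stored_view, current_view):
        """Rotational image difference function (rIDF): rotate the current
        panoramic view against a stored view and return the heading (in
        degrees) at which the two views match best."""
        n = len(stored_view)
        diffs = [np.mean((np.roll(current_view, s) - stored_view) ** 2)
                 for s in range(n)]
        return int(np.argmin(diffs)) * 360.0 / n

    # A coarse 72-pixel panorama, i.e. one intensity sample per 5 degrees.
    rng = np.random.default_rng(0)
    stored = rng.random(72)
    current = np.roll(stored, -10)        # same scene after turning 50 degrees
    print(ridf_heading(stored, current))  # -> 50.0

    Lowering the resolution (shrinking n) smooths the difference landscape, which is one way to read the specificity/generalisation trade-off; the independent-sensor result would correspond to running the same matching on slices of the panorama and combining the per-slice heading estimates.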

    Agents, Bookmarks and Clicks: A topical model of Web traffic

    Analysis of aggregate and individual Web traffic has shown that PageRank is a poor model of how people navigate the Web. Using the empirical traffic patterns generated by a thousand users, we characterize several properties of Web traffic that cannot be reproduced by Markovian models. We examine both aggregate statistics capturing collective behavior, such as page and link traffic, and individual statistics, such as entropy and session size. No model currently explains all of these empirical observations simultaneously. We show that all of these traffic patterns can be explained by an agent-based model that takes into account several realistic browsing behaviors. First, agents maintain individual lists of bookmarks (a non-Markovian memory mechanism) that are used as teleportation targets. Second, agents can retreat along visited links, a branching mechanism that also allows us to reproduce behaviors such as the use of a back button and tabbed browsing. Finally, agents are sustained by visiting novel pages of topical interest, with adjacent pages being more topically related to each other than distant ones. This modulates the probability that an agent continues to browse or starts a new session, allowing us to recreate heterogeneous session lengths. The resulting model is capable of reproducing the collective and individual behaviors we observe in the empirical data, reconciling the narrowly focused browsing patterns of individual users with the extreme heterogeneity of aggregate traffic measurements. This result allows us to identify a few salient features that are necessary and sufficient to interpret the browsing patterns observed in our data. In addition to the descriptive and explanatory power of such a model, our results may lead the way to more sophisticated, realistic, and effective ranking and crawling algorithms.
    Comment: 10 pages, 16 figures, 1 table. Long version of a paper to appear in Proceedings of the 21st ACM Conference on Hypertext and Hypermedia.
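
    As a concrete illustration, the following toy sketch implements the three mechanisms described above (bookmark teleportation, back-button retreat, and topicality-sustained sessions) for a single agent. All parameter names and values are placeholders for the example, not the paper's calibrated model.

    import random

    def browse_session(graph, topicality, start, bookmarks,
                       p_back=0.2, p_bookmark=0.1, energy=5.0, cost=1.0):
        """Simulate one browsing session of a single agent.

        graph      : dict mapping each page to its outgoing links
        topicality : dict mapping each page to a relevance in [0, 1];
                     novel, topical pages replenish the agent's energy,
                     so topically coherent regions sustain longer sessions
        bookmarks  : the agent's non-Markovian memory, used as
                     teleportation targets and grown while browsing
        """
        history, visited = [start], {start}
        while energy > 0:
            r = random.random()
            if r < p_back and len(history) > 1:
                history.pop()                            # back button: retreat
            elif r < p_back + p_bookmark and bookmarks:
                history.append(random.choice(bookmarks)) # bookmark teleport
            else:
                links = graph.get(history[-1], [])
                if not links:
                    break                                # dead end ends the session
                history.append(random.choice(links))     # follow a link
            page = history[-1]
            if page not in visited:                      # only novel pages pay off
                visited.add(page)
                energy += topicality.get(page, 0.0)
            energy -= cost                               # every click costs energy
            if random.random() < 0.05:
                bookmarks.append(page)                   # occasionally bookmark
        return history

    # A toy four-page web: a chain of pages, with "c" the most topical.
    graph = {"a": ["b"], "b": ["a", "c"], "c": ["d"], "d": []}
    topicality = {"a": 0.1, "b": 0.2, "c": 0.9, "d": 0.4}
    print(browse_session(graph, topicality, "a", bookmarks=["a"]))

    Because energy is replenished only by novel pages, and in proportion to their topicality, session lengths come out heterogeneous: agents that land in relevant neighbourhoods browse for a long time, while others stop quickly.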

    Contextualizing the blogosphere: A comparison of traditional and novel user interfaces for the web

    In this paper, we investigate how contextual user interfaces affect the blog reading experience. Based on a review of previous research, we argue why and how contextualization may result in (H1) enhanced blog reading experiences. In an eyetracking experiment, we tested three different web-based user interfaces for information spaces: the StarTree interface (by Inxight) and the Focus-Metaphor interface were compared with a standard blog interface. Information tasks were used to evaluate and compare task performance and user satisfaction across these three interfaces. We found that both contextual user interfaces clearly outperformed the traditional blog interface, in terms of both task performance and user satisfaction. © 2007 Laqua, S., Ogbechie, N. and Sasse, M. A.

    Combined Data Structure for Previous- and Next-Smaller-Values

    Let $A$ be a static array storing $n$ elements from a totally ordered set. We present a data structure of optimal size at most $n\log_2(3+2\sqrt{2})+o(n)$ bits that allows us to answer the following queries on $A$ in constant time, without accessing $A$: (1) previous smaller value queries, where given an index $i$, we wish to find the first index to the left of $i$ where $A$ is strictly smaller than at $i$, and (2) next smaller value queries, which search to the right of $i$. As an additional bonus, our data structure also allows us to answer a third kind of query: given indices $i<j$, find the position of the minimum in $A[i..j]$. Our data structure has direct consequences for the space-efficient storage of suffix trees.
    Comment: to appear in Theoretical Computer Science
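
    The succinct representation of roughly $2.54n$ bits is the paper's contribution and is not reproduced here; the sketch below merely illustrates the two query types by precomputing all answers with the standard linear-time stack scans, which costs $\Theta(n\log n)$ bits rather than the paper's optimal space.

    def smaller_values(A):
        """Precompute, for every index i of A:
        psv[i] = largest j < i with A[j] < A[i], or -1 if none exists;
        nsv[i] = smallest j > i with A[j] < A[i], or len(A) if none exists.
        """
        n = len(A)
        psv, nsv = [-1] * n, [n] * n
        stack = []
        for i in range(n):                  # left-to-right: previous smaller
            while stack and A[stack[-1]] >= A[i]:
                stack.pop()
            if stack:
                psv[i] = stack[-1]
            stack.append(i)
        stack = []
        for i in range(n - 1, -1, -1):      # right-to-left: next smaller
            while stack and A[stack[-1]] >= A[i]:
                stack.pop()
            if stack:
                nsv[i] = stack[-1]
            stack.append(i)
        return psv, nsv

    A = [3, 1, 4, 1, 5, 9, 2, 6]
    psv, nsv = smaller_values(A)
    print(psv)  # [-1, -1, 1, -1, 3, 4, 3, 6]
    print(nsv)  # [1, 8, 3, 8, 6, 6, 8, 8]

    The third query type is closely related: a position $k$ in $[i..j]$ holds a minimum of $A[i..j]$ exactly when $psv[k] < i$ and $nsv[k] > j$, since no strictly smaller element then occurs between $i$ and $j$.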

    Intelligent Self-Repairable Web Wrappers

    The amount of information available on the Web grows at an incredibly high rate. Systems and procedures devised to extract these data from Web sources already exist, and different approaches and techniques have been investigated over recent years. On the one hand, reliable solutions should provide robust Web data mining algorithms that can automatically cope with possible malfunctioning or failures. On the other, the literature lacks solutions for the maintenance of these systems. Procedures that extract Web data may be tightly coupled to the structure of the data source itself; thus, malfunctioning or the acquisition of corrupted data can be caused, for example, by structural modifications of data sources made by their owners. Nowadays, verification of data integrity and maintenance are mostly managed manually in order to ensure that these systems work correctly and reliably. In this paper we propose a novel approach to creating procedures that extract data from Web sources -- the so-called Web wrappers -- which can cope with malfunctioning caused by modifications of the structure of the data source and can automatically repair themselves.
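
    To give a flavour of the self-repair idea, here is a minimal sketch in which a wrapper whose stored CSS selector stops matching after a layout change tries to relocate the field by searching for a previously extracted sample value. The use of BeautifulSoup, the selector relearning heuristic, and all names are illustrative assumptions, far simpler than the adaptive algorithms the paper proposes.

    from bs4 import BeautifulSoup

    class SelfRepairingWrapper:
        """Extract one field from a page; if the stored selector stops
        matching (e.g. the site layout changed), attempt a repair by
        relocating a known-good sample value and relearning the selector."""

        def __init__(self, selector, sample_value):
            self.selector = selector      # where the field used to live
            self.sample = sample_value    # a previously extracted value

        def extract(self, html):
            soup = BeautifulSoup(html, "html.parser")
            node = soup.select_one(self.selector)
            if node is not None:
                return node.get_text(strip=True)
            # Repair: look for a leaf element whose text equals the last
            # known-good value, and relearn a selector from its tag/class.
            for el in soup.find_all(True):
                if el.find(True) is None and el.get_text(strip=True) == self.sample:
                    classes = ".".join(el.get("class", []))
                    self.selector = el.name + ("." + classes if classes else "")
                    return el.get_text(strip=True)
            return None  # repair failed; flag for manual maintenance

    w = SelfRepairingWrapper("span.price", "19.99")
    print(w.extract('<div><em class="cost">19.99</em></div>'))  # -> 19.99
    print(w.selector)                                           # -> em.cost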