
    The Impact of Modes of Mediation on the Web Retrieval Process

    Capturing the Visitor Profile for a Personalized Mobile Museum Experience: an Indirect Approach

    An increasing number of museums and cultural institutions around the world use personalized, mostly mobile, museum guides to enhance visitor experiences. However, since a typical museum visit may last only a few minutes and visitors might visit only once, the personalization process needs to be quick and efficient to keep the visitor engaged. In this paper we investigate the use of indirect profiling methods, through a visitor quiz, in order to provide the visitor with specific museum content. Building on our experience from a first study aimed at the design, implementation and user testing of a short quiz version at the Acropolis Museum, a second, parallel study was devised. This paper introduces this research, which collected and analyzed data from two environments: the Acropolis Museum and social media (i.e. Facebook). Key profiling issues are identified, results are presented, and guidelines towards a generalized approach to the profiling needs of cultural institutions are discussed.
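
    To make the indirect-profiling idea concrete, here is a minimal sketch that maps quiz answers to a visitor profile via additive scoring. The profile names, questions and weights are illustrative assumptions, not taken from the paper.

    ```python
    # Sketch: indirect visitor profiling from quiz answers via additive
    # scoring. Profiles, questions and weights are illustrative only.

    # Each (question, answer) pair adds weight to one or more candidate profiles.
    ANSWER_WEIGHTS = {
        ("favourite_exhibit", "sculptures"): {"art_lover": 2},
        ("favourite_exhibit", "inscriptions"): {"historian": 2},
        ("visit_goal", "learn"): {"art_lover": 1, "historian": 1},
        ("visit_goal", "relax"): {"casual": 2},
    }

    def profile_from_quiz(answers):
        """Aggregate answer weights and return the highest-scoring profile."""
        scores = {}
        for question, answer in answers.items():
            for profile, w in ANSWER_WEIGHTS.get((question, answer), {}).items():
                scores[profile] = scores.get(profile, 0) + w
        # Fall back to a generic profile when the quiz is inconclusive.
        return max(scores, key=scores.get) if scores else "casual"

    print(profile_from_quiz({"favourite_exhibit": "sculptures", "visit_goal": "learn"}))
    # -> art_lover (score 3 vs. historian's 1)
    ```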

    Contextualised Browsing in a Digital Library's Living Lab

    Contextualisation has proven to be effective in tailoring search results towards the user's information need. While this is true for basic query search, the use of contextual session information during exploratory search, especially at the level of browsing, has so far been underexposed in research. In this paper, we present two approaches that contextualise browsing at the level of structured metadata in a Digital Library (DL): (1) one variant is based on document similarity, and (2) one variant utilises implicit session information, such as queries and the different document metadata encountered during a user's session. We evaluate our approaches in a living lab environment using a DL in the social sciences and compare them against a non-contextualised approach. Over a period of more than three months we analysed 47,444 unique retrieval sessions that contain search activities at the level of browsing. Our results show that contextualising browsing significantly outperforms our baseline in terms of the position of the first clicked item in the result set. The mean rank of the first clicked document (measured as mean first relevant, MFR) was 4.52 for the non-contextualised ranking, compared to 3.04 when re-ranking the result lists by similarity to the previously viewed document. Furthermore, we observed that both contextual approaches show a noticeably higher click-through rate: a contextualisation based on document similarity leads to almost twice as many document views as the non-contextualised ranking.

    Comment: 10 pages, 2 figures, paper accepted at JCDL 201
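
    The similarity-based variant can be illustrated with a small sketch: re-rank the candidate list by cosine similarity between each document and the previously viewed one. The bag-of-words representation below is an assumption; the paper operates on structured DL metadata.

    ```python
    # Sketch: contextualised browsing as re-ranking by similarity to the
    # previously viewed document (cosine over a bag-of-words of metadata).
    from collections import Counter
    from math import sqrt

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def contextual_rerank(results, previously_viewed):
        """Order candidate documents by similarity to the last viewed one."""
        prev = Counter(previously_viewed.split())
        return sorted(results, key=lambda d: cosine(Counter(d.split()), prev),
                      reverse=True)

    viewed = "panel survey methodology social sciences"
    candidates = ["opinion mining on twitter", "panel survey methodology report",
                  "mobile museum guides"]
    print(contextual_rerank(candidates, viewed))  # survey report ranks first
    ```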

    Aligning the topic of FCA with existing module learning outcomes

    Although Formal Concept Analysis (FCA) is worthy of study on computing courses, it is not always possible or practical to dedicate a whole module to it. It may, however, fit into an existing module as a topic, but this requires careful design of teaching and assessment activities to properly align it with the intended learning outcomes of the module. This paper describes and evaluates a three-year project to align the teaching and assessment of FCA with the learning outcomes of a final-year undergraduate Smart Applications module at Sheffield Hallam University. Biggs' constructive alignment was used, incorporating an adapted version of Yin's case study research method, in an iterative process that progressively modified teaching and assessment activities to align them more closely with the prescribed learning outcomes. The process involved examining the conclusions students drew from carrying out FCA case study assignments, in order to draw cross-case conclusions about the learning outcomes actually achieved and how they deviated from the prescribed ones. These cross-case conclusions fed back into the design of learning and assessment activities for the next delivery of the module. After three cycles, the learning outcomes achieved closely matched the prescribed learning outcomes of the module.

    Automatic domain ontology extraction for context-sensitive opinion mining

    Automated analysis of the sentiments expressed in online consumer feedback can facilitate both organizations’ business strategy development and individual consumers’ comparison shopping. Nevertheless, existing opinion mining methods either adopt a context-free sentiment classification approach or rely on a large number of manually annotated training examples to perform context-sensitive sentiment classification. Guided by the design science research methodology, we illustrate the design, development, and evaluation of a novel fuzzy domain ontology-based context-sensitive opinion mining system. Our novel ontology extraction mechanism, underpinned by a variant of Kullback-Leibler divergence, can automatically acquire contextual sentiment knowledge across various product domains to improve the sentiment analysis process. Evaluated on a benchmark dataset and real consumer reviews collected from Amazon.com, our system shows remarkable performance improvement over the context-free baseline.
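
    As an illustration of divergence-based ontology term extraction, the sketch below scores candidate terms by their pointwise contribution to the KL divergence between a product-domain corpus and a background corpus. The plain pointwise form and the Laplace smoothing are assumptions; the paper uses its own variant of Kullback-Leibler divergence.

    ```python
    # Sketch: score candidate ontology terms by their pointwise contribution
    # to KL(domain || background); high scores mark domain-indicative terms.
    from collections import Counter
    from math import log

    def term_scores(domain_tokens, background_tokens, alpha=1.0):
        """p_d(t) * log(p_d(t) / p_b(t)) per term, with Laplace smoothing."""
        d, b = Counter(domain_tokens), Counter(background_tokens)
        vocab = set(d) | set(b)
        nd = sum(d.values()) + alpha * len(vocab)
        nb = sum(b.values()) + alpha * len(vocab)
        scores = {}
        for t in vocab:
            pd = (d[t] + alpha) / nd
            pb = (b[t] + alpha) / nb
            scores[t] = pd * log(pd / pb)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    domain = "battery battery screen camera battery lens".split()
    background = "price shipping battery service price".split()
    print(term_scores(domain, background)[:3])  # top domain-indicative terms
    ```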

    AUC Optimisation and Collaborative Filtering

    In recommendation systems, one is interested in the ranking of the predicted items, as opposed to other losses such as the mean squared error. Although a variety of ways to evaluate rankings exist in the literature, here we focus on the Area Under the ROC Curve (AUC), as it is widely used and has a strong theoretical underpinning. In practical recommendation, only items at the top of the ranked list are presented to the users. With this in mind, we propose a class of objective functions over matrix factorisations which primarily represent a smooth surrogate for the real AUC, and in a special case we show how to prioritise the top of the list. The objectives are differentiable and optimised through a carefully designed stochastic gradient-descent-based algorithm which scales linearly with the size of the data. In the special case of squared loss we show how to improve the computational complexity by leveraging previously computed measures. To understand the underlying matrix factorisation approaches theoretically, we study both the consistency of the loss functions with respect to AUC and generalisation using Rademacher theory. The resulting generalisation analysis gives strong motivation for the optimisation under study. Finally, we provide computational results on the efficacy of the proposed method using synthetic and real data.
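
    A minimal sketch of optimising a smooth AUC surrogate over a matrix factorisation with stochastic gradient descent is given below. The logistic surrogate on the positive-negative score difference is one standard choice and an assumption here; it does not reproduce the paper's exact family of objectives or its top-of-list weighting.

    ```python
    # Sketch: SGD on a smooth AUC surrogate over a matrix factorisation.
    # Maximises log sigmoid(u . (v_p - v_q)) for positive item p, negative q.
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, k = 50, 100, 8
    U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
    V = rng.normal(scale=0.1, size=(n_items, k))   # item factors
    pos = {u: rng.choice(n_items, 5, replace=False) for u in range(n_users)}

    def sgd_step(u, p, q, lr=0.05, reg=0.01):
        """One ascent step on a (user, positive, negative) triple."""
        x = U[u] @ (V[p] - V[q])                   # score difference
        g = 1.0 / (1.0 + np.exp(x))                # d/dx of log sigmoid(x)
        du = U[u].copy()
        U[u] += lr * (g * (V[p] - V[q]) - reg * U[u])
        V[p] += lr * (g * du - reg * V[p])
        V[q] -= lr * (g * du + reg * V[q])

    for _ in range(10000):                         # uniform triple sampling
        u = int(rng.integers(n_users))
        p = rng.choice(pos[u])
        q = int(rng.integers(n_items))
        if q not in pos[u]:
            sgd_step(u, p, q)
    ```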

    Design implications for task-specific search utilities for retrieval and re-engineering of code

    The importance of information retrieval systems is unquestionable in modern society, and both individuals and enterprises recognise the benefits of being able to find information effectively. Current code-focused information retrieval systems such as Google Code Search, Codeplex or Koders produce results based on specific keywords. However, these systems do not take into account developers’ context, such as development language, technology framework, goal of the project, project complexity and the developer’s domain expertise. They also impose an additional cognitive burden on users by requiring them to switch between different interfaces and click through to find the relevant code. Hence, they are not used by software developers. In this paper, we discuss how software engineers interact with information and general-purpose information retrieval systems (e.g. Google, Yahoo!) and investigate to what extent domain-specific search and recommendation utilities can be developed to support their work-related activities. To investigate this, we conducted a user study and found that software engineers followed many identifiable and repeatable work tasks and behaviours. These behaviours can be used to develop implicit relevance feedback-based systems based on the observed retention actions. Moreover, we discuss the implications for the development of task-specific search and collaborative recommendation utilities embedded within the standard Google search engine and Microsoft IntelliSense for retrieval and re-engineering of code. Based on implicit relevance feedback, we have implemented a prototype of the proposed collaborative recommendation system and evaluated it in a controlled environment simulating the real-world situation of professional software engineers. The evaluation achieved promising initial results on the precision and recall performance of the system.
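
    A minimal sketch of the implicit relevance feedback idea: observed retention actions (e.g. copying a snippet, bookmarking a result) accumulate as evidence that boosts a result for the same query later. The action names and weights below are hypothetical, not the paper's instrumented set.

    ```python
    # Sketch: implicit relevance feedback from retention actions. Stronger
    # actions contribute more evidence; results are re-ranked accordingly.
    from collections import defaultdict

    ACTION_WEIGHT = {"copy_snippet": 3.0, "bookmark": 2.0, "long_dwell": 1.0}
    feedback = defaultdict(float)        # (query, url) -> accumulated evidence

    def record_action(query, url, action):
        feedback[(query, url)] += ACTION_WEIGHT.get(action, 0.0)

    def rerank(query, results):
        """Order baseline results by accumulated implicit feedback."""
        return sorted(results, key=lambda url: feedback[(query, url)], reverse=True)

    record_action("parse csv java", "https://example.org/a", "copy_snippet")
    record_action("parse csv java", "https://example.org/b", "long_dwell")
    print(rerank("parse csv java", ["https://example.org/b", "https://example.org/a"]))
    ```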

    A Survey of Volunteered Open Geo-Knowledge Bases in the Semantic Web

    Over the past decade, rapid advances in web technologies, coupled with innovative models of spatial data collection and consumption, have generated a robust growth in geo-referenced information, resulting in spatial information overload. Increasing 'geographic intelligence' in traditional text-based information retrieval has become a prominent approach to respond to this issue and to fulfill users' spatial information needs. Numerous efforts in the Semantic Geospatial Web, Volunteered Geographic Information (VGI), and the Linking Open Data initiative have converged in a constellation of open knowledge bases, freely available online. In this article, we survey these open knowledge bases, focusing on their geospatial dimension. Particular attention is devoted to the crucial issue of the quality of geo-knowledge bases, as well as of crowdsourced data. A new knowledge base, the OpenStreetMap Semantic Network, is outlined as our contribution to this area. Research directions in information integration and Geographic Information Retrieval (GIR) are then reviewed, with a critical discussion of their current limitations and future prospects.

    A Proximity Indicator for e-Government: The Smallest Number of Clicks

    In order to develop an indicator measuring the proximity of e-Government and its different generic functions, we analysed a set of studies conducted in the United States and in Europe. We defined 21 elements of measure, grouped into six dimensions of proximity, and surveyed the official websites of the French-speaking Swiss cantons in 2002 and 2003. We observed that the more technical aspects, such as navigability, were well developed, whereas more “socio-political” aspects (data protection, access for the handicapped) and organisational issues were still in the early stages. To conclude this work we give some hints for the application of a methodology based on proximity measurement.

    Keywords: e-Government; portals; evaluation; proximity; 3-clicks rule; usability
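
    The smallest-number-of-clicks notion can be sketched as a shortest-path computation over a site's link graph: breadth-first search from the home page to the page offering a given function. The toy site graph below is illustrative, not drawn from the surveyed cantonal websites.

    ```python
    # Sketch: the smallest number of clicks as a shortest path in the
    # site's directed link graph, found by breadth-first search.
    from collections import deque

    def clicks_to(site, start, target):
        """Minimal number of clicks from start to target, or None."""
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            page, depth = queue.popleft()
            if page == target:
                return depth
            for nxt in site.get(page, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
        return None

    site = {"home": ["services", "news"], "services": ["tax_form"], "news": []}
    print(clicks_to(site, "home", "tax_form"))  # -> 2, within the 3-clicks rule
    ```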