
    NITELIGHT: A Graphical Tool for Semantic Query Construction

    Query formulation is a key aspect of information retrieval, contributing to both the efficiency and usability of many semantic applications. A number of query languages, such as SPARQL, have been developed for the Semantic Web; however, there are, as yet, few tools that support end users in creating and editing semantic queries. In this paper we introduce NITELIGHT, a graphical tool for semantic query construction based on the SPARQL query language specification. The tool supports end users by providing a set of graphical notations that represent semantic query language constructs; together, these notations form a visual query language counterpart to SPARQL that we call vSPARQL. NITELIGHT also provides an interactive graphical editing environment that combines ontology navigation capabilities with graphical query visualization techniques. This paper describes the functionality and user interaction features of the NITELIGHT tool based on our work to date. We also present details of the vSPARQL constructs used to support the graphical representation of SPARQL queries.
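    As a rough illustration of the kind of query such a graphical editor ultimately produces, the sketch below runs a hand-written SPARQL query with Python's rdflib; it is not NITELIGHT or vSPARQL itself, and the data file name is a placeholder.

```python
# Illustrative only: a SPARQL query of the kind a graphical editor such as
# NITELIGHT composes, executed here with rdflib (not part of the paper).
from rdflib import Graph

# Hypothetical FOAF data file; any RDF graph with foaf:name / foaf:knows works.
g = Graph()
g.parse("people.ttl", format="turtle")

query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?friendName
WHERE {
    ?person foaf:name ?name .
    ?person foaf:knows ?friend .
    ?friend foaf:name ?friendName .
}
"""

for row in g.query(query):
    print(row.name, "knows", row.friendName)
```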

    Support for decision making on the World Wide Web : a thesis presented in partial fulfillment of the requirements for the degree of Masters of Information Science in Computer Science at Massey University, Turitea Campus, Palmerston North, New Zealand

    This research explores tool support for information retrieval and the comparison of multiple pieces of information on the web. The study identifies the main goals users have in mind when using the Internet in this way, and the activities users complete to fulfill those goals. The main goals web users have are information search, entertainment, and consumer-to-business transactions. The tasks users perform on the web to fulfill their goals include collecting, comparing, filtering, and processing web information. These tasks form a decision-making cycle on the web; depending on the goal at hand, users will not necessarily undertake all the tasks or their sub-steps in any particular order. Industry web support tools were analyzed to find out how effectively they support a common user's activities. These tools include web browsers (the Netcaptor browser and Internet Explorer), editing tools (Notes Pilot and Edit Pad), plug-ins, research tools, and window management systems. Both browsers are poor at arranging multiple windows and excellent at opening web sites. Internet Explorer proved better than the Netcaptor browser at a number of activities, including selecting web content, copying web text and images, and pasting web content into editing documents. When used with either browser, Microsoft Windows is good for arranging windows but poor at switching window views, scrolling windows, and resizing and re-positioning windows. Both editing tools are poor at re-positioning and formatting web content taken from an HTML environment into a text-based environment. The Notes Pilot tool is also poor at making calculations and at returning to the browser, but excellent at saving work and retrieving old files. The Edit Pad tool is successful at all other activities except re-positioning and formatting web content. Tool support is therefore lacking, or current web-based tools support the user poorly, in a number of areas, and the need for an integrated web support tool has been identified. The functional and non-functional requirements were specified, and the tool was designed, implemented, and evaluated by users. The users were asked to complete a questionnaire and conduct a think-aloud walk-through session while completing three tasks with the integrated web support tool. The sessions were observed and the results recorded. Most of the users strongly agreed that the tool would be useful for personal or academic activities. The users recognized the tool's novelty and efficiency, and indicated an overall level of satisfaction. They were less satisfied with referring back to web sites, getting the software to do exactly what they wanted, and arranging the work space to meet their needs. Changes were made to the tool.

    Moi Helsinki. Personalised user interface solutions for generative data

    Today, online search is the most popular way to access large amounts of information; at the same time, browsing through too much data can lead to information overload. Helping users feel recognized as individuals, and navigating them appropriately through the data, is an objective designers should take up. In the theoretical background of this work, I bring attention to techniques for working with generative data and its contextualisation. I study historical and philosophical aspects of information perception, as well as the modern experience of working with online search engines such as Google. I refer to information architecture principles that can adapt user interface designs to generative content. In the age of big data and information pollution, a designer’s objective could be to employ technology to make data more human-centred. Along with the theoretical writing, this thesis also consists of project work. Moi Helsinki is a location-based event calendar for the Helsinki area. The calendar gathers information about events retrieved from social media APIs and showcases the aggregated data in a single feed. Moi Helsinki reshapes the data output with the help of interface personalisation, showing the most relevant results at the top. It uses a visitor’s current geographical location to tailor search results by proximity. The options provided to website visitors within the UI are extended with further customisation, which can be enabled by adjusting the data output beyond just a user’s location. Setting aside certain distinctive features of event calendars, Moi Helsinki chooses another path to explore. Being more of a mediator than a proprietor, Moi Helsinki offers a new way to reshape data and communicate human-centred values through the user interface.
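    The proximity-based ranking the abstract describes can be pictured with a small sketch; this is not the Moi Helsinki implementation, and the event structure and coordinates below are assumptions.

```python
# Minimal sketch (not the Moi Helsinki code): ranking aggregated events by
# distance from the visitor's current location, as the abstract describes.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def rank_by_proximity(events, user_lat, user_lon):
    """Sort events (dicts with 'lat'/'lon' keys, assumed here) nearest first."""
    return sorted(
        events,
        key=lambda e: haversine_km(user_lat, user_lon, e["lat"], e["lon"]),
    )

# Example feed with hypothetical coordinates around Helsinki.
events = [
    {"title": "Concert", "lat": 60.1699, "lon": 24.9384},
    {"title": "Art fair", "lat": 60.2055, "lon": 24.6559},
]
print(rank_by_proximity(events, 60.17, 24.94)[0]["title"])
```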

    RAPID WEBGIS DEVELOPMENT FOR EMERGENCY MANAGEMENT

    The use of spatial data during emergency response and management helps to make faster and better decisions. Spatial data should also be as up to date as possible and easy to access, and the most efficient way to meet this challenge of rapid, up-to-date data sharing is widely considered to be the internet, where the field of web mapping is constantly evolving. ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action) is a non-profit association founded by Politecnico di Torino and SITI (Higher Institute for the Environmental Systems) as a joint project with the WFP (World Food Programme). The collaboration with the WFP drives several projects related to Early Warning Systems (e.g. flood and drought monitoring) and Early Impact Systems (e.g. rapid mapping and assessment through remote sensing systems). The Web GIS team has built, and is continuously improving, a complex architecture based entirely on Open Source tools. This architecture is composed of three main areas: the database environment, the server-side logic, and the client-side logic. Each of them is implemented following the MVC (Model View Controller) pattern, i.e. the separation of the different logic layers (database interaction, business logic, and presentation). The MVC architecture makes it easy to quickly build a Web GIS application for data viewing and exploration; in case of emergency, data publication can be performed almost immediately, as soon as data production is completed. The server-side system is based on the Python language and the Django web development framework, while the client side relies on OpenLayers, GeoExt, and Ext.js, which manage data retrieval and the user interface. The MVC pattern applied to JavaScript keeps the interface generation and data retrieval logic separate from the general application configuration, so the server-side environment can take care of generating the configuration file. The web application building process is data driven and can be considered a view of the current architecture composed of data and data interaction tools. Once completely automated, the Web GIS application building process can be performed directly by the final user, who can customize data layers and controls to interact with the data.
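    A minimal sketch of the data-driven configuration step described above, assuming a hypothetical Django model for registered layers (this is not the ITHACA code or schema): the server side generates the configuration that the OpenLayers/GeoExt client consumes.

```python
# Sketch only: a Django view that emits the layer configuration for the
# JavaScript client. The app and model names below are assumptions, not the
# ITHACA schema.
from django.http import JsonResponse
from myapp.models import Layer  # hypothetical model with name, wms_url, visible fields

def webgis_config(request):
    """Return the layer configuration consumed by the client-side map viewer."""
    layers = [
        {"name": layer.name, "url": layer.wms_url, "visible": layer.visible}
        for layer in Layer.objects.all()
    ]
    return JsonResponse({"layers": layers})
```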

    Where are your Manners? Sharing Best Community Practices in the Web 2.0

    The Web 2.0 fosters the creation of communities by offering users a wide array of social software tools. While the success of these tools rests on their ability to support different interaction patterns among users while imposing as few limitations as possible, the communities they support are not free of rules (just think of the posting rules in a community forum or the editing rules in a thematic wiki). In this paper we propose a framework for sharing best community practices in the form of a (potentially rule-based) annotation layer that can be integrated with existing Web 2.0 community tools (with a specific focus on wikis). This solution is characterized by minimal intrusiveness and fits the open spirit of the Web 2.0 by providing users with behavioral hints rather than enforcing strict adherence to a set of rules. Comment: ACM Symposium on Applied Computing, Honolulu, United States (2009).
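    One way to picture such an annotation layer is a rule set that yields hints instead of rejections. The sketch below is illustrative only; the rule contents and the edit structure are assumptions, not the paper's framework.

```python
# Illustrative sketch: check a wiki edit against community practices and
# return behavioural hints, never blocking the edit itself.
def check_edit(edit, rules):
    """Return a list of hints for rules the edit does not follow."""
    hints = []
    for rule in rules:
        if not rule["check"](edit):
            hints.append(rule["hint"])
    return hints

# Hypothetical community rules for a thematic wiki.
community_rules = [
    {"check": lambda e: bool(e.get("summary")),
     "hint": "Consider adding an edit summary."},
    {"check": lambda e: len(e.get("text", "")) < 10000,
     "hint": "Long additions are usually split into sections on this wiki."},
]

print(check_edit({"text": "New paragraph...", "summary": ""}, community_rules))
```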

    Web User Profile Using XUL and Information Retrieval Techniques

    This paper presents the importance of the user profile in information retrieval, information filtering, and recommender systems using explicit and implicit feedback. A Firefox extension (based on XUL) used to gather the data needed to infer a web user profile is presented, together with an example file of collected data. We also present an algorithm for creating and updating the user profile while keeping track of a fixed number k of subjects of interest.
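    A minimal sketch of a top-k interest profile update of the kind the abstract outlines (not the paper's algorithm; the decay factor and weighting are illustrative assumptions):

```python
# Sketch: maintain a user profile as weighted subjects of interest and keep
# only the k strongest after each batch of implicit feedback.
from collections import Counter

def update_profile(profile, observed_subjects, k=10, decay=0.9):
    """Decay old interests, add newly observed subjects, keep the top k."""
    decayed = Counter({s: w * decay for s, w in profile.items()})
    decayed.update(observed_subjects)  # implicit feedback: +1 per observation
    return Counter(dict(decayed.most_common(k)))

profile = Counter()
profile = update_profile(profile, ["semantic web", "python", "semantic web"])
print(profile.most_common(3))
```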

    Quantifying Biases in Online Information Exposure

    Our consumption of online information is mediated by filtering, ranking, and recommendation algorithms that introduce unintentional biases as they attempt to deliver relevant and engaging content. It has been suggested that our reliance on online technologies such as search engines and social media may limit exposure to diverse points of view and make us vulnerable to manipulation by disinformation. In this paper, we mine a massive dataset of Web traffic to quantify two kinds of bias: (i) homogeneity bias, which is the tendency to consume content from a narrow set of information sources, and (ii) popularity bias, which is the selective exposure to content from top sites. Our analysis reveals different bias levels across several widely used Web platforms. Search exposes users to a diverse set of sources, while social media traffic tends to exhibit high popularity and homogeneity bias. When we focus our analysis on traffic to news sites, we find higher levels of popularity bias, with smaller differences across applications. Overall, our results quantify the extent to which our choices of online systems confine us inside "social bubbles." Comment: 25 pages, 10 figures, to appear in the Journal of the Association for Information Science and Technology (JASIST).
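    As a rough illustration of how such measures can be operationalized, the sketch below uses normalized source entropy for homogeneity bias and a top-site traffic share for popularity bias; these are stand-ins, not the paper's exact definitions, and the example click data is hypothetical.

```python
# Illustrative bias measures over a user's stream of visited sources.
from collections import Counter
from math import log2

def homogeneity_bias(visited_sources):
    """1 minus normalized entropy: 0 = diverse sources, 1 = a single source."""
    counts = Counter(visited_sources)
    total = sum(counts.values())
    entropy = -sum((c / total) * log2(c / total) for c in counts.values())
    max_entropy = log2(len(counts)) if len(counts) > 1 else 1.0
    return 1.0 - entropy / max_entropy

def popularity_bias(visited_sources, top_sites):
    """Share of a user's traffic that goes to a fixed set of top sites."""
    visits = list(visited_sources)
    return sum(s in top_sites for s in visits) / len(visits)

clicks = ["news-a.com", "news-a.com", "blog-b.net", "news-a.com"]
print(homogeneity_bias(clicks), popularity_bias(clicks, {"news-a.com"}))
```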