15 research outputs found

    Building and exploiting context on the web

    [no abstract]

    Uzilla: A new tool for Web usability testing

    Web usability testing and research present challenges for accurate data collection. An instrumented browser solution, Uzilla, is compared with existing solutions, and its contributions to usability testing practice are noted. Uzilla implements a client-server architecture based on the open-source Mozilla browser. Instrumentation of the browser facilitates the evaluation of Web sites and applications inside and outside of the laboratory. An integrated data collection and analysis server application decreases the effort required to understand test results and facilitates iterative testing.

    Use of Landmarks to Improve Spatial Learning and Revisitation in Computer Interfaces

    Efficient learning and remembering of spatial locations are just as important in two-dimensional Graphical User Interfaces (GUIs) as they are in real environments where locations are revisited multiple times. Rapid spatial memory development in GUIs, however, can be difficult because these interfaces often lack adequate landmarks, which people predominantly use to learn and recall real-life locations. In the absence of sufficient landmarks in GUIs, artificially created visual objects (i.e., artificial landmarks) could support the development of spatial memory for locations. To understand how spatial memory develops in GUIs and to explore ways to assist users' efficient location learning and recall, I carried out five studies exploring the use of landmarks in GUIs: one study investigated the interfaces of four standard desktop applications (Microsoft Word, Facebook, Adobe Photoshop, and Adobe Reader), and four others tested two prototype desktop GUIs augmented with artificial landmarks (command selection interfaces and linear document viewers) against non-landmarked versions; in addition, I tested landmarks in variants of these interfaces that varied the size of the command set (small, medium, and large) and the type of linear document (textual and video). Results indicate that a GUI's existing features and design elements can serve as reliable landmarks, providing spatial benefits similar to those of real environments. I also show that artificial landmarks can significantly improve the development of spatial memory in GUIs, supporting rapid spatial location learning and remembering. Overall, this dissertation reveals that landmarks can be a valuable addition to graphical systems, improving the memorability and usability of GUIs.

    The visualization of evolving searches


    Realistic electronic books

    People like books. They are convenient and can be accessed easily and enjoyably. In contrast, many view the experience of accessing and exploring electronic documents as dull, cumbersome and disorientating. This thesis claims that modelling digital documents as physical books can significantly improve reading performance. To investigate this claim, a realistic electronic book model was developed and evaluated. In this model, a range of properties associated with physical books---analogue page turning, bookmarks and annotations---is emulated. Advantage is also taken of the digital environment by supporting hyperlinks, multimedia, full-text search over terms and synonyms, automatic cross-referencing of documents with an online encyclopaedia, and production of a back-of-the-book index. The main technical challenge of simulating physical books is finding a page-turning technique that is sufficiently realistic, yet lightweight, responsive, scalable and accessible. Several techniques were surveyed, implemented and evaluated. The chosen technique allows realistic books to be presented in the Adobe Flash Player, the most widely used browser plug-in on the Web. A series of usability studies was conducted to compare reading performance on various tasks with HTML, PDF, physical books, and simulated books. They revealed that participants not only preferred the new interface, but also completed the tasks more efficiently, without any loss in accuracy.

    Web Search, Web Tutorials & Software Applications: Characterizing and Supporting the Coordinated Use of Online Resources for Performing Work in Feature-Rich Software

    Web search and other online resources serve an integral role in how people learn and use feature-rich software (e.g., Adobe Photoshop) on a daily basis. Users depend on web resources both as a first line of technical support, and as a means for coping with system complexity. For example, people rely on web resources to learn new tasks, to troubleshoot problems, or to remind themselves of key task details. When users rely on web resources to support their work, their interactions are distributed over three user environments: (1) the search engine, (2) retrieved documents, and (3) the application's user interface. As users interact with these environments, their actions generate a rich set of signals that characterize how the population thinks about and uses software systems "in the wild," on a day-to-day basis. This dissertation presents three works that successively connect and associate signals and artifacts across these environments, thereby generating novel insights about users and their tasks, and enabling powerful new end-user tools and services. These three projects are as follows: Characterizing usability through search (CUTS): The CUTS system demonstrates that aggregate logs of web search queries can be leveraged to identify common tasks and potential usability problems faced by the users of any publicly available interactive system. For example, in 2011 I examined query data for the Firefox web browser. Automated analysis uncovered approximately 150 variations of the query "Firefox how to get the menu bar back", with queries issued once every 32 minutes on average. Notably, this analysis did not depend on direct access to query logs. Instead, query suggestion services and online advertising valuations were leveraged to approximate aggregate query data. Nevertheless, these data proved to be timely, to have a high degree of ecological validity, and to be arguably less prone to self-selection bias than data gathered via traditional usability methods.
Query-feature graphs (QF-Graphs): Query-feature graphs are structures that map high-level descriptions of a user's goals to the specific features and commands relevant to achieving those goals in software. QF-graphs address an important instance of the more general vocabulary mismatch problem. For example, users of the GIMP photo manipulation software often want to "make a picture black and white", and fail to recognize the relevance of the applicable commands, which include "desaturate" and "channel mixer". The key insights for building QF-graphs are that: (1) queries concisely express the user's goal in the user's own words, and (2) retrieved tutorials likely include both query terms and terminology from the application's interface (e.g., the names of commands). QF-graphs are generated by mining these co-occurrences across thousands of query-tutorial pairings. InterTwine: InterTwine explores interaction possibilities that arise when software applications, web search, and online support materials are directly integrated into a single productivity system. With InterTwine, actions in the web browser directly impact how information is presented in a software application, and vice versa. For example, when a user opens a web tutorial in their browser, the application's menus and tooltips are updated to highlight the commands mentioned therein. These embellishments are designed to help users orient themselves after switching between the web browser and the application. InterTwine also augments web search results to include details of past application use. Search snippets gain before-and-after pictures and other metadata detailing how the user's personal work document evolved the last time they visited the page. This feature was motivated by the observation that existing mechanisms (e.g., highlighting visited links) are often insufficient for recalling which resources were previously helpful vs. unhelpful for accomplishing a task.
Finally, the dissertation concludes with a discussion of the advantages, limitations and challenges of this research, and presents an outline for future work.
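The co-occurrence mining behind QF-graphs can be sketched roughly as follows. This is an illustrative simplification, not the dissertation's actual pipeline: the command vocabulary, function names, and sample data are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the QF-graph idea: link query phrases to the
# application commands that co-occur with them in retrieved tutorials.
# The command vocabulary and sample data below are illustrative only.
from collections import Counter, defaultdict

# Assumed vocabulary of interface command names (e.g., scraped from menus).
COMMANDS = {"desaturate", "channel mixer", "crop", "levels"}

def build_qf_graph(query_tutorial_pairs):
    """For each query, count how often each known command name appears
    in tutorials retrieved for that query."""
    graph = defaultdict(Counter)
    for query, tutorial_text in query_tutorial_pairs:
        text = tutorial_text.lower()
        for command in COMMANDS:
            if command in text:
                graph[query][command] += 1
    return graph

pairs = [
    ("make a picture black and white",
     "Use Colors > Desaturate, or the channel mixer for more control."),
    ("make a picture black and white",
     "The desaturate tool converts the image to grayscale."),
]
graph = build_qf_graph(pairs)
print(graph["make a picture black and white"].most_common(1))
# [('desaturate', 2)]
```

In a real system the edges would be weighted by something more robust than raw counts (the abstract mentions mining thousands of query-tutorial pairings), but the structure is the same: queries on one side, interface commands on the other, linked through tutorial text.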

    Improving Revisitation Browsers Capability by Using a Dynamic Bookmarks Personal Toolbar


    Web Archive Services Framework for Tighter Integration Between the Past and Present Web

    Web archives have contained the cultural history of the web for many years, but they still have a limited capability for access. Most of the web archiving research has focused on crawling and preservation activities, with little focus on the delivery methods. The current access methods are tightly coupled with web archive infrastructure, hard to replicate or integrate with other web archives, and do not cover all the users' needs. In this dissertation, we focus on the access methods for archived web data to enable users, third-party developers, researchers, and others to gain knowledge from the web archives. We build ArcSys, a new service framework that extracts, preserves, and exposes APIs for the web archive corpus. The dissertation introduces a novel categorization technique to divide the archived corpus into four levels. For each level, we propose suitable services and APIs that enable both users and third-party developers to build new interfaces. The first level is the content level, which extracts the content from the archived web data. We develop ArcContent to expose the web archive content processed through various filters. The second level is the metadata level; we extract the metadata from the archived web data and make it available to users. We implement two services, ArcLink for the temporal web graph and ArcThumb for optimizing thumbnail creation in the web archives. The third level is the URI level, which focuses on using the URI HTTP redirection status to enhance the user query. Finally, the highest level in the web archiving service framework pyramid is the archive level. At this level, we define the web archive by the characteristics of its corpus and build Web Archive Profiles. The profiles are used by the Memento Aggregator for query optimization.

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS Think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective we take stock of the impact and legal consequences of these technical advances and point out future directions of research.

    Supporting finding and re-finding through personalization

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 165-176).
    Although one of the most common uses for the Internet is to search for information, Web search tools often fail to connect people with what they are looking for. This is because search tools are designed to satisfy people in general, not the searcher in particular. Different individuals with different information needs often type the same search terms into a search box and expect different results. For example, the query "breast cancer" may be used by a student to find information on the disease for a fifth grade science report, and by a cancer patient to find treatment options. This thesis explores how Web search personalization can help individuals take advantage of their unique past information interactions when searching. Several studies of search behavior are presented and used to inform the design of a personalized search system that significantly improves result quality. Without requiring any extra effort from the user, the system is able to return simple breast cancer tutorials for the fifth grader's "breast cancer" query, and lists of treatment options for the patient's. While personalization can help identify relevant new information, new information can create re-finding problems when presented in a way that does not account for previous information interactions. Consider the cancer patient who repeats a search for breast cancer treatments: she may want to learn about new treatments while reviewing the information she found earlier about her current treatment.
To avoid interfering with re-finding, repeat search results should be personalized not by ranking the most relevant results first, but by ranking them where the user most expects them to be. This thesis presents a model of what people remember about search results, and shows that it is possible to invisibly merge new information into previously viewed search result lists where information has been forgotten. Personalizing repeat search results in this way enables people to effectively find both new and old information using the same search result list. By Jaime Teevan. Ph.D.
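The idea of merging new results into the remembered positions of an old list can be sketched as follows. This is an illustrative simplification, not the thesis's actual memory model: which results count as "remembered" would come from the model of what people remember, whereas here it is simply passed in as a set.

```python
# Illustrative sketch (not the thesis's algorithm): results the user is
# modeled as remembering keep their old positions, while forgotten slots
# are filled with the best new results.
def merge_results(old_ranking, remembered, new_results):
    """old_ranking: previously shown URLs, in display order.
    remembered: set of URLs the user is modeled as remembering.
    new_results: fresh results, best first."""
    fresh = [r for r in new_results if r not in remembered]
    fresh_iter = iter(fresh)
    merged = []
    for url in old_ranking:
        if url in remembered:
            merged.append(url)  # preserve the expected position
        else:
            # forgotten slot: invisibly substitute a new result,
            # falling back to the old one if no new results remain
            merged.append(next(fresh_iter, url))
    return merged

old = ["a.com", "b.com", "c.com"]
print(merge_results(old, remembered={"a.com", "c.com"},
                    new_results=["x.com", "y.com"]))
# ['a.com', 'x.com', 'c.com']
```

The key property is that remembered results never move, so re-finding is undisturbed, while the forgotten slots become free real estate for relevance-ranked new information.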