
    Multi-Dimensional Models Facilitate Automatic Reformulation: The Case Study of the SET Game

    In this paper we describe a reformulation strategy for solving multi-dimensional Constraint Satisfaction Problems (CSPs). This strategy operates by iteratively considering, in isolation, each of the uni-dimensional constraints in the problem, and exploits the approximate symmetries induced by the selected constraint on the domains in order to enforce this constraint on the simplified problem. We use the game of SET, a combinatorial card game, as a toy problem to motivate our strategy and to explain and illustrate its operation. However, we believe that our approach is applicable to more complex domains of scientific and industrial importance, and deserves more thorough investigation in the future. Our approach sheds new light on the dynamic reformulation of multi-dimensional CSPs. Importantly, it advocates that modeling tools for Constraint Programming should allow the user to specify constraints directly on the attributes of the domain objects (i.e., variables and values) so that their multi-dimensionality can be exploited during problem solving.
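
    To make the uni-dimensional constraints concrete, here is a minimal Python sketch (illustrative, not the paper's implementation) that encodes a SET card as a 4-tuple of attribute values and checks the game's defining constraint one dimension at a time: within each attribute, three cards must be all-equal or all-different.

        from itertools import combinations

        # Illustrative encoding: a card is a 4-tuple (number, color, shape,
        # shading), each attribute taking one of three values {0, 1, 2}.
        DIMENSIONS = 4

        def dimension_ok(values):
            # The uni-dimensional constraint on one attribute: the three
            # cards must be all-equal (1 distinct value) or all-different (3).
            return len(set(values)) in (1, 3)

        def is_set(c1, c2, c3):
            # A SET holds exactly when every dimension, taken in isolation,
            # satisfies its constraint -- the structure the strategy exploits.
            return all(dimension_ok((c1[d], c2[d], c3[d])) for d in range(DIMENSIONS))

        def find_sets(table):
            # Enumerate all SETs among the cards currently on the table.
            return [t for t in combinations(table, 3) if is_set(*t)]

        table = [(0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2), (0, 1, 2, 0)]
        print(find_sets(table))  # only the first three cards form a SET

    Note that cards agreeing on the selected attribute's value are interchangeable with respect to that dimension's constraint, which is what allows the reformulation to collapse domains one dimension at a time.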

    Screen Correspondence: Mapping Interchangeable Elements between UIs

    Understanding user interface (UI) functionality is a useful yet challenging task for both machines and people. In this paper, we investigate a machine learning approach for screen correspondence, which allows reasoning about UIs by mapping their elements onto previously encountered examples with known functionality and properties. We describe and implement a model that incorporates element semantics, appearance, and text to support correspondence computation without requiring any labeled examples. Through a comprehensive performance evaluation, we show that our approach improves upon baselines by incorporating multi-modal properties of UIs. Finally, we show three example applications where screen correspondence facilitates better UI understanding for humans and machines: (i) instructional overlay generation, (ii) semantic UI element search, and (iii) automated interface testing.
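
    A hedged sketch of the correspondence idea, under a simplified element representation: score candidate element pairs by combining type, text, and position signals (crude stand-ins for the paper's learned semantic, appearance, and text features; the weights, fields, and function names here are invented), then greedily map each element to its best unclaimed counterpart.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Element:
            kind: str    # e.g. "button", "text", "image"
            text: str    # label from metadata or OCR
            bbox: tuple  # (x, y) position, normalized to the screen

        def score(a, b):
            # Toy multi-modal score; real models learn these signals jointly.
            type_sim = 1.0 if a.kind == b.kind else 0.0
            ta, tb = set(a.text.lower().split()), set(b.text.lower().split())
            text_sim = len(ta & tb) / max(1, len(ta | tb))
            pos_sim = 1.0 - min(1.0, abs(a.bbox[0] - b.bbox[0]) + abs(a.bbox[1] - b.bbox[1]))
            return 0.3 * type_sim + 0.5 * text_sim + 0.2 * pos_sim

        def correspond(screen_a, screen_b, threshold=0.5):
            # Greedily map each element of screen_a onto its best
            # still-unclaimed match in screen_b.
            mapping, taken = {}, set()
            for a in screen_a:
                best_score, best_i = 0.0, None
                for i, b in enumerate(screen_b):
                    if i in taken:
                        continue
                    s = score(a, b)
                    if s > best_score:
                        best_score, best_i = s, i
                if best_i is not None and best_score >= threshold:
                    mapping[a] = screen_b[best_i]
                    taken.add(best_i)
            return mapping

        a = [Element("button", "Sign in", (0.5, 0.8))]
        b = [Element("button", "Log in", (0.5, 0.85)), Element("text", "Welcome", (0.5, 0.1))]
        print(correspond(a, b))  # maps "Sign in" onto "Log in"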

    Never-ending Learning of User Interfaces

    Machine learning models have been trained to predict semantic information about user interfaces (UIs) to make apps more accessible, easier to test, and easier to automate. Currently, most models rely on datasets that are collected and labeled by human crowd-workers, a process that is costly and surprisingly error-prone for certain tasks. For example, it is possible to guess whether a UI element is "tappable" from a screenshot (i.e., based on visual signifiers) or from potentially unreliable metadata (e.g., a view hierarchy), but one way to know for certain is to programmatically tap the UI element and observe the effects. We built the Never-ending UI Learner, an app crawler that automatically installs real apps from a mobile app store and crawls them to discover new and challenging training examples to learn from. The Never-ending UI Learner has crawled for more than 5,000 device-hours, performing over half a million actions on 6,000 apps to train three computer vision models for (i) tappability prediction, (ii) draggability prediction, and (iii) screen similarity.
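
    The core labeling trick, act on the UI and observe whether anything changed, can be sketched as follows. The device driver interface (screenshot, tap, back) is hypothetical, and the pixel hash is a crude stand-in for the paper's screen-similarity model; a real crawler also handles app installs, resets, and navigation state.

        import hashlib

        def fingerprint(screenshot_bytes):
            # Crude stand-in for a learned screen-similarity model: any
            # visible change yields a different hash of the raw pixels.
            return hashlib.sha256(screenshot_bytes).hexdigest()

        def label_tappability(device, elements):
            # `device` is a hypothetical driver exposing screenshot(),
            # tap(x, y), and back().
            labels = []
            for el in elements:
                before = fingerprint(device.screenshot())
                device.tap(*el["center"])   # programmatically tap the element
                after = fingerprint(device.screenshot())
                tappable = before != after  # a changed screen => tappable
                labels.append((el, tappable))
                if tappable:
                    device.back()           # try to return to the same screen
            return labels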

    Scraps: Enabling Contextual Mobile Capture, Contextualization, and Use of Document Resources

    This project contains the survey and user study questions for the paper Scraps: Enabling Contextual Mobile Capture, Contextualization, and Use of Document Resources, which is currently under submission. The abstract is below. Abstract: People often capture photos or notes on their phones to integrate later into a document. Current mobile capture tools can make this hard, with captured information ending up fragmented and decontextualized. This paper explores how to help authors capture, contextualize, and use document-related information. A survey of 66 information workers reveals that document-focused information capture differs from other types of mobile information capture. While people capture a range of information types while mobile, most document-related capture consists of photos, notes, and bookmarks. Based on this survey, we built Scraps, which has two parts: 1) a mobile app that lets people capture and contextualize information from their phone, and 2) a Word sidebar to later link that information to a desktop document. In a field study with 11 information workers, Scraps streamlined the process of capturing and using document-related information, and enabled people to focus on writing rather than on integrating captured information.
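
    As an illustration of what a "contextualized" capture might look like, here is a hypothetical record schema in Python; the field names are invented for this sketch and are not Scraps' actual data model. The mobile app would create such records, and the Word sidebar would query them by target document.

        from dataclasses import dataclass, field
        from datetime import datetime

        @dataclass
        class Capture:
            kind: str          # "photo" | "note" | "bookmark" (the dominant types in the survey)
            payload: str       # image path, note text, or URL
            target_document: str = ""   # the desktop document this capture is destined for
            note: str = ""              # context the author adds at capture time
            captured_at: datetime = field(default_factory=datetime.now)

        def for_document(captures, doc):
            # What a sidebar might ask: all captures contextualized to one document.
            return [c for c in captures if c.target_document == doc]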

    CogTool-Helper: Leveraging GUI Functional Testing Tools to Generate Predictive Human Performance Models

    Numerous tools and techniques for human performance modeling have been introduced in the field of human-computer interaction. With such tools comes the ability to model legacy applications. Models can be used to compare design ideas to existing applications, or to evaluate products against those of competitors. One such modeling tool, CogTool, allows user interface designers and analysts to mock up design ideas, demonstrate tasks, and obtain human performance predictions for those tasks. This is one step towards a simple and complete analysis process, but it still requires a large amount of manual work. Graphical user interface (GUI) testing tools are orthogonal in that they provide automated model extraction of interfaces, methods for test case generation, and test case automation; however, the resulting test cases may not mimic tasks as they are performed by experienced users. In this thesis, we present CogTool-Helper, a tool that merges automated GUI testing with human performance modeling. It utilizes techniques from GUI testing to automatically create CogTool storyboards and models. We have designed an algorithm to find alternative methods for performing the same task, so that the UI designer or analyst can study how a user might interact with the system beyond what they have specified. We have also implemented an approach to generate functional test cases that perform tasks in a way that mimics the user. We evaluate the feasibility of our approach in a human performance regression testing scenario in LibreOffice, and show how CogTool-Helper enhances the UI designer's analysis process. Not only do the generated designs remove the need for manual design construction, but the resulting data allows new analyses that were previously not possible. Adviser: Myra B. Cohen.
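
    The alternative-methods idea can be sketched as a search over an extracted GUI model: treat the interface as a graph of (state, action) transitions and enumerate every bounded action sequence that reaches the demonstrated task's end state. This is a toy reconstruction under those assumptions, not CogTool-Helper's actual algorithm; the transition map and action names are invented.

        from collections import deque

        def alternative_methods(transitions, start, goal, max_len=6):
            # BFS over a GUI state graph for every action sequence from
            # `start` to `goal`. `transitions` maps a state to a list of
            # (action, next_state) pairs -- a toy stand-in for the model a
            # GUI testing tool extracts from a running interface.
            methods, queue = [], deque([(start, [])])
            while queue:
                state, path = queue.popleft()
                if state == goal and path:
                    methods.append(path)
                    continue
                if len(path) >= max_len:
                    continue
                for action, nxt in transitions.get(state, []):
                    queue.append((nxt, path + [action]))
            return methods

        # Example: two methods for making text bold, toolbar vs. menu.
        gui = {
            "editing": [("click Bold button", "bold"),
                        ("open Format menu", "format_menu")],
            "format_menu": [("click Character...", "char_dialog")],
            "char_dialog": [("choose Bold, click OK", "bold")],
        }
        print(alternative_methods(gui, "editing", "bold"))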

    Expanding Interface Design Capabilities through Semantic and Data-Driven Analyses

    Thesis (Ph.D.)--University of Washington, 2020. The design of an interface can have a huge impact on human productivity, creativity, safety, and satisfaction. Therefore, it is crucial that we provide user interface designers with tools that make them more efficient and more creative, and that help them better understand their users. However, designers face key challenges in their tools throughout the design process. When prototyping, designers explore alternative interface layouts, but they are limited to the layouts they can ideate and sketch, create in their prototyping tools, or find in examples that do not contain their own interface elements. In usability testing, designers can conduct large-scale studies and deploy their interfaces to gather data from crowdworkers; however, such studies can be expensive, time-consuming, and difficult to conduct iteratively throughout the design process. Finally, designers often find that existing interfaces can be a platform for prototyping and enabling new forms of interaction, but existing interfaces are often rigid and difficult to modify at runtime. In this dissertation, I explore how we can use advanced techniques from program analysis, program synthesis, and machine learning to enable semantic and data-driven analyses of interfaces. If we augment interface design tools with the capability to understand, transform, augment, and analyze an interface design, we can advance designers' capabilities. Through semantic analysis of interfaces, we can help designers ideate more rapidly, prototype more efficiently, and evaluate the usability of their interface designs more iteratively and cheaply. I demonstrate this through four systems that (1) let designers rapidly ideate alternative layouts through mixed-initiative interaction with high-level constraints and feedback, (2) help designers adapt examples more efficiently by inferring a semantic vector representation from an example screenshot, (3) enable designers to quickly and cheaply analyze a key aspect of the usability of their interfaces through a machine learning approach for modeling mobile interface tappability, and (4) prototype new modalities for existing web interfaces by applying program analysis to infer an abstract model of interface commands.
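
    For a flavor of the first system's mixed-initiative exploration, a minimal sketch: enumerate orderings of a screen's elements and keep those satisfying the designer's high-level constraints. The element names and constraints are invented; the dissertation's systems use far richer layout models and proper constraint solvers over positions and sizes.

        from itertools import permutations

        ELEMENTS = ["logo", "search", "nav", "content"]

        def satisfies(layout):
            # Example high-level constraints a designer might state:
            # the logo comes first, and search appears above the content.
            return layout[0] == "logo" and layout.index("search") < layout.index("content")

        # Each surviving permutation is one alternative vertical layout
        # the tool could propose back to the designer.
        for alt in (l for l in permutations(ELEMENTS) if satisfies(l)):
            print(" / ".join(alt))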

    A Reformulation Strategy for Multi-Dimensional CSPs: The Case Study of the SET Game

    • General reformulation strategy for CSPs
      – Multi-dimensional CSPs (MD-CSPs)
      – Problem reformulation by value interchangeability
      – A general reformulation strategy for MD-CSPs
    • Game of SET: A new toy problem
      – Game, CSP model
      – Problem reformulation
      – Algorithms & Results
    • Conclusion