12 research outputs found

    Inferring Constraints from Multiple Snapshots

    Get PDF
    Many graphics tasks, such as the manipulation of graphical objects and the construction of user-interface widgets, can be facilitated by geometric constraints. However, the difficulty of specifying constraints by traditional methods forms a barrier to their widespread use. In order to make constraints easier to declare, we have developed a method of specifying constraints implicitly, through multiple examples. Snapshots are taken of an initial scene configuration, and one or more additional snapshots are taken after the scene has been edited into other valid configurations. The constraints that are satisfied in all the snapshots are then applied to the scene objects. We discuss an efficient algorithm for inferring constraints from multiple snapshots. The algorithm has been incorporated into the Chimera editor, and several examples of its use are discussed.
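    A minimal sketch of the generate-and-filter scheme the abstract describes: candidate constraints are enumerated from the initial snapshot, then pruned against each additional snapshot, and the survivors are the inferred constraints. The predicate set and tolerance below are hypothetical, not taken from Chimera.

```python
from itertools import combinations

EPS = 1e-6  # tolerance for "satisfied"; hypothetical, not from Chimera

# Candidate constraint predicates over pairs of (x, y) positions
# (an invented, minimal set for illustration).
PREDICATES = {
    "aligned-x": lambda a, b: abs(a[0] - b[0]) < EPS,
    "aligned-y": lambda a, b: abs(a[1] - b[1]) < EPS,
}

def infer_constraints(snapshots):
    """Return the pairwise constraints satisfied in *every* snapshot."""
    first, rest = snapshots[0], snapshots[1:]
    # Generate candidates from the initial configuration...
    candidates = {
        (name, a, b)
        for a, b in combinations(sorted(first), 2)
        for name, pred in PREDICATES.items()
        if pred(first[a], first[b])
    }
    # ...then keep only those that also hold in every edited configuration.
    return {
        (name, a, b)
        for (name, a, b) in candidates
        if all(PREDICATES[name](snap[a], snap[b]) for snap in rest)
    }

snapshots = [
    {"p": (0, 0), "q": (0, 5), "r": (3, 5)},   # initial scene
    {"p": (2, 1), "q": (2, 9), "r": (7, 9)},   # edited, still valid
]
print(infer_constraints(snapshots))
# -> {('aligned-x', 'p', 'q'), ('aligned-y', 'q', 'r')}
```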

    Constraint specification by example in a Meta-CASE tool

    Get PDF
    CASE tools are very helpful to software engineers in different ways and in different phases of software development. However, they are not easy to specialise to meet the needs of particular application domains or particular software modelling requirements. Meta-CASE tools offer a way of providing such specialisation by enabling a designer to specify a tool which is then generated automatically. Constraints are often used in such meta-CASE tools as a technique for governing the syntax and semantics of model elements and the values of their attributes. However, although constraint definition is a difficult process, it has attracted relatively little research attention. The PhD research described here presents an approach for improving the process of CASE tool constraint specification based on the notion of programming by example (or demonstration). The feasibility of the approach will be demonstrated via experiments with a prototype using the meta-CASE tool Diagram Editor Constraints System (DECS) as context.
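    The abstract does not detail DECS's inference mechanism; as a toy illustration of the programming-by-example idea, the sketch below generalizes a numeric attribute constraint from demonstrated valid values and checks new model elements against it. The attribute and the generalization rule are invented for illustration.

```python
def generalize_range(examples):
    """Infer a [min, max] constraint for a numeric attribute from
    demonstrated valid values (toy generalization, not DECS's algorithm)."""
    return min(examples), max(examples)

def satisfies(value, constraint):
    """Check a new attribute value against the inferred constraint."""
    lo, hi = constraint
    return lo <= value <= hi

# A designer demonstrates valid multiplicities on an association end:
demos = [1, 2, 4]
constraint = generalize_range(demos)
print(constraint)            # (1, 4)
print(satisfies(3, constraint), satisfies(7, constraint))  # True False
```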

    Semi-Automated SVG Programming via Direct Manipulation

    Full text link
    Direct manipulation interfaces provide intuitive and interactive features to a broad range of users, but they often exhibit two limitations: the built-in features cannot possibly cover all use cases, and the internal representation of the content is not readily exposed. We believe that if direct manipulation interfaces were to (a) use general-purpose programs as the representation format, and (b) expose those programs to the user, then experts could customize these systems in powerful new ways and non-experts could enjoy some of the benefits of programmable systems. In recent work, we presented a prototype SVG editor called Sketch-n-Sketch that offered a step towards this vision. In that system, the user wrote a program in a general-purpose lambda-calculus to generate a graphic design and could then directly manipulate the output to indirectly change design parameters (i.e., constant literals) in the program in real time during the manipulation. Unfortunately, the burden of programming the desired relationships rested entirely on the user. In this paper, we design and implement new features for Sketch-n-Sketch that assist in the programming process itself. Like typical direct manipulation systems, our extended Sketch-n-Sketch now provides GUI-based tools for drawing shapes, relating shapes to each other, and grouping shapes together. Unlike typical systems, however, each tool carries out the user's intention by transforming their general-purpose program. This novel, semi-automated programming workflow allows the user to rapidly create high-level, reusable abstractions in the program while at the same time retaining direct manipulation capabilities. In future work, our approach may be extended with more graphic design features or realized for other application domains. Comment: In the 29th ACM User Interface Software and Technology Symposium (UIST 2016).
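    A toy rendition of the core idea, with invented names throughout (Sketch-n-Sketch itself operates on a real lambda-calculus program with trace-based solving): the rendered output carries a trace back to the constant literals that produced each attribute, so manipulating the output updates the program rather than the drawing.

```python
# The "program" is just a dict of named constant literals plus a render
# function; dragging a shape attribute is mapped back to the literal it
# was computed from. Illustrative only, not Sketch-n-Sketch's design.

program = {"x": 40, "y": 60, "w": 100}

def render(p):
    # One SVG rect whose attributes are traced back to the constants.
    return {"rect": {"x": p["x"], "y": p["y"], "width": p["w"]},
            "trace": {"x": "x", "y": "y", "width": "w"}}

def drag(p, attr, new_value):
    """Map a manipulation of an output attribute back onto the
    constant literal it was computed from."""
    out = render(p)
    const = out["trace"][attr]     # which literal produced this attribute?
    p = dict(p)
    p[const] = new_value           # update the program, not the output
    return p

program = drag(program, "width", 150)
print(render(program)["rect"])     # {'x': 40, 'y': 60, 'width': 150}
```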

    Constraint-based graphical layout of multimodal presentations

    Get PDF
    When developing advanced multimodal interfaces that combine the characteristics of different modalities such as natural language, graphics, animation, and virtual realities, the question of automatically designing the graphical layout of such presentations in an appropriate format becomes increasingly important. So, to communicate information to the user in an expressive and effective way, a knowledge-based layout component has to be integrated into the architecture of an intelligent presentation system. In order to achieve a coherent output, it must be able to reflect certain semantic and pragmatic relations specified by a presentation planner to arrange the visual appearance of a mixture of textual and graphic fragments delivered by mode-specific generators. In this paper we illustrate, using the example of LayLab, the layout manager of the multimodal presentation system WIP, how the complex positioning problem for multimodal information can be treated as a constraint satisfaction problem. The design of an aesthetically pleasing layout is characterized as a combination of a general search problem in a finite discrete search space and an optimization problem. Therefore, we have integrated two dedicated constraint solvers, an incremental hierarchy solver and a finite domain solver, in a layered constraint solver model, CLAY, which is triggered from a common metalevel by rules and defaults. The underlying constraint language is able to encode graphical design knowledge expressed by semantic/pragmatic, geometrical/topological, and temporal relations. Furthermore, this mechanism allows one to prioritize the constraints as well as to handle constraint solving over finite domains. As graphical constraints frequently have only local effects, they are incrementally generated by the system on the fly. Finally, we illustrate the functionality of LayLab with some snapshots of an example run.
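    A minimal sketch of layout as finite-domain search combined with optimization, in the spirit of the characterization above. The grid, constraints, and aesthetic cost are invented and far simpler than CLAY's layered, prioritized solving.

```python
from itertools import product

COLUMNS = range(3)  # discrete horizontal positions 0..2
ROWS = range(3)     # discrete vertical positions 0..2

def solve(fragments, constraints, cost):
    """Enumerate grid placements, keep those meeting all constraints,
    and return the one minimizing the aesthetic cost."""
    best, best_cost = None, float("inf")
    for cells in product(product(COLUMNS, ROWS), repeat=len(fragments)):
        if len(set(cells)) < len(cells):          # no overlapping fragments
            continue
        layout = dict(zip(fragments, cells))
        if all(c(layout) for c in constraints):
            k = cost(layout)
            if k < best_cost:
                best, best_cost = layout, k
    return best

fragments = ["text", "figure", "caption"]
constraints = [
    lambda L: L["caption"][1] == L["figure"][1] + 1,  # caption under figure
    lambda L: L["text"][0] <= L["figure"][0],         # text not right of figure
]
cost = lambda L: sum(x + y for x, y in L.values())    # prefer top-left placements
print(solve(fragments, constraints, cost))
```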

    Beyond Snapping: Persistent, Tweakable Alignment and Distribution with StickyLines

    Get PDF
    Aligning and distributing graphical objects is a common but cumbersome task. In a preliminary study (six graphic designers, six non-designers), we identified three key problems with current tools: lack of persistence, unpredictability of results, and inability to 'tweak' the layout. We created StickyLines, a tool that treats guidelines as first-class objects: users can create precise, predictable and persistent interactive alignment and distribution relationships, and 'tweaked' positions can be maintained for subsequent interactions. We ran a 2×2 within-participant experiment to compare StickyLines with standard commands, with two levels of layout difficulty. StickyLines was 40% faster and required 49% fewer actions than traditional alignment and distribution commands for complex layouts. In a third study, six professional designers quickly adopted StickyLines and identified novel uses, including creating complex compound guidelines and using them for both spatial and semantic grouping.
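    A toy data model for the guidelines-as-first-class-objects idea (not the StickyLines implementation): shapes attach to a persistent guideline, moving the guideline re-aligns all attached shapes, and per-shape 'tweak' offsets survive subsequent moves.

```python
from dataclasses import dataclass, field

@dataclass
class Shape:
    name: str
    x: float
    y: float

@dataclass
class VGuideline:
    x: float
    attached: dict = field(default_factory=dict)  # name -> (shape, tweak)

    def attach(self, shape, tweak=0.0):
        self.attached[shape.name] = (shape, tweak)
        shape.x = self.x + tweak        # align now, remember the tweak

    def move(self, new_x):
        self.x = new_x                  # the relationship persists:
        for shape, tweak in self.attached.values():
            shape.x = new_x + tweak     # re-align, preserving tweaks

g = VGuideline(x=100)
a, b = Shape("a", 0, 10), Shape("b", 0, 40)
g.attach(a)
g.attach(b, tweak=5)     # deliberately 'tweaked' off the line
g.move(150)
print(a.x, b.x)          # 150.0 155.0
```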

    Constraint-based document layout for the Web

    Full text link

    Intelligent layout for information display : an approach using constraints and case-based reasoning

    Get PDF
    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1992. Includes bibliographical references (leaves 75-78). By Grace Elizabeth Colby.

    Capturing graphic design knowledge from interactive user demonstrations

    Get PDF
    Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1993. Includes bibliographical references (leaves 114-118). By Alan Greg Turransky.

    A Method for Supporting the Development of Programs with Real-World Input and Output Using Image Representations (実世界入出力を伴うプログラムの画像表現を用いた開発支援手法)

    Get PDF
    Degree type: Doctoral degree (course-based). University of Tokyo (東京大学).

    Programming with agents: new metaphors for thinking about computation

    Get PDF
    Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1996. Includes bibliographical references (p. [197]-206). By Michael David Travers.