386 research outputs found

    An Incremental On-line Parsing Algorithm for Recognizing Sketching Diagrams

    This paper presents a syntactic recognition approach for on-line drawn graphical symbols. The proposed method consists of an incremental on-line predictive parser based on symbol descriptions given by an adjacency grammar. The parser analyzes input strokes as they are drawn by the user and can predict which symbols are likely to be recognized when a partial subshape has been drawn in an intermediate state. The parser also takes two issues into account: first, symbol strokes may be drawn in any order by the user; second, since the framework is on-line, the system must respond in real time. The method has been applied to an on-line sketching interface for architectural symbols.
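
    A deliberately simplified sketch of the predictive, order-independent behavior described above, not the paper's actual adjacency-grammar parser: each symbol is reduced to a multiset of primitive stroke types, and after every stroke the matcher reports which symbols are complete and which remain possible. The symbol definitions and names are invented for illustration.

# Toy stand-in for an adjacency grammar: each symbol is just a multiset of
# primitive stroke types, with no spatial constraints. All names are invented.
from collections import Counter

SYMBOLS = {
    "door":   Counter({"arc": 1, "line": 1}),
    "window": Counter({"line": 4}),
    "table":  Counter({"line": 4, "circle": 1}),
}

class IncrementalMatcher:
    def __init__(self, symbols):
        self.symbols = symbols
        self.drawn = Counter()          # primitive types seen so far

    def add_stroke(self, primitive_type):
        """Record one classified stroke; return (completed, still_possible)."""
        self.drawn[primitive_type] += 1
        completed, possible = [], []
        for name, required in self.symbols.items():
            # the drawn strokes must form a sub-multiset of the requirement
            fits = (all(self.drawn[t] <= n for t, n in required.items())
                    and all(t in required for t in self.drawn))
            if fits:
                (completed if self.drawn == required else possible).append(name)
        return completed, possible

m = IncrementalMatcher(SYMBOLS)
print(m.add_stroke("line"))   # ([], ['door', 'window', 'table'])
print(m.add_stroke("arc"))    # (['door'], [])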

    Towards Syntax-Aware Editors for Visual Languages

    Editors for visual languages should provide a user-friendly environment that supports end users in composing visual sentences effectively. Syntax-aware editors are a class of editors that guide users toward writing syntactically correct programs by exploiting information about the visual language syntax. In particular, they do not constrain users to enter only syntactically correct states of a visual sentence; they merely inform the user when visual objects are syntactically correct. This means detecting both syntactic and potential semantic errors as early as possible and providing feedback on such errors in a non-intrusive way during editing. As a consequence, error-handling strategies are an essential part of this editing style. In this work, we develop a strategy for constructing syntax-aware visual language editors by integrating incremental subsentence parsers into free-hand editors. The parser combines LR-based techniques for parsing visual languages with the more general incremental Generalized LR parsing techniques developed for string languages. This approach has been exploited to introduce a non-correcting error recovery strategy and to prompt, during editing, the continuation of what the user is drawing.
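
    A toy sketch of the non-correcting, prompting behavior described above: input is never rejected; after each edit the checker only reports whether the partial sentence can still be completed and which tokens could come next. The flat list of valid sentences stands in for the paper's incremental Generalized LR machinery, and all names are hypothetical.

# The "grammar" here is a flat list of valid token sequences, a stand-in for a
# real visual-language grammar and GLR parser. All names are hypothetical.
VALID_SENTENCES = [
    ("state", "arrow", "state"),
    ("state", "arrow", "final_state"),
    ("initial", "arrow", "state"),
]

def check_partial(tokens):
    """Return (is_viable_prefix, possible_next_tokens) without rejecting input."""
    continuations = set()
    viable = False
    for sentence in VALID_SENTENCES:
        if tuple(tokens) == sentence[:len(tokens)]:
            viable = True
            if len(tokens) < len(sentence):
                continuations.add(sentence[len(tokens)])
    return viable, sorted(continuations)

print(check_partial(["state"]))            # (True, ['arrow'])
print(check_partial(["state", "state"]))   # (False, [])  -> flag, don't correct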

    Multi-domain sketch understanding

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 121-128). By Christine J. Alvarado, Ph.D.

    ChemInk: A Natural Real-Time Recognition System for Chemical Drawings

    We describe a new sketch recognition framework for chemical structure drawings that combines multiple levels of visual features using a jointly trained conditional random field. This joint model of appearance at different levels of detail makes our framework less sensitive to noise and drawing variations, improving accuracy and robustness. In addition, we present a novel learning-based approach to corner detection that achieves nearly perfect accuracy in our domain. The result is a recognizer that is better able to handle the wide range of drawing styles found in messy freehand sketches. Our system handles both graphics and text, producing a complete molecular structure as output. It works in real time, providing visual feedback about the recognition progress. On a dataset of chemical drawings our system achieved an accuracy rate of 97.4%, an improvement over the best previously reported results in the literature. A preliminary user study also showed that participants were on average over twice as fast using our sketch-based system compared to ChemDraw, a popular CAD-based tool for authoring chemical diagrams. This was the case even though most of the users had years of experience using ChemDraw and little or no experience using Tablet PCs. Funding: National Science Foundation (U.S.) (Grant 0729422); United States Dept. of Homeland Security (Graduate Research Fellowship); Pfizer Inc.
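
    A hedged sketch of the general shape of a corner detector on a single stroke: per-point features (here, a turning angle over a small window) are computed and candidate corners selected. ChemInk learns this decision from data rather than thresholding a single feature; the threshold, window, and example stroke below are invented.

# Simplified corner-candidate detection: compute the local turning angle at
# each point and keep points above a fixed threshold. A learned detector would
# feed such features to a trained classifier instead of thresholding.
import math

def turning_angle(p_prev, p, p_next):
    """Absolute change in drawing direction (radians) at point p."""
    a1 = math.atan2(p[1] - p_prev[1], p[0] - p_prev[0])
    a2 = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)

def corner_candidates(points, window=2, threshold=math.radians(60)):
    """Indices of points whose local turning angle exceeds the threshold."""
    corners = []
    for i in range(window, len(points) - window):
        angle = turning_angle(points[i - window], points[i], points[i + window])
        if angle > threshold:
            corners.append(i)
    return corners

# An L-shaped stroke: the bend at index 3 should be reported as a corner.
stroke = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
print(corner_candidates(stroke))   # [3]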

    SketChart: A Pen-Based Tool for Chart Generation and Interaction.

    It has been shown that representing data with the right visualization increases the understanding of qualitative and quantitative information encoded in documents. However, current tools for generating such visualizations rely on traditional WIMP techniques, which can make free interaction and direct manipulation of the content harder. In this thesis, we present a pen-based prototype for data visualization using 10 different types of bar-based charts. The prototype lets users sketch a chart and interact with the information once the drawing is identified. The prototype's user interface consists of an area to sketch and touch-based elements that are displayed depending on the context and nature of the outline. Brainstorming and live presentations can benefit from the prototype due to its ability to visualize and manipulate data in real time. We also perform a short, informal user study to measure the effectiveness of the tool in recognizing sketches and user acceptance when interacting with the system. Results show SketChart's strengths and weaknesses and areas for improvement.

    Sketch interpretation using multiscale stochastic models of temporal patterns

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 102-114). Sketching is a natural mode of interaction used in a variety of settings. For example, people sketch during early design and brainstorming sessions to guide the thought process, and when we communicate certain ideas, we use sketching as an additional modality to convey ideas that cannot be put into words. The emergence of hardware such as PDAs and Tablet PCs has made it possible to capture freehand sketches, enabling the routine use of sketching as an additional human-computer interaction modality. But despite the availability of pen-based information capture hardware, relatively little effort has been put into developing software capable of understanding and reasoning about sketches. To date, most approaches to sketch recognition have treated sketches as images (i.e., static finished products) and have applied vision algorithms for recognition. However, unlike images, sketches are produced incrementally and interactively, one stroke at a time, and their processing should take advantage of this. This thesis explores ways of doing sketch recognition by extracting as much information as possible from the temporal patterns that appear during sketching. We present a sketch recognition framework based on hierarchical statistical models of temporal patterns. We show that in certain domains, the stroke orderings used in the course of drawing individual objects contain temporal patterns that can aid recognition. We build on this work to show how sketch recognition systems can use knowledge of both common stroke orderings and common object orderings. We describe a statistical framework based on Dynamic Bayesian Networks that can learn temporal models of object-level and stroke-level patterns for recognition. Our framework supports multi-object strokes and multi-stroke objects, and allows interspersed drawing of objects, relaxing the assumption that objects are drawn one at a time. Our system also supports real-valued feature representations using a numerically stable recognition algorithm. We present recognition results for hand-drawn electronic circuit diagrams. The results show that modeling temporal patterns at multiple scales provides a significant increase in correct recognition rates, with no added computational penalties. By Tevfik Metin Sezgin, Ph.D.
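
    A rough illustration of the idea that stroke order carries class information: a sequence of stroke labels is scored against a first-order Markov model per object class, and the best-scoring class wins. The thesis itself uses Dynamic Bayesian Networks over richer, real-valued features; the classes, labels, and probabilities below are invented.

# Per-class first-order Markov models over stroke labels, a much simpler
# stand-in for the thesis's DBN-based temporal models. Values are invented.
import math

MODELS = {
    "resistor": {
        "start": {"zigzag": 0.9, "line": 0.1},
        "trans": {"zigzag": {"line": 0.8, "zigzag": 0.2},
                  "line":   {"line": 0.5, "zigzag": 0.5}},
    },
    "capacitor": {
        "start": {"line": 0.9, "zigzag": 0.1},
        "trans": {"line":   {"line": 0.9, "zigzag": 0.1},
                  "zigzag": {"line": 0.5, "zigzag": 0.5}},
    },
}

def log_likelihood(model, strokes):
    """Log-probability of a stroke-label sequence under one class model."""
    score = math.log(model["start"].get(strokes[0], 1e-6))
    for prev, cur in zip(strokes, strokes[1:]):
        score += math.log(model["trans"].get(prev, {}).get(cur, 1e-6))
    return score

def classify(strokes):
    return max(MODELS, key=lambda name: log_likelihood(MODELS[name], strokes))

print(classify(["zigzag", "line", "line"]))   # resistor
print(classify(["line", "line", "line"]))     # capacitor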

    Combining appearance and context for multi-domain sketch recognition

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 99-102). As our interaction with computing shifts away from the traditional desktop model (e.g., towards smartphones, tablets, and touch-enabled displays), the technology that drives this interaction needs to evolve as well. Wouldn't it be great if we could talk, write, and draw to a computer just as we do with each other? This thesis addresses the drawing aspect of that vision: enabling computers to understand the meaning and semantics of free-hand diagrams. We present a novel framework for sketch recognition that seamlessly combines a rich representation of local visual appearance with a probabilistic graphical model for capturing higher-level relationships. This joint model makes our system less sensitive to noise and drawing variations, improving accuracy and robustness. The result is a recognizer that is better able to handle the wide range of drawing styles found in messy freehand sketches. To preserve the fluid process of sketching on paper, our interface allows users to draw diagrams just as they would on paper, using the same notations and conventions. For the isolated symbol recognition task, our method exceeds state-of-the-art performance in three domains: handwritten digits, PowerPoint shapes, and electrical circuit symbols. For the complete diagram recognition task, it achieved excellent performance on both chemistry and circuit diagrams, improving on the best previous results. Furthermore, in an on-line study our new interface was on average over twice as fast as the existing CAD-based method for authoring chemical diagrams, even for novice users who had little or no experience using a tablet. This is one of the first direct comparisons showing a sketch recognition interface significantly outperforming a professional industry-standard CAD-based tool. By Tom Yu Ouyang, Ph.D.
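
    A heavily simplified illustration of combining local appearance with context: two neighboring symbols are labeled jointly so that per-symbol appearance scores and a pairwise compatibility term are maximized together. The real system uses a trained probabilistic graphical model; the labels, scores, and compatibility rule below are hand-made for illustration.

# Brute-force joint labeling of two connected symbols: appearance scores and a
# pairwise compatibility term are multiplied and the best assignment is kept.
from itertools import product

LABELS = ["resistor", "wire", "capacitor"]

# Hypothetical per-symbol appearance scores (e.g., from an isolated classifier).
appearance_scores = [
    {"resistor": 0.5, "wire": 0.4, "capacitor": 0.1},   # symbol 0
    {"resistor": 0.1, "wire": 0.6, "capacitor": 0.3},   # symbol 1
]

def compatibility(a, b):
    """Hypothetical context term: components usually meet through a wire."""
    if a == "wire" or b == "wire":
        return 1.0
    return 0.2

def jointly_label(appearance):
    best, best_score = None, float("-inf")
    for labels in product(LABELS, repeat=len(appearance)):
        score = 1.0
        for i, lab in enumerate(labels):
            score *= appearance[i][lab]          # local appearance
        for i in range(len(labels) - 1):
            score *= compatibility(labels[i], labels[i + 1])   # context
        if score > best_score:
            best, best_score = labels, score
    return best

print(jointly_label(appearance_scores))   # ('resistor', 'wire')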

    Dynamic Furniture Modeling Through Assembly Instructions

    We present a technique for parsing widely used furniture assembly instructions and reconstructing the 3D models of furniture components and their dynamic assembly process. Our technique takes as input a multi-step assembly instruction in a vector graphic format and begins by grouping the vector graphic primitives into semantic elements representing individual furniture parts, mechanical connectors (e.g., screws, bolts, and hinges), arrows, visual highlights, and numbers. To reconstruct the dynamic assembly process depicted over multiple steps, our system identifies previously built 3D furniture components when parsing a new step and uses them to address the challenge of occlusion while generating new 3D components incrementally. With a wide range of examples covering a variety of furniture types, we demonstrate the use of our system to animate the 3D furniture assembly process and, beyond that, to support semantic-aware furniture editing as well as the fabrication of personalized furniture.
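
    A minimal sketch of the grouping stage described above: primitives whose bounding boxes lie within a tolerance of each other are merged with union-find. The actual system goes further and classifies the resulting groups into parts, connectors, arrows, highlights, and numbers, which is not attempted here; the coordinates below are invented.

# Group vector-graphic primitives by bounding-box proximity using union-find.
# Bounding boxes are (x0, y0, x1, y1); the tolerance and data are illustrative.

def boxes_close(a, b, tol=5.0):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    gap_x = max(bx0 - ax1, ax0 - bx1, 0.0)
    gap_y = max(by0 - ay1, ay0 - by1, 0.0)
    return max(gap_x, gap_y) <= tol

def group_primitives(bboxes, tol=5.0):
    parent = list(range(len(bboxes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(len(bboxes)):
        for j in range(i + 1, len(bboxes)):
            if boxes_close(bboxes[i], bboxes[j], tol):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(bboxes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Three primitives forming one element near the origin, one element far away.
primitives = [(0, 0, 10, 10), (8, 0, 18, 10), (12, 2, 20, 8), (100, 100, 120, 110)]
print(group_primitives(primitives))   # [[0, 1, 2], [3]]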

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; Date of degree conferral: 2000-03-29; Degree type: Doctorate by coursework; Degree: Doctor of Engineering (博士(工学)); Diploma number: 博工第4717号; Graduate school and department: Graduate School of Engineering, Department of Information Engineering.