10 research outputs found

    Introduction to aspects of object oriented graphics


    User Interface Management Systems: A Survey and a Proposed Design

    The growth of interactive computing has resulted in increasingly complex styles of interaction between user and computer. To facilitate the creation of highly interactive systems, the concept of the User Interface Management System (UIMS) has been developed. Following a definition of the term 'UIMS' and a consideration of the putative advantages of the UIMS approach, a number of User Interface Management Systems are examined. This examination focuses in turn on the run-time execution system, the specification notation and the design environment, with a view to establishing the features which an "ideal" UIMS should possess. On the basis of this examination, a proposal for the design of a new UIMS is presented, and progress is reported towards the implementation of a prototype based on this design.

    Contributions to Pen & Touch Human-Computer Interaction

    Computers are now present everywhere, but their potential is not fully exploited due to a lack of acceptance. This thesis adopts the pen computer paradigm, whose main idea is to replace all input devices by a pen and/or the fingers, on the premise that the rejection stems from unfriendly interaction devices, which should be replaced by something easier for the user. This paradigm, proposed several years ago, has only recently been fully implemented in products such as smartphones. But computers are, in effect, illiterate: they do not understand gestures or handwriting, so a recognition step is required to "translate" the meaning of these interactions into a computer-understandable language. For this input modality to be actually usable, its recognition accuracy must be high enough. To think realistically about the broader deployment of pen computing, it is necessary to improve the accuracy of handwriting and gesture recognizers. This thesis is devoted to studying different approaches to improving the recognition accuracy of those systems. First, we investigate how to take advantage of interaction-derived information to improve the accuracy of the recognizer. In particular, we focus on interactive transcription of text images. Here the system initially proposes an automatic transcript. If necessary, the user can make some corrections, implicitly validating a correct part of the transcript. The system must then take this validated prefix into account to suggest a suitable new hypothesis. Given that in such an application the user is constantly interacting with the system, it makes sense to adapt this interactive application for use on a pen computer. User corrections are provided by means of pen strokes, and it is therefore necessary to introduce a recognizer in charge of decoding this kind of nondeterministic user feedback.
However, this recognizer's performance can be boosted by taking advantage of interaction-derived information, such as the user-validated prefix. The thesis then turns to the study of human movements, in particular hand movements, from a generation point of view, by tapping into the kinematic theory of rapid human movements and the Sigma-Lognormal model. Understanding how the human body generates movements, and particularly the origin of human movement variability, is important in the development of a recognition system. This thesis makes an important contribution to this topic, presenting a new technique for extracting the Sigma-Lognormal model parameters that improves on previous results. Closely related to the previous work, the thesis studies the benefits of using synthetic data for training. The easiest way to train a recognizer is to provide "infinite" data representing all possible variations. In general, the more training data, the smaller the error. But it is usually not possible to increase the size of a training set indefinitely: recruiting participants, collecting data, labeling, and so on can be time-consuming and expensive. One way to overcome this problem is to create and use synthetically generated data that resemble human-produced samples. We study how to create such synthetic data and explore different approaches to using them, both for handwriting and for gesture recognition. The contributions of this thesis have obtained good results, producing several publications in international conferences and journals. Finally, three applications related to the work of this thesis are presented. First, we created Escritorie, a digital-desk prototype based on the pen computer paradigm for transcribing handwritten text images. Second, we developed "Gestures à Go Go", a web application for bootstrapping gestures. Finally, we studied another interactive application under the pen computer paradigm.
In this case, we study how translation reviewing can be done more ergonomically using a pen.

Martín-Albo Simón, D. (2016). Contributions to Pen & Touch Human-Computer Interaction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/68482
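The Sigma-Lognormal model referred to in the abstract describes the pen-tip speed of rapid handwriting movements as a superposition of lognormal pulses, one per elementary stroke. A minimal sketch of that speed profile (the parameter values below are illustrative, not taken from the thesis):

```python
import math

def lognormal_pulse(t, D, t0, mu, sigma):
    """Speed contribution of one elementary stroke: a lognormal of
    amplitude D, activated at time t0, with log-time parameters mu, sigma.
    Zero before the stroke's activation time."""
    if t <= t0:
        return 0.0
    x = t - t0
    return (D / (sigma * x * math.sqrt(2 * math.pi))
            * math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma ** 2)))

def speed_profile(t, strokes):
    """Sigma-Lognormal speed: the sum of all active stroke pulses."""
    return sum(lognormal_pulse(t, *s) for s in strokes)

# Two overlapping illustrative strokes: (D, t0, mu, sigma).
strokes = [(5.0, 0.0, -1.0, 0.3), (3.0, 0.2, -0.8, 0.4)]
profile = [speed_profile(0.01 * i, strokes) for i in range(300)]
```

Because each pulse is a scaled lognormal density, the area under a stroke's pulse equals its amplitude D, which is one reason parameter extraction from observed velocity profiles is feasible.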

    A User Interface Management System Generator

    Much recent research has focused on user interfaces. A major advance in interface design is the User Interface Management System (UIMS), which mediates between the application and the user. Our research has resulted in a conceptual framework for interaction which permits the design and implementation of a UIMS generator system. This system, called the Graphical User Interface Development Environment, or GUIDE, allows an interface designer to specify the user interface for an application interactively. The major issues addressed by this methodology are making interfaces implementable, modifiable and flexible, allowing for user variability, making interfaces consistent, and allowing for application diversity within a user community. The underlying goal of GUIDE is that interface designers should be able to specify interfaces as broadly as is possible with a manually coded system. The specific goals of GUIDE are: the designer need not write any interface code; action routines, which implement the actions or operations of the application system and may have parameters, are provided by the designer or application implementor; the designer is able to specify multiple control paths based on the state of the system and a profile of the user; inclusion of help and prompt messages is as easy as possible; and GUIDE's own interface may be generated with GUIDE. GUIDE goes beyond previous efforts in UIMS design in the full parameter specification provided in the interface for application actions, in the ability to reference application global items in the interface, and in the pervasiveness of conditions throughout the system. A parser is built into GUIDE to parse conditions and provide type-checking.
The GUIDE framework describes interfaces in terms of three components: what the user sees of the application world (user-defined pictures and user-defined picture classes); what the user can do (tasks and tools); and what happens when the user does something (actions and decisions). These three are combined to form contexts, which describe the state of the interface at any time.
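The three-component description above can be pictured as a small data model: a context bundles what is visible with the tasks currently available, each guarded by a condition. This is purely illustrative; the class and field names below are invented and are not GUIDE's actual notation:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    """Something the user can do, bound to the tool that triggers it."""
    name: str
    tool: str                                     # e.g. "menu", "mouse-click"
    action: Callable[[], None]                    # what happens when done
    condition: Callable[[], bool] = lambda: True  # GUIDE-style guard

@dataclass
class Context:
    """State of the interface at one moment: pictures plus tasks."""
    pictures: list = field(default_factory=list)  # what the user sees
    tasks: list = field(default_factory=list)     # what the user can do

    def enabled_tasks(self):
        # Only tasks whose condition currently holds are offered.
        return [t for t in self.tasks if t.condition()]

# Illustrative use: "undo" only becomes available after something happened.
log = []
ctx = Context(
    pictures=["main-canvas"],
    tasks=[Task("save", "menu", lambda: log.append("saved")),
           Task("undo", "menu", lambda: None,
                condition=lambda: len(log) > 0)],
)
```

The condition guards play the role the abstract ascribes to GUIDE's pervasive conditions: control paths depend on the state of the system at the moment the user acts.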

    Composing graphical user interfaces in a purely functional language

    This thesis is about building interactive graphical user interfaces in a compositional manner. Graphical user interface applications hold out the promise of providing users with an interactive, graphical medium through which they can carry out tasks more effectively and conveniently; the application aids the user in solving some task. Conceptually, the user is in charge of the graphical medium, controlling the order and the rate at which individual actions are performed. This user-centred nature of graphical user interfaces has considerable ramifications for how software is structured. Since the application now services the user rather than the other way around, it has to be capable of responding to the user's actions whenever, and in whatever order, they might occur. This transfer of overall control towards the user places a heavy burden on programming systems, a burden that many systems do not bear well, because the application must be structured so that it is responsive to whatever action the user may perform at any time. The main contribution of this thesis is a compositional approach to constructing graphical user interface applications in a purely functional programming language. The thesis is concerned with the software techniques used to program graphical user interface applications, and not directly with their design. A starting point for the work presented here was to examine whether an approach based on functional programming could improve how graphical user interfaces are built. Functional programming languages, and Haskell in particular, contain a number of distinctive features, such as higher-order functions, polymorphic type systems, lazy evaluation, and systematic overloading, that together pack quite a punch, at least according to proponents of these languages. A secondary contribution of this thesis is a compositional user interface framework called Haggis, which makes good use of current functional programming techniques.
The thesis evaluates the properties of this framework by comparing it to existing systems.
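The inversion of control described in the abstract, where the application reacts to user actions in whatever order they occur, is what a compositional framework must tame: an interface is assembled from independent parts whose reactions are merged. A rough sketch of that idea follows (Haggis itself is a Haskell framework; nothing below is its API, and the function names are invented):

```python
def button(label, on_click):
    """A 'component' here is just a mapping from event names to reactions."""
    return {f"click:{label}": on_click}

def compose(*components):
    """Composing components merges their event handlers into one interface."""
    merged = {}
    for c in components:
        merged.update(c)
    return merged

def run(ui, events):
    """User-centred control: the user chooses the order and rate of events;
    the interface merely responds to whichever action occurs."""
    for e in events:
        if e in ui:
            ui[e]()

out = []
ui = compose(button("ok", lambda: out.append("ok")),
             button("quit", lambda: out.append("quit")))
run(ui, ["click:quit", "click:ok", "click:ok"])
```

The point of the sketch is structural: neither button knows about the other, yet their composition handles any interleaving of user actions.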

    Graphical interaction management


    Surface interaction : separating direct manipulation interfaces from their applications.

    To promote both quality and economy in the production of applications and their interactive interfaces, it is desirable to delay their mutual binding. The later the binding, the more separable the interface from its application. An ideally separated interface can factor tasks from a range of applications, can provide a level of independence from hardware I/O devices, and can be responsive to end-user requirements. Current interface systems base their separation on two different abstractions. In linguistic architectures, for example User Interface Management Systems in the Seeheim model, the dialogue or syntax of interaction is abstracted in a separate notation. In agent architectures like Toolkits, interactive devices, at various levels of complexity, are abstracted into a class or call hierarchy. This Thesis identifies an essential feature of the popular notion of direct manipulation: directness requires that the same object be used both for output and input. In practice this compromises the separation of both dialogue and devices. In addition, dialogue cannot usefully be abstracted from its application functionality, while device abstraction reduces the designer's expressive control by binding presentation style to application semantics. This Thesis proposes an alternative separation, based on the abstraction of the medium of interaction, together with a dedicated user agent which allows direct manipulation of the medium. This interactive medium is called the surface. The Thesis proposes two new models for the surface, the first of which has been implemented as Presenter, the second of which is an ideal design permitting document quality interfaces. 
The major contribution of the Thesis is a precise specification of an architecture (UMA), whereby a separated surface can preserve directness without binding in application semantics, while the application can express its semantics on the surface without needing to manage all the details of interaction. Thus UMA partitions interaction into Surface Interaction and deep interaction. Surface Interaction factors out a large portion of the task of maintaining a highly manipulable interface, and brings the roles of user and application designer closer together.
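The UMA partition can be caricatured as two layers meeting at a narrow boundary: a surface that owns the interactive medium and all manipulation detail, and an application that only expresses semantics on it. The sketch below is an invented illustration of that separation, not the Thesis's actual architecture; the key property shown is that the same surface object serves for both output and input (directness):

```python
class Surface:
    """Owns the interactive medium: displayed objects double as input targets."""
    def __init__(self):
        self.objects = {}  # object id -> displayed value

    def show(self, obj_id, value):
        # The application expresses its semantics on the surface...
        self.objects[obj_id] = value

    def manipulate(self, obj_id, edit):
        # ...while direct manipulation happens entirely within the surface,
        # on the very object used for output, with no application involvement.
        self.objects[obj_id] = edit(self.objects[obj_id])
        return self.objects[obj_id]

class Application:
    """Deep interaction: semantics only, no knowledge of input devices."""
    def __init__(self, surface):
        self.surface = surface

    def publish_total(self, items):
        self.surface.show("total", sum(items))

surface = Surface()
app = Application(surface)
app.publish_total([1, 2, 3])
# The user doubles the displayed value by manipulating the surface object
# directly; the application never sees the interaction details.
result = surface.manipulate("total", lambda v: v * 2)
```

In this toy, the surface preserves directness (output objects are manipulable) while the application stays device-free, mirroring the separation UMA specifies.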

    The device model of interaction
