
    SketchyDynamics: A Library for the Development of Physics Simulation Applications with Sketch-Based Interfaces

    Sketch-based interfaces provide a powerful, natural and intuitive way for users to interact with an application. By combining a sketch-based interface with a physically simulated environment, an application offers the means for users to rapidly sketch a set of objects, as if they were doing so on a piece of paper, and see how these objects behave in a simulation. In this paper we present SketchyDynamics, a library that intends to facilitate the creation of applications by rapidly providing them with a sketch-based interface and physics simulation capabilities. SketchyDynamics was designed to be versatile and customizable but also simple. In fact, a simple application where the user draws objects and they are immediately simulated, colliding with each other and reacting to the specified physical forces, can be created with only 3 lines of code. In order to validate SketchyDynamics' design choices, we also present some details of the usability evaluation that was conducted with a proof-of-concept prototype.
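
    A minimal sketch of the underlying idea, using the pymunk physics engine as a stand-in rather than SketchyDynamics' own API (which is not reproduced here); the stroke coordinates, gravity and mass values are illustrative assumptions:

```python
import pymunk

# Minimal sketch-to-simulation loop (pymunk stand-in, not SketchyDynamics itself):
# a drawn polygon becomes a dynamic rigid body that falls onto a static ground.
space = pymunk.Space()
space.gravity = (0.0, -900.0)
space.add(pymunk.Segment(space.static_body, (-500, 0), (500, 0), 5))  # ground line

def add_sketched_polygon(points, mass=1.0, drop_height=300.0):
    """Turn a closed stroke (list of (x, y) vertices) into a dynamic body."""
    body = pymunk.Body(mass, pymunk.moment_for_poly(mass, points))
    body.position = (0.0, drop_height)
    space.add(body, pymunk.Poly(body, points))
    return body

# A roughly triangular stroke falls and collides with the ground.
triangle = add_sketched_polygon([(-30, 0), (30, 0), (0, 40)])
for _ in range(240):
    space.step(1.0 / 60.0)
print("height after 4 simulated seconds:", round(triangle.position.y, 1))
```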

    Pintarolas, a tangible sketch application

    This paper presents Pintarolas, a simple sketch application that uses tangible interfaces as its human-computer interaction modality. Totally sensor-less and cable-less interfaces (ordinary board markers with fiducial markers attached) provide the means to support basic sketching tasks, such as drawing lines and sketching 2D primitive shapes (circle, triangle and square). The system requires an ordinary video camera linked to a PC and uses AR Toolkit to handle the tangible interfaces. OpenGL is used for the graphical output, and the CALI library, normally used for calligraphic user interface development, is adopted in our case for 2D primitive shape recognition. A simple usability test was developed to assess the feasibility of this novel user interface in simple sketching tasks, showing that the users found the concept interesting and the tangible interfaces easy to operate.
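
    A minimal sketch of such a marker-tracked drawing loop, using OpenCV's ArUco module as a stand-in for AR Toolkit; the marker id and default camera index are assumptions, and opencv-contrib-python 4.7 or later is required:

```python
import cv2

# Marker dictionary and detector; assumes opencv-contrib-python >= 4.7.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

PEN_MARKER_ID = 0   # hypothetical id of the fiducial attached to the board marker
stroke = []         # accumulated 2-D points of the current drawing

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            if marker_id == PEN_MARKER_ID:
                # Use the marker centre as the "pen tip" position on the image plane.
                cx, cy = marker_corners[0].mean(axis=0)
                stroke.append((int(cx), int(cy)))
    for a, b in zip(stroke, stroke[1:]):
        cv2.line(frame, a, b, (0, 0, 255), 2)
    cv2.imshow("marker-driven sketching", frame)
    if cv2.waitKey(1) == 27:   # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```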

    Mobile Application for Colombian Currency Recognition with Audio Feedback for Visually Impaired People

    Context: According to the census conducted by the National Department of Statistics (DANE) in 2018, 7.1% of the Colombian population has a visual disability. These people face conditions of limited autonomy, such as the handling of money. In this context, there is a need to create tools that enable the inclusion of visually impaired people in the financial sector, allowing them to make payments and withdrawals in a safe and reliable manner. Method: This work describes the development of a mobile application called CopReader. This application enables the recognition of Colombian coins and banknotes without an Internet connection, by means of convolutional neural network models. CopReader was developed to be used by visually impaired people. It takes a video or photographs, analyzes the input data, estimates the currency value, and uses audio feedback to communicate the result. Results: To validate the functionality of CopReader, integration tests were performed. In addition, precision and recall tests were conducted considering the YoloV5 and MobileNet architectures, obtaining 95% and 93% for the former model and 99% for the latter. Then, field tests were performed with visually impaired people, obtaining accuracy values of 96%. 90% of the users were satisfied with the application's functionality. Conclusions: CopReader is a useful tool for recognizing Colombian currency, helping visually impaired people gain autonomy in handling money.
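
    A minimal sketch of an offline recognize-and-speak step of this kind, assuming a MobileNetV2 classifier, hypothetical denomination labels, a hypothetical weights file (copreader_mobilenet.pt), and pyttsx3 for the spoken output; this is an illustration, not necessarily CopReader's actual stack:

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models import mobilenet_v2
import pyttsx3

# Hypothetical denomination labels; the real CopReader class list is not given in the abstract.
CLASSES = ["50", "100", "200", "500", "1000", "2000", "5000",
           "10000", "20000", "50000", "100000"]

preprocess = transforms.Compose([transforms.Resize(256),
                                 transforms.CenterCrop(224),
                                 transforms.ToTensor()])

def load_model(weights_path="copreader_mobilenet.pt"):
    """Load a MobileNetV2 classifier from locally stored weights (no Internet needed)."""
    model = mobilenet_v2(num_classes=len(CLASSES))
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def recognize_and_speak(model, image_path):
    """Classify one photograph and announce the estimated denomination aloud."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    value = CLASSES[int(probs.argmax())]
    engine = pyttsx3.init()            # offline text-to-speech for the audio feedback
    engine.say(f"{value} pesos")
    engine.runAndWait()
    return value
```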

    Integrating Multiple Sketch Recognition Methods to Improve Accuracy and Speed

    Sketch recognition is the computer understanding of hand-drawn diagrams. Recognizing sketches instantaneously is necessary to build responsive interfaces with real-time feedback. Various techniques exist for quickly recognizing sketches drawn from ten or twenty classes. However, for much larger datasets of sketches spanning a large number of classes, these existing techniques can take an extended period of time to accurately classify an incoming sketch and require significant computational overhead. Thus, to make classification of large datasets feasible, we propose using multiple stages of recognition. In the initial stage, gesture-based feature values are calculated and the trained model is used to classify the incoming sketch. Sketches whose classification confidence falls below a threshold value go through a second stage of geometric recognition techniques. In this second, geometric stage, the sketch is segmented and sent to shape-specific recognizers. The sketches are matched against predefined shape descriptions, and confidence values are calculated. The system outputs a list of classes that the sketch could be classified as, along with the accuracy and precision for each sketch. This process both significantly reduces the time taken to classify such large datasets of sketches and increases the accuracy and precision of the recognition.
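
    A compact sketch of this two-stage scheme, not the authors' implementation: a few Rubine-style stroke features feed a scikit-learn classifier in the first stage, and a toy circle-fit recognizer stands in for the shape-specific geometric recognizers of the second stage; the feature choice, threshold and recognizer set are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # any probabilistic classifier works

def gesture_features(points):
    """A few Rubine-style features from an (N, 2) array of stroke points."""
    p = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(p, axis=0), axis=1)
    length = seg.sum()
    bbox = p.max(axis=0) - p.min(axis=0)
    closure = np.linalg.norm(p[-1] - p[0]) / (length + 1e-9)   # near 0 for closed shapes
    return np.array([length, np.linalg.norm(bbox), closure, bbox[0] / (bbox[1] + 1e-9)])

def circle_confidence(points):
    """Toy geometric recognizer: how well the points fit a circle about their centroid."""
    r = np.linalg.norm(np.asarray(points, dtype=float) - np.mean(points, axis=0), axis=1)
    return float(1.0 - np.clip(r.std() / (r.mean() + 1e-9), 0.0, 1.0))

GEOMETRIC_RECOGNIZERS = {"circle": circle_confidence}   # more shapes would register here

def classify(points, gesture_model, threshold=0.8):
    # Stage 1: cheap gesture-based features and a trained classifier.
    probs = gesture_model.predict_proba([gesture_features(points)])[0]
    best = int(probs.argmax())
    if probs[best] >= threshold:
        return [(gesture_model.classes_[best], float(probs[best]))]
    # Stage 2: fall back to shape-specific geometric recognizers.
    scores = [(name, fn(points)) for name, fn in GEOMETRIC_RECOGNIZERS.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```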

    A modified EM algorithm for hand gesture segmentation in RGB-D data

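    As context for the technique named in the title, a minimal sketch of the standard EM baseline for RGB-D segmentation follows: a Gaussian mixture over per-pixel colour and depth features fitted with scikit-learn. This is the generic starting point, not the paper's modified EM variant, and the assumption that the hand is the component nearest the camera is ours:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_hand(rgb, depth, n_components=2):
    """Standard EM baseline: fit a Gaussian mixture to per-pixel colour+depth features.

    rgb: (H, W, 3) uint8 image; depth: (H, W) depth map in metres.
    The hand is assumed to be the mixture component nearest the camera.
    """
    h, w = depth.shape
    feats = np.column_stack([rgb.reshape(-1, 3).astype(float) / 255.0,
                             depth.reshape(-1, 1)])
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          max_iter=100, random_state=0)
    labels = gmm.fit_predict(feats)                 # EM runs inside fit_predict
    hand = int(np.argmin(gmm.means_[:, 3]))         # component with smallest mean depth
    return (labels == hand).reshape(h, w)           # boolean hand mask
```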

    Towards Naturalistic Interfaces of Virtual Reality Systems

    Interaction plays a key role in achieving a realistic experience in virtual reality (VR). Its realization depends on interpreting the intents of human motions to give inputs to VR systems. Thus, understanding human motion from the computational perspective is essential to the design of naturalistic interfaces for VR. This dissertation studied three types of human motion in the context of VR: locomotion (walking), head motion and hand motion.

    For locomotion, the dissertation presented a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments.

    In terms of head motion, the dissertation presented a head gesture interface for interaction in VR that recognizes real-time head gestures on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments were conducted to evaluate the proposed approach: the first assessed its offline classification performance, while the second estimated the latency of the algorithm in recognizing head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on teleoperation of a quadcopter using head motion tracked by a head-mounted display. As part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented.

    For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures to implement a hand gesture interface for VR, based on a static head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance. A user study was conducted to compare the performance and usability of the head gesture interface, the hand gesture interface and a conventional gamepad interface for answering Yes/No questions in VR.

    Overall, the dissertation has two main contributions towards improving the naturalism of interaction in VR systems. Firstly, the interaction techniques presented in the dissertation can be directly integrated into existing VR systems, offering more choices for interaction to end users of VR technology. Secondly, the results of the user studies of the presented VR interfaces also serve as guidelines for VR researchers and engineers designing future VR systems.
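
    As a hedged illustration of HMM-based gesture classification (the common baseline, not the dissertation's cascaded variant), the sketch below trains one Gaussian HMM per gesture class with hmmlearn and classifies a new head-motion sequence by log-likelihood; the feature layout (e.g. per-frame yaw/pitch/roll samples) and state count are assumptions:

```python
import numpy as np
from hmmlearn import hmm

def train_gesture_models(sequences_by_label, n_states=4):
    """Train one Gaussian HMM per gesture class.

    sequences_by_label maps a gesture label (e.g. "nod", "shake") to a list of
    (T_i, D) arrays of head-orientation samples; the feature layout is assumed.
    """
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)                   # concatenated training sequences
        lengths = [len(s) for s in seqs]      # per-sequence lengths for hmmlearn
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify_gesture(models, sequence):
    """Return the class whose HMM assigns the sequence the highest log-likelihood."""
    scores = {label: m.score(np.asarray(sequence)) for label, m in models.items()}
    return max(scores, key=scores.get)
```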

    SketchyDynamics: Supporting the Production of Systems Based on Sketch-Based Interfaces for Rigid Body Dynamics Simulation

    Master's in Informatics Engineering - Specialization in Graphics Systems and Multimedia. The interaction paradigm provided by sketch-based interfaces represents a natural method of human-computer interaction. This naturalness is largely due to the similarity that this interaction style has with the use of a pencil on paper, an intuitive and common task. Despite that, the implementation of these interfaces in computer applications is still unusual, in favor of the WIMP (Windows, Icons, Menus and Pointers) interaction style. Nevertheless, we can predict a future where sketch-based interfaces will be increasingly widespread, based on the recent emergence not only of applications that adopt this interaction style, but also of equipment that encourages their use. With this premise in mind, it is safe to assert the need for investment in this area, in order to streamline and accelerate the adoption of the sketch-based interaction style and thus make human-computer interaction a progressively more natural process. The work described in this document aims to study the use of sketch-based interfaces in the creation and control of simulated environments. More specifically, we present the SketchyDynamics system, which incorporates a rigid body simulation module in symbiosis with a sketch-based interface providing the actions necessary to manipulate the simulation. Using this system, we hope to ease the production of applications that take advantage of these features, without the need to implement them from scratch. An evaluation of various sketch recognition techniques, performed in order to find the one that best fits the developed system, is also described. As part of this evaluation, we also present some details on the implementation of these techniques, as well as procedures that allow us to maximize their efficiency. Furthermore, we discuss the results of a usability evaluation that was conducted with the purpose of validating the SketchyDynamics system from the user's point of view. The results of this evaluation suggest that, despite the existence of room for further improvements, the system was successful and is ready for final users.