6 research outputs found

    Camera-based virtual environment interaction on mobile devices

    Mobile virtual environments, with real-time 3D and 2D graphics, are now possible on smartphones and other camera-enabled devices. Using computer vision, the camera sensor can be treated as an input modality by analyzing the incoming live video. We present our tracking algorithm and several mobile virtual-environment and gaming prototypes, including a 3D first-person shooter, a 2D puzzle game, and a simple action game. Camera-based interaction provides a user experience that is not possible through traditional means and maximizes the use of the limited display size. © Springer-Verlag Berlin Heidelberg 2006
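    The abstract does not specify the tracking algorithm itself. As an illustration of treating camera motion as an input signal (a minimal sketch, not the authors' method), the following recovers global frame-to-frame motion by exhaustive block matching over small integer shifts:

    ```python
    import numpy as np

    def estimate_global_motion(prev, curr, search=4):
        """Estimate (dx, dy) camera motion between two grayscale frames
        by exhaustive search over small integer shifts (block matching).
        Illustrative only; the paper's actual tracker is not specified."""
        h, w = prev.shape
        m = search  # margin so every shifted window stays in bounds
        ref = curr[m:h - m, m:w - m].astype(np.float64)
        best, best_err = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                # Window of the previous frame shifted by the candidate motion
                cand = prev[m - dy:h - m - dy, m - dx:w - m - dx].astype(np.float64)
                err = np.mean(np.abs(ref - cand))  # mean absolute difference
                if err < best_err:
                    best_err, best = err, (dx, dy)
        return best

    # Synthetic check: shift a random frame by (dx=2, dy=1) and recover it.
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
    shifted = np.roll(np.roll(frame, 1, axis=0), 2, axis=1)
    print(estimate_global_motion(frame, shifted))  # → (2, 1)
    ```

    The recovered (dx, dy) can then drive panning or aiming in the virtual environment, which is the sense in which the camera becomes an input modality.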

    A face tracking algorithm for user interaction in mobile devices

    A new face tracking algorithm, and a human-computer interaction technique based on it, are proposed for use on mobile devices. The face tracking algorithm accounts for the limitations of the mobile use case: constrained computational resources and varying environmental conditions. The solution is based on color comparisons and works on images gathered from the front camera of a device. The face tracking system outputs a 2D face position that can be used for controlling different applications. Two such applications are also presented in this work: the first uses the face position to determine the viewpoint, and the second enables an intuitive way of browsing large images. © 2009 IEEE
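    The abstract describes the tracker only as color-comparison based with a 2D position as output. A minimal sketch under that description (the reference color, per-channel tolerance, and centroid rule are assumptions, not the paper's actual classifier):

    ```python
    import numpy as np

    def track_face(frame_rgb, ref_color, tol=40):
        """Return the 2D centroid (x, y) of pixels whose color lies within
        `tol` per channel of a reference skin color, or None if no pixel
        matches. A sketch of color-comparison tracking; the paper's actual
        classifier and reference color are not given in the abstract."""
        diff = np.abs(frame_rgb.astype(np.int16) - np.asarray(ref_color, dtype=np.int16))
        mask = np.all(diff <= tol, axis=-1)
        if not mask.any():
            return None  # face lost this frame
        ys, xs = np.nonzero(mask)
        return float(xs.mean()), float(ys.mean())

    # Synthetic frame: dark background with a skin-colored 20x20 patch.
    frame = np.zeros((120, 160, 3), dtype=np.uint8)
    frame[30:50, 50:70] = (224, 172, 105)  # assumed skin tone, illustrative
    print(track_face(frame, ref_color=(224, 172, 105)))  # → (59.5, 39.5)
    ```

    Per-channel thresholding and a centroid are cheap enough for the constrained computational budget the abstract mentions, which is presumably why a color-based approach was chosen over heavier detectors.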

    A Vision-Based Approach for Controlling User Interfaces of Mobile Devices

    No full text

    M-Government: development of public administration services through mobile devices

    This research analyzes the development of m-Government, understood as the delivery of public administration services through mobile devices. M-Government falls within the broader integration of information and communication technologies into public administration, known as e-Government or electronic administration. The work analyzes the critical factors for developing e-services through a technology acceptance model adapted to the realities of public administration and to the peculiarities of mobile devices. Its analysis draws on technical theoretical approaches such as human-computer interaction, information architecture, usability, and accessibility; quality-oriented approaches, especially service quality and service-quality evaluation models such as SERVQUAL; and technology acceptance approaches. The main conclusion is that citizens' intention to use mobile devices to access government services depends primarily on perceived usefulness. These services should aim to improve personal productivity above other dimensions such as ease of use, 24/7 access, or technological familiarity. M-Government services should make it easy for citizens to fulfill their obligations, saving time in their dealings with the administration

    Perceptually Optimized Visualization on Autostereoscopic 3D Displays

    The family of displays which aim to visualize a 3D scene with realistic depth is known as "3D displays". Due to technical limitations and design decisions, such displays create visible distortions, which are interpreted by the human visual system as artefacts. In the absence of a visual reference (e.g. when the original scene is not available for comparison), one can improve the perceived quality of the representation by making the distortions less visible. This thesis proposes a number of signal processing techniques for decreasing the visibility of artefacts on 3D displays. The visual perception of depth is discussed, and the properties (depth cues) of a scene which the brain uses for assessing an image in 3D are identified. Following the physiology of vision, a taxonomy of 3D artefacts is proposed. The taxonomy classifies the artefacts based on their origin and on the way they are interpreted by the human visual system. The principles of operation of the most popular types of 3D displays are explained. Based on these principles, 3D displays are modelled as a signal processing channel. The model is used to explain the process of introducing distortions, and it allows one to identify which optical properties of a display are most relevant to the creation of artefacts. A set of optical properties for dual-view and multiview 3D displays is identified, and a methodology for measuring them is introduced. The measurement methodology allows one to derive the angular visibility and crosstalk of each display element without the need for precision measurement equipment. Based on the measurements, a methodology for creating a quality profile of 3D displays is proposed. The quality profile can be either simulated using the angular brightness function or measured directly from a series of photographs. A comparative study is presented, reporting measurement results on the visual quality and sweet-spot positions of eleven 3D displays of different types. 
Knowing the sweet-spot position and the quality profile allows for easy comparison between 3D displays. The shape and size of the passband allow the depth and textures of 3D content to be optimized for a given 3D display. Based on knowledge of 3D artefact visibility and an understanding of the distortions introduced by 3D displays, a number of signal processing techniques for artefact mitigation are created. A methodology for creating anti-aliasing filters for 3D displays is proposed. For multiview displays, the methodology is extended towards so-called passband optimization, which addresses Moiré, fixed-pattern-noise, and ghosting artefacts characteristic of such displays. Additionally, the design of tuneable anti-aliasing filters is presented, along with a framework that allows the user to select a so-called 3D sharpness parameter according to his or her preferences. Finally, a set of real-time algorithms for viewpoint-based optimization is presented. These algorithms require active user-tracking, implemented as a combination of face and eye tracking. Once the observer position is known, the image on a stereoscopic display is optimized for the derived observation angle and distance. For multiview displays, the combination of precise light redirection and less precise face tracking is used for extending the head parallax. For some user-tracking algorithms, implementation details are given regarding execution on a mobile device or on a desktop computer with a graphics accelerator.
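    The thesis's exact mitigation algorithms are not reproduced in this abstract. As a sketch of the standard linear crosstalk model that ghosting compensation typically builds on (the mixing ratio c and the per-pixel matrix inversion are textbook assumptions, not necessarily the thesis's method):

    ```python
    import numpy as np

    def compensate_crosstalk(left, right, c=0.05):
        """Pre-compensate a stereo pair for linear crosstalk of ratio c.
        The display is modelled as mixing (1-c) of the intended view with
        c of the other view, so we invert that 2x2 system per pixel and
        clip to the displayable range [0, 1]. A textbook linear model,
        illustrative only."""
        left = np.asarray(left, dtype=np.float64)
        right = np.asarray(right, dtype=np.float64)
        det = 1.0 - 2.0 * c  # determinant of the mixing matrix
        l_out = ((1 - c) * left - c * right) / det
        r_out = ((1 - c) * right - c * left) / det
        # Values outside [0, 1] cannot be displayed; clipping leaves
        # residual ghosting only in very dark or very bright regions.
        return np.clip(l_out, 0, 1), np.clip(r_out, 0, 1)

    # Check: after the display re-mixes the compensated pair, the viewer
    # sees the original left image (when no clipping occurred).
    L = np.array([0.2, 0.5, 0.8])
    R = np.array([0.6, 0.5, 0.3])
    Lc, Rc = compensate_crosstalk(L, R, c=0.05)
    seen_L = (1 - 0.05) * Lc + 0.05 * Rc
    print(np.allclose(seen_L, L))  # → True
    ```

    The residual error after clipping is one reason the thesis's measured crosstalk profiles matter: knowing c per viewing angle determines how much ghosting can actually be cancelled.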