    Flexible Point-Based Rendering on Mobile Devices

    Point-based rendering is a compact and efficient means of displaying complex geometry. For mobile devices, which typically have limited CPU or floating-point speed, limited memory, no graphics hardware, and a small display, a hierarchical packed point-based representation of objects is particularly well adapted. We introduce -grids, a generalization of previous octree-based representations, and analyse their memory and rendering efficiency. By storing intermediate node attributes, our structure allows flexible rendering, permitting the efficient local image refinement required, for example, when zooming into very complex scenes. We also introduce a novel and efficient one-pass shadow mapping algorithm using this data structure. We show an implementation of our method on a PDA, which can render objects sampled by 1.3 million points at 2.1 frames per second; the model was originally made up of 4.7 million polygons.
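
    The coarse-to-fine idea behind such a hierarchy can be sketched briefly. This is a minimal illustration, not the paper's implementation: the Node class, the render function, and the project/splat callbacks are hypothetical names, and a real packed representation would use a compact bit-coded layout rather than Python objects. Because intermediate nodes store averaged attributes, traversal can stop at any depth, which is what makes local refinement cheap when zooming.

```python
# Minimal sketch of coarse-to-fine point rendering over a hierarchy whose
# intermediate nodes carry attributes. All names here are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    center: tuple                      # node center in object space
    size: float                        # edge length of the node's cell
    color: tuple                       # attribute averaged over the subtree
    children: List["Node"] = field(default_factory=list)

def render(node, project, splat, pixel_threshold=1.0):
    """Descend only while the projected cell is larger than a pixel.

    Stopping early is valid because every level stores attributes, so
    zooming in refines only the subtree under the viewport, the local
    image refinement the abstract refers to.
    """
    footprint = project(node.center, node.size)    # projected size in pixels
    if footprint <= pixel_threshold or not node.children:
        splat(node.center, node.color, footprint)  # draw a single point splat
        return
    for child in node.children:
        render(child, project, splat, pixel_threshold)
```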

    A toolkit of mechanism and context independent widgets

    Most human-computer interfaces are designed to run on a static platform (e.g. a workstation with a monitor) in a static environment (e.g. an office). However, with mobile devices becoming ubiquitous and capable of running applications similar to those found on static devices, it is no longer valid to design static interfaces. This paper describes a user-interface architecture which allows interactors to be flexible about the way they are presented. This flexibility is defined by the different input and output mechanisms used. An interactor may use different mechanisms depending upon their suitability in the current context, user preference, and the resources available for presentation using that mechanism.
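
    The selection logic can be illustrated with a short sketch. The Mechanism and Interactor classes are hypothetical stand-ins for the toolkit's abstractions, and the context is assumed to be a simple capability dictionary: an interactor filters its mechanisms by suitability in the current context and honours a user preference among the usable ones.

```python
# Hedged sketch of an interactor decoupled from any single presentation
# mechanism. Class names and the context format are assumptions, not the
# toolkit's actual API.

class Mechanism:
    def __init__(self, name, needs_display, needs_audio):
        self.name = name
        self.needs_display = needs_display
        self.needs_audio = needs_audio

    def suits(self, context):
        # Usable only if the device offers everything this mechanism needs.
        return ((not self.needs_display or context["has_display"]) and
                (not self.needs_audio or context["has_audio"]))

class Interactor:
    """A widget that picks its input/output mechanism at presentation time."""
    def __init__(self, mechanisms):
        self.mechanisms = mechanisms

    def present(self, context, preference=None):
        usable = [m for m in self.mechanisms if m.suits(context)]
        if preference:  # honour the user's preferred mechanism when usable
            usable.sort(key=lambda m: m.name != preference)
        return usable[0] if usable else None

# A button that can render on screen or speak its label: on an audio-only
# device the speech mechanism is chosen automatically.
button = Interactor([Mechanism("graphical", True, False),
                     Mechanism("speech", False, True)])
chosen = button.present({"has_display": False, "has_audio": True})
print(chosen.name if chosen else "no suitable mechanism")  # -> speech
```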

    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. Careful adversarial training then turns a synthetic rendering of these parameters into a videorealistic animation. Our problem is challenging because the human visual system is sensitive to the smallest facial irregularities that could occur in the final results, and this sensitivity is even stronger for video. Our solution is trained in a pre-processing stage, in a supervised manner and without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetric expressions. It works under varying illumination, backgrounds, and movements, handles people of different ethnicities, and can operate in real time.
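
    The two-stage pipeline the abstract describes can be sketched as follows, assuming a PyTorch setting. An encoder maps the egocentric frame to a low-dimensional vector of expression parameters, and a generator translates a synthetic rendering of those parameters into a videorealistic frame. Layer sizes and module names below are placeholders rather than EgoFace's actual architecture, and the adversarial objective (a discriminator and its losses) is omitted.

```python
# Rough sketch of an encoder-then-generator face reenactment pipeline.
# Everything below is a placeholder architecture, not EgoFace itself.
import torch
import torch.nn as nn

class ExpressionEncoder(nn.Module):
    """Egocentric RGB frame -> low-dimensional expression parameters."""
    def __init__(self, n_params=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_params)

    def forward(self, frame):
        return self.head(self.features(frame))

class FrameGenerator(nn.Module):
    """Synthetic rendering of the parameters -> videorealistic frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, rendering):
        return self.net(rendering)

# Inference: encode the egocentric view, render the parameters with some
# (hypothetical) parametric face model, then translate to a real frame.
encoder, generator = ExpressionEncoder(), FrameGenerator()
frame = torch.rand(1, 3, 128, 128)        # stand-in egocentric input
params = encoder(frame)                   # expression parameters
rendering = torch.rand(1, 3, 128, 128)    # stand-in synthetic rendering
output = generator(rendering)             # videorealistic output frame
```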