
    Wide-angle display developments by computer graphics

    Computer graphics can now expand its new subset, wide-angle projection, into a generic capability as significant as computer graphics itself. Some prior work in computer graphics is presented which positions an attractive further subset of wide-angle projection, called hemispheric projection, as a major communication medium. Hemispheric film systems have long existed, and such computer graphics systems are already in use in simulators. This is the leading edge of a capability which should ultimately be as ubiquitous as CRTs (cathode-ray tubes). These assertions rest not on degrees in science, nor only on a degree in graphic design, but on a history of computer graphics innovations that laid groundwork by demonstration. The author believes that it is timely to look at several development strategies, since hemispheric projection is now at a point comparable to the early stages of computer graphics, requiring similar patterns of development again.

    Utilizing ubiquitous commodity graphics hardware for scientific computing

    Current GPUs have many times the memory bandwidth and computing power of CPUs, and the gap is widening because GPUs are evolving faster than CPUs. This makes it attractive to use the GPU for general-purpose computing (GPGPU). I begin by looking at the architecture of the GPU and at different techniques for programming it, including some of the available high-level languages. I have implemented the Mandelbrot computation on a cluster of GPUs (the HPDC display wall) and compared it against two different CPU implementations on the cluster. I have also implemented the Mandelbrot computation in both Cg and Brook and compared the performance of the two languages. My experimental study shows that the GPU implementation of the Mandelbrot application is up to twice as fast as the load-balanced CPU implementation on the cluster of 28 computers, and up to 6 times faster on one computer.
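
    The Mandelbrot set is a natural GPGPU benchmark because each pixel's escape-time iteration is independent of every other pixel's. The sketch below is not the thesis's Cg or Brook code; it is a minimal Python illustration of the per-pixel computation, with the function and parameter names (mandelbrot_escape, mandelbrot_image, max_iter) chosen here for illustration. A GPU version would launch one thread per pixel running the same loop.

    import numpy as np

    def mandelbrot_escape(c: complex, max_iter: int = 256) -> int:
        # Escape-time iteration for a single point c. Each pixel is independent,
        # which is what lets the computation map onto data-parallel GPU kernels
        # (one thread per pixel in a Cg, Brook or CUDA version).
        z = 0j
        for i in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:
                return i
        return max_iter

    def mandelbrot_image(width, height, re_range=(-2.0, 1.0), im_range=(-1.5, 1.5), max_iter=256):
        # Reference CPU implementation: evaluates the same per-pixel kernel
        # serially; a GPU or cluster version distributes these pixels instead.
        img = np.empty((height, width), dtype=np.int32)
        for y in range(height):
            for x in range(width):
                c = complex(re_range[0] + (re_range[1] - re_range[0]) * x / (width - 1),
                            im_range[0] + (im_range[1] - im_range[0]) * y / (height - 1))
                img[y, x] = mandelbrot_escape(c, max_iter)
        return img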

    Ubiquitous Scalable Graphics: An End-to-End Framework using Wavelets

    Advances in ubiquitous displays and wireless communications have fueled the emergence of exciting mobile graphics applications, including 3D virtual product catalogs, 3D maps, security monitoring systems and mobile games. Current trends that use cameras to capture geometry, material reflectance and other graphics elements mean that very high resolution inputs are available to render extremely photorealistic scenes. However, captured graphics content can be many gigabytes in size and must be simplified before it can be used on small mobile devices, which have limited resources such as memory, screen size and battery energy. Scaling and converting graphics content to a suitable rendering format involves running several software tools, and selecting the best resolution for a target mobile device is often done by trial and error, all of which takes time. Wireless errors can also affect transmitted content, and aggressive compression is needed for low-bandwidth wireless networks. Most rendering algorithms are currently optimized for visual realism and speed, but are not resource or energy efficient on mobile devices. This dissertation focuses on improving rendering performance by reducing the impact of these problems with UbiWave, an end-to-end framework enabling real-time mobile access to high-resolution graphics using wavelets. The framework tackles the simplification, transmission, and resource-efficient rendering of wavelet-based graphics content on mobile devices by utilizing 1) a Perceptual Error Metric (PoI) for automatically computing the best resolution of graphics content for a given mobile display, eliminating guesswork and saving resources, 2) Unequal Error Protection (UEP) to improve resilience to wireless errors, 3) an Energy-efficient Adaptive Real-time Rendering (EARR) heuristic to balance energy consumption, rendering speed and image quality, and 4) an Energy-efficient Streaming Technique. The results facilitate a new class of mobile graphics applications which can gracefully adapt the lowest acceptable rendering resolution to the wireless network conditions and to the availability of resources and battery energy on the mobile device.
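
    As a rough illustration of the resolution-selection idea only (not the dissertation's actual PoI metric or UbiWave code), the following Python sketch uses the PyWavelets library to find the coarsest wavelet approximation of an image whose reconstruction error stays below a stand-in threshold. The RMSE criterion, the Haar wavelet, and the names best_wavelet_level and error_threshold are assumptions made here for illustration.

    import numpy as np
    import pywt

    def best_wavelet_level(image, error_threshold, wavelet="haar", max_level=5):
        # Return the coarsest decomposition level whose low-resolution
        # reconstruction still stays within error_threshold (RMSE here is a
        # stand-in for a display-dependent perceptual metric).
        best = 0
        for level in range(1, max_level + 1):
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            # Keep only the approximation band: zero every detail band to mimic
            # transmitting a lower-resolution version of the content.
            coarse = [coeffs[0]] + [tuple(np.zeros_like(d) for d in band) for band in coeffs[1:]]
            recon = pywt.waverec2(coarse, wavelet)[:image.shape[0], :image.shape[1]]
            rmse = float(np.sqrt(np.mean((recon - image) ** 2)))
            if rmse <= error_threshold:
                best = level   # an even coarser level is still acceptable
            else:
                break
        return best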

    Factors influencing visual attention switch in multi-display user interfaces: a survey

    Multi-display User Interfaces (MDUIs) enable people to take advantage of the different characteristics of different display categories. For example, combining mobile and large displays within the same system enables users to interact with user interface elements locally while simultaneously having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires that users perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switches in MDUIs. Our analysis and taxonomy bring attention to the often ignored implications of visual attention switches and collect existing evidence to facilitate research and implementation of effective MDUIs.

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous 'in the environment' interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.