
    Remote experimental station for engineering education

    This thesis provides a distance-learning laboratory for students in an electrical and computer engineering department, where the instructor can conduct experiments on a computer and send the results to students at remote computers. The output of the experiment conducted by the instructor is sampled using a successive-approximation Analog-to-Digital (A/D) converter. A microcontroller collects the samples using a high-speed queued serial peripheral interface clock and transmits the data to an IBM-compatible personal computer over a serial port interface, where the samples are processed using the Fast Fourier Transform and graphed. The client/server application developed in this work transfers the acquired samples over a Transmission Control Protocol/Internet Protocol (TCP/IP) network, with an operational Graphical User Interface (GUI), to the remote computers, where the samples are processed and presented to students. The application was tested on all Windows platforms and at various Internet speeds (56k modem, Digital Subscriber Line (DSL), Local Area Network (LAN)). The results were analyzed and an appropriate methodology for the Remote Experimental Station was formulated.
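
    A minimal sketch of the acquire-process-transmit idea described above, written in Python for illustration only: the thesis's actual system uses a microcontroller with a queued SPI link and a Windows GUI client, so the port number, block size, and read_samples stand-in below are assumptions, not the original implementation.

        # Sketch: acquire a block of samples, compute its FFT, and send the
        # magnitude spectrum to one remote client over TCP with a length prefix.
        import json
        import socket
        import numpy as np

        PORT = 5000          # assumed TCP port for the remote student client
        N_SAMPLES = 1024     # assumed samples per acquisition block

        def read_samples():
            """Stand-in for the serial-port acquisition from the A/D converter."""
            t = np.arange(N_SAMPLES) / 10_000.0        # pretend 10 kHz sampling
            return np.sin(2 * np.pi * 440 * t)         # dummy 440 Hz test signal

        def serve_once():
            """Accept one remote client and send it one FFT-processed block."""
            spectrum = np.abs(np.fft.rfft(read_samples()))   # magnitudes for plotting
            payload = json.dumps(spectrum.tolist()).encode()
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.bind(("", PORT))
                srv.listen(1)
                conn, _addr = srv.accept()
                with conn:
                    conn.sendall(len(payload).to_bytes(4, "big"))  # length prefix
                    conn.sendall(payload)

        if __name__ == "__main__":
            serve_once()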

    A high performance vector rendering pipeline

    Vector images are images which encode the visible surfaces of a 3D scene in a resolution-independent format. Prior to this work, generating such an image could not be done in real time, so the benefits of using vector images in the graphics pipeline were not fully realized. In this thesis we propose methods for addressing the following questions: How can we introduce vector images into the graphics pipeline, namely, how can we produce them in real time? How can we take advantage of resolution independence? And how can we render vector images to a pixel display as efficiently as possible and with the highest quality? There are three main contributions of this work. First, we have designed a real-time vector rendering system: a GPU-accelerated pipeline which takes as input a scene with 3D geometry and outputs a vector image. We call this system SVGPU: Scalable Vector Graphics on the GPU. Second, since vector images are resolution independent, we have designed a cloud pipeline for streaming them: a system design and optimizations for streaming vector images across interconnection networks, which reduce the bandwidth required for transporting real-time 3D content from server to client. Lastly, this thesis introduces a further benefit of vector images: a method for rendering them with the highest possible quality. We have designed a new set of operations on vector images which allows us to anti-alias them while rendering to a canonical 2D image. Our contributions provide the system design, optimizations, and algorithms required to bring vector image utilization and its benefits much closer to the real-time graphics pipeline. Together they form an end-to-end pipeline for this purpose, i.e., "A High Performance Vector Rendering Pipeline."
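
    To make the notion of resolution independence concrete, here is a small illustrative sketch (not the SVGPU algorithm itself): a vector primitive stored analytically, here a circle, can be rasterized at any target resolution with anti-aliased coverage derived from its signed distance. All names and parameters are chosen purely for illustration.

        # Rasterize an analytically defined circle at an arbitrary resolution.
        import numpy as np

        def rasterize_circle(cx, cy, radius, width, height):
            """Render a circle (given in normalized coordinates) to a coverage image."""
            ys, xs = np.mgrid[0:height, 0:width]
            # Signed distance from each pixel center to the circle boundary, in pixels.
            dist = np.hypot(xs + 0.5 - cx * width,
                            ys + 0.5 - cy * height) - radius * min(width, height)
            # A one-pixel ramp across the edge yields anti-aliased coverage.
            return np.clip(0.5 - dist, 0.0, 1.0)

        # The same vector description rendered at two resolutions, with no resampling.
        low  = rasterize_circle(0.5, 0.5, 0.25, 64, 64)
        high = rasterize_circle(0.5, 0.5, 0.25, 1024, 1024)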

    Doctor of Philosophy

    Balancing the trade-off between the spatial and temporal quality of interactive computer graphics imagery is one of the fundamental design challenges in the construction of rendering systems. Inexpensive interactive rendering hardware may deliver a high level of temporal performance if the level of spatial image quality is sufficiently constrained. In these cases, the spatial fidelity level is an independent parameter of the system and temporal performance is a dependent variable; the spatial quality parameter is selected by the designer based on the anticipated graphics workload. Interactive ray tracing is one example: the algorithm is often selected for its ability to deliver a high level of spatial fidelity, and the relatively lower level of temporal performance is readily accepted. This dissertation proposes an algorithm to perform fine-grained adjustments to the trade-off between the spatial quality of images produced by an interactive renderer and the temporal performance or quality of the rendered image sequence. The approach first determines the minimum amount of sampling work necessary to achieve a certain fidelity level, and then allows the surplus capacity to be directed towards spatial or temporal fidelity improvement. The algorithm consists of an efficient parallel spatial and temporal adaptive rendering mechanism and a control optimization problem which adjusts the sampling rate based on a characterization of the rendered imagery and constraints on the capacity of the rendering system.
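
    As a hedged sketch of this spatial/temporal trade-off (a simple feedback rule, not the dissertation's control optimization), the function below lowers the per-pixel sampling rate when a frame misses its time budget and spends any surplus capacity on additional samples:

        def adjust_sampling_rate(spp, last_frame_ms, budget_ms, min_spp=1, max_spp=64):
            """Return an updated samples-per-pixel count for the next frame."""
            if last_frame_ms > budget_ms:
                # Over budget: trade spatial quality for temporal performance.
                spp = max(min_spp, int(spp * budget_ms / last_frame_ms))
            else:
                # Surplus capacity: spend it on more samples (spatial fidelity).
                spp = min(max_spp, spp + 1)
            return spp

        # Example: a 60 Hz target (16.7 ms budget) after a frame that took 25 ms.
        print(adjust_sampling_rate(spp=8, last_frame_ms=25.0, budget_ms=16.7))  # -> 5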

    Foveated Rendering Techniques in Modern Computer Graphics

    Foveated rendering coupled with eye tracking has the potential to dramatically accelerate interactive 3D graphics with minimal loss of perceptual detail. I have developed a new foveated rendering technique, Kernel Foveated Rendering (KFR), which parameterizes foveated rendering by embedding polynomial kernel functions in the log-polar mapping. This GPU-driven technique uses parameterized foveation that mimics the distribution of photoreceptors in the human retina. I present a two-pass kernel foveated rendering pipeline that maps well onto modern GPUs. In the first pass, I compute the kernel log-polar transformation and render to a reduced-resolution buffer. In the second pass, I carry out the inverse log-polar transformation with anti-aliasing to map the reduced-resolution rendering to the full-resolution screen. I carry out user studies to empirically identify the KFR parameters and observe a 2.8X-3.2X speedup in rendering on 4K displays. Eye-tracking-guided kernel foveated rendering can resolve the mutually conflicting goals of interactive rendering and perceptual realism.
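
    A hedged sketch of a kernel log-polar mapping in the spirit of KFR: the kernel below is a simple power function chosen for illustration rather than the fitted polynomial kernels of the work, and the screen, gaze, and buffer parameters are assumptions.

        import numpy as np

        def kernel_log_polar(x, y, gaze, screen_diag, buf_w, buf_h, alpha=4.0):
            """Map a full-resolution pixel (x, y) to reduced-resolution buffer coords."""
            dx, dy = x - gaze[0], y - gaze[1]
            r = np.hypot(dx, dy)                         # distance from the gaze point
            theta = np.arctan2(dy, dx)                   # angle around the gaze point
            # Kernel-weighted log-polar radius: allocates more buffer resolution
            # near the fovea and compresses the periphery.
            u = (np.log1p(r) / np.log1p(screen_diag)) ** (1.0 / alpha) * buf_w
            v = (theta + np.pi) / (2 * np.pi) * buf_h
            return u, v

        # Example: a pixel on a 4K screen, gaze at the center, quarter-resolution buffer.
        print(kernel_log_polar(3000, 1800, gaze=(1920, 1080), screen_diag=4406.0,
                               buf_w=960, buf_h=540))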

    A Performance Comparison of VMware GPU Virtualization Techniques in Cloud Gaming

    Cloud gaming is an application deployment scenario in which an interactive gaming application runs remotely in a cloud, driven by commands received from a thin client, and the rendered scenes are streamed back to the client over the Internet as a video sequence. It is of interest to both the research community and industry. The academic community has developed open-source cloud gaming systems such as GamingAnywhere for research, while industrial pioneers such as Onlive and Gaikai have succeeded in gaining a large user base in the cloud gaming market. Graphics Processing Unit (GPU) virtualization plays an important role in such an environment, as it is the critical component that allows virtual machines to run 3D applications with performance guarantees. Currently, GPU pass-through and GPU sharing are the two main techniques of GPU virtualization. The former enables a single virtual machine to access a physical GPU directly and exclusively, while the latter makes a physical GPU shareable by multiple virtual machines. VMware Inc., one of the most popular virtualization solution vendors, provides concrete implementations of both: a GPU pass-through solution called Virtual Dedicated Graphics Acceleration (vDGA) and a GPU-sharing solution called Virtual Shared Graphics Acceleration (vSGA). Moreover, VMware Inc. recently claimed to have realized another GPU-sharing solution called vGPU. Nevertheless, the feasibility and performance of these solutions in cloud gaming have not yet been studied. In this work, an experimental study is conducted to evaluate the feasibility and performance of the GPU pass-through and GPU sharing solutions offered by VMware in cloud gaming scenarios. The primary results confirm that the vDGA and vGPU techniques can meet the demands of cloud gaming: these two solutions achieved good performance in the tested graphics card benchmarks and delivered acceptable image quality and response delay for the tested games.
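
    A hypothetical client-side measurement sketch for the two metrics discussed above (frame rate and response delay); it uses no VMware or GamingAnywhere API, and send_input/receive_frame stand in for whatever networking code a thin client would provide:

        import time

        def measure_response_delay(send_input, receive_frame, n_trials=100):
            """Average time from sending a command until the next frame arrives."""
            delays = []
            for _ in range(n_trials):
                start = time.perf_counter()
                send_input()             # e.g. a key press forwarded to the cloud VM
                receive_frame()          # blocking read of the next decoded video frame
                delays.append(time.perf_counter() - start)
            return sum(delays) / len(delays)

        def measure_fps(receive_frame, seconds=10.0):
            """Count decoded frames over a fixed wall-clock window."""
            frames, deadline = 0, time.perf_counter() + seconds
            while time.perf_counter() < deadline:
                receive_frame()
                frames += 1
            return frames / seconds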

    Efficient Algorithms for Large-Scale Image Analysis

    This work develops highly efficient algorithms for analyzing large images. Applications include object-based change detection and screening. The algorithms are 10-100 times as fast as existing software, sometimes even outperforming FPGA/GPU hardware, because they are designed to suit the computer architecture. This thesis describes the implementation details and the underlying algorithm-engineering methodology, so that both may also be applied to other applications.
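
    A hedged sketch of the architecture-aware idea mentioned above: processing a large image pair in cache-sized tiles keeps the working set small. The tile size and the per-tile change test are illustrative assumptions, not the thesis's actual change-detection algorithm.

        import numpy as np

        def tiled_change_map(img_a, img_b, tile=256, threshold=30.0):
            """Flag tiles whose mean absolute difference exceeds a threshold."""
            h, w = img_a.shape
            rows, cols = (h + tile - 1) // tile, (w + tile - 1) // tile
            changed = np.zeros((rows, cols), dtype=bool)
            for ty in range(0, h, tile):
                for tx in range(0, w, tile):
                    a = img_a[ty:ty + tile, tx:tx + tile].astype(np.float32)
                    b = img_b[ty:ty + tile, tx:tx + tile].astype(np.float32)
                    changed[ty // tile, tx // tile] = np.mean(np.abs(a - b)) > threshold
            return changed

        # Example on a synthetic 4096 x 4096 image pair with one altered region.
        a = np.random.randint(0, 256, (4096, 4096), dtype=np.uint8)
        b = a.copy()
        b[1000:1400, 2000:2400] = 255
        print(tiled_change_map(a, b).sum(), "tiles flagged")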

    A neural probe with up to 966 electrodes and up to 384 configurable channels in 0.13 μm SOI CMOS

    In vivo recording of neural action-potential and local-field-potential signals requires the use of high-resolution penetrating probes. Several international initiatives to better understand the brain are driving technology efforts towards maximizing the number of recording sites while minimizing the neural probe dimensions. We designed and fabricated (0.13-μm SOI Al CMOS) a 384-channel configurable neural probe for large-scale in vivo recording of neural signals. Up to 966 selectable active electrodes were integrated along an implantable shank (70 μm wide, 10 mm long, 20 μm thick), achieving a crosstalk of −64.4 dB. The probe base (5 × 9 mm²) implements dual-band recording and a 1

    MP3 Hardware and Audio Decoder

    The thesis titled "MP3 Hardware Audio Decoder" describes the hardware and software resources for decoding the MPEG-1 bitstream. The dual-architecture hardware model, with an instruction set tailored for audio decoding, helps reduce the number of cycles and the amount of memory required. The coding was done in assembly, and testing was carried out in ModelSim with compliance bitstreams to verify the correctness of the decode.
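
    As a hedged illustration of the first step of MPEG-1 Layer III decoding, the sketch below parses a frame header using only the standard header layout; the thesis's actual decoder is assembly code running on custom dual-architecture hardware, so nothing here reflects that implementation.

        # MPEG-1 Layer III bitrate (kbps) and sample-rate tables from the standard.
        BITRATES_KBPS = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320]
        SAMPLE_RATES_HZ = [44100, 48000, 32000]

        def parse_mp3_frame_header(header: int):
            """Decode a 32-bit MPEG-1 Layer III frame header into its main fields."""
            if (header >> 21) & 0x7FF != 0x7FF:
                raise ValueError("missing 11-bit frame sync")
            if (header >> 19) & 0x3 != 0x3 or (header >> 17) & 0x3 != 0x1:
                raise ValueError("not an MPEG-1 Layer III header")
            bitrate = BITRATES_KBPS[(header >> 12) & 0xF] * 1000
            sample_rate = SAMPLE_RATES_HZ[(header >> 10) & 0x3]
            padding = (header >> 9) & 0x1
            # Layer III frame length in bytes: 144 * bitrate / sample_rate + padding.
            frame_length = 144 * bitrate // sample_rate + padding
            return {"bitrate": bitrate, "sample_rate": sample_rate,
                    "frame_length": frame_length}

        # Example: 0xFFFB9000 is a 128 kbps, 44.1 kHz MPEG-1 Layer III header.
        print(parse_mp3_frame_header(0xFFFB9000))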