
    An object-oriented drawing package in Smalltalk/V

    Graphics creation applications tend to fall into two categories: bit-mapped paint packages and object-oriented drawing packages. Although each interface has its own unique advantages, few vendors have attempted to integrate the two into a single package, and those who have tried have achieved poor integration, both from the user's perspective and in the underlying mathematical model. In this thesis, I have addressed the issue of integrating bit-mapped and object-oriented interfaces by creating an object-oriented graphics package which provides the user with a consistent interface for creating and manipulating both graphical objects and bit-mapped graphics. The consistency of the interface was facilitated by the consistency of the design, the underlying geometric model, and the implementation, all of which are themselves object-oriented. The package is written in Smalltalk/V for the Macintosh. While the solution for this integration was not derived overnight, the use of object-oriented design principles sped the development of a complex graphical user interface while providing fresh insight into the problem of representing bit-mapped objects. Because Smalltalk enforces the notion that every element in the system is an object, the Smalltalk developer is forced to begin designing his solution purely in terms of objects. This mind-set allowed me to view the point as no other graphics package has presented it: as a unique graphical entity (just as it is in formal geometry) available to the user as a graphical tool. As a result, users of my package are able to enjoy the benefits of both bit-mapped and object-oriented editors without ever abandoning an environment in which every graphical element is an object, in terms of both the interface and the underlying mathematical model.
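
    The package itself is written in Smalltalk/V; as a rough illustration of the central idea, that every graphical element, bit map and geometric point alike, answers one common drawing protocol, here is a minimal Python sketch (all class and method names are invented for illustration, not taken from the thesis):

```python
# Illustrative sketch only (not the thesis's Smalltalk/V class hierarchy):
# geometric primitives -- including the point -- and bit-mapped regions
# all answer the same drawing protocol.

from dataclasses import dataclass, field
from typing import List


class Graphic:
    """Common protocol shared by every drawable element."""
    def draw(self, canvas: List[str]) -> None:
        raise NotImplementedError


@dataclass
class Point(Graphic):
    x: int
    y: int

    def draw(self, canvas):
        canvas.append(f"plot point at ({self.x}, {self.y})")


@dataclass
class Line(Graphic):
    start: Point
    end: Point

    def draw(self, canvas):
        canvas.append(f"stroke line {self.start} -> {self.end}")


@dataclass
class Bitmap(Graphic):
    origin: Point
    pixels: List[List[int]] = field(default_factory=list)

    def draw(self, canvas):
        canvas.append(f"blit {len(self.pixels)}-row bitmap at {self.origin}")


if __name__ == "__main__":
    drawing: List[Graphic] = [Point(1, 2),
                              Line(Point(0, 0), Point(4, 4)),
                              Bitmap(Point(10, 10), [[1, 0], [0, 1]])]
    canvas: List[str] = []
    for g in drawing:          # one uniform interface for both kinds of graphics
        g.draw(canvas)
    print("\n".join(canvas))
```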

    Two-level pipelined systolic array graphics engine

    The authors report a VLSI design of an advanced systolic array graphics (SAG) engine built from pipelined functional units which can generate realistic images interactively for high-resolution displays. They introduce a structured frame store system as an environment for the advanced SAG engine and present the principles and architecture of the advanced SAG engine. They introduce pipelined functional units into this SAG engine to meet the performance requirements. This is done by a formal approach in which the original systolic array is represented at bit level by a finite, vertex-weighted, edge-weighted, directed graph. Two architectures built from pipelined functional units are described. A prototype containing nine processing elements was fabricated in a 1.6-µm CMOS technology.
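
    The bit-level graph model lends itself to a small sketch. The Python fragment below (node delays and register counts are made up for illustration and do not come from the paper) models a tiny vertex-weighted, edge-weighted directed graph of this kind and brute-forces the longest register-free path, which bounds the achievable clock period:

```python
# Minimal sketch (not the authors' exact formalism): a systolic array modelled as a
# vertex-weighted, edge-weighted directed graph.  Vertex weights are gate delays,
# edge weights are pipeline register counts; the clock period is bounded by the
# longest combinational path that crosses no register (all edge weights zero).

from itertools import permutations

# Hypothetical 4-node fragment: vertex delays and edges with register counts.
delay = {"a": 3, "b": 7, "c": 7, "d": 3}
edges = {("a", "b"): 1, ("b", "c"): 0, ("c", "d"): 0, ("d", "a"): 1}


def combinational_delay(path):
    """Total vertex delay along a path whose edges hold no registers."""
    for u, v in zip(path, path[1:]):
        if edges.get((u, v), None) != 0:      # missing edge or registered edge
            return None
    return sum(delay[v] for v in path)


def min_clock_period():
    """Brute-force the longest register-free path (fine for a tiny example)."""
    worst = max(delay.values())               # a single vertex is always a path
    nodes = list(delay)
    for length in range(2, len(nodes) + 1):
        for path in permutations(nodes, length):
            d = combinational_delay(path)
            if d is not None:
                worst = max(worst, d)
    return worst


if __name__ == "__main__":
    # Here the longest register-free chain is b -> c -> d (delay 17); inserting a
    # pipeline register on one of its edges would shorten the critical path.
    print("clock period bound:", min_clock_period())
```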

    Display system software for the integration of an ADAGE 3000 programmable display generator into the solid modeling package C.A.D. software

    A software system that integrates an ADAGE 3000 Programmable Display Generator into a C.A.D. software package known as the Solid Modeling Program is described. The Solid Modeling Program (SMP) is an interactive program used to model complex solid objects through the composition of primitive geometric entities. In addition, SMP provides extensive facilities for model editing and display. The ADAGE 3000 Programmable Display Generator (PDG) is a color, raster-scan, programmable display generator with a 32-bit, bit-slice, bipolar microprocessor (BPS). The modularity of the system architecture and the width and speed of the system bus allow for additional co-processors in the system. These co-processors combine to provide efficient operations on, and rendering of, graphics entities. The resulting software system takes advantage of the graphics capabilities of the PDG in the operation of SMP by distributing its processing modules between the host and the PDG. Initially, the target host computer was a PRIME 850, which was later replaced by a VAX-11/785. Two versions of the software system were developed: phase 1 and phase 2. In phase 1, the ADAGE 3000 was used as a frame buffer. In phase 2, SMP was functionally partitioned and some of its functions were implemented in the ADAGE 3000 by means of ADAGE's SOLID 3000 software package.
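
    As a rough, purely hypothetical illustration of the phase 1 / phase 2 partitioning (none of the class or method names below come from SMP or SOLID 3000), the same modelling front end can be bound either to a host-side renderer that only fills the PDG frame buffer or to rendering functions resident on the display generator:

```python
# Hypothetical sketch of the host/PDG partitioning idea; names and interfaces are
# illustrative only.  In a phase-1 style setup the host does all rendering and
# ships finished pixels to the display generator; in a phase-2 style setup display
# operations are forwarded to code resident on the display generator itself.

class HostRenderer:
    def render(self, entity):
        return f"host rasterised {entity}; pixels sent to PDG frame buffer"


class DisplayGeneratorRenderer:
    def render(self, entity):
        return f"display list for {entity} executed on the PDG microprocessor"


class ModelingFrontEnd:
    """Stand-in for the interactive modelling front end running on the host."""

    def __init__(self, backend):
        self.backend = backend

    def display(self, entities):
        return [self.backend.render(e) for e in entities]


if __name__ == "__main__":
    scene = ["block", "cylinder", "union(block, cylinder)"]
    phase1 = ModelingFrontEnd(HostRenderer())
    phase2 = ModelingFrontEnd(DisplayGeneratorRenderer())
    print(phase1.display(scene))
    print(phase2.display(scene))
```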

    A graphics subsystem retrofit design for the bladed-disk data acquisition system

    A graphics subsystem retrofit design for the turbojet blade vibration data acquisition system is presented. The graphics subsystem will operate in two modes, permitting the system operator to view blade vibrations on an oscilloscope-type display. The first mode is a real-time mode that displays only gross blade characteristics, such as maximum deflections and standing waves. This mode is used to aid the operator in determining when to collect detailed blade vibration data. The second mode of operation is a post-processing mode that animates the actual blade vibrations using the detailed data collected on an earlier data collection run. The operator can vary the rate of playback to view differing characteristics of blade vibrations. The heart of the graphics subsystem is a modified version of AMD's "Super Sixteen" computer, called the graphics preprocessor computer (GPC). This computer is based on AMD's 2900 series of bit-slice components.
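
    A hedged sketch of the post-processing mode follows; the data and interfaces are invented for illustration and are not the GPC's actual firmware. Recorded deflection frames are replayed at an operator-selected rate:

```python
# Illustrative sketch only: replaying recorded blade-deflection frames at an
# operator-selected rate, in the spirit of the post-processing mode above.

import math


def recorded_frames(n_frames=5, n_blades=8):
    """Stand-in for detailed deflection data captured on an earlier run."""
    for t in range(n_frames):
        yield [round(math.sin(2 * math.pi * (b / n_blades) + 0.3 * t), 3)
               for b in range(n_blades)]


def playback(frames, rate=1.0, frame_interval_s=0.02):
    """Yield (display_time, frame) pairs; rate < 1 slows the animation down."""
    t = 0.0
    for frame in frames:
        yield t, frame
        t += frame_interval_s / rate


if __name__ == "__main__":
    for when, deflections in playback(recorded_frames(), rate=0.5):
        print(f"t={when:5.2f}s  deflections={deflections}")
```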

    Hybrid compression of video with graphics in DTV communication systems

    Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have resulted in an increased amount of pre- and post-editing of video sequences in which graphical information is inserted. However, in the current broadcasting chain there are no provisions for enabling efficient transmission/storage of these mixed video and graphics signals, and at this emerging stage of DTV systems, introducing new standards is not desired. Nevertheless, in the professional video communication chain between content provider and broadcaster, and locally in the DTV receiver, proprietary video-graphics compression schemes can be used to enable more efficient transmission/storage of mixed video and graphics signals. For example, in the DTV receiver case this will lead to a significant memory-cost reduction. To preserve a high overall image quality, the video and graphics data require independent coding systems matched to their specific visual and statistical properties. We introduce various efficient algorithms that support both the lossless (contour, run-length and arithmetic coding) and the lossy (block predictive coding) compression of graphics data. If the graphics data are mixed a priori with the video and the graphics position is unknown at compression time, an accurate detection mechanism is applied to distinguish the two signals, so that independent coding algorithms can be employed for each data type. In the DTV memory-reduction scenario, an overall bit-rate control completes the system, ensuring a fixed compression factor of 2-3 per frame without sacrificing the quality of the graphics.
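
    As a small concrete example of the kind of lossless tool mentioned above, the sketch below run-length encodes a synthetic graphics scan line in Python; the paper's contour and arithmetic coders, the lossy block predictive coder and the video/graphics detector are not reproduced here.

```python
# Minimal run-length coding sketch for a graphics scan line; synthetic data,
# not the paper's implementation.

from itertools import groupby
from typing import List, Tuple


def rle_encode(scanline: List[int]) -> List[Tuple[int, int]]:
    """Collapse runs of identical pixel values into (value, run_length) pairs."""
    return [(value, len(list(run))) for value, run in groupby(scanline)]


def rle_decode(runs: List[Tuple[int, int]]) -> List[int]:
    return [value for value, length in runs for _ in range(length)]


if __name__ == "__main__":
    # Synthetic graphics scan line: long flat runs compress well, unlike natural video.
    line = [0] * 12 + [255] * 6 + [0] * 14
    runs = rle_encode(line)
    assert rle_decode(runs) == line          # lossless round trip
    print(runs)                              # [(0, 12), (255, 6), (0, 14)]
```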

    Towards the AlexNet Moment for Homomorphic Encryption: HCNN, the First Homomorphic CNN on Encrypted Data with GPUs

    Deep Learning as a Service (DLaaS) stands as a promising solution for cloud-based inference applications. In this setting, the cloud has a pre-learned model whereas the user has samples on which she wants to run the model. The biggest concern with DLaaS is user privacy if the input samples are sensitive data. We provide here an efficient privacy-preserving system by employing high-end technologies such as Fully Homomorphic Encryption (FHE), Convolutional Neural Networks (CNNs) and Graphics Processing Units (GPUs). FHE, with its widely known feature of computing on encrypted data, empowers a wide range of privacy-concerned applications. This comes at a high cost, as it requires enormous computing power. In this paper, we show how to accelerate the performance of running CNNs on encrypted data with GPUs. We evaluated two CNNs to classify homomorphically the MNIST and CIFAR-10 datasets. Our solution achieved a sufficient security level (> 80 bits) and reasonable classification accuracy (99% and 77.55% for MNIST and CIFAR-10, respectively). In terms of latency, we could classify an image in 5.16 seconds and 304.43 seconds for MNIST and CIFAR-10, respectively. Our system can also classify a batch of images (> 8,000) without extra overhead.
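
    The client/cloud split can be pictured with a short sketch. The Python below is only a data-flow mock: the Ciphertext wrapper is an insecure placeholder standing in for real FHE ciphertexts, and the "CNN" is reduced to a single dot product, so it shows who holds the keys and where computation happens rather than the paper's actual HCNN implementation.

```python
# Conceptual sketch of the DLaaS flow described above.  The "encryption" here is a
# placeholder identity wrapper, NOT a real FHE scheme and NOT secure; it only shows
# who holds the keys and where the computation happens (client vs. cloud/GPU).

from dataclasses import dataclass
from typing import List


@dataclass
class Ciphertext:
    payload: List[float]          # in a real system this would be FHE ciphertext data


class Client:
    """Owns the secret key; the cloud never sees plaintext samples."""

    def encrypt(self, sample: List[float]) -> Ciphertext:
        return Ciphertext(list(sample))      # placeholder for FHE encryption

    def decrypt(self, ct: Ciphertext) -> List[float]:
        return list(ct.payload)              # placeholder for FHE decryption


class Cloud:
    """Holds the pre-learned model and evaluates it on ciphertexts."""

    def __init__(self, weights: List[float], bias: float):
        self.weights, self.bias = weights, bias

    def infer(self, ct: Ciphertext) -> Ciphertext:
        # Additions and multiplications are exactly the operations an FHE scheme
        # supports, which is why CNNs (convolution + dense layers) fit this model.
        score = sum(w * x for w, x in zip(self.weights, ct.payload)) + self.bias
        return Ciphertext([score])


if __name__ == "__main__":
    client, cloud = Client(), Cloud(weights=[0.2, -0.5, 0.8], bias=0.1)
    ct_out = cloud.infer(client.encrypt([1.0, 0.0, 1.0]))
    print("decrypted score:", client.decrypt(ct_out))
```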