
    The comparison of three 3D graphics raster processors and the design of another

    There are a number of 3D graphics accelerator architectures on the market today. One of the largest issues in designing a 3D accelerator is keeping it affordable for the home user while still delivering good performance. Three such architectures were analyzed: the Heresy architecture defined by Chiueh [2], the Talisman architecture defined by Torborg [7], and the Tayra architecture's specification by White [9]. Portions of these three architectures were used to create a new architecture that takes advantage of as many of their features as possible. The advantages of chunking are analyzed, along with the advantages of a single-cycle z-buffering algorithm. It was found that Fast Phong Shading is not suitable for implementation in this pipeline, and that the clipping algorithm should be eliminated in favor of a scissoring algorithm.
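    The shift from clipping to scissoring can be illustrated with a small sketch. The Python fragment below is purely illustrative (the function and variable names are assumptions, not taken from the Heresy, Talisman, or Tayra papers): rather than geometrically clipping primitives against the viewport, each rasterized fragment is discarded if it falls outside a scissor rectangle, and a simple per-pixel depth comparison stands in for the single-cycle z-buffer test.

```python
# Minimal sketch of scissoring plus a per-fragment depth test.  All names
# and values here are illustrative; nothing is taken from the Heresy,
# Talisman, or Tayra papers.

WIDTH, HEIGHT = 640, 480
SCISSOR = (0, 0, WIDTH, HEIGHT)          # (x_min, y_min, x_max, y_max)

zbuffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, z, color):
    """Scissor, then depth-test, then write -- no geometric clipping step."""
    x_min, y_min, x_max, y_max = SCISSOR
    if not (x_min <= x < x_max and y_min <= y < y_max):
        return                           # rejected by the scissor rectangle
    if z >= zbuffer[y][x]:
        return                           # fails the depth comparison
    zbuffer[y][x] = z                    # one compare-and-update per fragment
    framebuffer[y][x] = color

write_fragment(700, 10, 0.5, (255, 0, 0))   # outside the scissor: ignored
write_fragment(100, 10, 0.5, (255, 0, 0))   # inside and nearer: written
```

    Scissoring rejects out-of-bounds fragments per pixel instead of constructing clipped geometry, which keeps the front end of the pipeline simple at the cost of some wasted rasterization work.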

    Simulation of ultrasonic imaging with linear arrays in causal absorptive media

    Rigorous and efficient numerical methods are presented for simulation of acoustic propagation in a medium where the absorption is described by relaxation processes. It is shown how FFT-based algorithms can be used to simulate ultrasound images in pulse-echo mode. General expressions are obtained for the complex wavenumber in a relaxing medium. A fit to measurements in biological media shows the appropriateness of the model. The wavenumber is applied to three FFT-based extrapolation operators, which are implemented in a weak form to reduce spatial aliasing. The influence of the absorptive medium on the quality of images obtained with a linear array transducer is demonstrated. It is shown that, for moderately absorbing media, the absorption has a large influence on the images, whereas the dispersion has a negligible effect on the images.
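    As a rough illustration of the approach, the sketch below combines a toy single-relaxation wavenumber with one FFT-based angular-spectrum extrapolation step. The wavenumber model, parameter values, and function names are assumptions for illustration only, not the general expressions derived in the paper.

```python
import numpy as np

# Illustrative sketch of one FFT-based extrapolation step in an absorptive
# medium.  The single-relaxation wavenumber and all parameter values are
# placeholders, not the expressions fitted to biological media in the paper.

c0 = 1540.0          # small-signal sound speed [m/s]
tau = 1.0e-7         # relaxation time of a single process [s]
alpha_r = 50.0       # high-frequency attenuation plateau [Np/m]

def complex_wavenumber(omega):
    """Toy single-relaxation model: k = omega/c0 + i*alpha(omega)."""
    alpha = alpha_r * (omega * tau) ** 2 / (1.0 + (omega * tau) ** 2)
    return omega / c0 + 1j * alpha

def extrapolate(field, dx, dz, omega):
    """Angular-spectrum propagation of a monochromatic field over dz."""
    n = field.size
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)    # lateral wavenumbers
    k = complex_wavenumber(omega)
    kz = np.sqrt(k ** 2 - kx ** 2 + 0j)           # complex kz => absorption
    spectrum = np.fft.fft(field)
    return np.fft.ifft(spectrum * np.exp(1j * kz * dz))

# Example: propagate a Gaussian-apodized aperture one step at 3 MHz.
x = np.linspace(-0.01, 0.01, 256)
aperture = np.exp(-(x / 0.004) ** 2)
field_dz = extrapolate(aperture, dx=x[1] - x[0], dz=0.005, omega=2 * np.pi * 3e6)
```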

    A hybrid hair model using three dimensional fuzzy textures

    Human hair modeling and rendering have always been a challenging topic in computer graphics. The techniques for human hair modeling consist of explicit geometric models as well as volume density models. Recently, hybrid cluster models have also been successful in this subject. In this study, we present a novel three-dimensional texture model called 3D Fuzzy Textures and algorithms to generate them. We then use the developed model together with a cluster model to give human hair complex hairstyles such as curly and wavy styles. Our model requires little user effort to model curly and wavy hairstyles. With this study, we aim at eliminating the drawbacks of the volume density model and the cluster hair model with 3D fuzzy textures. A three-dimensional cylindrical texture mapping function is introduced for mapping purposes. Current-generation graphics hardware is utilized in the design of the rendering system, enabling high-performance rendering.
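    A minimal sketch of a cylindrical mapping of this kind is given below, assuming points are expressed in a cluster's local frame with the axis along +z; the coordinate conventions and clamping are illustrative choices, not the exact function introduced in the thesis.

```python
import math

# Hedged sketch of a cylindrical 3D texture mapping: a point inside a hair
# cluster is mapped to (u, v, w) texture coordinates around the cluster axis.
# Conventions are assumptions, not the thesis's actual mapping function.

def cylindrical_map(p, radius, length):
    """Map a point p, given in the cluster's local frame (axis = +z),
    to (u, v, w) texture coordinates in [0, 1]^3."""
    x, y, z = p
    theta = math.atan2(y, x)                  # angle around the axis
    r = math.hypot(x, y)                      # distance from the axis
    u = (theta + math.pi) / (2.0 * math.pi)   # normalized angle
    v = max(0.0, min(1.0, z / length))        # normalized height along axis
    w = max(0.0, min(1.0, r / radius))        # normalized radial distance
    return u, v, w

# Example: a point halfway up a cluster of length 0.2 and radius 0.02.
print(cylindrical_map((0.01, 0.0, 0.1), radius=0.02, length=0.2))
```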

    An automatic data system for vibration modal tuning and evaluation

    A digitally based automatic modal tuning and analysis system developed to provide an operational capability beginning at 0.1 hertz is described. The elements of the system, which provides unique control features, maximum operator visibility, and rapid data reduction and documentation, are briefly described, and the operational flow is discussed to illustrate the full range of capabilities and the flexibility of application. The successful application of the system to a modal survey of the Skylab payload is described. Information about the Skylab test article, coincident-quadrature analysis of modal response data, orthogonality, and damping calculations is included in the appendixes. Recommendations for future application of the system are also made.
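    As an illustration of the kind of orthogonality calculation mentioned above, the sketch below cross-weights measured mode shapes by an analytical mass matrix and checks the off-diagonal coupling; the 10% threshold and all numerical values are assumptions, not data from the Skylab survey.

```python
import numpy as np

# Hedged sketch of a modal orthogonality check: measured mode shapes are
# weighted by the analytical mass matrix, and off-diagonal coupling of the
# normalized generalized mass matrix is compared against a threshold.

def orthogonality_check(phi, mass, threshold=0.10):
    """Return the normalized generalized mass matrix and a pass/fail flag."""
    gm = phi.T @ mass @ phi                      # generalized mass matrix
    d = np.sqrt(np.diag(gm))
    normalized = gm / np.outer(d, d)             # unity on the diagonal
    off_diag = normalized - np.eye(normalized.shape[0])
    return normalized, bool(np.all(np.abs(off_diag) < threshold))

# Example with two measured mode shapes on a 3-DOF lumped-mass model.
mass = np.diag([2.0, 1.5, 1.0])
phi = np.array([[0.30, 0.80],
                [0.65, 0.10],
                [0.70, -0.60]])
norm_gm, passed = orthogonality_check(phi, mass)
print(norm_gm, passed)
```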

    Experimental comparison of noise and resolution for 2k and 4k storage phosphor radiography systems

    Peer reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/134792/1/mp8656.pd

    Efficient Algorithms for Large-Scale Image Analysis

    This work develops highly efficient algorithms for analyzing large images. Applications include object-based change detection and screening. The algorithms are 10-100 times as fast as existing software, sometimes even outperforming FPGA/GPU hardware, because they are designed to suit the computer architecture. This thesis describes the implementation details and the underlying algorithm-engineering methodology, so that both may also be applied to other applications.
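    One textbook example of tailoring an image-analysis kernel to the machine, offered here only as a generic illustration and not as the thesis's actual algorithms, is processing the image in cache-sized tiles so every pixel is touched while it is still resident in cache:

```python
import numpy as np

# Generic illustration of architecture-aware image processing: a large
# image is traversed in cache-sized tiles.  Tile size and the thresholding
# operation are arbitrary choices, not taken from the thesis.

TILE = 256  # tile edge in pixels; chosen so a tile fits comfortably in cache

def threshold_tiled(image, level):
    out = np.empty_like(image)
    h, w = image.shape
    for y0 in range(0, h, TILE):
        for x0 in range(0, w, TILE):
            tile = image[y0:y0 + TILE, x0:x0 + TILE]
            out[y0:y0 + TILE, x0:x0 + TILE] = tile > level
    return out

# Example on a synthetic 4k x 4k image.
img = np.random.randint(0, 256, size=(4096, 4096), dtype=np.uint8)
mask = threshold_tiled(img, 128)
```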

    Decoupled Sampling for Real-Time Graphics Pipelines

    We propose decoupled sampling, an approach that decouples shading from visibility sampling in order to enable motion blur and depth-of-field at reduced cost. More generally, it enables extensions of modern real-time graphics pipelines that provide controllable shading rates to trade off quality for performance. It can be thought of as a generalization of GPU-style multisample antialiasing (MSAA) to support unpredictable shading rates, with arbitrary mappings from visibility to shading samples as introduced by motion blur, depth-of-field, and adaptive shading. It is inspired by the Reyes architecture in offline rendering, but targets real-time pipelines by driving shading from visibility samples as in GPUs, and removes the need for micropolygon dicing or rasterization. Decoupled sampling works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. We present extensions of two modern GPU pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion blur and depth-of-field, as well as variable and adaptive shading rates.
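    The core mechanism, a many-to-one mapping from visibility samples to memoized shading samples, can be sketched in a few lines. The mapping, cache, and shader below are illustrative stand-ins rather than the paper's actual pipeline:

```python
# Hedged sketch of the memoization behind decoupled sampling: many
# visibility samples hash to one shading sample, and a cache avoids
# re-running the shader.  The quantized mapping and the toy shader are
# assumptions for illustration, not the pipeline described in the paper.

shading_cache = {}                      # shading-sample key -> shaded color
stats = {"shaded": 0, "reused": 0}

def shade(prim_id, u, v):
    """Stand-in for an expensive shader, run once per shading sample."""
    stats["shaded"] += 1
    return ((prim_id * 37) % 256, int(u * 255) % 256, int(v * 255) % 256)

def resolve_visibility_sample(prim_id, u, v, shading_rate=0.25):
    """Map a visibility sample to its shading sample, reusing it if cached."""
    # Many-to-one hash: quantize shading coordinates at the shading rate.
    key = (prim_id, int(u / shading_rate), int(v / shading_rate))
    if key not in shading_cache:
        shading_cache[key] = shade(prim_id, u, v)   # shade on first use only
    else:
        stats["reused"] += 1
    return shading_cache[key]

# 16 visibility samples over one primitive, shaded at a quarter of the rate.
for i in range(16):
    resolve_visibility_sample(prim_id=7, u=i * 0.05, v=0.3)
print(stats)                            # {'shaded': 4, 'reused': 12}
```

    In this toy run, sixteen visibility samples collapse onto four shading samples, so the expensive shader executes only four times; that reuse is what makes supersampled visibility affordable.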

    Three-dimensional medical imaging: Algorithms and computer systems

    This paper presents an introduction to the field of three-dimensional medical imaging. It presents medical imaging terms and concepts, summarizes the basic operations performed in three-dimensional medical imaging, and describes sample algorithms for accomplishing these operations. The paper contains a synopsis of the architectures and algorithms used in eight machines to render three-dimensional medical images, with particular emphasis paid to their distinctive contributions. It compares the performance of the machines along several dimensions, including image resolution, elapsed time to form an image, imaging algorithms used in the machine, and the degree of parallelism used in the architecture. The paper concludes with general trends for future developments in this field and references on three-dimensional medical imaging.
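    As a concrete example of one basic operation in this field, the sketch below computes a maximum intensity projection of a voxel volume; the synthetic volume and the choice of projection are assumptions for illustration, not the algorithms of any of the eight surveyed machines.

```python
import numpy as np

# Hedged sketch of one basic three-dimensional medical imaging operation:
# a maximum intensity projection along one axis of a voxel volume.  The
# synthetic volume and axis choice are illustrative only.

def max_intensity_projection(volume, axis=2):
    """Collapse a 3D voxel volume into a 2D image by keeping the brightest
    voxel along each ray parallel to the chosen axis."""
    return volume.max(axis=axis)

# Example: a 64^3 volume with a bright spherical "lesion" in the center.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.where((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 100, 255, 10)
image = max_intensity_projection(volume.astype(np.uint8))
print(image.shape)   # (64, 64)
```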

    Decoupled Sampling for Graphics Pipelines

    We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at reduced shading cost, as well as controllable or adaptive shading rates which trade off shading quality for performance. It can be thought of as a generalization of multisample antialiasing (MSAA) to support complex and dynamic mappings from visibility to shading samples, as introduced by motion and defocus blur and adaptive shading. It works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. Decoupled sampling is inspired by the Reyes rendering architecture, but like traditional graphics pipelines, it shades fragments rather than micropolygon vertices, decoupling shading from the geometry sampling rate. Also unlike Reyes, decoupled sampling only shades fragments after precise computation of visibility, reducing overshading. We present extensions of two modern graphics pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications of decoupled sampling and blur, and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion and defocus blur, as well as variable and adaptive shading rates.