
    The lightspeed automatic interactive lighting preview system

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 57-59). By Jonathan Millard Ragan-Kelley.
    We present an automated approach for high-quality preview of feature-film rendering during lighting design. Similar to previous work, we use a deep-framebuffer shaded on the GPU to achieve interactive performance. Our first contribution is to generate the deep-framebuffer and corresponding shaders automatically through data-flow analysis and compilation of the original scene. Cache compression reduces automatically generated deep-framebuffers to a reasonable size for complex production scenes and shaders. We also propose a new structure, the indirect framebuffer, which decouples shading samples from final pixels and allows a deep-framebuffer to handle antialiasing, motion blur, and transparency efficiently. Progressive refinement enables fast feedback at coarser resolutions. We demonstrate our approach in real-world production.
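The indirect framebuffer described above can be illustrated with a small sketch: each final pixel stores indirect references (sample index, blend weight) into a shared pool of shading samples, so antialiasing and transparency are resolved by weighted blending without re-shading. This is an illustrative Python sketch only; the names, values, and flat-array layout are assumptions, not the thesis's actual data structures.

```python
# Hypothetical pool of shading-sample colors produced once by the
# deep-framebuffer shaders (grayscale scalars here for brevity).
shading_samples = [0.2, 0.8, 0.5]

# Each pixel references shading samples indirectly with blend weights,
# so edges and transparency are resolved without re-running the shaders.
pixels = [
    [(0, 0.75), (1, 0.25)],   # pixel covered mostly by sample 0
    [(1, 0.5), (2, 0.5)],     # 50/50 blend across an edge
]

def resolve(pixel):
    """Blend a pixel's referenced shading samples by their weights."""
    return sum(weight * shading_samples[idx] for idx, weight in pixel)

image = [resolve(p) for p in pixels]
```

When lighting parameters change, only `shading_samples` needs recomputation; the per-pixel index/weight table is fixed by visibility and coverage.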

    Real-time Cinematic Design Of Visual Aspects In Computer-generated Images

    Creation of visually pleasing images has always been one of the main goals of computer graphics. Two components are necessary to achieve this goal: artists who design the visual aspects of an image (such as materials or lighting), and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while design has been treated as secondary. This has led to many inefficiencies: to create a stunning image, artists are often forced to resort to traditional, creativity-barring pipelines of repeated rendering and parameter tweaking. Our work shifts attention away from the rendering problem and focuses on design. We propose to combine non-physical editing with real-time feedback, giving artists efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making the editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have been, until now, extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.

    Artistic Path Space Editing of Physically Based Light Transport

    The generation of realistic images is an important goal of computer graphics, with applications in the feature-film industry, architecture, and medicine, among others. Physically based image synthesis, which has recently found broad acceptance across applications, relies on the numerical simulation of light transport along propagation paths prescribed by geometric optics; for common scenes, this model suffices to achieve photorealism. Overall, the computer-assisted authoring of images and animations with well-designed, theoretically grounded shading has become much simpler. In practice, however, attention to details such as the structure of the output device remains important, and subproblems such as efficient physically based rendering in participating media are still far from solved. Furthermore, image synthesis must be seen as part of a wider context: the effective communication of ideas and information. Whether it is the form and function of a building, the medical visualization of a CT scan, or the mood of a film sequence, messages in the form of digital images are ubiquitous today. Unfortunately, the spread of the simulation-oriented methodology of physically based image synthesis has generally led to a loss of the intuitive, fine-grained, local artistic control over the final image content that was available in earlier, less strict paradigms. The contributions of this dissertation cover different aspects of image synthesis: first, fundamental sub-pixel image synthesis as well as efficient rendering methods for participating media.
The core of the work, however, consists of approaches to the effective visual understanding of light propagation that enable local artistic intervention while achieving globally consistent and plausible results. The key idea is to perform visualization and editing of light directly in the "path space" encompassing all possible light paths. This contrasts with state-of-the-art methods that either operate in image space or are tailored to specific, isolated lighting effects such as perfect mirror reflections, shadows, or caustics. Evaluation of the presented methods has shown that they can solve real-world image-generation problems in film production.

    Decoupled deferred shading for hardware rasterization


    Decoupled Sampling for Graphics Pipelines

    We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at reduced shading cost, as well as controllable or adaptive shading rates which trade off shading quality for performance. It can be thought of as a generalization of multisample antialiasing (MSAA) to support complex and dynamic mappings from visibility to shading samples, as introduced by motion and defocus blur and adaptive shading. It works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. Decoupled sampling is inspired by the Reyes rendering architecture, but like traditional graphics pipelines, it shades fragments rather than micropolygon vertices, decoupling shading from the geometry sampling rate. Also unlike Reyes, decoupled sampling only shades fragments after precise computation of visibility, reducing overshading. We present extensions of two modern graphics pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications of decoupled sampling and blur, and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion and defocus blur, as well as variable and adaptive shading rates.
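The core memoization scheme described above (a many-to-one mapping from visibility samples to shading samples, with a buffer to exploit reuse) can be sketched in a few lines. This is a simplified illustration, not the paper's architecture: `shading_key`, `shade`, and the dictionary cache are hypothetical stand-ins for the hash and memoization buffer the abstract describes.

```python
def shading_key(x, y, rate):
    """Many-to-one map from a visibility sample to a shading-grid cell."""
    return (int(x * rate), int(y * rate))

def shade(key):
    """Stand-in for an expensive fragment-shader evaluation."""
    sx, sy = key
    return (sx * 0.1, sy * 0.1, 0.5)  # placeholder color

class ShadingCache:
    """Memoize shading samples so many visibility samples reuse one result."""
    def __init__(self):
        self.cache = {}
        self.shade_count = 0  # number of actual shader evaluations

    def lookup(self, x, y, rate):
        key = shading_key(x, y, rate)
        if key not in self.cache:
            self.cache[key] = shade(key)
            self.shade_count += 1
        return self.cache[key]

# Supersample a 2x2-pixel region at 16 visibility samples per pixel,
# with a shading rate of one shading sample per pixel:
cache = ShadingCache()
results = []
for px in range(2):
    for py in range(2):
        for i in range(4):
            for j in range(4):
                x = px + (i + 0.5) / 4
                y = py + (j + 0.5) / 4
                results.append(cache.lookup(x, y, rate=1.0))
# 64 visibility samples are covered by only 4 shader evaluations
```

Raising `rate` above 1.0 increases shading density (higher quality, more evaluations), while a blurred or defocused region can lower it, which is the quality/performance trade-off the abstract describes.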

    Decoupled Sampling for Real-Time Graphics Pipelines

    We propose decoupled sampling, an approach that decouples shading from visibility sampling in order to enable motion blur and depth-of-field at reduced cost. More generally, it enables extensions of modern real-time graphics pipelines that provide controllable shading rates to trade off quality for performance. It can be thought of as a generalization of GPU-style multisample antialiasing (MSAA) to support unpredictable shading rates, with arbitrary mappings from visibility to shading samples as introduced by motion blur, depth-of-field, and adaptive shading. It is inspired by the Reyes architecture in offline rendering, but targets real-time pipelines by driving shading from visibility samples as in GPUs, and removes the need for micropolygon dicing or rasterization. Decoupled sampling works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. We present extensions of two modern GPU pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion blur and depth-of-field, as well as variable and adaptive shading rates.

    A Design Model for using Advanced Multimedia in the Teaching of Photography in The Kingdom of Bahrain.

    This study investigates the effectiveness of a new instructional design model for using advanced multimedia in the teaching and learning of photography at university level in the Kingdom of Bahrain. A preliminary study revealed that the central problems faced by students are learning key technical aspects of photography, coupled with insufficient resources and high student-to-teacher ratios. Advanced multimedia was proposed as an effective tool for teaching and learning photography. A critical review and analysis of existing e-learning resources revealed that such technology may help in teaching and learning, especially for subjects that require experience with real instruments such as cameras. Drawing on the ASSURE model, Laurillard's conversational model, and insights from Steuer's classification model, the researcher developed a new instructional design model for using advanced multimedia in photography education (AMPE). This was field-tested in university photography teaching. For the evaluation of the AMPE model, a mixed-model design was used, combining quantitative and qualitative methods. In the quantitative evaluation, effectiveness in learning was estimated from student achievement in a test. A comparison of the opinions of the two groups of students, using a specially constructed questionnaire measuring their views of the respective teaching and learning methods, was also carried out. Finally, engagement and enjoyment in learning in the two groups of students were assessed through the questionnaire, and the participants' comments, opinions, and suggestions were obtained through its open-ended questions. The study found that advanced multimedia enhances effectiveness, engagement, and enjoyment in learning photography.
The instructional model and the associated "virtual camera" appear to be a suitable solution to the lack of real cameras in the classroom environment, and can help in teaching difficult technical photographic knowledge in an efficient and practical manner. University of Bahrain

    Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace

    The symposium Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace was held at the NASA Lewis Research Center on March 30-31, 1993. The purpose of the symposium was to stimulate interdisciplinary thinking in the sciences and technologies that will be required for the exploration and development of space over the next thousand years. The keynote speakers were Hans Moravec, Vernor Vinge, Carol Stoker, and Myron Krueger. The proceedings consist of transcripts of the invited talks and the panel discussion by the invited speakers, summaries of workshop sessions, and contributed papers by the attendees.

    The adoption and impact of computer integrated prepress systems in the printing and publishing industries of Kuwait

    This research aims to develop a comprehensive picture of the implications of digital technology for the graphic arts industries in Kuwait. The purpose of the study is twofold: (1) to explore the meaning of the outcomes of recent technological change for the traditional prepress occupations in Kuwait; and (2) to examine the impact of technology on Arabic layout and design. The study is based on the assumption that technological change is a chain of interactions among sociological, cultural, political, and economic variables. The prepress sector in Kuwait has its own cultural, social, economic, and political structure; when a new technology is introduced, it is absorbed and shaped by that existing structure. Based on such a dialectical conceptualisation, four major levels of analysis are distinguished in this study: (1) technological change in the graphic arts industries; (2) the typographic evolution of the Arabic script; (3) the workers themselves, as individuals and as occupational collectives; and (4) technology's impact on Arabic publication design. The methodological approach selected for this study can be defined as a dialectical, interpretive exploration. Given the historical perspective and the multiple levels of analysis, this approach calls for a variety of data-gathering methods, and both qualitative and quantitative data were sought. A combination of document analysis, participant observation, and interviewing makes it possible to link historical and current events with individual and collective actions, perceptions, and interpretations of reality. The findings presented in this study contradict the belief that the widespread adoption of new production processes coincides with continuous advances in scientific knowledge that provide the basis for the development of new technologies.
Instead, the changes have been hindered by a lack of trained personnel, Arabic software incompatibilities, and a lack of the informed decisions needed to implement the technology successfully. Without any doubt, the new technology has influenced Arabic calligraphy, but this does not mean the decay of Arabic calligraphy as an art. As this study shows, the challenge is not to the art, but to the artist.

    Efficient design of precision medical robotics

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2012. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 106-114). By Nevan Clancy Hanumara.
    Medical robotics is increasingly demonstrating the potential to improve patient care through more precise interventions. However, taking inspiration from industrial robotics has often resulted in large, sometimes cumbersome designs, which represent high capital and per-procedure expenditures, as well as increased procedure times. This thesis proposes and demonstrates an alternative model and method for developing economical, appropriately scaled medical robots that improve care and efficiency while moderating costs. Key to this approach is a structured design process that actively reduces complexity. A selected medical procedure is decomposed into discrete tasks, which are then separated into those that are performed satisfactorily and those where the clinician encounters limitations, often where robots' strengths would be complementary. Then, by following deterministic principles and with continual user participation, prototyping, and testing, a system can be designed that integrates into and assists with current procedures, rather than requiring a completely new protocol. This model is expected to lay the groundwork for increasing the use of hands-on technology in interventional medicine.