133 research outputs found

    Programmable Image-Based Light Capture for Previsualization

    Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. Often these techniques are used to convey the artistic direction of the story in terms of cinematic elements, such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his mind's eye. Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend has been to use 3D graphics applications such as video game engines to perform previs, which is called 3D previs. This type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded on set, directors and cinematographers use a technique called on-set previs, which provides a real-time view with little to no processing. Other types of previs, such as technical previs, emphasize accurately capturing scene properties but lack any interactive manipulation and are usually employed by visual effects crews rather than cinematographers or directors. This dissertation's focus is on creating a new method for interactive visualization that will automatically capture the on-set lighting and provide interactive manipulation of cinematic elements to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, displayed interactively on a mobile capture and rendering platform.
    This dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The main contribution of this dissertation is a three-tiered framework: 1) a novel programmable camera architecture that provides programmability of low-level features through a visual programming interface, 2) new algorithms that analyze and photometrically decompose the scene, and 3) a previs interface that leverages the first two tiers to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique, called Symmetric lighting, which can be used on our programmable camera to relight a scene containing multiple illuminants with respect to color, intensity, and location. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and color of lights within the scene. Since our method is based on a Lambertian reflectance assumption, it works well under that assumption, but scenes with large amounts of specular reflection can show higher relighting errors, and additional steps are required to mitigate this limitation. Also, scenes containing lights whose colors are too similar can lead to degenerate cases in the relighting. Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged to perform multi-illuminant white balancing and light color estimation in a scene with multiple illuminants, without limits on the color range or number of lights.
    We compared our method to other white-balance methods and showed that ours is superior when at least one of the light colors is known a priori.
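As a rough illustration of the kind of two-light Lambertian relighting described above: if each pixel is modeled as albedo times a per-pixel mix of two light colors, relighting amounts to dividing out the old illuminant mix and multiplying in the new one. The sketch below assumes this simple formulation; the function name, the per-pixel mixing weight `beta`, and the array layout are illustrative and are not the dissertation's actual Symmetric-lighting formulation.

```python
import numpy as np

def relight_lambertian(image, beta, c_old1, c_old2, c_new1, c_new2, eps=1e-6):
    """Relight a Lambertian scene lit by two colored lights.

    Assumes each pixel is albedo * (beta*c1 + (1-beta)*c2), where beta is a
    hypothetical per-pixel mixing weight for light 1.
    image: (H, W, 3) floats; beta: (H, W) in [0, 1]; c_*: (3,) light colors.
    """
    b = beta[..., None]
    mix_old = b * c_old1 + (1 - b) * c_old2   # effective old illuminant per pixel
    mix_new = b * c_new1 + (1 - b) * c_new2   # effective new illuminant per pixel
    # Dividing out the old illuminant recovers the (scaled) albedo; multiplying
    # by the new mix relights. Degenerate when the two light colors coincide,
    # mirroring the degenerate case noted in the abstract.
    return image * mix_new / np.maximum(mix_old, eps)
```

Swapping `c_new1`/`c_new2` for the original colors returns the input image unchanged, which is a quick sanity check on any such decomposition.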

    Thinking like a director: Film editing patterns for virtual cinematographic storytelling

    This paper introduces Film Editing Patterns (FEP), a language to formalize film editing practices and stylistic choices found in movies. FEP constructs are constraints, expressed over one or more shots from a movie sequence, that characterize changes in cinematographic visual properties such as shot sizes, camera angles, or the layout of actors on the screen. We present the vocabulary of the FEP language, introduce its usage in analyzing styles from annotated film data, and describe how it can support users in the creative design of film sequences in 3D. More specifically, (i) we define the FEP language, (ii) we present an application to craft filmic sequences from 3D animated scenes that uses FEPs as a high-level means to select cameras and perform cuts between cameras that follow best practices in cinema, and (iii) we evaluate the benefits of FEPs through user experiments in which professional filmmakers and amateurs had to create cinematographic sequences. The evaluation suggests that users generally appreciate the idea of FEPs, and that it can effectively help novice and moderately experienced users craft film sequences with little training.
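The idea of an FEP as a constraint checked over a sequence of shots can be sketched in miniature. The pattern name `intensify`, the shot-size vocabulary, and the dictionary encoding below are hypothetical placeholders, not the paper's actual FEP vocabulary; they only illustrate a stylistic constraint evaluated over consecutive shots.

```python
# Hypothetical mini-version of a Film Editing Pattern (FEP): a constraint
# evaluated over a sequence of shots, each annotated with visual properties.
SHOT_SIZES = ["long", "medium", "close", "extreme_close"]

def intensify(shots):
    """FEP-style constraint: shot sizes move monotonically closer,
    a common device for building dramatic tension across cuts."""
    ranks = [SHOT_SIZES.index(s["size"]) for s in shots]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))

sequence = [{"size": "long"}, {"size": "medium"}, {"size": "close"}]
intensify(sequence)  # the sequence satisfies the pattern
```

A pattern-driven tool can then rank candidate cameras by how many such constraints the resulting cut sequence satisfies.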

    Cinema Server = s/t (story over time) : an interface for interactive motion picture design

    Thesis (M.S.) -- Massachusetts Institute of Technology, Dept. of Architecture, 1993. Includes bibliographical references (leaves 146-148). By Stephan J. Fitch. M.S.

    VIRTUAL TOURS FOR SMART CITIES: A COMPARATIVE PHOTOGRAMMETRIC APPROACH FOR LOCATING HOT-SPOTS IN SPHERICAL PANORAMAS

    The paper aims to investigate the possibilities of using panorama-based VR to survey data related to the planning and management of urban areas, as part of Smart Cities strategies. The core of our workflow is to facilitate the visualization of the data produced by Smart City infrastructures. A graphical interface based on spherical panoramas, instead of complex three-dimensional models, could help the user/citizen better understand the operation of the control units spread throughout the urban area. From a methodological point of view, three different kinds of spherical panorama acquisition have been tested and compared in order to identify a semi-automatic procedure for locating homologous points on two or more spherical images, starting from a point cloud obtained from the same images. The points thus identified make it possible to quickly locate the same hot-spot on multiple images simultaneously. The comparison shows that all three systems proved useful for the purposes of the research, but only one proved geometrically reliable enough to identify the locators needed for the construction of the virtual tour.
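Placing a hot-spot known in 3D (e.g. from the point cloud) onto a spherical image reduces to the standard equirectangular projection: convert the 3D offset from the panorama center into longitude/latitude, then map those angles to pixel coordinates. The sketch below is a generic version of that projection, not the paper's specific procedure; the y-up, z-forward axis convention and function name are assumptions.

```python
import math

def project_to_equirect(point, center, width, height):
    """Project a 3D point into pixel coordinates of an equirectangular
    panorama captured at `center`. Assumes y up and z forward; the image
    spans 360 degrees horizontally and 180 degrees vertically."""
    dx, dy, dz = (p - c for p, c in zip(point, center))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    lon = math.atan2(dx, dz)   # [-pi, pi], 0 = straight ahead
    lat = math.asin(dy / r)    # [-pi/2, pi/2], 0 = horizon
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v
```

Running the same projection against each panorama center places one hot-spot on every image at once, which is the multi-image behavior the abstract describes.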

    Virtual Cinematography: Beyond Big Studio Production

    In the current production environment, the ability to previsualize shots using a virtual camera system requires expensive hardware and large motion capture spaces available only to large studio environments. By leveraging consumer-level technologies such as tablets and motion gaming controllers, and by merging the cinematic techniques of film with the real-time benefits of game engines, it is possible to develop a hybrid interface that lowers the barrier of entry for virtual production. Utilizing affordable hardware, an intuitive user interface, and an intelligent camera system, the SmartVCS is a new virtual cinematography platform that gives professional directors, as well as a new market of amateur filmmakers, the ability to previsualize their films or game cinematics with familiar and accessible technology. This system has potential applications to other areas, including game level design, real-time compositing & post-production, and architectural visualization. In addition, this system has the ability to expand as a human-computer interface for video games, robotics, and medicine as a functional hybrid free-space input device. M.S., Digital Media -- Drexel University, 201

    ARES Biennial Report 2012 Final

    Since the return of the first lunar samples, what is now the Astromaterials Research and Exploration Science (ARES) Directorate has had curatorial responsibility for all NASA-held extraterrestrial materials. Originating during the Apollo Program (1960s), this capability at Johnson Space Center (JSC) included scientists who were responsible for the science planning and training of astronauts for lunar surface activities as well as experts in the analysis and preservation of the precious returned samples. Today, ARES conducts research in basic and applied space and planetary science, and its scientific staff represents a broad diversity of expertise in the physical sciences (physics, chemistry, geology, astronomy), mathematics, and engineering organized into three offices (figure 1): Astromaterials Research (KR), Astromaterials Acquisition and Curation (KT), and Human Exploration Science (KX). Scientists within the Astromaterials Acquisition and Curation Office preserve, protect, document, and distribute samples of the current astromaterials collections. Since the return of the first lunar samples, ARES has been assigned curatorial responsibility for all NASA-held extraterrestrial materials (Apollo lunar samples, Antarctic meteorites - some of which have been confirmed to have originated on the Moon and on Mars - cosmic dust, solar wind samples, comet and interstellar dust particles, and space-exposed hardware). The responsibilities of curation consist not only of the long-term care of the samples, but also the support and planning for future sample collection missions and research and technology to enable new sample types. Curation provides the foundation for research into the samples.
    The Lunar Sample Facility and other curation clean rooms, the data center, laboratories, and associated instrumentation are unique NASA resources that, together with our staff's fundamental understanding of the entire collection, provide a service to the external research community, which relies on access to the samples. The curation efforts are greatly enhanced by a strong group of planetary scientists who conduct peer-reviewed astromaterials research. Astromaterials Research Office scientists conduct peer-reviewed research as Principal or Co-Investigators in planetary science (e.g., cosmochemistry, origins of solar systems, Mars fundamental research, planetary geology and geophysics) and participate as Co-Investigators or Participating Scientists in many of NASA's robotic planetary missions. Since the last report, ARES has achieved several noteworthy milestones, some of which are documented in detail in the sections that follow. Within the Human Exploration Science Office, ARES is a world leader in orbital debris research, modeling and monitoring the debris environment, designing debris shielding, and developing policy to control and mitigate the orbital debris population. ARES has aggressively pursued refinements in knowledge of the debris environment and the hazard it presents to spacecraft. Additionally, the ARES Image Science and Analysis Group has been recognized as world class as a result of the high quality of near-real-time analysis of ascent and on-orbit inspection imagery to identify debris shedding, anomalies, and associated potential damage during Space Shuttle missions. ARES Earth scientists manage and continuously update the database of astronaut photography that is predominantly from Shuttle and ISS missions, but also includes the results of 40 years of human spaceflight. The Crew Earth Observations Web site (http://eol.jsc.nasa.gov/Education/ESS/crew.htm) continues to receive several million hits per month.
    ARES scientists are also influencing decisions in the development of the next generation of human and robotic spacecraft and missions through laboratory tests on the optical qualities of materials for windows, micrometeoroid/orbital debris shielding technology, and analog activities to assess surface science operations. ARES serves as host to numerous students and visiting scientists as part of the services provided to the research community and conducts a robust education and outreach program. ARES scientists are recognized nationally and internationally by virtue of their success in publishing in peer-reviewed journals and winning competitive research proposals. ARES scientists have won every major award presented by the Meteoritical Society, including the Leonard Medal, the most prestigious award in planetary science and cosmochemistry; the Barringer Medal, recognizing outstanding work in the field of impact cratering; the Nier Prize for outstanding research by a young scientist; and, for several of our scientists, the Nininger Meteorite Award. One of our scientists received the Department of Defense (DoD) Joint Meritorious Civilian Service Award (the highest civilian honor given by the DoD). ARES has established numerous partnerships with other NASA Centers, universities, and national laboratories. ARES scientists serve as journal editors, members of advisory panels and review committees, and society officers, and several scientists have been elected as Fellows in their professional societies. This biennial report summarizes a subset of the accomplishments made by each of the ARES offices and highlights participation in ongoing human and robotic missions, development of new missions, and planning for future human and robotic exploration of the solar system beyond low Earth orbit.

    Digital Pathology: The Time Is Now to Bridge the Gap between Medicine and Technological Singularity

    Digitalization of imaging in radiology is a reality in several healthcare institutions worldwide. The challenges of filing, confidentiality, and manipulation have been brilliantly solved in radiology. However, digitalization of hematoxylin- and eosin-stained routine histological slides has moved slowly. Although application for external quality assurance is a reality for pathologists, with most continuing medical education programs utilizing virtual microscopy, the abandonment of traditional glass slides for routine diagnostics is far from the perspectives of many departments of laboratory medicine and pathology. Digital pathology images are captured by scanning, and whole-slide imaging/virtual microscopy can be obtained by robotic microscopy of an entire histological (microscopic) glass slide. Since 1986, services using telepathology for the transfer of anatomic pathology images between detached locations have benefited countless patients globally, including at the University of Alberta. The use of virtual microscopy for specialist recertification or re-validation by the Royal College of Pathologists of Canada, belonging to the Royal College of Physicians and Surgeons of Canada, and the College of American Pathologists is a milestone in virtual reality. Challenges such as high bandwidth requirements, electronic platforms, and the stability of operating systems have been targeted and are improving enormously. The encryption of digital images may become a requirement for the accreditation of laboratory services. Quantum computing exploits quantum-mechanical phenomena, such as superposition and entanglement: unlike binary digital electronic computers based on transistors, where data are encoded into binary digits (bits) with two different states (0 and 1), quantum computing uses quantum bits (qubits), which can be in superpositions of states.
    The use of quantum computing protocols on encrypted data is crucial for the permanent implementation of virtual pathology in hospitals and universities. Quantum computing may well represent the technological singularity needed to create new classifications and taxonomic rules in medicine.

    Computational Light Transport for Forward and Inverse Problems.

    Computational light transport comprises the set of techniques used to compute the flow of light in a virtual scene. Its use is ubiquitous across applications, from entertainment and advertising to product design, engineering, and architecture, including the generation of validated data for image-based techniques. However, simulating light transport accurately is a costly process, so a balance must be struck between the fidelity of the physical simulation and its computational cost. For example, it is common to assume geometric optics or an infinite speed of light propagation, or to simplify reflectance models by ignoring certain phenomena. In this thesis we introduce several contributions to light transport simulation, aimed both at improving its computational efficiency and at expanding the range of its practical applications. We pay special attention to removing the assumption of an infinite propagation speed, generalizing light transport to its transient state. Regarding efficiency, we present a method for computing the light arriving directly from luminaires in a Monte Carlo image-generation system, significantly reducing the variance of the resulting images for the same execution time. We also introduce a density-estimation technique in the transient state that allows temporal samples in a participating medium to be reused more effectively. On the application side, we introduce two new uses of light transport: a model for simulating a special type of goniochromatic pigments that exhibit pearlescent appearance, aimed at providing an intuitive editing workflow for manufacturing, and a non-line-of-sight imaging technique that uses time-of-flight information of the light, built on a wave-based model of light propagation.
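The direct-from-luminaire computation mentioned in the abstract is, in spirit, what renderers call next-event estimation: instead of hoping a randomly sampled direction happens to hit a light, one samples points on the light itself, which typically lowers variance for the same sample budget. The toy estimator below illustrates the idea for a single square area light above a Lambertian receiver; the geometry, names, and numeric values are illustrative and unrelated to the thesis's specific method.

```python
import random

def direct_light(p_shade, light_z=2.0, light_size=1.0, radiance=5.0,
                 n=10_000, seed=0):
    """Monte Carlo estimate of direct irradiance at `p_shade` from a square
    area light centered above the origin at height `light_z`, facing down.
    The receiver normal is +z. Area-sampling (next-event) estimator:
    radiance * cos_surf * cos_light / dist^2 * area, averaged over samples."""
    rng = random.Random(seed)
    area = light_size * light_size
    total = 0.0
    for _ in range(n):
        # Uniformly sample a point on the light's surface.
        lx = (rng.random() - 0.5) * light_size
        ly = (rng.random() - 0.5) * light_size
        dx = lx - p_shade[0]
        dy = ly - p_shade[1]
        dz = light_z - p_shade[2]
        d2 = dx * dx + dy * dy + dz * dz
        d = d2 ** 0.5
        cos_surf = max(dz / d, 0.0)    # cosine at the receiver (normal +z)
        cos_light = max(dz / d, 0.0)   # cosine at the light (normal -z)
        total += radiance * cos_surf * cos_light / d2 * area
    return total / n
```

Because every sample lands on the emitter, no sample is wasted on directions that miss the light, which is the source of the variance reduction compared with naive hemisphere sampling.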