Irrigation and water resources in the 1990's
Presented at Irrigation and water resources in the 1990's: proceedings from the 1992 national conference held on October 5-7, 1992 in Phoenix, Arizona. Includes bibliographical references.

Mapping technology, specifically photogrammetry and its related disciplines, has been extensively employed for irrigation and drainage engineering and management applications since becoming a practical tool in the post-World War II era. The computer, space, and information systems revolution of the '80s radically changed the methods and potential for photogrammetric mapping applications. All aspects of photogrammetry have been affected, including ground control surveys, aerial photography, stereo-restitution and map compilation, and cartography, the form in which maps are presented or published. The NAVSTAR Global Positioning System (GPS) satellites are used extensively to rapidly establish highly accurate ground control for mapping projects. GPS is also used to improve aerial photography operations by providing an accurate flight line guidance system. Future GPS developments will permit instantaneous and highly accurate positioning of the aerial camera system at the moment of exposure. The analytical stereo plotting instrument, developed at the close of the '50s, is rapidly becoming the industry standard for the measurement and conversion of aerial photographs into highly precise maps and spatial data. The evolution of the personal computer and computer graphics has led to the "digital map" and the "digital terrain model," which in turn provide powerful management and analysis capabilities for geographically distributed data through the use of Geographic Information Systems (GIS).
Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts
This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.
Can Computers Create Art?
This essay discusses whether computers, using Artificial Intelligence (AI),
could create art. First, the history of technologies that automated aspects of
art is surveyed, including photography and animation. In each case, there were
initial fears and denial of the technology, followed by a blossoming of new
creative and professional opportunities for artists. The current hype and
reality of Artificial Intelligence (AI) tools for art making is then discussed,
together with predictions about how AI tools will be used. The essay then
speculates on whether AI systems could ever be credited with authorship of
artwork. It is theorized that art is something created by social agents, so
computers cannot be credited with authorship of art under our current
understanding. A few ways that this could change are also hypothesized.

Comment: to appear in Arts, special issue on Machine as Artist (21st Century)
Light field super resolution through controlled micro-shifts of light field sensor
Light field cameras enable new capabilities, such as post-capture refocusing
and aperture control, through capturing directional and spatial distribution of
light rays in space. Micro-lens array based light field camera design is often
preferred due to its light transmission efficiency, cost-effectiveness and
compactness. One drawback of the micro-lens array based light field cameras is
low spatial resolution due to the fact that a single sensor is shared to
capture both spatial and angular information. To address the low spatial
resolution issue, we present a light field imaging approach, where multiple
light fields are captured and fused to improve the spatial resolution. For each
capture, the light field sensor is shifted by a pre-determined fraction of a
micro-lens size using an XY translation stage for optimal performance.
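The core idea, multiple captures with sub-micro-lens sensor shifts fused into a denser spatial grid, can be sketched as follows. This is a minimal illustration with assumed names and a simple interleaving rule; the paper's actual fusion step may be more sophisticated (e.g. handling registration error).

```python
# Hypothetical sketch: fuse four light-field captures, each shifted by half
# a micro-lens pitch, into one spatial grid with twice the resolution.
# All names and the plain interleaving rule are illustrative assumptions.

def fuse_shifted_captures(captures, shift_factor=2):
    """captures: dict mapping a (dy, dx) sub-pixel shift (in units of
    1/shift_factor of a micro-lens) to a 2D list of samples (H x W)."""
    h = len(captures[(0, 0)])
    w = len(captures[(0, 0)][0])
    # Allocate the up-sampled grid, then drop each capture's samples
    # into the positions its shift corresponds to.
    fused = [[0.0] * (w * shift_factor) for _ in range(h * shift_factor)]
    for (dy, dx), img in captures.items():
        for y in range(h):
            for x in range(w):
                fused[y * shift_factor + dy][x * shift_factor + dx] = img[y][x]
    return fused

# Example: four 2x2 captures interleave into one 4x4 grid.
caps = {
    (0, 0): [[1, 2], [3, 4]],
    (0, 1): [[5, 6], [7, 8]],
    (1, 0): [[9, 10], [11, 12]],
    (1, 1): [[13, 14], [15, 16]],
}
hi_res = fuse_shifted_captures(caps)  # hi_res[0] is [1, 5, 2, 6]
```

With a shift of half a micro-lens in each axis, four captures suffice to double the spatial sampling in both dimensions, which is why the stage moves by a pre-determined fraction of the micro-lens size.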
Single-shot layered reflectance separation using a polarized light field camera
We present a novel computational photography technique for single-shot separation of diffuse/specular reflectance as well as novel angular domain separation of layered reflectance. Our solution consists of a two-way polarized light field (TPLF) camera which simultaneously captures two orthogonal states of polarization. A single photograph of a subject acquired with the TPLF camera under polarized illumination then enables standard separation of diffuse (depolarizing) and polarization-preserving specular reflectance using light field sampling. We further demonstrate that the acquired data also enables novel angular separation of layered reflectance, including separation of specular reflectance and single scattering in the polarization-preserving component, and separation of shallow scattering from deep scattering in the depolarizing component. We apply our approach for efficient acquisition of facial reflectance including diffuse and specular normal maps, and novel separation of photometric normals into layered reflectance normals for layered facial renderings. We demonstrate our proposed single-shot layered reflectance separation to be comparable to an existing multi-shot technique that relies on structured lighting, while achieving separation results under a variety of illumination conditions.
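The "standard separation" the abstract builds on can be sketched with the classic polarization-difference rule: under polarized illumination, the cross-polarized view keeps only (half of) the depolarized diffuse light, while the parallel view adds the polarization-preserving specular reflection. Variable names here are assumptions for illustration, not the paper's notation, and the real pipeline operates on light-field samples rather than flat pixel lists.

```python
# Illustrative sketch of polarization-difference reflectance separation.
# parallel/cross are per-pixel intensities for the two orthogonal
# polarization states a two-way polarized camera captures at once.

def separate_reflectance(parallel, cross):
    # The cross-polarized channel sees half the depolarized (diffuse)
    # light and none of the polarization-preserving (specular) light.
    diffuse = [2.0 * c for c in cross]
    # The parallel channel sees specular plus the other diffuse half,
    # so subtracting the cross channel isolates the specular part.
    specular = [p - c for p, c in zip(parallel, cross)]
    return diffuse, specular

diffuse, specular = separate_reflectance([0.9, 0.5], [0.2, 0.25])
```

Because both polarization states are recorded in one exposure, this separation needs no second photograph, which is what makes the technique single-shot.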
MoSculp: Interactive Visualization of Shape and Time
We present a system that allows users to visualize complex human motion via
3D motion sculptures---a representation that conveys the 3D structure swept by
a human body as it moves through space. Given an input video, our system
computes the motion sculptures and provides a user interface for rendering it
in different styles, including the options to insert the sculpture back into
the original video, render it in a synthetic scene or physically print it.
To provide this end-to-end workflow, we introduce an algorithm that estimates
the human's 3D geometry over time from a set of 2D images and develop a
3D-aware image-based rendering approach that embeds the sculpture back into the
scene. By automating the process, our system takes motion sculpture creation
out of the realm of professional artists, and makes it applicable to a wide
range of existing video material.
By providing viewers with 3D information, motion sculptures reveal space-time
motion information that is difficult to perceive with the naked eye, and allow
viewers to interpret how different parts of the object interact over time. We
validate the effectiveness of this approach with user studies, finding that our
motion sculpture visualizations are significantly more informative about motion
than existing stroboscopic and space-time visualization methods.

Comment: UIST 2018. Project page: http://mosculp.csail.mit.edu