    Digital multimedia development processes and optimizing techniques

    Evaluation of the color image and video processing chain and visual quality management for consumer systems

    With the advent of novel digital display technologies, color processing is increasingly becoming a key aspect in consumer video applications. Today’s state-of-the-art displays require sophisticated color and image reproduction techniques in order to achieve larger screen size, higher luminance and higher resolution than ever before. However, from a color science perspective, there are clearly opportunities for improvement in the color reproduction capabilities of various emerging and conventional display technologies. This research seeks to identify potential areas for improvement in color processing in a video processing chain. As part of this research, various processes involved in a typical video processing chain in consumer video applications were reviewed. Several published color and contrast enhancement algorithms were evaluated, and a novel algorithm was developed to enhance color and contrast in images and videos in an effective and coordinated manner. Further, a psychophysical technique was developed and implemented for performing visual evaluation of color image and consumer video quality. Based on the performance analysis and visual experiments involving various algorithms, guidelines were proposed for the development of an effective color and contrast enhancement method for images and video applications. It is hoped that the knowledge gained from this research will help build a better understanding of color processing and color quality management methods in consumer video applications.
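
    The abstract describes a coordinated color and contrast enhancement algorithm without giving its details. As a rough, hedged illustration of that class of methods (not the thesis algorithm), the Python sketch below applies local contrast enhancement to the lightness channel and a coordinated chroma boost in CIELAB; the function name, parameter values, and the choice of CLAHE are all assumptions made for the example.

    ```python
    # Illustrative sketch only: contrast enhancement on lightness (CLAHE)
    # combined with a mild chroma boost. Not the thesis algorithm.
    import cv2
    import numpy as np

    def enhance_color_contrast(bgr, clip_limit=2.0, chroma_gain=1.15):
        """Enhance contrast on L and scale chroma (a, b) around neutral gray."""
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)

        # Local contrast enhancement on the lightness channel only, so the
        # tone change does not distort hue or saturation.
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
        l = clahe.apply(l)

        # Mild chroma boost around the neutral axis (a = b = 128 in 8-bit Lab);
        # clipping keeps values in the valid 8-bit range.
        a = np.clip((a.astype(np.float32) - 128) * chroma_gain + 128, 0, 255).astype(np.uint8)
        b = np.clip((b.astype(np.float32) - 128) * chroma_gain + 128, 0, 255).astype(np.uint8)

        return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    ```

    Working in a lightness/chroma space is what makes the two adjustments coordinated: the same stretch applied directly in RGB would shift hue and saturation along with contrast.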

    Perception, aesthetics and culture in new media

    Thesis (M.S.V.S.) -- Massachusetts Institute of Technology, Dept. of Architecture, 1988. Includes bibliographical references (leaves 98-100). By Kimberly Ann Foley.

    3D head motion, point-of-regard and encoded gaze fixations in real scenes: next-generation portable video-based monocular eye tracking

    Portable eye trackers allow us to see where a subject is looking when performing a natural task with free head and body movements. These eye trackers include headgear containing a camera directed at one of the subject's eyes (the eye camera) and another camera (the scene camera) positioned above the same eye directed along the subject's line-of-sight. The output video includes the scene video with a crosshair depicting where the subject is looking -- the point-of-regard (POR) -- that is updated for each frame. This video may be the desired final result or it may be further analyzed to obtain more specific information about the subject's visual strategies. A list of the calculated POR positions in the scene video can also be analyzed. The goals of this project are to expand the information that we can obtain from a portable video-based monocular eye tracker and to minimize the amount of user interaction required to obtain and analyze this information. This work includes offline processing of both the eye and scene videos to obtain robust 2D PORs in scene video frames, identify gaze fixations from these PORs, obtain 3D head motion and ray trace fixations through volumes-of-interest (VOIs) to determine what is being fixated, when and where (3D POR). To avoid the redundancy of ray tracing a 2D POR in every video frame and to group these POR data meaningfully, a fixation-identification algorithm is employed to simplify the long list of 2D POR data into gaze fixations. In order to ray trace these fixations, the 3D motion -- position and orientation over time -- of the scene camera is computed. This camera motion is determined via an iterative structure and motion recovery algorithm that requires a calibrated camera and knowledge of the 3D location of at least four points in the scene (that can be selected from premeasured VOI vertices). The subject's 3D head motion is obtained directly from this camera motion. For the final stage of the algorithm, the 3D locations and dimensions of VOIs in the scene are required. This VOI information in world coordinates is converted to camera coordinates for ray tracing. A representative 2D POR position for each fixation is converted from image coordinates to the same camera coordinate system. Then, a ray is traced from the camera center through this position to determine which (if any) VOI is being fixated and where it is being fixated -- the 3D POR in the world. Results are presented for various real scenes. Novel visualizations of portable eye tracker data created using the results of our algorithm are also presented.
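
    The final ray-tracing step -- a ray from the camera center through a fixation's 2D POR, tested against each VOI -- can be sketched as follows. This is a hedged illustration, not the thesis implementation: the VOI is simplified to an axis-aligned box already expressed in camera coordinates (per the abstract, real VOI vertices are given in world coordinates and converted using the recovered camera motion), and K stands for the calibrated camera intrinsics.

    ```python
    # Hedged sketch: back-project a 2D POR into a viewing ray, then intersect
    # it with an axis-aligned VOI (slab method). All names are illustrative.
    import numpy as np

    def por_ray(por_px, K):
        """Turn a 2D POR (pixels) into a unit ray direction from the camera
        center, using the calibrated intrinsic matrix K."""
        u, v = por_px
        d = np.linalg.inv(K) @ np.array([u, v, 1.0])
        return d / np.linalg.norm(d)

    def intersect_voi(origin, direction, box_min, box_max):
        """Slab-method ray/box test; returns the 3D hit point (the 3D POR in
        camera coordinates) or None if the ray misses the VOI."""
        with np.errstate(divide="ignore", invalid="ignore"):
            t1 = (box_min - origin) / direction
            t2 = (box_max - origin) / direction
        t_near = np.max(np.minimum(t1, t2))
        t_far = np.min(np.maximum(t1, t2))
        if t_near > t_far or t_far < 0:
            return None                       # ray misses this VOI
        t_hit = t_near if t_near > 0 else t_far
        return origin + t_hit * direction     # 3D point-of-regard
    ```

    Note that one ray is traced per fixation rather than per video frame, which is exactly the redundancy the fixation-identification step is meant to remove.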

    Photomosaics : putting pictures in their place

    Thesis (M.S.) -- Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1996. Includes bibliographical references (p. 91-92). By Robert S. Silvers.

    Converting the interlaced 3:2 pulldown film to the NTSC video without motion artifacts

    In this paper, we present a new format conversion methodology in which interlaced 3:2 pulldown film is converted to NTSC video. For display purposes, 24 frames/s film is converted to 60 fields/s video by alternately repeating each frame for 3 fields and 2 fields. This conversion results in severe motion judder when the film is displayed on a progressive TV. Although motion estimation and compensation (ME and MC) are employed to eliminate the motion judder, other artifacts may result depending on the accuracy of the ME. Such motion artifacts include halo effects, block artifacts, and severe distortion in periodic patterns when block-based ME is used to interpolate the intermediate fields. To resolve these problems, we propose a robust full-search block motion estimation algorithm using a pseudo M-estimator, which is performed on field images to reduce the physical memory size (by reducing the motion search range) when implemented in hardware.
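
    The abstract does not spell out the pseudo M-estimator, but the overall structure of a robust full-search block matcher can be sketched as below: a bounded cost function replaces the plain sum of absolute differences, so a few outlier pixels (e.g., at occlusion boundaries, a source of halo artifacts) cannot dominate a block's matching cost. The truncated-absolute-difference cost and all names here are illustrative assumptions, not the paper's estimator.

    ```python
    # Hedged sketch of full-search block motion estimation with a robust,
    # M-estimator-style cost. The paper's pseudo M-estimator is not given in
    # the abstract; truncation is used here only to illustrate the idea.
    import numpy as np

    def robust_cost(diff, c=20.0):
        """Truncated absolute difference: each pixel contributes at most c."""
        return np.minimum(np.abs(diff), c).sum()

    def full_search(cur, ref, y, x, block=16, search=8):
        """Return the motion vector (dy, dx) minimizing the robust block cost
        over an exhaustive search window in the reference field."""
        cur_blk = cur[y:y + block, x:x + block].astype(np.float32)
        best, best_mv = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                # Skip candidates that fall outside the reference field.
                if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                    continue
                ref_blk = ref[yy:yy + block, xx:xx + block].astype(np.float32)
                cost = robust_cost(cur_blk - ref_blk)
                if cost < best:
                    best, best_mv = cost, (dy, dx)
        return best_mv
    ```

    Running the search on individual field images, as the paper proposes, halves the vertical data per search and so shrinks both the memory footprint and the search range a hardware implementation must support.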

    I Am Error

    I Am Error is a platform study of the Nintendo Family Computer (or Famicom), a videogame console first released in Japan in July 1983 and later exported to the rest of the world as the Nintendo Entertainment System (or NES). The book investigates the underlying computational architecture of the console and its effects on the creative works (e.g. videogames) produced for the platform. I Am Error advances the concept of platform as a shifting configuration of hardware and software that extends even beyond its ‘native’ material construction. The book provides a deep technical understanding of how the platform was programmed and engineered, from code to silicon, including the design decisions that shaped both the expressive capabilities of the machine and the perception of videogames in general. The book also considers the platform beyond the console proper, including cartridges, controllers, peripherals, packaging, marketing, licensing, and play environments. Likewise, it analyzes the NES’s extension and afterlife in emulation and hacking, birthing new genres of creative expression such as ROM hacks and tool-assisted speed runs. I Am Error considers videogames and their platforms to be important objects of cultural expression, alongside cinema, dance, painting, theater and other media. It joins the discussion taking place in similar burgeoning disciplines—code studies, game studies, computational theory—that engage digital media with critical rigor and descriptive depth. But platform studies is not simply a technical discussion—it also keeps a keen eye on the cultural, social, and economic forces that influence videogames. No platform exists in a vacuum: circuits, code, and console alike are shaped by the currents of history, politics, economics, and culture—just as those currents are shaped in kind

    Graphics Technology in Space Applications (GTSA 1989)

    This document contains the proceedings of Graphics Technology in Space Applications (GTSA 1989), held at the NASA Lyndon B. Johnson Space Center on April 12 to 14, 1989, in Houston, Texas. The papers included in these proceedings were, in general, published as received from the authors, with minimal modification and editing. Information contained in the individual papers is not to be construed as being officially endorsed by NASA.