58 research outputs found

    Storytelling with salient stills

    Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1996. Includes bibliographical references (p. 59-63). Michael J. Massey. M.S.

    Salient stills

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1992. Includes bibliographical references (leaves 67-70). By Laura A. Teodosio. M.S.

    Multimedia Provocations


    Salient movies

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (leaves 64-65). By Karrie Karahalios. M.Eng.

    A Survey on Video-based Graphics and Video Visualization


    Information theoretic measures for encoding video

    Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1995. Includes bibliographical references (p. 75-78). By Giridharan Iyengar. M.S.

    Shape-Time Photography

    We introduce a new method to describe, in a single image, changes in shape over time. We acquire both range and image information with a stationary stereo camera. From the pictures taken, we display a composite image consisting of the image data from the surface closest to the camera at every pixel. This reveals the 3-d relationships over time by easy-to-interpret occlusion relationships in the composite image. We call the composite a shape-time photograph. Small errors in depth measurements cause artifacts in the shape-time images. We correct most of these using a Markov network to estimate the most probable front surface, taking into account the depth measurements, their uncertainties, and layer continuity assumptions.
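    The per-pixel compositing rule the abstract describes (take, at each pixel, the color from the frame whose surface is closest to the camera) can be sketched as follows. This is a minimal NumPy illustration of the naive nearest-surface composite only; the function name and array layout are assumptions, and it omits the Markov-network cleanup the paper uses to handle noisy depth.

    ```python
    import numpy as np

    def shape_time_composite(images, depths):
        """Naive shape-time composite (illustrative sketch, not the paper's code).

        images: (T, H, W, 3) uint8 array, one RGB frame per time step
        depths: (T, H, W) float array of per-pixel depth (smaller = closer)
        Returns an (H, W, 3) composite taking each pixel from the frame
        whose surface is nearest the camera at that pixel.
        """
        images = np.asarray(images)
        depths = np.asarray(depths)
        nearest = np.argmin(depths, axis=0)       # (H, W): index of closest frame
        rows, cols = np.indices(nearest.shape)    # pixel coordinate grids
        return images[nearest, rows, cols]        # fancy-index one color per pixel
    ```

    With noisy real depth maps, `argmin` alone flickers between frames at near-ties, which is exactly the artifact the paper's Markov network is introduced to suppress.
    
    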

    TRUSS: Tracking Risk with Ubiquitous Smart Sensing

    We present TRUSS, or Tracking Risk with Ubiquitous Smart Sensing, a novel system that infers and renders safety context on construction sites by fusing data from wearable devices, distributed sensing infrastructure, and video. Wearables stream real-time levels of dangerous gases, dust, noise, light quality, altitude, and motion to base stations that synchronize the mobile devices, monitor the environment, and capture video. At the same time, low-power video collection and processing nodes track the workers as they move through the view of the cameras, identifying the tracks using information from the sensors. These processes together connect the context-mining wearable sensors to the video; information derived from the sensor data is used to highlight salient elements in the video stream. The augmented stream in turn provides users with better understanding of real-time risks, and supports informed decision-making. We tested our system in an initial deployment on an active construction site. Sponsors: Intel Corporation; Massachusetts Institute of Technology. Media Laboratory; Eni S.p.A. (Firm)