Circulant temporal encoding for video retrieval and temporal alignment
We address the problem of specific video event retrieval. Given a query video
of a specific event, e.g., a concert of Madonna, the goal is to retrieve other
videos of the same event that temporally overlap with the query. Our approach
encodes the frame descriptors of a video to jointly represent their appearance
and temporal order. It exploits the properties of circulant matrices to
efficiently compare the videos in the frequency domain. This offers a
significant gain in complexity and accurately localizes the matching parts of
videos. The descriptors can be compressed in the frequency domain with a
product quantizer adapted to complex numbers. In this case, video retrieval is
performed without decompressing the descriptors. We also consider the temporal
alignment of a set of videos. We exploit the matching confidence and an
estimate of the temporal offset computed for all pairs of videos by our
retrieval approach. Our robust algorithm aligns the videos on a global timeline
by maximizing the set of temporally consistent matches. The global temporal
alignment enables synchronous playback of the videos of a given scene.
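The frequency-domain comparison described above can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's compressed-domain method (it uses dense descriptors and no product quantization): it estimates the temporal offset between two frame-descriptor sequences by circular cross-correlation computed with the FFT, which is where the circulant-matrix structure pays off.

```python
import numpy as np

def estimate_offset(query, ref):
    """Estimate the temporal offset between two descriptor sequences
    (T, d arrays, one d-dim descriptor per frame) via circular
    cross-correlation computed in the frequency domain."""
    T = max(len(query), len(ref))
    # Zero-pad both sequences to a common temporal length T.
    q = np.zeros((T, query.shape[1])); q[:len(query)] = query
    r = np.zeros((T, ref.shape[1]));   r[:len(ref)] = ref
    # FFT along the temporal axis: one spectrum per descriptor dimension.
    Q = np.fft.fft(q, axis=0)
    R = np.fft.fft(r, axis=0)
    # Sum the per-dimension correlations, then go back to the time domain.
    corr = np.fft.ifft(np.sum(np.conj(Q) * R, axis=1)).real
    delta = int(np.argmax(corr))  # ref looks like query delayed by delta frames
    return delta, corr[delta]
```

The peak of `corr` both localizes the matching offset and gives a matching confidence, the two quantities the alignment stage consumes.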
Two and Three Dimensional near Infrared Subcutaneous Structure Imager Using Real Time Nonlinear Video Processing
An imager is provided for viewing subcutaneous structures. In an embodiment of the invention, the imager includes a camera configured to generate a video frame, and an adaptive nonlinear processor. The adaptive nonlinear processor is configured to adjust a signal of the video frame below a first threshold to a maximum dark level and to adjust the signal of the video frame above a second threshold to a maximum light level. The imager further includes a display, configured to display the processed video frame.
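The two-threshold nonlinearity can be sketched as follows. The abstract only specifies clamping below the first threshold to the dark level and above the second to the light level; mapping the in-between range with a linear stretch is an assumption made here for illustration.

```python
import numpy as np

def enhance_frame(frame, low, high, dark=0, light=255):
    """Clamp values below `low` to the maximum dark level and values
    above `high` to the maximum light level; linearly stretch the rest
    (the stretch is an assumption, not specified in the abstract)."""
    out = np.clip((frame.astype(float) - low) / (high - low), 0.0, 1.0)
    return (dark + out * (light - dark)).astype(np.uint8)
```

Pushing the signal to the extremes of the display range is what makes faint subcutaneous contrast visible in real time.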
Integration of Computer Generated Images with NTSC Video
From the growing field of computer graphics comes the need to combine computer graphics with video from other sources. Combining video from separate sources creates a problem, since the sources are not synchronous with each other. The objective of this research report is to present a hardware design approach that will combine two asynchronous video sources to produce one video picture. The combining of two video sources is called “overlaying.” The hardware design described in this report will overlay video from a Digital Equipment Computer PRO-350 with the video from an RS-170 video source. The design approach presented includes system block diagrams and circuit descriptions of a video frame buffer.
Operational Television System for Launch Complex 39 at the John F. Kennedy Space Center
Launch Complex 39 (LC-39) is located at the National Aeronautics and Space Administration's John F. Kennedy Space Center, Merritt Island, Florida, from where the manned Apollo space capsules will be launched. Many new approaches to the launching of space vehicles are incorporated, and the extensive use of Closed Circuit Television (CCTV) for the surveillance of pre-launch and launch activities is considered vital in the Apollo/Saturn remotely controlled launch program. Closed circuit television has been used to varying degrees throughout the brief history of space vehicle launchings; however, it is believed that this system, with its rather specialized requirements, is a major step forward in the constructive use of the powerful medium of television. The system was designed and installed under the supervision of the U.S. Army Corps of Engineers, and every attempt has been made to combine the lessons learned in the TV broadcast, industrial, education, CATV, MATV, and long-line transmission fields into a superior television system for use at LC-39.
Synchronizing eye tracking and optical motion capture: How to bring them together
Both eye tracking and motion capture technologies are nowadays frequently used in the human sciences, although the two are usually used separately. However, measuring both eye and body movements simultaneously would offer great potential for investigating cross-modal interaction in human (e.g. music- and language-related) behavior. Here we combined an Ergoneers Dikablis head-mounted eye tracker with a Qualisys Oqus optical motion capture system. In order to synchronize the recordings of both devices, we developed a generalizable solution that does not rely on any (cost-intensive) ready-made / company-provided synchronization solution. At the beginning of each recording, the participant nods quickly while fixating on a target with the eyes open – a motion yielding a sharp vertical displacement in both mocap and eye data. This displacement can be reliably detected with a peak-picking algorithm and used to accurately align the mocap and eye data. This method produces accurate synchronization results on clean data and therefore provides an attractive alternative to costly plug-ins, as well as a solution when ready-made synchronization options are unavailable.
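The nod-based alignment can be sketched as below. This is a minimal illustration, not the authors' code: it assumes the nod shows up as a step-like vertical displacement in both streams, and picks the instant of steepest change in each signal as the common event.

```python
import numpy as np

def sync_offset(mocap_y, eye_y, mocap_rate, eye_rate):
    """Return the offset (in seconds) between a mocap vertical-position
    trace and an eye-tracker vertical trace, using the calibration nod.
    The nod is located as the sample of steepest change in each signal."""
    t_mocap = np.argmax(np.abs(np.diff(mocap_y))) / mocap_rate
    t_eye = np.argmax(np.abs(np.diff(eye_y))) / eye_rate
    return t_mocap - t_eye  # shift the eye stream by this much to align
```

Because the two devices record at different rates, each peak index is converted to seconds with its own sampling rate before the offsets are compared.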
Technical workflow in TV coverage of Mountain Bike events
This project focuses on the technical requirements for the television production
of a live-action sport event, specifically Red Bull's Mountain Bike competitions in the
2010 Catalan Cup. There are three main stages: the TV production on location, the editing, and
the delivery of the final video to the customer. Each stage requires several technical
decisions to be made, and this document provides detailed and comprehensible steps to achieve
each purpose.
Acquisition and Recognition of 3D Signature
This work deals with methods of capturing signatures in 3D space, selecting a suitable capture model, obtaining a sufficient number of samples to create a database, and finally verifying signatures. The first part covers existing solutions and methods of signature verification, as well as the image processing required for tracking a marker in 3D space. The following sections are dedicated to the design of a unique solution for signing in free space with a pen, without any physical contact. Two capture models were designed, using either cameras or a Leap Motion sensor. The application was implemented on top of this sensor, together with a system for verifying dynamic signatures using the DTW algorithm. Furthermore, the work includes a description of the database creation and an experimental verification of the signatures. At the end, an assessment of the security and error rate of the system is given and compared with other methods.
The result of this thesis is an application for 3D signature capture and recognition, with the potential to become a new technique for secure signing.
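The core of DTW-based signature verification can be sketched as follows. This is the textbook dynamic-time-warping distance, not the thesis's exact implementation; the choice of 3D pen positions as features and a simple distance threshold for accept/reject are assumptions made for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two pen trajectories,
    a: (n, 3) and b: (m, 3) arrays of 3D positions. Warping absorbs
    differences in signing speed between the two samples."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A candidate signature would then be accepted when its DTW distance to the enrolled template falls below a threshold tuned on the database.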
A data path for a pixel-parallel image processing system
Thesis (M.Eng.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 65). By Daphne Yong-Hsu Shih.