
    Color television study Final report, Nov. 1965 - Mar. 1966

    Color television camera for transmission from lunar and earth orbits and lunar surface

    Automatic Synchronization of Multi-User Photo Galleries

    In this paper we address the problem of photo gallery synchronization, where pictures related to the same event are collected by different users. Existing solutions are usually based on unrealistic assumptions, such as time consistency across photo galleries, and often rely heavily on heuristics, which limits their applicability to real-world scenarios. We propose a solution that achieves better generalization performance on the synchronization task than the available literature. The method consists of three stages: first, deep convolutional neural network features are used to assess the visual similarity among the photos; then, pairs of similar photos are detected across different galleries and used to construct a graph; finally, a probabilistic graphical model is used to estimate the temporal offset of each pair of galleries by traversing the minimum spanning tree extracted from this graph. The experimental evaluation is conducted on four publicly available datasets covering different types of events, demonstrating the strength of the proposed method. A thorough discussion of the results is provided for a critical assessment of the synchronization quality. Comment: Accepted to IEEE Transactions on Multimedia.
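    The third stage lends itself to a short sketch. The Python snippet below is illustrative only, not the authors' code: it assumes the pairwise offset estimates and their confidence costs (here `pairwise_offsets`, a hypothetical interface) have already been produced by the probabilistic model, extracts a minimum spanning tree with Kruskal's algorithm, and accumulates offsets along the tree relative to a reference gallery.

```python
from collections import defaultdict

def synchronize_galleries(pairwise_offsets, reference=0):
    """Propagate pairwise time offsets along a minimum spanning tree.

    pairwise_offsets: dict mapping (i, j) -> (offset_seconds, cost), where a
        lower cost means a more reliable estimate (illustrative interface;
        the paper derives these from a probabilistic graphical model).
    Returns a dict mapping gallery id -> offset relative to `reference`.
    """
    # Kruskal's algorithm with a union-find structure over gallery ids.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    edges = sorted(pairwise_offsets.items(), key=lambda kv: kv[1][1])
    tree = defaultdict(list)                # adjacency list of the MST
    for (i, j), (offset, _cost) in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree[i].append((j, offset))     # t_j ~ t_i + offset
            tree[j].append((i, -offset))

    # Traverse the MST from the reference gallery, accumulating offsets.
    result, stack = {reference: 0.0}, [reference]
    while stack:
        node = stack.pop()
        for neighbor, offset in tree[node]:
            if neighbor not in result:
                result[neighbor] = result[node] + offset
                stack.append(neighbor)
    return result

# Example: gallery 1 lags gallery 0 by 30 s, gallery 2 lags gallery 1 by 12 s;
# the noisy direct (0, 2) estimate has high cost and is bypassed by the MST.
print(synchronize_galleries({(0, 1): (30.0, 0.1), (1, 2): (12.0, 0.2),
                             (0, 2): (45.0, 0.9)}))
```

    Traversing the minimum spanning tree means each gallery's offset is chained only through the most reliable pairwise estimates, so a single noisy pair cannot corrupt the whole alignment.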

    Bagadus: next generation sport analysis and multimedia platform using camera array and sensor networks

    Today, many (elite) sports clubs spend considerable resources to analyze their game performance, either manually or using one of the many existing analytics tools. In the area of soccer, several systems exist with which trainers and coaches can analyze the game play in order to improve performance. However, most of these systems are cumbersome and rely on manual work from many people and/or heavy video processing. In this thesis, we present Bagadus, a prototype soccer analysis application that integrates a sensor system, soccer analytics annotations, and video processing of a video camera array. The prototype is currently installed at Alfheim Stadium in Norway, and we demonstrate how the system can follow and zoom in on particular players, and search for and play out events from the games using the stitched panorama video and/or the camera switching mode.
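    As a rough illustration of the follow-and-zoom feature, the sketch below maps a sensor-reported pitch position into the stitched panorama and centres a virtual camera crop on the player. The homography-based mapping and all names here are assumptions for illustration; the thesis's actual calibration and sensor interfaces are not described in this abstract.

```python
import numpy as np

def player_crop(field_pos, H, pano_size, crop=(1280, 720)):
    """Return the top-left corner of a crop window that follows a player.

    field_pos: (x, y) player position on the pitch (metres), e.g. from a
               tracking sensor system (hypothetical interface).
    H:         3x3 homography mapping pitch coordinates to panorama pixels,
               assumed precomputed during camera calibration.
    pano_size: (width, height) of the stitched panorama in pixels.
    crop:      (width, height) of the virtual camera view.
    """
    # Project the pitch coordinate into the panorama (homogeneous coords).
    u, v, w = H @ np.array([field_pos[0], field_pos[1], 1.0])
    px, py = u / w, v / w

    # Centre the crop on the player, clamped to the panorama borders.
    x = int(np.clip(px - crop[0] / 2, 0, pano_size[0] - crop[0]))
    y = int(np.clip(py - crop[1] / 2, 0, pano_size[1] - crop[1]))
    return x, y

# Example with a made-up calibration: a simple scale-plus-offset homography.
H = np.array([[40.0, 0.0, 100.0],
              [0.0, 40.0, 50.0],
              [0.0, 0.0, 1.0]])
print(player_crop((52.5, 34.0), H, pano_size=(4096, 1080)))
```

    Cropping a virtual view out of the panorama, rather than physically steering a camera, is what lets such a system track any player retroactively from the same recorded footage.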

    A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely placed cameras to maximize visibility while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions, including rigid transformation, polynomial transformation, and manifold regression, to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds.
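    The baseline step here, estimating a rigid transform between two views from corresponding sphere-centre detections, is the classical least-squares alignment problem. The following is a generic Kabsch/SVD solution in Python, offered as a minimal sketch; it is not the paper's implementation, which also evaluates polynomial and manifold-regression mappings.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. sphere-centre
    detections shared by two RGB-D views. Standard Kabsch/SVD solution.
    """
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: recover a known 30-degree rotation about z plus a translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))
```

    A closed-form estimate like this typically serves only as the initialization; the global 3D reprojection error across all cameras is then reduced by a bundle-adjustment-style refinement, as the abstract describes.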

    Optical Synchronization of Time-of-Flight Cameras

    Time-of-Flight (ToF) cameras produce depth images (three-dimensional images) by measuring the time between the emission of infrared light and the reception of its reflection. A setup of multiple ToF cameras can be used to overcome their comparatively low resolution, increase the field of view, and reduce occlusion. However, the simultaneous operation of multiple ToF cameras introduces the possibility of interference, resulting in erroneous depth measurements. The problem of interference arises not only in a collaborative multi-camera setup but also when multiple independent ToF cameras operate in the same space. In this work, a new optical synchronization for ToF cameras is presented that requires no additional hardware or infrastructure to utilize a time-division multiple access (TDMA) scheme to mitigate interference. It enables a camera to sense the acquisition process of other ToF cameras and rapidly synchronize its acquisition times to operate without interference. Instead of requiring synchronization cables, only the existing hardware is used: the camera's firmware is extended with the synchronization procedure. The optical synchronization has been designed, implemented, and verified in an experimental setup with three ToF cameras. The measurements show the efficacy of the proposed optical synchronization. During the experiments, the frame rate was reduced by only about 1% due to the additional synchronization procedure.
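    The TDMA scheme at the core of this approach can be sketched in a few lines. The Python fragment below is purely illustrative; the slot count, frame period, and function names are assumptions rather than values from the thesis. A camera that has sensed which slots are occupied by other cameras' infrared bursts picks a free slot and delays its next exposure to the start of that slot.

```python
FRAME_PERIOD_MS = 100.0   # one frame cycle, e.g. 10 fps (illustrative value)
NUM_SLOTS = 4             # TDMA slots per frame period (illustrative value)
SLOT_MS = FRAME_PERIOD_MS / NUM_SLOTS

def choose_slot(busy_slots):
    """Pick the first acquisition slot not occupied by another camera.

    busy_slots: set of slot indices in which foreign IR bursts were sensed
    (in the thesis the camera detects this optically with its own sensor;
    here the set is simply given). Returns None if every slot is taken.
    """
    for slot in range(NUM_SLOTS):
        if slot not in busy_slots:
            return slot
    return None

def next_acquisition_time(now_ms, slot):
    """Delay the next exposure so it starts at the beginning of `slot`."""
    cycle_start = now_ms - (now_ms % FRAME_PERIOD_MS)
    start = cycle_start + slot * SLOT_MS
    if start <= now_ms:               # slot already passed in this cycle
        start += FRAME_PERIOD_MS      # wait for the next frame period
    return start

# A camera joining a network where slots 0 and 2 are already in use:
slot = choose_slot({0, 2})
print(slot, next_acquisition_time(now_ms=123.0, slot=slot))
```

    Because each camera only ever illuminates the scene inside its own slot, the cameras share the optical channel without interfering, at the cost of a small reduction in achievable frame rate.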