29 research outputs found

    New Generation of Instrumented Ranges: Enabling Automated Performance Analysis

    Get PDF
    Military training conducted on physical ranges that match a unit’s future operational environment provides an invaluable experience. Today, to conduct a training exercise while ensuring a unit’s performance is closely observed, evaluated, and reported on in an After Action Review, the unit requires a number of instructors to accompany the different elements. Training organized on ranges for urban warfighting brings an additional level of complexity: the high level of occlusion typical of these environments multiplies the number of evaluators needed. While the units have great need for such training opportunities, they may not have the necessary human resources to conduct them successfully. In this paper we report on our US Navy/ONR-sponsored project aimed at a new generation of instrumented ranges, and the early results we have achieved. We suggest a radically different concept: instead of recording multiple video streams that need to be reviewed and evaluated by a number of instructors, our system will focus on capturing dynamic individual warfighter pose data and performing automated performance evaluation. We will use an in situ network of automatically controlled pan-tilt-zoom video cameras and personal position and orientation sensing devices. Our system will record video, reconstruct dynamic 3D individual poses, analyze the data, recognize events, evaluate performance, generate reports, provide real-time free exploration of recorded data, and even allow the user to generate ‘what-if’ scenarios that were never recorded. The most direct benefit for an individual unit will be the ability to conduct training with fewer human resources, while having a more quantitative account of its performance (dispersion across the terrain, ‘weapon flagging’ incidents, number of patrols conducted). The instructors will have immediate feedback on some elements of the unit’s performance. Having data sets for multiple units will enable historical trend analysis, thus providing new insights and benefits for the entire service. (Office of Naval Research)
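
    The abstract names concrete, automatically computable measures such as dispersion across the terrain and ‘weapon flagging’ incidents. The sketch below illustrates, under assumptions, how such metrics could be derived from recorded per-warfighter position and orientation data; the function names, the RMS-distance definition of dispersion, and the cone-angle test for flagging are illustrative choices, not the project's actual algorithms.

```python
import numpy as np

def terrain_dispersion(positions):
    """Root-mean-square distance of unit members from their centroid.

    positions: (n_members, 2) array of x/y ground coordinates at one
    time step. Larger values mean the unit is more spread out.
    """
    positions = np.asarray(positions, dtype=float)
    centroid = positions.mean(axis=0)
    return float(np.sqrt(((positions - centroid) ** 2).sum(axis=1).mean()))

def flagging_incidents(muzzle_dirs, positions, cone_deg=10.0):
    """Count hypothetical 'weapon flagging' events at one time step: a
    muzzle direction pointing within a narrow cone toward a teammate.

    muzzle_dirs: (n_members, 2) unit vectors of weapon orientation
    positions:   (n_members, 2) member positions
    """
    muzzle_dirs = np.asarray(muzzle_dirs, dtype=float)
    positions = np.asarray(positions, dtype=float)
    cos_thresh = np.cos(np.radians(cone_deg))
    incidents = 0
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i == j:
                continue
            to_mate = positions[j] - positions[i]
            dist = np.linalg.norm(to_mate)
            if dist > 0 and np.dot(muzzle_dirs[i], to_mate / dist) > cos_thresh:
                incidents += 1
    return incidents
```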

    3D Medical Collaboration Technology to Enhance Emergency Healthcare

    Get PDF
    Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty of obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within it. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare.
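
    The article describes head-slaved (or hand-slaved) virtual cameras, where the remote viewer's tracked pose continuously specifies the viewpoint used to render the dynamic 3D reconstruction. A minimal sketch of that idea follows, assuming the tracker reports head pose as a world-space position plus a rotation matrix; it is illustrative, not the prototype system's rendering code.

```python
import numpy as np

def view_matrix_from_head(head_pos, head_rot):
    """Build a 4x4 view matrix for a head-slaved virtual camera.

    head_pos: (3,) tracked head position in world coordinates
    head_rot: (3, 3) tracked head orientation as a rotation matrix

    The view matrix is the inverse of the head pose, V = [R^T | -R^T t];
    rendering the dynamic 3D reconstruction with it gives the remote
    viewer a viewpoint that follows their tracked head.
    """
    R = np.asarray(head_rot, dtype=float)
    t = np.asarray(head_pos, dtype=float)
    view = np.eye(4)
    view[:3, :3] = R.T
    view[:3, 3] = -R.T @ t
    return view
```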

    Achieving Color Uniformity Across Multi‐Projector Displays

    No full text
    Large area tiled displays are gaining popularity for use in collaborative immersive virtual environments and scientific visualization. While recent work has addressed the issues of geometric registration, rendering architectures, and human interfaces, there has been relatively little work on photometric calibration in general, and photometric non-uniformity in particular. For example, as a result of differences in the photometric characteristics of projectors, the color and intensity of a large area display varies from place to place. Further, the imagery typically appears brighter at the regions of overlap between adjacent projectors. In this paper we analyze and classify the causes of photometric non-uniformity in a tiled display. We then propose a methodology for determining corrections that achieve uniformity, correcting for the photometric variations across a tiled projector display in real time using per-channel color look-up tables (LUTs).
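
    The proposed correction is applied in real time through per-channel color look-up tables. The sketch below shows one simple way such a LUT could be derived, assuming each projector channel's response has been measured at every input level and a common target response within the achievable range has been chosen; the inverse-interpolation approach and function names are illustrative, not the paper's exact method.

```python
import numpy as np

def build_channel_lut(measured, target, levels=256):
    """Derive a look-up table that makes one projector channel follow a
    common target response.

    measured: (levels,) measured output (e.g. luminance) of this channel
              for each input level 0..levels-1, assumed non-decreasing
    target:   (levels,) the common response all projectors should show,
              chosen to lie within this channel's achievable range

    Returns an integer LUT so that corrected = lut[requested_level].
    """
    measured = np.asarray(measured, dtype=float)
    target = np.asarray(target, dtype=float)
    inputs = np.arange(levels)
    # Invert the measured response: for each target output, find the
    # input level that produces it (linear interpolation between samples).
    lut = np.interp(target, measured, inputs)
    return np.clip(np.round(lut), 0, levels - 1).astype(np.uint8)

def apply_lut(image, luts):
    """Apply per-channel LUTs to an 8-bit RGB image of shape (H, W, 3)."""
    out = np.empty_like(image)
    for c in range(3):
        out[..., c] = luts[c][image[..., c]]
    return out
```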

    A Personal Surround Environment: Projective Display with Correction for Display Surface Geometry and Extreme Lens Distortion

    No full text
    Projectors equipped with wide-angle lenses can have an advantage over traditional projectors in creating immersive display environments since they can be placed very close to the display surface to reduce user shadowing issues while still producing large images. However, wide-angle projectors exhibit severe image distortion, requiring the image generator to correctively pre-distort the output image. In this paper, we describe a new technique based on Raskar’s [14] two-pass rendering algorithm that is able to correct for both arbitrary display surface geometry and the extreme lens distortion caused by fisheye lenses. We further detail how the distortion correction algorithm can be implemented in a real-time shader program running on a commodity GPU to create low-cost personal surround environments.
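
    As a rough illustration of the second-pass lens-distortion correction, the sketch below computes, for each projector framebuffer pixel, where to sample the first-pass (undistorted) rendering, assuming an ideal equidistant fisheye model (r = f·θ) and a flat display surface perpendicular to the optical axis. The paper's shader-based technique handles arbitrary display surface geometry and measured distortion; this simplified model and the function name are assumptions for illustration.

```python
import numpy as np

def fisheye_warp_map(width, height, f_pix):
    """Second-pass lookup map that pre-distorts the first-pass rendering
    for an ideal equidistant fisheye lens (r = f * theta).

    For each projector framebuffer pixel, returns normalized texture
    coordinates into the undistorted first-pass image; coordinates
    outside [0, 1] fall outside that image and can be rendered black.
    """
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy)                      # radius in the framebuffer
    theta = r / f_pix                         # equidistant model: r = f * theta
    r_persp = f_pix * np.tan(np.clip(theta, 0.0, np.pi / 2 - 1e-3))
    scale = np.where(r > 0, r_persp / np.maximum(r, 1e-9), 1.0)
    u = (cx + dx * scale) / (width - 1)       # normalized lookup coordinates
    v = (cy + dy * scale) / (height - 1)
    return np.stack([u, v], axis=-1)          # sample the first pass at (u, v)
```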

    PixelFlex2: A Comprehensive, Automatic, Casually-Aligned Multi-Projector Display

    No full text
    We introduce PixelFlex2, our newest scalable wall-sized, multi-projector display system. For it, we had to solve most of the difficult problems left open by its predecessor, PixelFlex, a proof-of-concept demonstration driven by a large, multi-headed SGI graphics system. PixelFlex2 retains the achievements of PixelFlex (high performance through single-pass rendering, single-pixel accuracy for geometric blending with only casual placement of projectors), while adding a) higher performance and scalability with a Linux PC cluster, b) application support with either the distributed-rendering framework of Chromium or a performance-oriented, parallel-process framework supported by a proprietary API, c) improved geometric calibration by using a corner finder for feature detection, and d) photometric calibration with a single conventional camera using high dynamic range imaging techniques rather than an expensive photometer.
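
    The improved geometric calibration feeds detected corner features into a per-projector mapping from projector pixels to a unified display coordinate frame. As a simplified illustration of that kind of step, the sketch below fits a planar homography from point correspondences with a least-squares DLT; PixelFlex2's actual calibration pipeline is more involved, and the function name and planarity assumption are illustrative only.

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Least-squares (DLT) homography mapping src_pts to dst_pts.

    src_pts, dst_pts: (N, 2) corresponding points, N >= 4, e.g. detected
    corner features in one projector's image and their desired positions
    in the unified display coordinate frame (planar surface assumed).
    """
    A = []
    for (x, y), (u, v) in zip(np.asarray(src_pts, float), np.asarray(dst_pts, float)):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular value).
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```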