Speaker-following Video Subtitles
We propose a new method for improving the presentation of subtitles in video
(e.g. TV and movies). With conventional subtitles, the viewer has to constantly
look away from the main viewing area to read the subtitles at the bottom of the
screen, which disrupts the viewing experience and causes unnecessary eyestrain.
Our method places on-screen subtitles next to the respective speakers to allow
the viewer to follow the visual content while simultaneously reading the
subtitles. We use novel identification algorithms to detect the speakers based
on audio and visual information. Then the placement of the subtitles is
determined using global optimization. A comprehensive usability study indicated
that our subtitle placement method outperformed both conventional
fixed-position subtitling and a previous dynamic subtitling method in
terms of enhancing the overall viewing experience and reducing eyestrain.
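The placement step described above can be sketched as a per-frame cost minimization. The candidate offsets, penalty weights, and cost terms below are illustrative assumptions, not the paper's actual global optimization (which couples placements across frames):

```python
def place_subtitle(speaker_xy, box_wh, frame_wh, keep_out):
    """Pick the candidate subtitle position with the lowest cost.

    speaker_xy: (x, y) centre of the detected speaker's face
    box_wh:     (w, h) of the rendered subtitle box
    frame_wh:   (W, H) of the video frame
    keep_out:   (x0, y0, x1, y1) region the subtitle must not cover
    """
    sx, sy = speaker_xy
    w, h = box_wh
    W, H = frame_wh

    candidates = [(sx - w / 2, sy + 80),      # below the speaker
                  (sx - w / 2, sy - h - 80),  # above
                  (sx + 80, sy - h / 2),      # to the right
                  (sx - w - 80, sy - h / 2)]  # to the left

    def cost(pos):
        x, y = pos
        # keep the text close enough to read in a single glance
        c = ((x + w / 2 - sx) ** 2 + (y + h / 2 - sy) ** 2) ** 0.5
        # heavy penalty for leaving the frame
        if x < 0 or y < 0 or x + w > W or y + h > H:
            c += 1e6
        # penalty for covering the protected region (e.g. the face)
        x0, y0, x1, y1 = keep_out
        if x < x1 and x + w > x0 and y < y1 and y + h > y0:
            c += 1e5
        return c

    return min(candidates, key=cost)
```

For a 1920x1080 frame with the speaker's face at (960, 400), the box settles just below the face; a greedy per-frame choice like this only illustrates the cost terms that a global optimizer would balance over time.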
A system for synthetic vision and augmented reality in future flight decks
Rockwell Science Center is investigating novel human-computer interaction techniques for enhancing situational awareness in future flight decks. One aspect is to provide intuitive displays that convey vital information and spatial awareness by augmenting the real world with an overlay of relevant information registered to the real world. Such Augmented Reality (AR) techniques can be employed during bad-weather scenarios to permit flying under Visual Flight Rules (VFR) in conditions that would normally require Instrument Flight Rules (IFR). These systems could easily be implemented on head-up displays (HUDs). The advantage of AR systems over purely synthetic vision (SV) systems is that the pilot can relate the information overlay to real objects in the world, whereas SV systems provide a constant virtual view in which inconsistencies can hardly be detected. The development of components for such a system led to a demonstrator implemented on a PC. A camera grabs video images, which are overlaid with registered information. Orientation of the camera is obtained from an inclinometer and a magnetometer; position is acquired from GPS. In a possible implementation in an airplane, the on-board attitude information can be used to obtain correct registration. If visibility is sufficient, computer vision modules can be used to fine-tune the registration by matching visual cues with database features. This technology would be especially useful for landing approaches. The current demonstrator provides a frame rate of 15 fps, using a live video feed as background with an overlay of avionics symbology in the foreground. In addition, terrain rendering from a 1 arc-second digital elevation model database can be overlaid to provide synthetic vision in case of limited visibility. For true outdoor testing (at ground level), the system has been implemented on a wearable computer.
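The registration step described above can be sketched as projecting a world-referenced point (e.g. a runway threshold) into the camera image, given position from GPS and orientation from the magnetometer (heading) and inclinometer (pitch). A flat local east-north-up frame and a simple pinhole camera are assumed here; all names and numbers are illustrative, not the demonstrator's actual implementation:

```python
import math

def project(point_enu, cam_enu, heading_deg, pitch_deg, f_px, cx, cy):
    """Return the pixel (u, v) of a world point, or None if it is
    behind the camera.

    point_enu, cam_enu: (east, north, up) coordinates in metres
    heading_deg: compass heading of the camera (0 = north)
    pitch_deg:   upward tilt from horizontal
    f_px:        focal length in pixels; (cx, cy) is the image centre
    """
    # translate into a camera-centred frame
    e, n, u = (p - c for p, c in zip(point_enu, cam_enu))
    h, t = math.radians(heading_deg), math.radians(pitch_deg)
    # rotate by heading about the 'up' axis
    x = e * math.cos(h) - n * math.sin(h)   # right of the camera
    y = e * math.sin(h) + n * math.cos(h)   # forward
    z = u                                   # up
    # rotate by pitch about the camera's 'right' axis
    fwd = y * math.cos(t) + z * math.sin(t)
    up = -y * math.sin(t) + z * math.cos(t)
    if fwd <= 0:
        return None                         # behind the image plane
    return (cx + f_px * x / fwd, cy - f_px * up / fwd)
```

A point 100 m due north of a level, north-facing camera projects to the image centre; the computer-vision fine-tuning mentioned above would then correct the residual error between such predicted pixels and the matched visual cues.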
3D Scanning System for Automatic High-Resolution Plant Phenotyping
Thin leaves, fine stems, self-occlusion, non-rigid and slowly changing
structures make plants difficult for three-dimensional (3D) scanning and
reconstruction -- two critical steps in automated visual phenotyping. Many
current solutions such as laser scanning, structured light, and multiview
stereo can struggle to acquire usable 3D models because of limitations in
scanning resolution and calibration accuracy. In response, we have developed a
fast, low-cost, 3D scanning platform to image plants on a rotating stage with
two tilting DSLR cameras centred on the plant. This uses new methods of camera
calibration and background removal to achieve high-accuracy 3D reconstruction.
We assessed the system's accuracy using a 3D visual hull reconstruction
algorithm applied to 2 plastic models of dicotyledonous plants, 2 sorghum
plants and 2 wheat plants across different sets of tilt angles. Scan times
ranged from 3 minutes (to capture 72 images using 2 tilt angles), to 30 minutes
(to capture 360 images using 10 tilt angles). The leaf lengths, widths, areas
and perimeters of the plastic models were measured manually and compared to
measurements from the scanning system: results were within 3-4% of each other.
The 3D reconstructions obtained with the scanning system show excellent
geometric agreement with all six plant specimens, even plants with thin leaves
and fine stems.

Comment: 8 pages, DICTA 201
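The visual-hull idea used in the accuracy assessment can be sketched as voxel carving: a voxel survives only if its projection falls inside the plant silhouette in every calibrated view. The `project` callbacks and toy orthographic silhouettes below are illustrative stand-ins for the paper's calibrated cameras and background-removal masks:

```python
def visual_hull(voxels, views):
    """Keep the voxels whose projection lands inside every silhouette.

    views: list of (silhouette, project) pairs, where silhouette is a
    set of (u, v) pixel coordinates and project maps an (x, y, z)
    voxel centre to a (u, v) pixel in that view.
    """
    return [v for v in voxels
            if all(project(v) in sil for sil, project in views)]

# Toy usage: carve a 4x4x4 voxel grid with two orthographic "cameras".
voxels = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
front = {(x, y) for x in range(4) for y in range(4) if x < 2}  # front mask
side = {(y, z) for y in range(4) for z in range(4) if z < 2}   # side mask
views = [(front, lambda p: (p[0], p[1])),
         (side, lambda p: (p[1], p[2]))]
hull = visual_hull(voxels, views)  # only voxels with x < 2 and z < 2 remain
```

With more views at varied tilt angles (as in the 2- to 10-angle scans above), the carved volume tightens around thin structures such as leaves and stems, which is why calibration accuracy matters so much.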
Space Station Freedom automation and robotics: An assessment of the potential for increased productivity
This report presents the results of a study performed in support of the Space Station Freedom Advanced Development Program, under the sponsorship of the Space Station Engineering (Code MT), Office of Space Flight. The study consisted of the collection, compilation, and analysis of lessons learned, crew time requirements, and other factors influencing the application of advanced automation and robotics, with emphasis on potential improvements in productivity. The lessons-learned data collected were based primarily on Skylab, Spacelab, and other Space Shuttle experiences, consisting principally of interviews with current and former crew members and other NASA personnel with relevant experience. The objectives of this report are to present a summary of these data and their analysis, and to present conclusions regarding promising areas for the application of advanced automation and robotics technology to Space Station Freedom and the potential benefits in terms of increased productivity. In this study, primary emphasis was placed on advanced automation technology because of its fairly extensive utilization within private industry, including the aerospace sector. In contrast, other than the Remote Manipulator System (RMS), there has been relatively limited experience with advanced robotics technology applicable to the Space Station. This report should be used as a guide and is not intended to be a substitute for official Astronaut Office crew positions on specific issues.