
    Multi-use lunar telescopes

    The objective of multi-use telescopes is to reduce the initial and operational costs of space telescopes to the point where a fair number of telescopes, a dozen or so, would be affordable. The basic approach is to develop a common telescope, control system, and power and communications subsystem that can be used with a wide variety of instrument payloads, e.g., imaging CCD cameras, photometers, and spectrographs. By having such a multi-use and multi-user telescope, a common practice for Earth-based telescopes, development costs can be shared across many telescopes, and the telescopes can be produced in economical batches.

    Applications of ISES for vegetation and land use

    Remote sensing applications involving vegetation cover and land use are reviewed to consider the potential benefits to the Earth Observing System (Eos) of a proposed Information Sciences Experiment System (ISES). The ISES concept has been proposed as an onboard experiment and computational resource to support advanced experiments and demonstrations in the information and earth sciences. Embedded in the concept is the potential for relieving the data glut problem, enhancing capabilities to meet the real-time needs of data users and in-situ researchers, and introducing emerging technology to Eos as the technology matures. These potential benefits are examined in the context of state-of-the-art research activities in image/data processing and management.

    Im2Flow: Motion Hallucination from Static Images for Action Recognition

    Existing methods to recognize actions in static images take the images at face value, learning the appearances (objects, scenes, and body poses) that distinguish each action class. However, such models are deprived of the rich dynamic structure and motions that also define human activity. We propose an approach that hallucinates the unobserved future motion implied by a single snapshot to help static-image action recognition. The key idea is to learn a prior over short-term dynamics from thousands of unlabeled videos, infer the anticipated optical flow on novel static images, and then train discriminative models that exploit both streams of information. Our main contributions are twofold. First, we devise an encoder-decoder convolutional neural network and a novel optical flow encoding that can translate a static image into an accurate flow map. Second, we show the power of hallucinated flow for recognition, successfully transferring the learned motion into a standard two-stream network for activity recognition. On seven datasets, we demonstrate the power of the approach: it not only achieves state-of-the-art accuracy for dense optical flow prediction, but also consistently enhances recognition of actions and dynamic scenes.
    Comment: Published in CVPR 2018; project page: http://vision.cs.utexas.edu/projects/im2flow
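    The "two streams of information" mentioned in the abstract follow the standard two-stream recognition design: an appearance (RGB) stream and a motion stream (here fed with hallucinated flow), whose per-class scores are fused at the end. A minimal sketch of that late-fusion step, with hypothetical logits (the stream names, class scores, and fusion weights are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical per-class logits for one test image (3 action classes)
appearance_logits = np.array([2.0, 0.5, 0.1])  # RGB appearance stream
flow_logits       = np.array([0.2, 2.5, 0.3])  # hallucinated-flow stream

# Late fusion: average the per-stream class probabilities equally
fused = 0.5 * (softmax(appearance_logits) + softmax(flow_logits))
predicted_class = int(np.argmax(fused))  # motion evidence flips the decision to class 1
```

Because fusion happens on probabilities rather than raw logits, neither stream can dominate simply by producing larger-magnitude scores; the flow stream can correct an appearance-only prediction, which is the effect the abstract reports as "consistently enhances recognition."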