Data simulation for the Lightning Imaging Sensor (LIS)
This project aims to build a data analysis system that will utilize existing video tape scenes of lightning as viewed from space. The resultant data will be used for the design and development of the Lightning Imaging Sensor (LIS) software and algorithm analysis. The desire for statistically significant metrics implies that a large data set needs to be analyzed. Before 1990 the quality and quantity of video were insufficient to build a usable data set. At present, usable data exist from missions STS-34, STS-32, STS-31, STS-41, STS-37, and STS-39. During the summer of 1990, a manual analysis system was developed to demonstrate that the video analysis is feasible and to identify techniques to deduce information that was not directly available. Because the closed circuit television system used on the space shuttle was intended for documentary TV, the current values of the camera focal length and pointing orientation, which are needed for photoanalysis, are not included in the system data. A large effort was needed to discover ancillary data sources as well as to develop indirect methods to estimate the necessary parameters. Any data system coping with full motion video faces an enormous bottleneck produced by the large data production rate and the need to move and store the digitized images. The manual system bypassed the video digitizing bottleneck by using a genlock to superimpose pixel coordinates on full motion video. Because the data set had to be obtained point by point by a human operating a computer mouse, the data output rate was small. The loan and subsequent acquisition of an Abekas digital frame store with a real time digitizer moved the bottleneck from data acquisition to a problem of data transfer and storage. The semi-automated analysis procedure was developed using existing equipment and is described. A fully automated system is described in the hope that the components may come on the market at reasonable prices in the next few years.
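The digitizing bottleneck the abstract describes can be made concrete with a back-of-the-envelope data-rate estimate. The frame size and bit depth below are assumptions for illustration (the abstract does not give the Abekas frame store's actual format); only the ~30 frames/s NTSC rate is standard.

```python
# Rough estimate of the full-motion-video digitizing bottleneck.
# Frame dimensions and bit depth are illustrative assumptions,
# not figures from the project description.
width, height = 512, 512        # pixels per digitized frame (assumed)
bytes_per_pixel = 1             # 8-bit grayscale (assumed)
frames_per_second = 30          # NTSC full-motion video

bytes_per_frame = width * height * bytes_per_pixel
data_rate = bytes_per_frame * frames_per_second   # bytes per second

print(f"{bytes_per_frame} bytes per frame")
print(f"{data_rate / 1e6:.1f} MB/s sustained")
```

Even under these modest assumptions the sustained rate is several megabytes per second, which for early-1990s transfer and storage hardware explains why the bottleneck merely moved from acquisition to data transfer and storage.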
An Overview of Models for Response Times and Processes in Cognitive Tests.
Response times (RTs) are a natural kind of data with which to investigate the cognitive processes underlying cognitive test performance. We give an overview of modeling approaches and of findings obtained with these approaches. Four types of models are discussed: response time models (RT as the sole dependent variable), joint models (RT together with other variables as dependent variables), local dependency models (with remaining dependencies between RT and accuracy), and response time as covariate models (RT as an independent variable). The evidence from these approaches is often not very informative about the specific kind of processes (other than problem solving, information accumulation, and rapid guessing), but the findings do suggest dual processing: automated processing (e.g., knowledge retrieval) vs. controlled processing (e.g., sequential reasoning steps), and alternative explanations for the same results exist. While it seems quite possible to differentiate rapid guessing from normal problem solving (which can be based on automated or controlled processing), further decompositions of response times are rarely made, although they are possible based on some of the model approaches.
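The one separation the abstract says RT data support reliably, rapid guessing versus normal problem solving, can be illustrated with a simple threshold on simulated lognormal RTs. All distribution parameters and the threshold below are illustrative assumptions, not values from any of the models surveyed.

```python
# Hedged sketch: separating rapid guessing from normal problem solving
# with a fixed response-time threshold. Parameters are illustrative.
import random

random.seed(0)

# Two-class lognormal mixture (assumed): rapid guesses are fast and
# narrow; genuine solution attempts are slower and more variable.
guesses = [random.lognormvariate(-0.5, 0.3) for _ in range(200)]  # ~0.6 s median
solves = [random.lognormvariate(2.0, 0.5) for _ in range(800)]    # ~7.4 s median
rts = [(t, "guess") for t in guesses] + [(t, "solve") for t in solves]

THRESHOLD = 2.0  # seconds, placed between the two modes (assumption)

correct = sum((t < THRESHOLD) == (label == "guess") for t, label in rts)
accuracy = correct / len(rts)
print(f"threshold classification accuracy: {accuracy:.3f}")
```

When the two RT distributions barely overlap, as here, a single threshold recovers the class labels almost perfectly; distinguishing automated from controlled processing within the slower class is exactly the further decomposition the abstract notes is rarely attempted.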
Lightning observations from space shuttle
The experimental program of the Earth Sciences and Applications Division at NASA/MSFC includes development of the Lightning Imaging Sensor (LIS) for the NOAA Earth Observing System (EOS) Polar Platform. The research plan is to use existing lightning information to generate simulated data for the LIS experiment. Navigation algorithms were used to transform pixel locations to latitude and longitude values. The simulated data would then be used to test and develop algorithms for the analysis of LIS data. Individual frames of video imagery obtained from Space Shuttle missions provide the raw data for the simulation. Individual video frames were digitized to get the pixel locations of lightning flashes. The pixel locations will be used to locate the geographical position of the event. Because of a lack of detailed knowledge of camera orientation with respect to the Space Shuttle, video scenes that contain identifiable city lights were chosen for analysis. A method for locating the payload bay camera axis was developed and tested. Two measurements are needed: the pixel location of the apparent horizon and a timed sighting of a known location passing the principal line of the image. Individual video frames were navigated and lightning illuminated clouds were located on the map. Satisfactory agreement in location was achieved for cities and LLP lightning locations. Ground truth measurements were compared to satellite observations. A vertical lightning event was identified on the horizon. Very low frequency (VLF) transmission on this particular occasion shows a strong response to negative cloud-to-cloud flashes.
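The apparent-horizon measurement the abstract relies on has a simple geometric basis: for a spherical Earth of radius Re seen from altitude h, the horizon lies at a depression angle of arccos(Re / (Re + h)) below the local horizontal. The sketch below computes that dip angle and the slant range to the horizon; the 300 km altitude is an illustrative assumption, not a figure from the abstract.

```python
# Hedged sketch of the horizon geometry behind the camera-axis method:
# from altitude h, the apparent horizon sits at depression angle
# arccos(Re / (Re + h)) below local horizontal (spherical Earth).
import math

RE = 6371.0   # mean Earth radius, km
h = 300.0     # assumed shuttle altitude, km (illustrative)

dip_deg = math.degrees(math.acos(RE / (RE + h)))
slant_range_km = math.sqrt((RE + h) ** 2 - RE ** 2)  # distance to horizon

print(f"horizon dip at {h:.0f} km altitude: {dip_deg:.2f} deg")
print(f"slant range to horizon: {slant_range_km:.0f} km")
```

Knowing this fixed dip angle is what lets the pixel location of the apparent horizon constrain the camera's pointing; the timed sighting of a known ground location along the principal line then resolves the remaining orientation ambiguity.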