    Data reduction for the MIPS far-infrared arrays

    Get PDF
    Traditional photoconductive detectors are used at 70 and 160 microns in the Multiband Imaging Photometer for SIRTF. These devices are highly sensitive to cosmic rays and have complex response characteristics, all of which must be anticipated in the data reduction pipeline. The pipeline is being developed by a team at the SIRTF Science Center, where the detailed design and coding are carried out, and at Steward Observatory, where the high-level algorithms are developed and detector tests are conducted to provide data for pipeline experiments. A number of innovations have been introduced. Burger's model is used to extrapolate to asymptotic values for the response of the detectors. This approach permits rapid fitting of the complexities in the detector response. Examples of successful and unsuccessful fits to the laboratory test data are shown.
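    A minimal sketch of the extrapolation idea, assuming the detector's transient response can be approximated by an exponential approach to an asymptote; the functional form, parameter names, and data below are illustrative stand-ins rather than the Burger's model actually used in the pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def transient_response(t, s_inf, amp, tau):
    # Signal relaxing toward the asymptotic level s_inf with time constant tau
    return s_inf - amp * np.exp(-t / tau)

# Hypothetical laboratory ramp: sample times (s) and measured response
t = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
measured = transient_response(t, 100.0, 30.0, 2.5) + rng.normal(0.0, 1.0, t.size)

# Fit the model and read off the extrapolated asymptotic response
popt, _ = curve_fit(transient_response, t, measured, p0=(measured[-1], 10.0, 1.0))
print(f"asymptotic response ~ {popt[0]:.1f}")
```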

    Spitzer Space Telescope MIPS Germanium Pipeline

    Get PDF
    The MIPS Germanium data reduction pipelines present the challenge of removing a wide variety of detector artifacts while still operating efficiently in a loosely coupled multiprocessor environment. The system scheduling architecture is designed to execute four pipeline stages sequentially. Each pipeline stage is built around Perl scripts that can invoke Fortran/C/C++ modules or Informix database stored procedures. All inter-pipeline communication is via the database. The pipeline stages are: elimination of nonlinear and radiation artifacts in the flux measurement, calibration of the fluxes with both onboard and stellar calibration sources, application of post-facto pointing information, and assembly of individual exposures into mosaics.
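    A minimal sketch of the scheduling idea: stages run one after another and hand results to each other only through a shared database. Here sqlite3 stands in for Informix, and the stage names and table layout are illustrative, not the project's actual schema.

```python
import sqlite3

# Stage names paraphrase the four stages in the abstract; the table layout is invented.
STAGES = ["artifact_removal", "flux_calibration", "pointing_application", "mosaicking"]

def run_stage(conn, stage, exposure_id):
    # A real stage would be a Perl script invoking Fortran/C/C++ modules or
    # stored procedures; here we only record completion in the shared database.
    conn.execute(
        "INSERT INTO stage_status (exposure_id, stage, done) VALUES (?, ?, 1)",
        (exposure_id, stage),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stage_status (exposure_id INTEGER, stage TEXT, done INTEGER)")
for stage in STAGES:  # the scheduler executes the four stages sequentially
    run_stage(conn, stage, exposure_id=1)
print(conn.execute("SELECT stage FROM stage_status ORDER BY rowid").fetchall())
```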

    Reduction Algorithms for the Multiband Imaging Photometer for Spitzer

    Full text link
    We describe the data reduction algorithms for the Multiband Imaging Photometer for Spitzer (MIPS) instrument. These algorithms were based on extensive preflight testing and modeling of the Si:As (24 micron) and Ge:Ga (70 and 160 micron) arrays in MIPS and have been refined based on initial flight data. The behaviors we describe are typical of state-of-the-art infrared focal planes operated in the low backgrounds of space. The Ge arrays are bulk photoconductors and therefore show a variety of artifacts that must be removed to calibrate the data. The Si array, while better behaved than the Ge arrays, does show a handful of artifacts that also must be removed to calibrate the data. The data reduction to remove these effects is divided into three parts. The first part converts the non-destructively read data ramps into slopes while removing artifacts with time constants of the order of the exposure time. The second part calibrates the slope measurements while removing artifacts with time constants longer than the exposure time. The third part uses the redundancy inherent in the MIPS observing modes to improve the artifact removal iteratively. For each of these steps, we illustrate the relevant laboratory experiments or theoretical arguments along with the mathematical approaches taken to calibrate the data. Finally, we describe how these preflight algorithms have performed on actual flight data. Comment: 21 pages, 16 figures, PASP accepted (May 2005 issue), version of paper with full resolution images is available at http://dirty.as.arizona.edu/~kgordon/papers/PS_files/mips_dra.pd
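    A minimal sketch of the first part of the reduction, assuming a simple least-squares fit of each pixel's non-destructively read ramp; the artifact and cosmic-ray handling described in the abstract is omitted, and the data are invented.

```python
import numpy as np

def ramp_to_slope(times, counts):
    # Least-squares slope (counts per second) of one pixel's ramp;
    # the real pipeline also removes artifacts with time constants of
    # order the exposure time before or during this step.
    slope, _intercept = np.polyfit(times, counts, 1)
    return slope

# Hypothetical 10-sample non-destructive ramp for a single pixel
rng = np.random.default_rng(0)
times = np.arange(10) * 1.0                      # seconds
counts = 50.0 * times + rng.normal(0.0, 2.0, times.size)
print(f"estimated slope ~ {ramp_to_slope(times, counts):.1f} counts/s")
```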

    Neural networks for signal processing and control

    No full text
    Neural networks are developed for controlling a robot-arm and camera system and for processing images. The networks are based upon computational schemes that may be found in the brain.

    In the first network, a neural map algorithm is employed to control a five-joint pneumatic robot arm and gripper through feedback from two video cameras. The pneumatically driven robot arm employed shares essential mechanical characteristics with skeletal muscle systems. To control the position of the arm, 200 neurons formed a network representing the three-dimensional workspace embedded in a four-dimensional system of coordinates from the two cameras, and learned a set of pressures corresponding to the end effector positions, as well as a set of Jacobian matrices for interpolating between these positions. Because of the properties of the rubber-tube actuators of the arm, the position as a function of supplied pressure is nonlinear, nonseparable, and exhibits hysteresis. Nevertheless, through the neural network learning algorithm the position could be controlled to an accuracy of about one pixel (~3 mm) after two hundred learning steps. Application of repeated corrections in each step via the Jacobian matrices leads to a very robust control algorithm, since the Jacobians learned by the network have to satisfy only the weak requirement that they yield a reduction of the distance between gripper and target.

    The second network is proposed as a model for the mammalian vision system in which backward connections from the primary visual cortex (V1) to the lateral geniculate nucleus (LGN) play a key role. The application of Hebbian learning to the forward and backward connections causes the formation of receptive fields which are sensitive to edges, bars, and spatial frequencies of preferred orientations. The receptive fields are learned in such a way as to maximize the rate of transfer of information from the LGN to V1. Orientational preferences are organized into a feature map in the primary visual cortex by the application of lateral interactions during the learning phase. The organization of the mature network is compared to that found in the macaque monkey by several analytical tests.

    The capacity of the network to process images is investigated. By a method of reconstructing the input images in terms of V1 activities, the simulations show that images can be faithfully represented in V1 by the proposed network. The signal-to-noise ratio of the image is improved by the representation, and compression ratios of well over two hundred are possible. Lateral interactions between V1 neurons sharpen their orientational tuning. We further study the dynamics of the processing, showing that the rate of decrease of the reconstruction error is maximized for the receptive fields used.

    Lastly, we employ a Fokker-Planck equation for a more detailed prediction of the error value vs. time. The Fokker-Planck equation for an underdamped system with a driving force is derived, yielding an energy-dependent diffusion coefficient which is the integral of the spectral densities of the force and the velocity of the system. The theory is applied to correlated noise activation and resonant activation. Simulation results for the error of the network vs. time are compared to the solution of the Fokker-Planck equation.
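    A minimal sketch of the repeated-correction argument: as long as the learned Jacobian is roughly right, iterating small corrections keeps reducing the distance between gripper and target. The 2-D linear "arm" and all numerical values below are illustrative assumptions, not the network described in the abstract.

```python
import numpy as np

# Invented 2-D linear "arm": true_map is the unknown pressures-to-position mapping,
# J_learned is the (imperfect) Jacobian the network is assumed to have learned.
true_map = np.array([[1.2, 0.3],
                     [-0.2, 0.9]])
J_learned = np.array([[1.0, 0.4],
                      [-0.1, 1.0]])

def position(pressures):
    return true_map @ pressures

target = np.array([2.0, 1.0])
pressures = np.zeros(2)
for _ in range(20):
    error = target - position(pressures)            # remaining gripper-to-target offset
    pressures += np.linalg.solve(J_learned, error)  # correction via the learned Jacobian
print("final distance to target:", np.linalg.norm(target - position(pressures)))
```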

    Metrics Correlation and Analysis Service (MCAS)

    No full text
    The complexity of Grid workflow activities and their associated software stacks inevitably involves multiple organizations, ownership, and deployment domains. In this setting, important and common tasks such as the correlation and display of metrics and debugging information (fundamental ingredients of troubleshooting) are challenged by the informational entropy inherent to independently maintained and operated software components. Because such an information pool is disorganized, it is a difficult environment for business intelligence analysis, i.e., troubleshooting, incident investigation, and trend spotting. The mission of the MCAS project is to deliver a software solution to help with adaptation, retrieval, correlation, and display of workflow-driven data and of type-agnostic events generated by loosely coupled or fully decoupled middleware.