
    Evaluating model accuracy for model-based reasoning

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system are used to drive statistical measures that evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.
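
    As a rough illustration of the idea only (not the paper's actual implementation), the sketch below computes simple statistical accuracy measures (bias and RMSE) for individual model components from observed operational data; the component names and numbers are hypothetical.

    # Minimal sketch: per-component prediction-accuracy statistics from
    # operational data. Component names and values are illustrative only.
    import math

    def accuracy_metrics(predicted, observed):
        """Return simple statistical measures of prediction accuracy."""
        residuals = [p - o for p, o in zip(predicted, observed)]
        n = len(residuals)
        bias = sum(residuals) / n
        rmse = math.sqrt(sum(r * r for r in residuals) / n)
        return {"bias": bias, "rmse": rmse}

    # Hypothetical telemetry for two components of a water recovery system model.
    components = {
        "pump_flow_model": ([10.1, 10.4, 9.9], [10.0, 10.5, 10.2]),
        "filter_dp_model": ([2.3, 2.5, 2.8], [2.4, 2.9, 3.1]),
    }

    for name, (predicted, observed) in components.items():
        print(name, accuracy_metrics(predicted, observed))

    In a monitoring setting, components whose accuracy statistics degrade over time would be the first candidates for closer diagnosis.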

    Telepresence system development for application to the control of remote robotic systems

    Recent developments of techniques that assist an operator in the control of remote robotic systems are described. In particular, applications are aimed at two specific scenarios: the control of remote robot manipulators, and motion planning for remote transporter vehicles. Common to both applications is the use of realistic computer graphics images that provide the operator with pertinent information. The specific system developments for several recently completed and ongoing telepresence research projects are described.

    Issues in knowledge representation to support maintainability: A case study in scientific data preparation

    Scientific data preparation is the process of extracting usable scientific data from raw instrument data. This task involves noise detection (and subsequent noise classification and flagging or removal), extracting data from compressed forms, and construction of derivative or aggregate data (e.g. spectral densities or running averages). A software system called PIPE provides intelligent assistance to users developing scientific data preparation plans using a programming language called Master Plumber. PIPE provides this assistance by using a process description to create a dependency model of the scientific data preparation plan. This dependency model can then be used to verify syntactic and semantic constraints on processing steps to perform limited plan validation. PIPE also provides capabilities for using this model to assist in debugging faulty data preparation plans. In this case, the process model is used to focus the developer's attention on those processing steps and data elements that were used in computing the faulty output values. Finally, the dependency model of a plan can be used to perform plan optimization and runtime estimation. These capabilities allow scientists to spend less time developing data preparation procedures and more time on scientific analysis tasks. Because the scientific data processing modules (called fittings) evolve to match scientists' needs, maintainability is of prime importance in PIPE. This paper describes the PIPE system and how issues in maintainability affected the knowledge representation used in PIPE to capture knowledge about the behavior of fittings.
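
    The dependency-model idea can be illustrated with a small sketch; the graph structure and step names below are hypothetical and stand in for, rather than reproduce, PIPE's actual representation of Master Plumber plans. Each data element records the step that produced it and that step's inputs, and a faulty output is traced backwards to the steps and data elements that contributed to it.

    # Minimal sketch of a dependency model for a data-preparation plan and of
    # tracing a faulty output back to its contributing steps. Names are
    # illustrative placeholders, not PIPE's actual fittings.
    from collections import deque

    plan = {
        "raw": {"step": "ingest", "inputs": []},
        "deflagged": {"step": "noise_flagging", "inputs": ["raw"]},
        "spectra": {"step": "spectral_density", "inputs": ["deflagged"]},
        "averages": {"step": "running_average", "inputs": ["deflagged"]},
    }

    def trace_faulty(output):
        """Return the steps and data elements that fed into a faulty output."""
        seen, steps, queue = set(), [], deque([output])
        while queue:
            elem = queue.popleft()
            if elem in seen or elem not in plan:
                continue
            seen.add(elem)
            steps.append(plan[elem]["step"])
            queue.extend(plan[elem]["inputs"])
        return steps, seen

    # Focus attention on the steps behind a suspicious 'spectra' output.
    print(trace_faulty("spectra"))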

    Process Design for the Photosynthesis of Ethylene

    This project evaluates the feasibility of using cyanobacteria to produce ethylene from CO2. A recent paper published by the National Renewable Energy Lab (NREL) showed that it was possible to produce ethylene continuously in lab-scale experiments. The cyanobacteria use CO2, water, and light to photosynthesize ethylene. We were tasked with designing a process to produce 100MM lb/year of ethylene. It was quickly determined that, at the currently published production rate, the process would not be economically feasible; the rate would need to be increased significantly before the process becomes economically viable. Also, at the current state of technology, no commercially available or patented photobioreactor can support this process. The presence of both a gas feed and a gas effluent poses significant obstacles for reactor design. It was also determined that, due to the endothermic nature of the reaction and the inefficiency of photosynthesis, the process must rely predominantly on sunlight. This project includes specifications and pricing for water purification, cell growth, and two separation systems. The present value of the process without the reactor section was calculated to determine the maximum reactor investment and annual operating cost that would still yield a return on investment of 15%. The location of the plant was also determined: because of its carbon dioxide and seawater needs, the plant will be located along the coast in Santa Rosa, CA, close to an ethanol plant. The plant will operate 340 days per year to allow for any downtime incurred in daily operation. Cells will initially be grown in seed and growth tanks in a batch-type process. Warm seawater supplemented with nitric acid, phosphorous acid, and sodium hydroxide will be used as the medium for cell growth. Two separation sections were designed for purifying the reactor effluent: pressure swing adsorption (PSA) using a zeolite adsorbent, and cryogenic distillation with a custom nitrogen refrigeration system. The two were compared economically, and the PSA system yielded the more favorable economics. Without the reactor section, the process using pressure swing adsorption yields an IRR of 67.62% with a net present value of $70MM at 15% ROI. The proposed reactor section investment and annual operating cost can therefore have at most a net present value of -$70MM to meet the project requirements.
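
    As a hedged illustration of the economics logic only (the reactor cash flows below are placeholders, not the project's figures), the constraint on the reactor section follows from requiring the total net present value at the 15% rate to be non-negative: if the rest of the process is worth +$70MM, the reactor section may contribute at most -$70MM.

    # Sketch of the NPV constraint on the reactor section. The 15% rate and the
    # $70MM figure come from the summary above; the reactor cash flows are
    # made-up placeholders.
    def npv(rate, cash_flows):
        """Net present value of yearly cash flows, year 0 first."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    hurdle = 0.15
    pv_without_reactor = 70e6              # NPV of the plant excluding the reactor
    max_reactor_npv = -pv_without_reactor  # reactor may contribute at most -$70MM

    reactor_cash_flows = [-50e6] + [-2e6] * 15   # hypothetical capital + operating cost
    feasible = npv(hurdle, reactor_cash_flows) >= max_reactor_npv
    print(round(npv(hurdle, reactor_cash_flows) / 1e6, 1), feasible)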

    Intelligent assistance in scientific data preparation

    Scientific data preparation is the process of extracting usable scientific data from raw instrument data. This task involves noise detection (and subsequent noise classification and flagging or removal), extracting data from compressed forms, and construction of derivative or aggregate data (e.g. spectral densities or running averages). A software system called PIPE provides intelligent assistance to users developing scientific data preparation plans using a programming language called Master Plumber. PIPE provides this assistance by using a process description to create a dependency model of the scientific data preparation plan. This dependency model can then be used to verify syntactic and semantic constraints on processing steps to perform limited plan validation. PIPE also provides capabilities for using this model to assist in debugging faulty data preparation plans. In this case, the process model is used to focus the developer's attention on those processing steps and data elements that were used in computing the faulty output values. Finally, the dependency model of a plan can be used to perform plan optimization and runtime estimation. These capabilities allow scientists to spend less time developing data preparation procedures and more time on scientific analysis tasks.
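
    To make the runtime-estimation capability concrete, here is a minimal sketch under the assumption of a sequential plan with known per-step cost estimates; the step names and timings are hypothetical, not PIPE's actual fittings or estimates.

    # Sketch of plan runtime estimation from per-step cost estimates.
    step_seconds = {
        "ingest": 5.0,
        "decompress": 8.0,
        "noise_flagging": 12.0,
        "running_average": 3.0,
    }

    plan_order = ["ingest", "decompress", "noise_flagging", "running_average"]

    def estimate_runtime(plan, costs):
        """Total estimated runtime for a sequential data-preparation plan."""
        return sum(costs[step] for step in plan)

    print(estimate_runtime(plan_order, step_seconds))  # 28.0 seconds (illustrative)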

    Lensed Quasar Hosts

    Gravitational lensing assists in the detection of quasar hosts by amplifying and distorting the host light away from the unresolved quasar core images. We present the results of HST observations of 30 quasar hosts at redshifts 1 < z < 4.5. The hosts are small in size (r_e <~ 6 kpc), and span a range of morphologies consistent with early-types (though smaller in mass) to disky/late-types. The ratio of the black hole mass (MBH, from the virial technique) to the bulge mass (M_bulge, from the stellar luminosity) at 1 < z < 1.7 is broadly consistent with the local value, while MBH/M_bulge at z > 1.7 is a factor of 3--6 higher than the local value. However, depending on the stellar content, the ratio may decline at z > 4 (if E/S0-like), flatten off to 6--10 times the local value (if Sbc-like), or continue to rise (if Im-like). We infer that galaxy bulge masses must have grown by a factor of 3--6 over the redshift range 3 > z > 1, and then changed little since z ~ 1. This suggests that the peak epoch of galaxy formation for massive galaxies is above z ~ 1. We also estimate the duty cycle of luminous AGNs at z > 1 to be ~1%, or 10^7 yrs, with sizable scatter. Comment: 8 pages, 6 figures, review article with C. Impey at the conference on "QSO Host Galaxies: Evolution and Environment", Aug. 29-Sep. 2, 2005, Lorentz Center, Leiden, The Netherlands.
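
    For context, here is a minimal sketch of the kind of virial-style estimate that underlies such MBH/M_bulge comparisons; the virial factor, broad-line-region size, line width, and bulge mass below are placeholder values, not the paper's measurements.

    # Rough sketch of a virial black-hole mass estimate, M_BH ~ f * R_BLR * v^2 / G,
    # and of the resulting M_BH / M_bulge ratio. All inputs are placeholders.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # solar mass, kg
    LIGHT_DAY = 2.59e13  # metres in one light-day

    def virial_mbh(r_blr_light_days, fwhm_km_s, f=1.0):
        """Virial mass estimate in solar masses (f is an order-unity factor)."""
        r = r_blr_light_days * LIGHT_DAY
        v = fwhm_km_s * 1e3
        return f * r * v ** 2 / G / M_SUN

    mbh = virial_mbh(r_blr_light_days=100, fwhm_km_s=4000)  # placeholder inputs
    m_bulge = 1e11                                          # placeholder bulge mass (M_sun)
    print(f"M_BH ~ {mbh:.2e} M_sun, M_BH/M_bulge ~ {mbh / m_bulge:.4f}")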

    Imaging markers of disability in aquaporin-4 immunoglobulin G seropositive neuromyelitis optica: a graph theory study

    Neuromyelitis optica spectrum disorders lack imaging biomarkers associated with disease course and supporting prognosis. This complex and heterogeneous set of disorders affects many regions of the central nervous system, including the spinal cord and visual pathway. Here, we use graph theory-based multimodal network analysis to investigate hypothesis-free mixed networks and associations between clinical disease and neuroimaging markers in 40 aquaporin-4 immunoglobulin G antibody seropositive patients (age = 48.16 ± 14.3 years, female:male = 36:4) and 31 healthy controls (age = 45.92 ± 13.3 years, female:male = 24:7). Magnetic resonance imaging measures included total brain and deep grey matter volumes, cortical thickness and spinal cord atrophy. Optical coherence tomography measures of the retina and clinical measures comprising clinical attack types and the Expanded Disability Status Scale (EDSS) were also used. For the multimodal network analysis, all measures were introduced as nodes and tested for directed connectivity from clinical attack types and disease duration to systematic imaging and clinical disability measures. Analysis of variance, with group interactions, gave weights and significance for each nodal association (hyperedge). Connectivity matrices from 80% and 95% F-distribution networks were analyzed and revealed the number of combined attack types and disease duration as the most connected nodes, directly affecting changes in several regions of the central nervous system. Subsequent multivariable regression models, including interaction effects with clinical parameters, identified associations of decreased nucleus accumbens (β = −0.85, P = 0.021) and caudate nucleus (β = −0.61, P = 0.011) volumes with a higher combined attack type count and a longer disease duration, respectively. We also confirmed previously reported associations between spinal cord atrophy and an increased number of clinical myelitis attacks. Age was the most important factor associated with normalized brain volume, pallidum volume, cortical thickness and the EDSS score. The identified imaging biomarker candidates warrant further investigation in larger-scale studies. Graph theory-based multimodal networks allow for connectivity and interaction analysis, and the method may be applied in other complex, heterogeneous disease investigations with different outcome measures.
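
    A minimal sketch of the network construction in the spirit described above follows; the measures and data are simulated, and plain correlations stand in for the ANOVA-derived hyperedge weights and F-distribution thresholds used in the study.

    # Toy multimodal network: nodes are imaging/clinical measures, edges are
    # thresholded associations, and node degree identifies the most connected
    # measures. Data are simulated; this is not the study's pipeline.
    import numpy as np

    rng = np.random.default_rng(0)
    measures = ["attack_count", "disease_duration", "cord_area",
                "accumbens_vol", "caudate_vol", "edss"]
    data = rng.normal(size=(40, len(measures)))   # 40 simulated patients

    corr = np.corrcoef(data, rowvar=False)        # pairwise association weights
    threshold = 0.3                               # stand-in for the F-based cutoff
    adjacency = (np.abs(corr) > threshold) & ~np.eye(len(measures), dtype=bool)

    degree = adjacency.sum(axis=1)                # most connected nodes
    for name, d in sorted(zip(measures, degree), key=lambda x: -x[1]):
        print(f"{name}: degree {d}")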