
    Advanced information processing system for advanced launch system: Avionics architecture synthesis

    The Advanced Information Processing System (AIPS) is a fault-tolerant distributed computer system architecture that was developed to meet the real-time computational needs of advanced aerospace vehicles. One such vehicle is the Advanced Launch System (ALS), being developed jointly by NASA and the Department of Defense to launch heavy payloads into low Earth orbit at one tenth the cost (per pound of payload) of current launch vehicles. An avionics architecture that uses the AIPS hardware and software building blocks was synthesized for ALS. The AIPS-for-ALS architecture synthesis process is described, starting with the ALS mission requirements and ending with an analysis of the candidate ALS avionics architecture.

    Radio Galaxy Zoo: Knowledge Transfer Using Rotationally Invariant Self-Organising Maps

    With the advent of large-scale surveys, the manual analysis and classification of individual radio source morphologies becomes impossible because existing approaches do not scale. The analysis of complex morphological features in the spatial domain is a particularly important task. Here we discuss the challenges of transferring crowdsourced labels obtained from the Radio Galaxy Zoo project and introduce a transfer mechanism based on quantile random forest regression. Using parallelized rotation- and flipping-invariant Kohonen maps, image cubes of Radio Galaxy Zoo-selected galaxies, formed from the FIRST radio continuum and WISE infrared all-sky surveys, are first projected down to a two-dimensional embedding in an unsupervised way. This embedding can be seen as a discretised space of shapes, with the coordinates reflecting morphological features as expressed by the automatically derived prototypes. We find that these prototypes reconstruct physically meaningful processes across the two-channel images at radio and infrared wavelengths in an unsupervised manner. In the second step, images are compared with these prototypes to create a heat-map, which is the morphological fingerprint of each object and the basis for transferring the user-generated labels. These heat-maps reduce the feature space by a factor of 248 and can be used as the basis for subsequent machine learning methods. Using an ensemble of decision trees, we achieve upwards of 85.7% and 80.7% accuracy when predicting the number of components and peaks in an image, respectively, from these heat-maps. We also question the currently used discrete classification scheme and introduce a continuous scale that better reflects the uncertainty of the transition between two classes caused by sensitivity and resolution limits.
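
    The two-step pipeline above lends itself to a compact sketch. The sketch below is a hypothetical illustration, not the paper's code: it substitutes a plain Kohonen map from the `minisom` package for the parallelized rotation- and flipping-invariant SOM actually used, random arrays in place of FIRST/WISE cutouts, and a plain random forest classifier in place of quantile random forest regression.

```python
# Hypothetical sketch of the two-step pipeline: SOM prototypes, then
# per-image "heat-maps" of distances to those prototypes as features.
import numpy as np
from minisom import MiniSom                      # plain SOM stand-in
from scipy.spatial.distance import cdist
from sklearn.ensemble import RandomForestClassifier

n_images, side = 1000, 32
images = np.random.rand(n_images, side * side)   # placeholder flattened cutouts

# Step 1: unsupervised 2-D embedding; every SOM node is a prototype shape.
som = MiniSom(10, 10, side * side, sigma=1.0, learning_rate=0.5)
som.train_random(images, 5000)
prototypes = som.get_weights().reshape(-1, side * side)   # (100, 1024)

# Step 2: heat-map fingerprint = distance from each image to every prototype.
# Here 1024 pixels shrink to 100 features (the paper reports a reduction
# factor of 248 for its configuration).
heatmaps = cdist(images, prototypes)

# Label transfer: train a tree ensemble on the crowdsourced labels.
labels = np.random.randint(1, 4, size=n_images)  # placeholder RGZ labels
clf = RandomForestClassifier(n_estimators=200).fit(heatmaps, labels)
```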

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 1: Army fault tolerant architecture overview

    Digital computing systems needed for Army programs such as the Computer-Aided Low Altitude Helicopter Flight Program and the Armored Systems Modernization (ASM) vehicles are characterized by requirements for high computational throughput and input/output bandwidth, hard real-time response, high reliability and availability, and maintainability, testability, and producibility. In addition, such a system should be affordable to produce, procure, maintain, and upgrade. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and constructed under a three-year program comprising conceptual study, detailed design and fabrication, and demonstration and validation phases. Described here are the results of the conceptual study phase of the AFTA development: an introduction to the AFTA program, its objectives, and the key elements of its technical approach; a format for representing mission requirements in a manner suitable for first-order AFTA sizing and analysis; a discussion of the current state of mission requirements acquisition for the targeted Army missions; and an overview of AFTA's architectural theory of operation.

    Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis

    Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed, and the test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to reducing the probability of AFTA failure due to common-mode faults is described. Analytical models are developed for AFTA performance, reliability, availability, life-cycle cost, weight, power, and volume. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware, and a plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
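
    The abstract does not reproduce AFTA's analytical models. As a point of reference only, the sketch below works the textbook k-of-n majority-voting reliability calculation that fault-tolerant avionics reliability models of this kind build on; the failure rate and mission time are illustrative values, not AFTA figures.

```python
# Generic redundancy-reliability sketch (not AFTA's actual model):
# probability that at least k of n independent channels survive.
from math import comb, exp

def channel_reliability(lam: float, t: float) -> float:
    """Exponential failure law: probability one channel survives to time t."""
    return exp(-lam * t)

def k_of_n_reliability(k: int, n: int, r: float) -> float:
    """Probability that at least k of n independent channels survive."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Illustrative numbers: 1e-4 failures/hour per channel, 10-hour mission.
r = channel_reliability(lam=1e-4, t=10.0)
print(f"simplex:             {r:.6f}")
print(f"triplex (2-of-3):    {k_of_n_reliability(2, 3, r):.6f}")
print(f"quadruplex (3-of-4): {k_of_n_reliability(3, 4, r):.6f}")
```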

    Electron Magnetic Resonance: The Modified Bloch Equation

    We find a modified Bloch equation for the electronic magnetic moment when the magnetic moment explicitly contains a diamagnetic contribution (a magnetic-field-induced magnetic moment arising from the electronic orbital angular momentum) in addition to the intrinsic magnetic moment of the electron. The modified Bloch equation is coupled to equations of motion for the position and momentum operators. In the presence of static and time-varying magnetic field components, the magnetic moment oscillates out of phase with the magnetic field, and power is absorbed by virtue of the field-induced magnetic moment, even in the absence of coupling to the environment. We explicitly work out the spectrum and absorption for the case of a p-state electron.
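
    The abstract does not reproduce the modified equation itself. For orientation, the standard Bloch equation that it generalizes can be written as follows, with gyromagnetic ratio \(\gamma\), equilibrium magnetization \(M_0\), and phenomenological relaxation times \(T_1\), \(T_2\):

```latex
% Standard Bloch equation (the unmodified baseline; the paper's version
% adds the diamagnetic, field-induced contribution described above).
\frac{d\mathbf{M}}{dt} = \gamma\,\mathbf{M}\times\mathbf{B}
  - \frac{M_x\,\hat{\mathbf{x}} + M_y\,\hat{\mathbf{y}}}{T_2}
  - \frac{(M_z - M_0)\,\hat{\mathbf{z}}}{T_1}
```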

    Individual tree biomass equations or biomass expansion factors for assessment of carbon stock changes in living biomass - A comparative study

    Signatory countries to the United Nations Framework Convention on Climate Change (UNFCCC) and its supplementary Kyoto Protocol (KP) are obliged to report greenhouse gas emissions and removals. Changes in the carbon stock of living biomass should be reported using either the default or the stock change method of the Intergovernmental Panel on Climate Change (IPCC) under the Land Use, Land-Use Change and Forestry sector. Traditionally, volume estimates are used as forestry measures. Changes in living biomass may therefore be assessed by first estimating the change in the volume of stem wood and then converting this volume to whole-tree biomass using biomass expansion factors (BEFs). However, this conversion is often non-trivial because the proportion of stem wood increases with tree size at the expense of branches, foliage, stump, and roots. BEFs therefore typically vary over time, and their use may result in biased estimates. The objective of this study was to evaluate differences between biomass estimates obtained using biomass equations and BEFs, with particular focus on uncertainty analysis. Assuming that individual biomass equations can capture the differing development of the tree fractions, BEFs for standing stock were shown to overestimate the biomass sink capacity (Sweden). Although estimates from BEFs derived for changes in stock were found to be unbiased, the estimated BEFs varied substantially over time (0.85–1.22 ton CO2/m3). To some extent, however, this variation may be due to random sampling errors rather than actual changes. The highest accuracy was obtained for estimates based on biomass equations for different tree fractions, applied to data from the Swedish National Forest Inventory using a permanent sample design (estimated change in stock 1990–2005: 420 million tons CO2, with a standard error of 26.7 million tons CO2). Many countries have adopted such a design, combined with the stock change method, for reporting carbon stock changes under the UNFCCC/KP.
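
    The stock-change arithmetic discussed above is simple enough to state directly. The sketch below uses made-up numbers: the function name and the BEF value of 1.0 t CO2/m3 (a mid value within the study's reported 0.85–1.22 range) are illustrative assumptions, not figures from the study.

```python
# Minimal sketch of the stock-change method with an expansion factor:
# CO2 stock change = (stem-volume change) x BEF.
def stock_change_co2(vol_t1_m3: float, vol_t2_m3: float, bef: float) -> float:
    """CO2 stock change (tons) from stem-volume change via a BEF (t CO2/m^3)."""
    return (vol_t2_m3 - vol_t1_m3) * bef

# Illustrative only: 10,000 m^3 of net growth at an assumed BEF of 1.0.
print(stock_change_co2(1.0e6, 1.01e6, bef=1.0))  # -> 10000.0 tons CO2
```

    The study's central caveat applies here: because the true BEF drifts with stand age and tree size, holding `bef` constant over a reporting period is exactly the simplification that can bias the estimate.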

    Radio Galaxy Zoo: Machine learning for radio source host galaxy cross-identification

    We consider the problem of determining the host galaxies of radio sources by cross-identification. This has traditionally been done manually, which will be intractable for wide-area radio surveys like the Evolutionary Map of the Universe (EMU). Automated cross-identification will be critical for these future surveys, and machine learning may provide the tools to develop such methods. We apply a standard approach from computer vision to cross-identification, introducing one possible way of automating this problem, and explore the pros and cons of this approach. We apply our method to the 1.4 GHz Australia Telescope Large Area Survey (ATLAS) observations of the Chandra Deep Field South (CDFS) and the ESO Large Area ISO Survey South 1 (ELAIS-S1) fields by cross-identifying them with the Spitzer Wide-area Infrared Extragalactic (SWIRE) survey. We train our method with two sets of data: expert cross-identifications of CDFS from the initial ATLAS data release, and crowdsourced cross-identifications of CDFS from Radio Galaxy Zoo. We find that a simple strategy of cross-identifying a radio component with the nearest galaxy performs comparably to our more complex methods, though our estimated best-case performance is near 100 per cent. ATLAS contains only 87 complex radio sources that have been cross-identified by experts, so there are not enough complex examples to learn to cross-identify them accurately; much larger datasets are therefore required for training methods like ours. We also show that training our method on Radio Galaxy Zoo cross-identifications gives results comparable to training on expert cross-identifications, demonstrating the value of crowdsourced training data.
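
    The nearest-galaxy baseline mentioned above is straightforward to express. The sketch below is a hedged illustration using `astropy` sky matching with random placeholder coordinates, not ATLAS/SWIRE catalogues.

```python
# Sketch of the "nearest-galaxy" baseline: match each radio component to
# the closest infrared host candidate on the sky. Placeholder coordinates.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

rng = np.random.default_rng(0)
radio = SkyCoord(ra=rng.uniform(52, 53, 50) * u.deg,
                 dec=rng.uniform(-28.5, -27.5, 50) * u.deg)
hosts = SkyCoord(ra=rng.uniform(52, 53, 500) * u.deg,
                 dec=rng.uniform(-28.5, -27.5, 500) * u.deg)

# For each radio component: index of, and separation to, the nearest host.
idx, sep2d, _ = radio.match_to_catalog_sky(hosts)
for i, (j, s) in enumerate(zip(idx[:3], sep2d[:3])):
    print(f"radio component {i} -> host {j} ({s.arcsec:.1f} arcsec)")
```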

    The utility of the Alvarado score in the diagnosis of acute appendicitis in the elderly

    Clinical scores determining the likelihood of acute appendicitis (AA), including the Alvarado score, were devised using a younger population, and their efficacy in predicting AA in elderly patients is not well documented. The purpose of this study is to evaluate the utility of the Alvarado score in this population. A retrospective chart review of patients >65 years old presenting with pathologically diagnosed AA from 2000 to 2010 was performed. Ninety-six patients met the inclusion criteria. The average age was 73.7 ± 1.5 years, and the cohort was 41.7 per cent male. The average Alvarado score was 6.9 ± 0.33. The distribution of scores was 1 to 4 in 3.7 per cent, 5 to 6 in 37.8 per cent, and 7 to 10 in 58.5 per cent of cases. There was a statistically significant increase in the proportion of patients scoring 5 or 6 in our cohort versus the original Alvarado cohort (P < 0.01). Right lower quadrant tenderness (97.6%), left shift of neutrophils (91.5%), and leukocytosis (84.1%) were the most common findings on presentation. In conclusion, our data suggest that altering the interpretation of the Alvarado score to classify elderly patients presenting with a score of ≥5 as high risk may lead to earlier diagnosis of AA. Physicians should have a higher clinical suspicion of AA in elderly patients presenting with right lower quadrant tenderness, left shift, or leukocytosis.
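
    For readers unfamiliar with the score, the sketch below computes the standard Alvarado (MANTRELS) score and applies the elderly-specific cutoff the study proposes. The point values are the standard published ones; treating ≥5 as high risk in patients >65 is the study's suggestion, and the function names are illustrative.

```python
# Standard Alvarado (MANTRELS) point values, 10 points maximum.
ALVARADO_POINTS = {
    "migration_of_pain": 1,
    "anorexia": 1,
    "nausea_vomiting": 1,
    "rlq_tenderness": 2,
    "rebound_pain": 1,
    "elevated_temperature": 1,
    "leukocytosis": 2,
    "left_shift": 1,
}

def alvarado_score(positive_findings: set[str]) -> int:
    """Sum the points for the findings present (max 10)."""
    return sum(ALVARADO_POINTS[f] for f in positive_findings)

def elderly_risk(score: int) -> str:
    # The study's proposal: in patients >65, a score of 5 or more is high risk.
    return "high risk" if score >= 5 else "lower risk"

s = alvarado_score({"rlq_tenderness", "left_shift", "leukocytosis"})
print(s, elderly_risk(s))  # -> 5 high risk
```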

    Long-Term Potentiation: One Kind or Many?

    Do neurobiologists aim to discover natural kinds? I address this question in this chapter via a critical analysis of the classification practices operative across the 43-year history of research on long-term potentiation (LTP). I argue that this history supports the idea that the structure of scientific practice surrounding LTP research has remained an obstacle to the discovery of natural kinds.