
    Software for Schenkerian Analysis

    Software developed to automate the process of Schenkerian analysis is described. The current state of the art is that moderately good analyses of small extracts can be generated, but more information is required about the criteria by which analysts decide among alternative interpretations in the course of analysis. The software described here allows the procedure of reduction to be examined while in progress, allowing decision points, and potentially criteria, to become clear.

    Software development predictors, error analysis, reliability models and software metric analysis

    The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study of software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which the software is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to unresolved questions about testing. In studying software metrics, data collected from seven Software Engineering Laboratory (FORTRAN) projects were examined, and three effort-reporting accuracy checks were applied to demonstrate the need to validate a database. Results are discussed.

    A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms covering all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithms' monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in handling off-nominal mission conditions and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA), and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle, from inception through Flight Software certification, are an important focus of this development effort to further ensure reliable detection of and response to off-nominal vehicle states during all phases of vehicle operation, from pre-launch through end of flight. NASA formed a dedicated M&FM team to address fault management early in the development lifecycle for the SLS initiative.
As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-End Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms using actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems on an independent platform, exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in translating concepts through requirements and test cases into flight software, compounded by the potential for human error throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations, such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations, to assess performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detections and responses that can be tested in VMET to ensure that failures can be detected, and to confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems.
VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation, free of hindrances inherent to the target platform, such as FSW processor scheduling constraints under the ARINC 653 partitioned OS, resource limitations, and other factors related to integration with subsystems not directly involved with M&FM, such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms, and sets a benchmark from which to measure the effectiveness of the M&FM algorithms' performance in the FSW development and test processes.
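To make the state-machine style of fault-management logic described above concrete, here is a minimal sketch of a monitor that requires several consecutive out-of-limit samples before latching a safing response. The states, threshold, and persistence rule are invented for illustration; they are not taken from the SLS M&FM design or its C++ implementation.

```python
# Hypothetical fault monitor: NOMINAL -> FAULT_DETECTED -> SAFING.
# All names and values are illustrative, not from the SLS M&FM design.
NOMINAL, FAULT_DETECTED, SAFING = "NOMINAL", "FAULT_DETECTED", "SAFING"

class FaultMonitor:
    def __init__(self, limit, persistence):
        self.limit = limit              # out-of-limit threshold
        self.persistence = persistence  # consecutive violations before confirming
        self.count = 0                  # current run of violations
        self.state = NOMINAL

    def step(self, reading):
        """Advance the monitor by one sample; return the current state."""
        if self.state == SAFING:        # safing is latched until reset
            return self.state
        if reading > self.limit:
            self.count += 1
        else:
            self.count = 0
            self.state = NOMINAL
        if self.count >= self.persistence:
            self.state = SAFING         # confirmed fault: command safing
        elif self.count > 0:
            self.state = FAULT_DETECTED # suspected fault, not yet confirmed
        return self.state
```

A persistence counter of this kind is one common way to keep transient sensor glitches from triggering an irreversible response, which is exactly the class of behavior a testbed like VMET would exercise with off-nominal test cases.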

    Software for Data Analysis


    A GPU-based survey for millisecond radio transients using ARTEMIS

    Astrophysical radio transients are excellent probes of extreme physical processes originating from compact sources within our Galaxy and beyond. Radio frequency signals emitted from these objects provide a means to study the intervening medium through which they travel. Next-generation radio telescopes are designed to explore the vast unexplored parameter space of high time resolution astronomy, but require High Performance Computing (HPC) solutions to process the enormous volumes of data that these telescopes produce. We have developed a combined software/hardware solution (code-named ARTEMIS) for real-time searches for millisecond radio transients, which uses GPU technology to remove interstellar dispersion and detect millisecond radio bursts from astronomical sources in real time. Here we present an introduction to ARTEMIS. We give a brief overview of the software pipeline, then focus specifically on the intricacies of performing incoherent de-dispersion. We present results from two brute-force algorithms. The first is a GPU-based algorithm, designed to exploit the L1 cache of the NVIDIA Fermi GPU. Our second algorithm is CPU-based and exploits the new AVX units in Intel Sandy Bridge CPUs.
    Comment: 4 pages, 7 figures. To appear in the proceedings of ADASS XXI, ed. P. Ballester and D. Egret, ASP Conf. Ser.
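Incoherent de-dispersion, the step the ARTEMIS pipeline accelerates on GPU and AVX hardware, amounts to shifting each frequency channel by its frequency-dependent dispersion delay and summing across channels. The brute-force sketch below illustrates the idea in NumPy; the function name, parameters, and array layout are assumptions for illustration and do not come from the ARTEMIS code base.

```python
import numpy as np

def dedisperse(data, freqs_mhz, dt_s, dm):
    """Brute-force incoherent de-dispersion for one trial DM (illustrative).

    data      : 2-D array, shape (n_chan, n_samp), power vs. (frequency, time)
    freqs_mhz : centre frequency of each channel in MHz
    dt_s      : sampling interval in seconds
    dm        : trial dispersion measure in pc cm^-3
    """
    k_dm = 4.148808e3  # dispersion constant, MHz^2 s per (pc cm^-3)
    f_ref = freqs_mhz.max()                            # highest frequency arrives first
    delays = k_dm * dm * (freqs_mhz**-2 - f_ref**-2)   # delay per channel, seconds
    shifts = np.round(delays / dt_s).astype(int)       # delay in whole samples

    n_chan, n_samp = data.shape
    out_len = n_samp - shifts.max()
    series = np.zeros(out_len)
    for c in range(n_chan):          # brute force: one shifted add per channel
        series += data[c, shifts[c]:shifts[c] + out_len]
    return series
```

A real search repeats this for many trial DM values, which is why the memory-access pattern (and hence the GPU L1 cache or CPU AVX units mentioned in the abstract) dominates performance.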

    The Architecture of MEG Simulation and Analysis Software

    MEG (Mu to Electron Gamma) is an experiment dedicated to the search for the μ+ → e+γ decay, which is strongly suppressed in the Standard Model but predicted at an accessible rate in several supersymmetric extensions of it. MEG is a small-size experiment (≈50-60 physicists at any time) with a life span of about 10 years. The limited human resources available, in particular in the core offline group, emphasized the importance of reusing software and exploiting existing expertise. Great care has been devoted to providing a simple system that hides implementation details from the average programmer. This has allowed many members of the collaboration with limited programming skills to contribute to the development of the experiment's software. The offline software is based on two frameworks: REM, in FORTRAN 77, used for the event generation and detector simulation package GEM, based on GEANT 3; and ROME, in C++, used in the readout simulation Bartender and in the reconstruction and analysis program Analyzer. Event display in the simulation is based on GEANT 3 graphics libraries, and in the reconstruction on ROOT graphics libraries. Data are stored in different formats at various stages of the processing. The frameworks include utilities for input/output, database handling, and format conversion that are transparent to the user.
    Comment: Presented at the IEEE NSS, Knoxville, 2010. Revised according to referee's remarks. Accepted by European Physical Journal Plus.