
    ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community and designed for storing and analyzing petabytes of data efficiently. Any instance of a C++ class can be stored in a ROOT file in a machine-independent compressed binary format. In ROOT, the TTree object container is optimized for statistical data analysis over very large data sets by using vertical (columnar) data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. To analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, ROOT offers packages for complex data modeling and fitting, as well as multivariate classification based on machine learning techniques. Central to these analysis tools are the histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats such as PostScript and PDF, or in bitmap formats such as JPG or GIF. Results can also be stored as ROOT macros that allow full recreation and reworking of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks, e.g. data mining in HEP, by using PROOF, which takes care of optimally distributing the work over the available resources in a transparent way.
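
    To make the workflow concrete, the following minimal sketch shows the kind of analysis macro the abstract describes: it creates a one-dimensional histogram, fills it, fits a Gaussian model, and persists the result to a compressed ROOT file. The classes used (TFile, TH1F) and the "gaus" model are standard ROOT; the file and macro names are illustrative.

        // root_sketch.C -- illustrative ROOT analysis macro.
        // Run interactively with:  root root_sketch.C
        #include "TFile.h"
        #include "TH1F.h"

        void root_sketch() {
            // Any object can be written into a ROOT file in a
            // machine-independent compressed binary format.
            TFile f("out.root", "RECREATE");

            // One-dimensional histogram: 100 bins over [-4, 4].
            TH1F h("h", "Gaussian sample;x;entries", 100, -4.0, 4.0);
            h.FillRandom("gaus", 10000);  // fill with 10k Gaussian-distributed values

            // Regression analysis (fitting): fit a Gaussian model to the binned data.
            h.Fit("gaus");

            // Persist the histogram, fit included, into the file.
            h.Write();
            f.Close();
        }

    The same macro can later be run at full compiled speed via on-the-fly compilation (root root_sketch.C+), matching the development pattern described above.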

    Development of the engineering design integration (EDIN) system: A computer aided design development

    The EDIN (Engineering Design Integration) System, a collection of hardware and software that enables the engineer to perform man-in-the-loop interactive evaluation of aerospace vehicle concepts, is considered. Study efforts were concentrated in the following areas: (1) integration of hardware with the Univac Exec 8 System; (2) development of interactive software for the EDIN System; (3) upgrading of the EDIN technology module library to interactive status; (4) verification of the soundness of the developing EDIN System; (5) support of NASA in design analysis studies using the EDIN System; (6) training and documentation in the use of the EDIN System; and (7) an implementation plan for the next phase of development, with recommendations for meeting long-range objectives.

    Current and future graphics requirements for LaRC and proposed future graphics system

    The findings of an investigation to assess the current and future graphics requirements of LaRC researchers, with respect to both hardware and software, are presented. A graphics system designed to meet these requirements is proposed.

    Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 3: Support of the design process

    The user requirements for computer support of the IPAD design process are identified. The user-system interface, language, equipment, and computational requirements are considered.

    Sixth Annual Users' Conference

    Conference papers and presentation outlines that address the use of the Transportable Applications Executive (TAE) and its various application programs are compiled. Emphasis is given to the design of the user interface and the image processing workstation in general. Alternate ports of TAE and TAE subsystems are also covered.

    Software framework for geophysical data processing, visualization and code development

    IGeoS is an integrated open-source software framework for geophysical data processing under development by the UofS seismology group. Unlike other systems, this processing monitor supports structured multicomponent seismic data streams and multidimensional data traces, and employs a unique backpropagation execution logic. This results in unusual processing flexibility, allowing the system to handle nearly any geophysical data. In this project, a modern and feature-rich Graphical User Interface (GUI) was developed for the system, allowing editing and submission of processing flows and interaction with running jobs. Multiple jobs can be executed on distributed multi-processor networks and controlled from the same GUI. Jobs, in turn, can also be parallelized to take advantage of parallel processing environments such as local area networks and Beowulf clusters. A 3D/2D interactive display server was created and integrated with the IGeoS geophysical data processing framework. With the introduction of this major component, the IGeoS system becomes conceptually complete and potentially bridges the gap between traditional processing and interpretation software. Finally, in a specialized application, network acquisition and relay components were written, allowing IGeoS to be used for real-time applications. The completion of this functionality makes the processing and display capabilities of IGeoS available to multiple streams of seismic data from potentially remote sites. Seismic data can be acquired, transferred to the central server, processed, and archived, and events picked and placed in a database, completely automatically.
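
    The "backpropagation execution logic" mentioned above suggests a demand-driven flow, in which the last tool of a processing chain requests data from its predecessors rather than data being pushed forward from the source. The C++ sketch below illustrates that pull-based idea under this assumption; the type and method names (Tool, pull) are hypothetical and are not the actual IGeoS API.

        #include <iostream>
        #include <vector>

        // Hypothetical pull-based processing flow: each tool requests a
        // trace from its upstream neighbour, transforms it, and returns it.
        using Trace = std::vector<double>;

        struct Tool {
            virtual ~Tool() = default;
            virtual Trace pull() = 0;  // request the next trace
        };

        // Source tool: produces raw traces (here, a dummy constant trace).
        struct Source : Tool {
            Trace pull() override { return Trace(8, 1.0); }
        };

        // Filter tool: pulls from upstream, then scales every sample.
        struct Gain : Tool {
            Tool& upstream;
            double factor;
            Gain(Tool& up, double f) : upstream(up), factor(f) {}
            Trace pull() override {
                Trace t = upstream.pull();  // demand propagates backwards
                for (double& s : t) s *= factor;
                return t;
            }
        };

        int main() {
            Source src;
            Gain gain(src, 2.0);
            // Execution starts at the *end* of the flow; the request
            // propagates back to the source.
            Trace out = gain.pull();
            std::cout << "first sample: " << out[0] << "\n";  // prints 2
        }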

    Conversion of LARSYS III.1 to an IBM 370 computer

    A software system for processing multispectral aircraft or satellite data (LARSYS) was designed and written at the Laboratory for Applications of Remote Sensing at Purdue University. The system, implemented on an IBM 360/67 computer under the Cambridge Monitor System, is interactive in nature. TAMU LARSYS maintains the essential capabilities of Purdue's LARSYS. The machine configuration to which it has been converted is an IBM-compatible Amdahl 470V/6 computer using the Time Sharing Option (TSO) of the currently implemented OS/VS2 operating system. Due to TSO limitations, the NASA-JSC deliverable TAMU LARSYS comprises two parts: part one is a TSO control card checker for LARSYS control cards, and part two is a batch version of LARSYS. Used together, they afford most of the capabilities of the original LARSYS III.1. Additionally, two programs have been written by TAMU to support LARSYS processing. The first is an ERTS-to-MIST conversion program used to convert ERTS data to the LARSYS input form, the MIST tape. The second is a system runtable code which maintains tape/file location information for the MIST data sets.

    Computational fluid dynamics at NASA Ames and the numerical aerodynamic simulation program

    Computers are playing an increasingly important role in the field of aerodynamics, so much so that they now serve as a major complement to wind tunnels in aerospace research and development. Factors pacing advances in computational aerodynamics are identified, including the amount of computational power required to take the next major step in the discipline. The four main areas of computational aerodynamics research at NASA Ames Research Center directed toward extending the state of the art are identified and discussed. Example results obtained from approximate forms of the governing equations are presented and discussed, both in the context of the levels of computer power required and the degree to which they either further the frontiers of research or apply to programs of practical importance. Finally, the Numerical Aerodynamic Simulation Program, with its 1988 target of achieving a sustained computational rate of 1 billion floating-point operations per second, is discussed in terms of its goals, status, and projected effect on the future of computational aerodynamics.

    Lessons Learned from a Decade of Providing Interactive, On-Demand High Performance Computing to Scientists and Engineers

    For decades, the use of HPC systems was limited to those in the physical sciences who had mastered their domain in conjunction with a deep understanding of HPC architectures and algorithms. During these same decades, consumer computing device advances produced tablets and smartphones that allow millions of children to interactively develop and share code projects across the globe. As the HPC community faces the challenges associated with guiding researchers from disciplines using high-productivity interactive tools to effective use of HPC systems, it seems appropriate to revisit the assumptions surrounding the necessary skills required for access to large computational systems. For over a decade, MIT Lincoln Laboratory has been supporting interactive, on-demand high performance computing by seamlessly integrating familiar high-productivity tools to provide users with an increased number of design turns, rapid prototyping capability, and faster time to insight. In this paper, we discuss the lessons learned while supporting interactive, on-demand high performance computing from the perspectives of the users and of the team supporting the users and the system. Building on these lessons, we present an overview of current needs and the technical solutions we are building to lower the barrier to entry for new users from the humanities and the social and biological sciences. Comment: 15 pages, 3 figures; First Workshop on Interactive High Performance Computing (WIHPC) 2018, held in conjunction with ISC High Performance 2018 in Frankfurt, Germany.

    Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high-performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. To enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.
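
    The data-management scheme described above, distributing image data across local-memory MIMD nodes and collecting results back on the host, maps naturally onto a message-passing scatter/gather. The sketch below expresses that idea in present-day MPI purely as an analogy; it is not CIPE's actual interface, and the image dimensions are illustrative.

        #include <mpi.h>
        #include <cstdio>
        #include <vector>

        // Scatter rows of an image across nodes, process each block
        // locally in parallel, and gather the results back on the host.
        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int rows = 64, cols = 64;               // illustrative image size
            const int blockElems = (rows / size) * cols;  // assume size divides rows

            std::vector<float> image;                     // full image on the host only
            if (rank == 0) image.assign(rows * cols, 1.0f);

            // Distribute: each node receives a contiguous block of rows.
            std::vector<float> block(blockElems);
            MPI_Scatter(image.data(), blockElems, MPI_FLOAT,
                        block.data(), blockElems, MPI_FLOAT,
                        0, MPI_COMM_WORLD);

            // Each node applies an image operation to its own block.
            for (float& px : block) px *= 2.0f;

            // Collect the processed blocks back onto the host.
            MPI_Gather(block.data(), blockElems, MPI_FLOAT,
                       image.data(), blockElems, MPI_FLOAT,
                       0, MPI_COMM_WORLD);

            if (rank == 0) std::printf("first pixel: %f\n", image[0]);
            MPI_Finalize();
            return 0;
        }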