    A retrospective on the VAX VMM security kernel

    In the Beginning... A Legacy of Computing at Marshall University

    This book provides a brief history of the early computing technology at Marshall University, Huntington, W.Va., during the forty years from 1959 to 1999, before the move to Intel- and Windows-based servers. After the installation of an IBM Accounting Machine in 1959, which arguably does not fit the modern definition of a computer, the first true computer arrived in 1963 and was installed in a room below the Registrar’s office. For the next twenty years several departments ordered their own midrange standalone systems to fit their individual departmental requirements. These represented different platforms from different vendors and were not connected to each other. At the same time, the Marshall Computer Center developed an interconnected, multi-processor environment. With the Year 2000 software problems and the IT move to the new Drinko Library, several systems were scrapped, and new systems were installed on PC server platforms. This book includes images of the various systems, several comments from users, and hardware and software descriptions.

    Intermediate Representations for Controllers in Chip Generators

    Creating parameterized “chip generators” has been proposed as one way to decrease chip NRE (non-recurring engineering) costs. While many approaches are available for creating or generating flexible data path elements, the design of flexible controllers is more problematic. The most common approach is to create a microcoded engine as the controller, which offers flexibility through programmable table-based lookup functions. This paper shows that after “programming” the hardware for the desired application or applications, these flexible controller designs can easily be converted to efficient fixed (or less programmable) solutions using partial evaluation capabilities that are already present in most synthesis tools.
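
    As a rough illustration of the partial-evaluation idea (a Python sketch rather than RTL, with all names invented for this example): the flexible controller below is one table lookup per step; once the microcode table is a known constant, it can be folded into fixed dispatch logic, which is essentially what a synthesis tool's constant propagation does to the hardware.

        # Microcode table: state -> (output signals, next state). In hardware
        # this would be a programmable ROM; here it is an ordinary dict.
        MICROCODE = {
            0: ({"load": 1, "add": 0}, 1),
            1: ({"load": 0, "add": 1}, 2),
            2: ({"load": 0, "add": 0}, 0),
        }

        def flexible_step(table, state):
            """Generic engine: one table lookup per cycle (programmable form)."""
            outputs, next_state = table[state]
            return outputs, next_state

        def specialize(table):
            """Partially evaluate the engine against a fixed table, producing
            a plain if/elif chain with the table folded away -- a software
            analogue of turning a microcoded engine into fixed control logic."""
            lines = ["def fixed_step(state):"]
            for i, (state, (outputs, nxt)) in enumerate(sorted(table.items())):
                kw = "if" if i == 0 else "elif"
                lines.append(f"    {kw} state == {state}:")
                lines.append(f"        return {outputs!r}, {nxt}")
            lines.append("    raise ValueError('unreachable state')")
            namespace = {}
            exec("\n".join(lines), namespace)
            return namespace["fixed_step"]

        fixed_step = specialize(MICROCODE)
        assert fixed_step(0) == flexible_step(MICROCODE, 0)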

    Proceedings of the First NASA Ada Users' Symposium

    Ada has the potential to be part of the most significant change in software engineering technology within NASA in the last twenty years. Thus, it is particularly important that all NASA centers be aware of Ada experience and plans at other centers. Ada activities across NASA are covered, with presenters representing five of the nine major NASA centers and the Space Station Freedom Program Office. Projects discussed include: Space Station Freedom Program Office - the implications of Ada on training, reuse, management and the software support environment; Johnson Space Center (JSC) - early experience with the use of Ada, software engineering and Ada training, and the evaluation of Ada compilers; Marshall Space Flight Center (MSFC) - university research with Ada and the application of Ada to Space Station Freedom, the Orbital Maneuvering Vehicle, the Aero-Assist Flight Experiment and the Secure Shuttle Data System; Lewis Research Center (LeRC) - the evolution of Ada software to support the Space Station Power Management and Distribution System; Jet Propulsion Laboratory (JPL) - the creation of a centralized Ada development laboratory and current applications of Ada, including the Real-time Weather Processor for the FAA; and Goddard Space Flight Center (GSFC) - experiences with Ada in the Flight Dynamics Division and the Extreme Ultraviolet Explorer (EUVE) project, and the implications of GSFC experience for Ada use in NASA. Despite the diversity of the presentations, several common themes emerged from the program: Methodology - NASA experience in general indicates that the effective use of Ada requires modern software engineering methodologies; Training - it is the software engineering principles and methods that surround Ada, rather than Ada itself, that require the major training effort; Reuse - due to training and transition costs, the use of Ada may actually decrease productivity at first, as was clearly found at GSFC; and Real-time - work at LeRC, JPL and GSFC shows that it is possible to use Ada for real-time applications.

    Machine characterization and benchmark performance prediction

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize a machine or to predict the run time of other benchmarks that have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, Sun 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone, and Sieve of Eratosthenes.
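
    A minimal sketch of the prediction model described above (in Python, with illustrative numbers that are not from the paper): the machine analyzer yields a time per source-language operation, the program analyzer yields an execution count per operation, and the predicted run time is their inner product.

        # Hypothetical machine characterization: seconds per executed operation.
        machine_params = {
            "flop_add": 2.0e-7,
            "flop_mul": 3.5e-7,
            "mem_ref":  1.5e-7,
            "branch":   1.0e-7,
        }

        # Hypothetical program characterization: executions of each construct
        # in one run of the benchmark.
        program_profile = {
            "flop_add": 5_000_000,
            "flop_mul": 3_000_000,
            "mem_ref":  9_000_000,
            "branch":   1_000_000,
        }

        def predict_runtime(machine, program):
            """Combine the two characterizations into a predicted run time (s)."""
            return sum(program[op] * machine[op] for op in program)

        print(f"predicted: {predict_runtime(machine_params, program_profile):.3f} s")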

    Computer Simulations of Biological Growth Patterns: Tree Modeling Success and Applications

    The purpose of this investigation was to explore the depiction of trees in three dimensions on a microcomputer. While the use of computer-aided design in landscape architecture is increasing, imagery for plant materials remains at a more or less symbolic level. The literature concerning previous inquiries into the mechanisms of tree growth and differentiation provides a good deal of information, ranging from physiological basics to sophisticated structural and mathematical growth models; this formed the basis from which the programming work proceeded. In this context, the body of work reported here emphasizes the development of a programming methodology for achieving better tree images, rather than the sophistication of the images themselves. A major goal in this effort was simplicity in the resulting algorithms. This is significant both in minimizing use of computer memory and in aiding the transfer of the algorithms to other devices and uses. Discussed are the developmental steps taken from an initial tree model requiring a digitizing tablet and the internal storage of coordinates, to a tree model in which machine memory and algorithm complexity are minimized. The methodology deemed most useful is that of storing the trees as a general set of rules for image generation, rather than a lengthy data file for each tree. The operational value of this process is intrinsic to future applications; whether six discrete tree types are to be used or sixty, the computer works with the same amount of data -- the tree generation algorithm. Further applications of this approach could offer savings in both storage requirements and data input for a variety of complex graphic images.
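
    A small Python sketch of the "rules, not coordinates" idea (parameter names and values invented for illustration): each tree type is a handful of parameters, and a recursive branching rule regenerates the geometry on demand, so no per-tree coordinate file is stored.

        import math

        def grow(x, y, angle, length, depth, params, segments):
            """Recursively emit line segments ((x1, y1), (x2, y2)) for one tree."""
            if depth == 0 or length < 1.0:
                return
            x2 = x + length * math.cos(angle)
            y2 = y + length * math.sin(angle)
            segments.append(((x, y), (x2, y2)))
            # Two child branches, each shorter and splayed by the branch angle.
            for sign in (-1, 1):
                grow(x2, y2, angle + sign * params["spread"],
                     length * params["shrink"], depth - 1, params, segments)

        # One tree "type" is just a rule set -- the same few numbers regenerate
        # the whole image, whether six types or sixty are in use.
        oak_like = {"spread": math.radians(30), "shrink": 0.7}

        segments = []
        grow(0.0, 0.0, math.pi / 2, 100.0, depth=8, params=oak_like,
             segments=segments)
        print(f"{len(segments)} segments generated from a handful of parameters")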

    An investigation into the influences upon and determinants of perceived quality achievement in the management of construction projects by multivariate analysis

    This research concerns a quantitative examination of the factors influencing the achievement of quality on construction projects. Quality performance on construction projects has been conceived as a function of the design process that occurs before the design of the product, site team collaboration and interpersonal relationships, a high level of workplace supervision, on-site motivation, and role definition. This conception culminated in postulated determinants of quality achievement on construction projects based on a theoretical understanding. Measures of perceived design core job characteristics and of site organisation-and-management phenomena were factor analysed. The postulated determinants were verified by testing a network of eight main hypotheses using a multivariate analytical technique, multiple regression. Varied results emerged, with four main hypotheses supported, two partially supported, and the remaining two unsupported by the data. The assertion is that manipulative actions on design core job characteristics, team collaboration and consensus with mutual understanding and agreement on project goals, mutual exchange between site supervisory staff and subordinates, and role definitions conducted within an integrated framework would contribute an aggregated beneficial effect on quality achievement on construction projects.
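
    As an illustrative sketch of the analytical step (Python with synthetic data; the variable names are stand-ins, not the study's factors or data), a perceived-quality score is regressed on several postulated determinants by multiple regression.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 40  # hypothetical number of surveyed projects

        # Hypothetical factor scores for four postulated determinants.
        X = rng.normal(size=(n, 4))
        beta_true = np.array([0.8, 0.5, 0.3, 0.1])  # assumed "true" effects
        quality = X @ beta_true + rng.normal(scale=0.5, size=n)

        # Fit the multiple regression with an intercept term.
        design = np.column_stack([np.ones(n), X])
        coef, residuals, *_ = np.linalg.lstsq(design, quality, rcond=None)

        print("intercept:", round(coef[0], 3))
        names = ["design characteristics", "team collaboration",
                 "supervision", "role definition"]
        for name, b in zip(names, coef[1:]):
            print(f"{name:>24s}: {b:+.3f}")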

    Computational needs survey of NASA automation and robotics missions. Volume 2: Appendixes

    NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the contributing factors is that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. Here, the NASA, industry, and academic communities are provided with a preliminary set of advanced mission computational processing requirements for automation and robotics (A and R) systems. The results were obtained in an assessment of the computational needs of current projects throughout NASA. The high percentage of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implemented capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, the system performance levels necessary to support them, and the degree to which they can be met within typical programmatic constraints. The appendixes are provided here.