The Palomar Testbed Interferometer
The Palomar Testbed Interferometer (PTI) is a long-baseline infrared
interferometer located at Palomar Observatory, California. It was built as a
testbed for interferometric techniques applicable to the Keck Interferometer.
First fringes were obtained in July 1995. PTI implements a dual-star
architecture, tracking two stars simultaneously for phase referencing and
narrow-angle astrometry. The three fixed 40-cm apertures can be combined
pair-wise to provide baselines of up to 110 m. The interferometer actively tracks the
white-light fringe using an array detector at 2.2 um and active delay lines
with a range of +/- 38 m. Laser metrology of the delay lines allows for servo
control, and laser metrology of the complete optical path enables narrow-angle
astrometric measurements. The instrument is highly automated, using a
multiprocessing computer system for instrument control and sequencing.
Comment: ApJ in press (Jan 99). Fig 1 available from
http://huey.jpl.nasa.gov/~bode/ptiPicture.html; revised during copy editing.
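The pair-wise aperture combination described above can be illustrated with a short sketch. The station coordinates below are hypothetical placeholders chosen only so that one pair spans 110 m; they are not PTI's actual layout:

```python
from itertools import combinations
import math

# Hypothetical aperture positions in metres (east, north).
# Illustrative only -- not the real PTI station coordinates.
apertures = {
    "N": (0.0, 0.0),
    "S": (0.0, -110.0),
    "W": (-86.0, -55.0),
}

# Three fixed apertures combined pair-wise yield three possible baselines.
for (name_a, pos_a), (name_b, pos_b) in combinations(apertures.items(), 2):
    length = math.hypot(pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
    print(f"{name_a}-{name_b}: {length:.1f} m")
```

With three stations there are exactly three pairs, so only one baseline can be formed at a time by a two-beam combiner, which is why the abstract describes combining the apertures pair-wise.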
Towards the Teraflop CFD
We survey current projects in the area of parallel supercomputers. The machines considered here will become commercially available in the 1990-1992 time frame. All are suitable for exploring the critical issues in applying parallel processors to large-scale scientific computations, in particular CFD calculations. This chapter presents an overview of the surveyed machines and a detailed analysis of the various architectural and technology approaches taken. Particular emphasis is placed on the feasibility of a Teraflops capability following the paths proposed by various developers.
The Application Of RISC Processors To Training Simulators
This report documents a study of the utility of reduced instruction set computer (RISC) processors as the control computers in a training simulator. The report includes a master's thesis on a detailed hardware design for interfacing transputer hardware to the NeXT computer.
Operation of a Radar Altimeter over the Greenland Ice Sheet
This thesis presents documentation for the Advanced Application Flight Experiment (AAFE) pulse compression radar altimeter and its role in the NASA Multisensor Airborne Altimetry Experiment over Greenland in 1993. The AAFE altimeter is a Ku-band microwave radar which has demonstrated 14-centimeter range precision in operation over arctic ice. Recent repairs and improvements were required to make the Greenland missions possible. Transmitter, receiver, and software modifications, as well as the integration of a GPS receiver, are thoroughly documented. Procedures for installation and operation of the radar are described. Finally, suggestions are made for further system improvements.
Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992
Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. An SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer-class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.
Results from the STAR TPC system test
A system test of various components of the Solenoidal Tracker at RHIC (STAR) detector, operating in concert, has recently come on-line. Communication between a major sub-detector, a sector of the Time Projection Chamber (TPC), and the trigger, data acquisition, and slow controls systems has been established, enabling data from cosmic ray muons to be collected. First results from an analysis of the TPC data are presented. These include measurements of system noise, electronic parameters such as amplifier gains and pedestal values, and tracking resolution for cosmic ray muons and laser-induced ionization tracks. A discussion of the experience gained in integrating the different components for the system test is also given.
Computational needs survey of NASA automation and robotics missions. Volume 2: Appendixes
NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the factors contributing to this is the fact that mission computing requirements are frequency unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. Here, NASA, industry and academic communities are provided with a preliminary set of advanced mission computational processing requirements of automation and robotics (A and R) systems. The results were obtained in an assessment of the computational needs of current projects throughout NASA. The high percent of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implemented capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, system performance levels necessary to support them, and the degree to which they are met with typical programmatic constraints. Here, appendixes are provided
New Challenges in Computer Architecture Education
This study was supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia; these results are part of Grant No. 451-03-68/2022-14/200132 with the University of Kragujevac - Faculty of Technical Sciences Čačak.
This paper provides a brief overview of the development of computer architecture and its impact on approaches to computer education and the presentation of related material. The relative stability of the core concepts applied in computer architecture meant that classical approaches to teaching it have been retained to this day. The changes that occurred in architecture over the course of computer technology's development required a corresponding adjustment in education. The turning point was the advent of the microprocessor, which introduced the x86 architecture into education. The beginning of the new century was marked by the ARM architecture, and today the RISC-V architecture is increasingly emerging as a new design challenge.