
    Implementation of a Hardware/Software Platform for Real-Time Data-Intensive Applications in Hazardous Environments

    Real-Time Technology and Applications Symposium, Brookline, MA, USA, 10-12 Oct. 1996. In real-time data-intensive applications, simultaneously achieving the required performance and determinism is a difficult issue to address, mainly because the time needed to perform I/O operations is more significant than the CPU processing time. Additional features must be considered if these applications are intended to perform in hostile environments. In this paper, we address the implementation of a hardware/software platform designed to acquire, transfer, process and store massive amounts of information at sustained rates of several MBytes/sec, capable of supporting real-time applications with stringent throughput requirements under hazardous environmental conditions. A real-world system devoted to the inspection of nuclear power plants is presented as an illustrative example.
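
    As a rough illustration of the acquire-transfer-store pipeline the abstract describes, the sketch below decouples a high-rate acquisition thread from a disk-writer thread with a bounded buffer, so that bursty storage latency does not stall acquisition. It is a minimal, hypothetical Python sketch, not the paper's hardware/software platform; all names and sizes are made up.

        # Minimal producer/consumer sketch of an acquire -> buffer -> store pipeline.
        # Illustrative only; it does not model the paper's platform.
        import os
        import queue
        import threading

        BLOCK_SIZE = 1 << 20          # 1 MiB acquisition blocks (assumed)
        QUEUE_DEPTH = 64              # bounded buffer absorbs I/O latency spikes

        def acquire(num_blocks, q):
            """Producer: simulate a sensor delivering fixed-size blocks at a sustained rate."""
            for _ in range(num_blocks):
                q.put(os.urandom(BLOCK_SIZE))   # stand-in for acquired sensor data
            q.put(None)                         # sentinel: acquisition finished

        def store(path, q):
            """Consumer: stream blocks to disk so acquisition never waits on the file system."""
            with open(path, "wb") as f:
                while (block := q.get()) is not None:
                    f.write(block)

        q = queue.Queue(maxsize=QUEUE_DEPTH)
        writer = threading.Thread(target=store, args=("capture.bin", q))
        writer.start()
        acquire(256, q)                         # ~256 MiB sustained transfer
        writer.join()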

    RAIDX: RAID EXTENDED FOR HETEROGENEOUS ARRAYS

    The computer hard drive market has diversified with the establishment of solid state disks (SSDs) as an alternative to magnetic hard disks (HDDs). Each hard drive technology has its advantages: SSDs are faster than HDDs, but HDDs are cheaper. Our goal is to construct a parallel storage system with HDDs and SSDs such that the parallel system is as fast as the SSDs. Achieving this goal is challenging since the slow HDDs store more data and become bottlenecks, while the SSDs remain idle. RAIDX is a parallel storage system designed for disks of different speeds, capacities and technologies. The RAIDX hardware consists of an array of disks; the RAIDX software consists of data structures and algorithms that allow the disks to be viewed as a single storage unit that has capacity equal to the sum of the capacities of its disks, failure rate lower than the failure rate of its individual disks, and speeds close to those of its faster disks. RAIDX achieves its performance goals with the aid of its novel parallel data organization technique that allows storage data to be moved on the fly without impacting the upper level file system. We show that storage data accesses satisfy the locality of reference principle, whereby only a small fraction of storage data are accessed frequently. RAIDX has a monitoring program that identifies frequently accessed blocks and a migration program that moves frequently accessed blocks to faster disks. The faster disks are caches that store the solo copy of frequently accessed data. Experimental evaluation has shown that an HDD+SSD RAIDX array is as fast as an all-SSD array when the workload shows locality of reference.
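
    The monitoring and migration idea can be sketched as follows: an access counter per block identifies frequently used ("hot") blocks, and the hottest blocks are relocated to the fast tier, which holds the solo copy of that data. This is a simplified, hypothetical Python illustration; RAIDX's actual on-the-fly data organization and mapping structures are more elaborate.

        # Toy sketch of hot-block tracking and migration between a slow and a fast tier.
        # Hypothetical names and structures; not RAIDX's actual implementation.
        from collections import Counter

        class TieredStore:
            def __init__(self, fast_capacity):
                self.fast_capacity = fast_capacity   # blocks the fast (SSD) tier can hold
                self.fast = {}                        # block_id -> data on the fast tier
                self.slow = {}                        # block_id -> data on the slow tier
                self.hits = Counter()                 # per-block access counts (the "monitor")

            def write(self, block_id, data):
                (self.fast if block_id in self.fast else self.slow)[block_id] = data

            def read(self, block_id):
                self.hits[block_id] += 1
                return self.fast.get(block_id, self.slow.get(block_id))

            def migrate(self):
                """Demote cold blocks, then promote the most frequently accessed ones."""
                hottest = {b for b, _ in self.hits.most_common(self.fast_capacity)}
                for b in list(self.fast):
                    if b not in hottest:
                        self.slow[b] = self.fast.pop(b)    # demote cold block
                for b in list(self.slow):
                    if b in hottest and len(self.fast) < self.fast_capacity:
                        self.fast[b] = self.slow.pop(b)    # promote hot block (solo copy)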

    Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
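
    The "interleave data transfer at all levels" goal amounts to striping: a logical stream is split into fixed-size units distributed round-robin over several devices so transfers can proceed in parallel. The following is a minimal, hypothetical Python sketch of that idea, not the project's actual design; the stripe-unit size and device model are made up.

        # Minimal striping sketch: a logical byte stream is interleaved round-robin
        # across several backing "devices" so transfers can proceed in parallel.
        STRIPE_UNIT = 64 * 1024   # bytes placed on one device before moving to the next

        def stripe(data, devices):
            """Split data into stripe units and distribute them round-robin."""
            for i in range(0, len(data), STRIPE_UNIT):
                devices[(i // STRIPE_UNIT) % len(devices)].append(data[i:i + STRIPE_UNIT])

        def unstripe(devices, total_len):
            """Reassemble the logical stream by reading the devices in round-robin order."""
            out, cursors = bytearray(), [0] * len(devices)
            for i in range(0, total_len, STRIPE_UNIT):
                d = (i // STRIPE_UNIT) % len(devices)
                out += devices[d][cursors[d]]
                cursors[d] += 1
            return bytes(out)

        devices = [[] for _ in range(4)]
        payload = bytes(range(256)) * 2048          # 512 KiB example payload
        stripe(payload, devices)
        assert unstripe(devices, len(payload)) == payload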

    RAID-2: Design and implementation of a large scale disk array controller

    We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5 inch disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19 inch racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance. We review the design decisions, history, and lessons learned from this three year university implementation effort to construct a truly large scale system assembly.
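
    The controller's XOR engine computes parity across the data stripes so that any single failed disk can be rebuilt. The arithmetic behind it can be shown in a few lines of Python; this is an illustrative software sketch, not the RAID-2 hardware datapath.

        # XOR parity over a stripe: parity = d0 ^ d1 ^ ... ^ d(n-1), byte by byte.
        # Any single missing block can be rebuilt by XOR-ing the parity with the survivors.
        def xor_blocks(blocks):
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, b in enumerate(block):
                    out[i] ^= b
            return bytes(out)

        data = [bytes([d]) * 512 for d in (0x11, 0x22, 0x33, 0x44)]   # four 512-byte blocks
        parity = xor_blocks(data)

        # Simulate losing block 2 and reconstructing it from the parity and the survivors.
        survivors = data[:2] + data[3:]
        rebuilt = xor_blocks(survivors + [parity])
        assert rebuilt == data[2]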

    Introduction to Multiprocessor I/O Architecture

    The computational performance of multiprocessors continues to improve by leaps and bounds, fueled in part by rapid improvements in processor and interconnection technology. I/O performance thus becomes ever more critical if it is not to become the bottleneck of overall system performance. In this paper we provide an introduction to I/O architectural issues in multiprocessors, with a focus on disk subsystems. While we discuss examples from actual architectures and provide pointers to interesting research in the literature, we do not attempt to provide a comprehensive survey. We concentrate on a study of the architectural design issues, and the effects of different design alternatives.

    Optical Sensors for Mapping Temperature and Winds in the Thermosphere from a CubeSat Platform

    The thermosphere is the region between approximately 80 km and 320 or more km above the earth's surface. While many people consider this elevation to be space rather than atmosphere, there is a small quantity of gases in this region. The behavior of these gases influences the orbits of satellites, including the International Space Station, causes space weather events, and influences the weather closer to the surface of the earth. Due to the location and characteristics of the thermosphere, even basic properties such as temperature are very difficult to measure. High spatial and temporal resolution data on temperatures and winds in the thermosphere are needed by both the space weather and earth climate modeling communities. To address this need, Space Dynamics Laboratory (SDL) started the Profiling Oxygen Emissions of the Thermosphere (POET) program. POET consists of a series of sensors designed to fly on sounding rockets, CubeSats, or larger platforms, such as IridiumNEXT SensorPODS. While each sensor design is different, they all use characteristics of oxygen optical emissions to measure space weather properties. The POET program builds upon the work of the RAIDS, Odin, and UARS programs. Our intention is to dramatically reduce the costs of building, launching, and operating spectrometers in space, thus allowing for more sensors to be in operation. Continuous long-term data from multiple sensors are necessary to understand the underlying physics required to accurately model and predict weather in the thermosphere. While previous spectrometers have been built to measure winds and temperatures in the thermosphere, they have all been large and expensive. The POET sensors use new focal plane technology and optical designs to overcome these obstacles. This thesis focuses on the testing and calibration of the two POET sensors: the Oxygen Profiling of the Atmospheric Limb (OPAL) temperature sensor and the Split-field Etalon Doppler Imager (SEDI) wind sensor.

    Preliminary Electrical Designs for CTEx and AFIT Satellite Ground Station

    This thesis outlines the design of the electrical components for the space-based ChromoTomography Experiment (CTEx). CTEx is the next step in the development of high-speed chromotomography at the Air Force Institute of Technology. The electrical design of the system is challenging due to the large amount of data acquired by the imager and the limited resources inherent in space-based systems. A further complication is the need to know, very precisely for each image, the angle of a spinning prism in the field of view. Without this precise measurement, any scene reconstructed from the data will be blurry and incomprehensible. This thesis also outlines how the control software for the CTEx space system should be created. The software flow balances complex real-time target pointing angles against the simplicity needed to allow the system to function as quickly as possible. This thesis also discusses the preliminary design for an AFIT satellite ground station based upon the design of the United States Air Force Academy's ground station. The AFIT ground station will be capable of commanding and controlling satellites produced by USAFA and satellites produced by a burgeoning small satellite program at AFIT.

    Automation aids for ATC controllers

    March 1985. Includes bibliographical references (leaf 9). Development of Air Traffic Controller automation aids is frequently hampered by limitations stemming from the mathematical nature of the algorithms they are based upon: lack of adaptability to local conditions; high development, testing, and modification costs; and low end-user confidence in the algorithm's behavior. Research in Artificial Intelligence has produced systems whose logic is implemented by means of rules that may be defined by the end user without explicit programming. Such rule-based systems may provide a flexible, low-cost alternative to mathematical algorithms. Since the end user can exercise significant control over the behavior of such logic, automation aids built using these methods can be tailored to the user's environment, preferences, and experience more readily than if built around a classical mathematical algorithm. While this is theoretically possible, present rule-based systems technology is insufficient to allow practical implementation of an ATC aid today. An experimental new rule-based core system has been developed which overcomes some of these obstacles, but a number of problems, including poor hardware performance, remain outstanding as topics for continuing research.
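
    As a generic illustration of the rule-based approach the report advocates (not the experimental core system it describes), a forward-chaining evaluator can be very small: rules are condition/action pairs that an end user could edit without explicit programming. The facts and rule contents below are made up.

        # Tiny forward-chaining rule evaluator: rules are (condition, action) pairs
        # applied repeatedly until no new facts are produced.
        def run_rules(facts, rules):
            fired = True
            while fired:                                   # iterate until quiescent
                fired = False
                for condition, action in rules:
                    new = action(facts) if condition(facts) else set()
                    if new - facts:
                        facts |= new
                        fired = True
            return facts

        # Hypothetical ATC-flavoured rules: flag a conflict when two aircraft share a level.
        rules = [
            (lambda f: {"AAL1 at FL330", "UAL2 at FL330"} <= f,
             lambda f: {"conflict: AAL1/UAL2"}),
            (lambda f: "conflict: AAL1/UAL2" in f,
             lambda f: {"advise: descend UAL2 to FL310"}),
        ]
        print(run_rules({"AAL1 at FL330", "UAL2 at FL330"}, rules))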

    Implementing Transparent Compression and Leveraging Solid State Disks in a High Performance Parallel File System

    In recent years computers have been increasing in compute density and speed at a dramatic pace. This increase allows massively parallel programs to run faster than ever before. Unfortunately, many such programs are being held back by the relatively slow I/O subsystems they are forced to work with. Storage technology simply has not followed the same curve of progression in the computing world. Because the storage systems are so slow in comparison, the processors are forced to idle while waiting for data, a potentially performance-crippling condition. This performance disparity is lessened by the advent of parallel file systems. Such file systems allow data to be spread across multiple servers and disks. High speed networking allows for large amounts of bandwidth to and from the file system with relatively low latency. This arrangement allows for very large increases in sustained read and write speeds on large files, although performance of the file system can be hampered if an application spends most of its time working on small data sets and files. In recent years there has also been an unprecedented forward shift in high performance I/O systems through the widespread development and deployment of NAND Flash-based solid state disks (SSDs). SSDs offer many advantages over traditional platter-based hard disk drives (HDDs) but also suffer from very specific disadvantages due to their use of Flash memory as a storage medium as well as their use of a hardware flash translation layer (FTL). The advantages of SSDs are numerous: faster random and sequential access times, higher I/O operations per second (IOPS), and much lower power consumption in both idle and load scenarios. SSDs also tend to have a much longer mean time between failure (MTBF), an advantage that can be attributed to their complete lack of moving parts. Two key things prevent SSDs from widespread mass storage deployment: storage capacity and cost per gigabyte. Enterprise level SSDs that utilize single-level cell (SLC) Flash are orders of magnitude more expensive per gigabyte than their enterprise class HDD counterparts (which also offer higher capacity per drive). Because of this disparity, we propose utilizing relatively small SSDs in conjunction with high capacity HDD arrays in parallel file systems like OrangeFS (previously known as the Parallel Virtual File System, or PVFS). The access latencies and bandwidth of SSDs make them an ideal medium for storing file metadata in a parallel file system. These same characteristics also make them ideal for integration as a persistent server-side cache. We also introduce a method of transparently compressing file data in striped parallel file systems for high-performance streaming reads and writes with increased storage capacity, to combat rising checkpoint sizes and bandwidth requirements.
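
    A minimal sketch of the transparent-compression idea follows: each stripe unit is compressed before it is sent to its I/O server and decompressed on the way back, so callers see ordinary reads and writes. The interface and server model here are assumed for illustration; this is not OrangeFS/PVFS code.

        # Sketch of per-stripe transparent compression: callers write/read raw bytes,
        # while each stripe unit is stored compressed on its server. Not OrangeFS code.
        import zlib

        STRIPE_UNIT = 64 * 1024

        class CompressedStripedFile:
            def __init__(self, num_servers):
                self.servers = [[] for _ in range(num_servers)]   # stand-ins for I/O servers

            def write(self, data):
                for i in range(0, len(data), STRIPE_UNIT):
                    server = (i // STRIPE_UNIT) % len(self.servers)
                    self.servers[server].append(zlib.compress(data[i:i + STRIPE_UNIT]))

            def read(self):
                units, counts = [], [0] * len(self.servers)
                total = sum(len(s) for s in self.servers)
                for n in range(total):
                    server = n % len(self.servers)
                    units.append(zlib.decompress(self.servers[server][counts[server]]))
                    counts[server] += 1
                return b"".join(units)

        f = CompressedStripedFile(num_servers=4)
        payload = b"checkpoint data " * 100000        # compressible example payload
        f.write(payload)
        assert f.read() == payload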