    Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 1B: Concise review

    Reports on the design process, support of the design process, the IPAD system design, a catalog of IPAD technical program elements, IPAD system development and operation, and IPAD benefits and impact are concisely reviewed. The approach used to define the design is described, and the major activities performed during the product development cycle are identified. The computer system requirements necessary to support the design process are given as computational requirements of the host system, technical program elements, and system features. The IPAD computer system design is presented as concepts, a functional description, and an organizational diagram of its major components. Costs, schedules, and a three-phase plan for IPAD implementation are presented, and the benefits and impact of IPAD technology are discussed.

    Multi-community command and control systems in law enforcement: An introductory planning guide

    A set of planning guidelines for multi-community command and control systems in law enforcement is presented. Essential characteristics and applications of these systems are outlined. Requirements analysis, system concept design, implementation planning, and performance and cost modeling are described and demonstrated with numerous examples. Program management techniques and joint powers agreements for multi-community programs are discussed in detail. A description of a typical multi-community computer-aided dispatch system is appended.

    Electronic/electric technology benefits study

    The benefits and payoffs of advanced electronic/electric technologies were investigated for three types of aircraft. The technologies evaluated in each of the three airplanes included advanced flight controls, advanced secondary power, advanced avionic complements, new cockpit displays, and advanced air traffic control techniques. For the advanced flight controls, the near term considered relaxed static stability (RSS) with mechanical backup; the far term considered an advanced fly-by-wire system for a longitudinally unstable airplane. For the secondary power systems, trades were made in two steps: in the near term, engine bleed was eliminated; in the far term, both bleed air and hydraulics were eliminated. Using three commercial aircraft in the 150, 350, and 700 passenger range, the technology value and payoffs were quantified, with emphasis on the fiscal benefits. Weight reductions deriving from fuel savings and other system improvements were identified, and the weight savings were cycled for their impact on TOGW (takeoff gross weight) and on the performance of the airframes and engines. Maintenance, reliability, and logistic support were the other criteria.
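
    A toy illustration of the weight-cycling step: in a simple closed-form sizing model, a direct equipment weight saving is amplified once structure and fuel are resized against the lighter gross weight. The payload and the empty-weight and fuel fractions below are assumptions for illustration, not values from the study.

```python
# Minimal sketch of cycling a weight saving into TOGW. All numbers are
# illustrative assumptions, not values from the study.
def resized_togw(payload, delta_empty_weight=0.0,
                 empty_weight_fraction=0.55, fuel_fraction=0.30):
    """Closed-form sizing: TOGW = (payload + fixed weight delta) /
    (1 - empty-weight fraction - fuel fraction). Since structure and
    fuel scale with gross weight, a saving is amplified by the growth
    factor 1 / (1 - 0.55 - 0.30) ~ 6.7."""
    return (payload + delta_empty_weight) / (1.0 - empty_weight_fraction - fuel_fraction)

baseline = resized_togw(payload=15_000)
improved = resized_togw(payload=15_000, delta_empty_weight=-500)  # 500 kg saved
print(baseline - improved)  # ~3333 kg TOGW reduction from a 500 kg direct saving
```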

    The engineering design integration (EDIN) system

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100-series computer and peripherals using the Exec 8 operating system, a set of demand-access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. Executive control of library program execution is performed by the Univac Exec 8 operating system through a user-established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.
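
    A rough sketch of the executive pattern described here: independent library programs chained in sequence, each reading and updating a shared design data base. The program names and the file-based data base are hypothetical stand-ins; EDIN itself sequenced Univac Exec 8 run streams, with the DLG processor handling data base maintenance between steps.

```python
import subprocess

# Hypothetical stand-ins for independent library programs; not EDIN's
# actual program names or interfaces.
RUN_STREAM = ["geometry_model", "aero_analysis", "mass_properties"]

def execute_run_stream(programs, database="design.dat"):
    """Run each program in order; every step reads and updates the
    shared design data base, so later programs see earlier results."""
    for program in programs:
        subprocess.run([program, database], check=True)

# execute_run_stream(RUN_STREAM)  # would execute the hypothetical chain
```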

    Tolerating Correlated Failures in Massively Parallel Stream Processing Engines

    Fault-tolerance techniques for stream processing engines can be categorized into passive and active approaches. A typical passive approach periodically checkpoints a processing task's runtime state and can recover a failed task by restoring that state from the latest checkpoint. An active approach, on the other hand, usually employs backup nodes to run replicated tasks; upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPEs). The passive approach incurs a long recovery latency, especially when a number of correlated nodes fail simultaneously, while the active approach requires extra replication resources. In this paper, we propose a new fault-tolerance framework that is Passive and Partially Active (PPA). In a PPA scheme, the passive approach is applied to all tasks, while only a selected set of tasks is actively replicated; the number of actively replicated tasks depends on the available resources. If tasks without active replicas fail, tentative outputs are generated before the recovery process completes. We also propose effective and efficient algorithms that optimize a partially active replication plan to maximize the quality of tentative outputs. We implemented PPA on top of Storm, an open-source MPSPE, and conducted extensive experiments using both real and synthetic datasets to verify the effectiveness of our approach.
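
    As a rough sketch of the partially active idea (the paper's actual optimization algorithms are more involved than this), one could greedily spend a replica budget on the tasks whose active replication buys the most expected tentative-output quality. The task attributes and example values below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    fail_prob: float     # estimated failure probability (e.g. from correlation analysis)
    quality_gain: float  # tentative-output quality gained if actively replicated
    replica_cost: int    # resources one active replica would consume

def plan_partial_replication(tasks, budget):
    """Greedily replicate the tasks with the best expected quality gain
    per unit of replica cost until the resource budget is spent. All
    tasks remain passively checkpointed regardless of the plan."""
    ranked = sorted(tasks,
                    key=lambda t: t.fail_prob * t.quality_gain / t.replica_cost,
                    reverse=True)
    plan, used = [], 0
    for t in ranked:
        if used + t.replica_cost <= budget:
            plan.append(t.name)
            used += t.replica_cost
    return plan

# Example: three replica slots to spend across four tasks.
tasks = [Task("join", 0.30, 0.9, 2), Task("filter", 0.05, 0.2, 1),
         Task("agg", 0.20, 0.8, 1), Task("sink", 0.10, 0.4, 1)]
print(plan_partial_replication(tasks, budget=3))  # ['agg', 'join'] uses the full budget
```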

    Benchmarking: More Aspects of High Performance Computing

    The original HPL algorithm assumes that all data fits entirely in main memory. This assumption obviously yields good performance because no disk I/O is involved. However, not all applications can fit their entire dataset in memory. Applications that require a fair amount of I/O to move data between main memory and secondary storage are more indicative of how a Massively Parallel Processor (MPP) system is actually used. In this scenario, a well-designed I/O architecture plays a significant part in the performance of the MPP system on regular jobs, yet this is not represented in the current benchmark. The modified HPL algorithm is intended as a step toward filling this void.

    The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data between main memory and disk. Various methods for performing I/O operations were introduced in the report. Which I/O method to use depends on the design of the out-of-core algorithm; conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is designed in alongside the out-of-core algorithm from the start. The timings in the various plots readily show that I/O accounts for a significant part of the overall execution time, which leads to an important conclusion: retrofitting an existing code may not be the best choice.

    The right-looking algorithm selected for the LU factorization is recursive and performs well when the entire dataset is in memory. At each stage of the loop, the entire trailing submatrix is read into memory panel by panel, giving a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations would be linear in the number of columns, owing to the data access pattern of the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking algorithm will perform better out of core due to the reduced I/O; hence the conclusion that out-of-core algorithms perform better when designed as such from the start.

    The out-of-core and thread-based computations do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as the computational kernels are BLAS routines, which assume all data to be in memory. For this reason the out-of-core results and the OpenMP thread results were presented separately, and no attempt was made to combine them. In general, the modified HPL performs better with larger block sizes, due to less I/O for the out-of-core part and better cache utilization for the thread-based computation.
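
    A toy count of the panel traffic behind the right-looking versus left-looking comparison, following the abstract's own reasoning (counting panels rather than bytes, constants ignored):

```python
def trailing_panel_transfers(num_panels: int) -> int:
    """Right-looking out-of-core LU: stage k re-reads (and re-writes)
    the num_panels - k - 1 trailing panels, so total panel transfers
    grow quadratically with the panel count."""
    return sum(2 * (num_panels - k - 1) for k in range(num_panels))

# Per the report, a left-looking main loop needs I/O only linear in the
# number of columns, i.e. roughly a constant times num_panels instead.
print(trailing_panel_transfers(64))  # 4032 panel reads + writes
```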

    Optimal Pricing and Capacity Allocation in Vertically Differentiated Web Caching Services

    Internet infrastructure is a key enabler of e-business. The infrastructure consists of backbone networks (such as UUNET and AT&T), access networks (such as AOL and Earthlink), content delivery networks (CDNs, such as Akamai), and other caching service providers. Together, these players make up the digital supply chain for information goods. Caches provisioned by CDNs and other entities are the storage centers, the digital equivalent of warehouses. These caches store and deliver information from the edge of the network and serve to stabilize and add efficiency to content delivery. While the benefits of caching to content providers, in scaling content delivery globally and reducing bandwidth costs and response times, are well recognized, caching has not become pervasive, largely due to misaligned incentives in the delivery chain. Much of the work done to date on Web caching has focused on the technology for provisioning quality of service and has not dealt with issues of fundamental importance to the business of provisioning caching services: specifically, the design of incentive-compatible services, appropriate pricing schemes, and the associated resource allocation issues that arise in operating a caching service. We discuss the design of incentive-compatible caching services that we refer to as quality-of-service caching. Pricing plays an important role in aligning the incentives. We develop an analytic model to study the IAP's optimal pricing and capacity allocation policies.
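
    A minimal numeric sketch in the spirit of vertical differentiation, not the paper's actual model: customer types are uniform on [0, 1], each type buys the quality tier with the highest non-negative surplus, and the provider grid-searches for the revenue-maximizing price pair. The quality levels and grid are assumed for illustration, and the capacity constraint is omitted.

```python
import numpy as np

def revenue(p_hi, p_lo, q_hi=1.0, q_lo=0.5, n=2_000):
    """Toy model (assumed, not the paper's): a type theta ~ U[0, 1]
    values quality q at theta * q and buys the tier with the highest
    non-negative surplus. Returns expected revenue per customer."""
    theta = np.linspace(0.0, 1.0, n)
    s_hi = theta * q_hi - p_hi
    s_lo = theta * q_lo - p_lo
    buy_hi = (s_hi >= s_lo) & (s_hi >= 0)
    buy_lo = (s_lo > s_hi) & (s_lo >= 0)
    return (buy_hi.sum() * p_hi + buy_lo.sum() * p_lo) / n

# Coarse grid search for the revenue-maximizing price pair (p_lo < p_hi).
grid = np.linspace(0.01, 0.99, 50)
best = max((revenue(ph, pl), ph, pl) for ph in grid for pl in grid if pl < ph)
print(best)  # (revenue per customer, p_hi, p_lo)
```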

    Design requirements for SRB production control system. Volume 5: Appendices

    A questionnaire for screening candidate production control software packages is presented.