
    Cost modelling for cloud computing utilisation in long term digital preservation

    The rapid increase in the volume of digital information can cause concern among organisations regarding the manageability, costs and security of their information in the long term. As cloud computing technology is often used for digital preservation purposes and is still evolving, its long-term costs are difficult to determine. This paper presents the development of a generic cost model for public and private cloud utilisation in long-term digital preservation (LTDP), considering the impact of uncertainties and obsolescence issues. The cost model consists of rules and assumptions and was built using a combination of activity-based and parametric cost estimation techniques. After generating cost breakdown structures for both clouds, uncertainties and obsolescence were categorised. To quantify the impact of uncertainties on cost, the three-point estimate technique was employed and Monte Carlo simulation was applied to generate the probability distribution of each cost driver. A decision support cost estimation tool with a dashboard representation of results was developed.
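    The combination of three-point estimates and Monte Carlo simulation described above can be sketched as follows. This is a minimal illustration only: the cost drivers, figures and function names are invented, not taken from the paper's model or tool.

```python
import random

def simulate_total_cost(cost_drivers, n_trials=20_000, seed=42):
    """Monte Carlo over three-point (triangular) estimates.

    cost_drivers maps a driver name to (optimistic, most_likely, pessimistic)
    cost.  Each trial samples every driver from a triangular distribution and
    sums the draws; the sorted list of trial totals approximates the cost
    probability distribution."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = sum(rng.triangular(lo, hi, mode)   # triangular(low, high, mode)
                    for lo, mode, hi in cost_drivers.values())
        totals.append(total)
    totals.sort()
    return totals

def percentile(sorted_vals, p):
    # Nearest-rank percentile on an already-sorted sample.
    idx = min(len(sorted_vals) - 1, int(p / 100 * len(sorted_vals)))
    return sorted_vals[idx]

# Hypothetical annual cost drivers for a public-cloud preservation archive:
# (optimistic, most likely, pessimistic), illustrative numbers only.
drivers = {
    "storage":          (10_000, 15_000, 30_000),
    "egress":           (1_000, 4_000, 12_000),
    "format_migration": (2_000, 5_000, 20_000),  # obsolescence-driven
}

totals = simulate_total_cost(drivers)
p10, p50, p90 = (percentile(totals, p) for p in (10, 50, 90))
```

    A dashboard like the one the paper describes would typically surface exactly these percentile bands per cost driver and for the total.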

    Management issues in systems engineering

    When applied to a system, the doctrine of successive refinement is a divide-and-conquer strategy: complex systems are successively divided into pieces that are less complex, until they are simple enough to be conquered. This decomposition results in several structures for describing the product system and the producing system. These structures play important roles in systems engineering and project management, and many of the remaining sections in this chapter are devoted to describing some of these key structures. Structures that describe the product system include, but are not limited to, the requirements tree, the system architecture and certain symbolic information such as system drawings, schematics and databases. The structures that describe the producing system include the project's work breakdown, schedules, cost accounts and organization.
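    The successive-refinement decomposition can be sketched as a simple tree walk. The breakdown fragment and function name below are hypothetical, chosen only to illustrate how refinement terminates at pieces "simple enough to be conquered":

```python
def leaf_tasks(node, path=()):
    """Walk a breakdown structure (nested dicts) and yield each leaf task
    together with the chain of parents that produced it by refinement."""
    for name, children in node.items():
        if children:
            yield from leaf_tasks(children, path + (name,))
        else:
            yield path + (name,)   # simple enough: refinement stops here

# Illustrative (invented) slice of a producing-system work breakdown.
wbs = {
    "project": {
        "systems engineering": {
            "requirements tree": {},
            "system architecture": {},
        },
        "project management": {
            "schedules": {},
            "cost accounts": {},
        },
    },
}

tasks = list(leaf_tasks(wbs))
```

    Each entry in `tasks` is a root-to-leaf path such as `("project", "systems engineering", "requirements tree")`, mirroring how a work breakdown structure indexes work against the decomposition.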

    The Adaptive Priority Queue with Elimination and Combining

    Priority queues are fundamental abstract data structures, often used to manage limited resources in parallel programming. Several proposed parallel priority queue implementations are based on skiplists, harnessing the potential for parallelism of the add() operations. In addition, methods such as Flat Combining have been proposed to reduce contention by batching together multiple operations to be executed by a single thread. While this technique can decrease lock-switching overhead and the number of pointer changes required by the removeMin() operations in the priority queue, it can also create a sequential bottleneck and limit parallelism, especially for non-conflicting add() operations. In this paper, we describe a novel priority queue design, harnessing the scalability of parallel insertions in conjunction with the efficiency of batched removals. Moreover, we present a new elimination algorithm suitable for a priority queue, which further increases concurrency on balanced workloads with similar numbers of add() and removeMin() operations. We implement and evaluate our design using a variety of techniques including locking, atomic operations, hardware transactional memory, as well as employing adaptive heuristics given the workload.
    Comment: Accepted at DISC'14; this is the full version with appendices, including more algorithms.
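    The elimination idea for priority queues can be shown in a deliberately simplified, single-threaded form: an incoming add(v) that would become the new minimum can be handed directly to a pending removeMin(), so neither operation touches the underlying structure. The class below is an invented sketch of that matching rule only, not the paper's concurrent, skiplist-based algorithm:

```python
import heapq

class EliminationSketch:
    """Toy model of priority-queue elimination: removeMin() calls that find
    the queue empty are 'parked', and a later add(v) that would be the new
    minimum is matched to a parked removal, eliminating both operations."""

    def __init__(self):
        self.heap = []
        self.pending_removals = 0   # removeMin() calls parked for elimination

    def add(self, v):
        if self.pending_removals and (not self.heap or v <= self.heap[0]):
            self.pending_removals -= 1
            return ("eliminated", v)        # handed straight to a removeMin()
        heapq.heappush(self.heap, v)
        return ("inserted", v)

    def remove_min(self):
        if self.heap:
            return ("removed", heapq.heappop(self.heap))
        self.pending_removals += 1          # park and wait for a matching add
        return ("parked", None)
```

    In the concurrent setting the paper targets, this hand-off avoids both the pointer updates of a heap/skiplist removal and the sequential bottleneck of funnelling all operations through one combiner thread.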

    Study of the costs and benefits of composite materials in advanced turbofan engines

    Composite component designs were developed for a number of applicable engine parts and functions. The cost and weight of each detail component was determined, and its effect on the total engine cost to the aircraft manufacturer was ascertained. The economic benefits of engine or nacelle composite or eutectic turbine alloy substitutions were then calculated. Two time periods of engine certification were considered for this investigation, namely 1979 and 1985. Two methods of applying composites to these engines were employed. The first considered simply replacing an existing metal part with a composite part, with no other change to the engine. The other involved major engine redesign so that more efficient composite designs could be employed. Utilization of polymeric composites wherever payoffs were available indicated that a total improvement in Direct Operating Cost (DOC) of 2.82 to 4.64 percent, depending on the engine considered, could be attained. In addition, the fuel saving ranged from 1.91 to 3.53 percent. The advantages of using advanced materials in the turbine are more difficult to quantify but could go as high as an improvement in DOC of 2.33 percent and a fuel savings of 2.62 percent. Typically, based on a fleet of one hundred aircraft, a percent savings in DOC represents a savings of four million dollars per year, and a percent of fuel savings equals 23,000 cu m (7,000,000 gallons) per year.
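    The fleet-level rules of thumb in the last sentence translate directly into arithmetic. The helper below is a trivial sketch whose name is invented; its default constants simply encode the abstract's per-100-aircraft figures ($4M per DOC percent, 23,000 cu m of fuel per fuel-saving percent):

```python
def fleet_savings(doc_improvement_pct, fuel_saving_pct,
                  usd_per_doc_pct=4_000_000, m3_per_fuel_pct=23_000):
    """Scale the study's per-100-aircraft rules of thumb: one percent of DOC
    improvement ~ $4M/year, one percent of fuel saving ~ 23,000 cu m/year.
    Returns (dollars per year, cubic metres of fuel per year)."""
    return (doc_improvement_pct * usd_per_doc_pct,
            fuel_saving_pct * m3_per_fuel_pct)

# Best polymeric-composite case quoted above: 4.64% DOC, 3.53% fuel.
dollars, fuel_m3 = fleet_savings(4.64, 3.53)
```

    For the best case this works out to roughly $18.6M and about 81,000 cu m of fuel per year for a 100-aircraft fleet.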

    Product assurance technology for procuring reliable, radiation-hard, custom LSI/VLSI electronics

    Advanced measurement methods using microelectronic test chips are described. These chips are intended to be used in acquiring the data needed to qualify Application Specific Integrated Circuits (ASICs) for space use. Efforts were focused on developing the technology for obtaining custom ICs from CMOS/bulk silicon foundries. A series of test chips was developed: a parametric test strip, a fault chip, a set of reliability chips, and the CRRES (Combined Release and Radiation Effects Satellite) chip, a test circuit for monitoring space radiation effects. The technical accomplishments of the effort include: (1) development of a fault chip that contains a set of test structures used to evaluate the density of various process-induced defects; (2) development of new test structures and testing techniques for measuring gate-oxide capacitance, gate-overlap capacitance, and propagation delay; (3) development of a set of reliability chips that are used to evaluate failure mechanisms in CMOS/bulk: interconnect and contact electromigration and time-dependent dielectric breakdown; (4) development of MOSFET parameter extraction procedures for evaluating subthreshold characteristics; (5) evaluation of test chips and test strips on the second CRRES wafer run; (6) two dedicated fabrication runs for the CRRES chip flight parts; and (7) publication of two papers: one on the split-cross bridge resistor and another on asymmetrical SRAM (static random access memory) cells for single-event upset analysis.

    An integrated approach to supply chain risk analysis

    Despite the increasing attention that supply chain risk management is receiving from both researchers and practitioners, companies still lack a risk culture. Moreover, risk management approaches are either too general or require pieces of information not regularly recorded by organisations. This work develops a risk identification and analysis methodology that integrates widely adopted supply chain and risk management tools. In particular, process analysis is performed by means of the standard framework provided by the Supply Chain Operations Reference Model; the risk identification and analysis tasks are accomplished by applying the Risk Breakdown Structure and the Risk Breakdown Matrix; and the effects of risk occurrence on activities are assessed by indicators that companies already measure in order to monitor their performance. In this way, the framework contributes to increasing companies' awareness and communication about risk, which are essential components of the management of modern supply chains. A base case has been developed by applying the proposed approach to a hypothetical manufacturing supply chain. An in-depth validation will be carried out to improve the methodology and further demonstrate its benefits and limitations. Future research will extend the framework to include the understanding of the multiple effects of risky events on different processes.
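    The pairing of a Risk Breakdown Structure with process analysis can be illustrated with a toy Risk Breakdown Matrix. The categories, SCOR-style process columns, scores and function name below are all invented for illustration, not taken from the paper:

```python
# Hypothetical Risk Breakdown Matrix: rows are risk categories from a Risk
# Breakdown Structure, columns are supply chain processes, and each cell is
# an illustrative likelihood-times-impact score recorded during analysis.
rbm = {
    "supply risk":  {"Source": 6, "Make": 2, "Deliver": 1},
    "process risk": {"Source": 1, "Make": 5, "Deliver": 2},
    "demand risk":  {"Source": 0, "Make": 1, "Deliver": 6},
}

def riskiest_process(matrix):
    """Aggregate each process column; the column totals show where risk
    concentrates across categories, which is the matrix's core read-out."""
    totals = {}
    for row in matrix.values():
        for process, score in row.items():
            totals[process] = totals.get(process, 0) + score
    return max(totals, key=totals.get), totals

process, totals = riskiest_process(rbm)
```

    Row totals would give the complementary view: which risk category dominates regardless of process.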

    Rightsizing Project Management for Libraries

    Project management is a current topic in management, and project management offices are springing up in many organizations. Libraries may not need a project management office, but the adoption of project management techniques, rightsized for library needs, can focus scope, define and organize tasks, and manage resources for many kinds of projects. The University of New Hampshire Library has implemented selected aspects of project management and is learning where these principles can be applied most effectively for successful projects. This paper describes UNH's use of selected project management techniques and tools in a major collection integration and relocation project.

    On-a-chip microdischarge thruster arrays inspired by photonic device technology for plasma television

    This study shows that the practical scaling of a hollow cathode thruster device to MEMS level should be possible, albeit with significant divergence from traditional design. The main divergence is the need to operate at discharge pressures between 1 and 3 bar to maintain emitter diameter–pressure products similar to those of conventional hollow cathode devices. Without operating at these pressures, emitter cavity dimensions become prohibitively large for maintenance of the hollow cathode effect, without which the discharge voltage would be in the hundreds of volts, as with conventional microdischarge devices. In addition, this requires sufficiently constrictive orifice diameters in the 10–50 µm range for single cathodes, or below 5 µm for larger arrays. Operation at this pressure results in very small Debye lengths (4–5.2 pm) and leads to large reductions in effective work function (0.3–0.43 eV) via the Schottky effect. Consequently, simple work-function-lowering compounds such as lanthanum hexaboride (LaB6) can be used to reduce operating temperature without the significant manufacturing complexity of producing porous impregnated thermionic emitters, as with macro-scale hollow cathodes, while still operating below 1200 °C at the emitter surface. The literature shows that LaB6 can be deposited using a variety of standard microfabrication techniques.
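    The Schottky-effect reduction quoted above follows the standard image-force barrier-lowering formula, which can be evaluated directly. The function name is invented, and the field magnitude mentioned in the comment is my own back-of-envelope inversion of the abstract's 0.3–0.43 eV range, not a figure from the study:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def schottky_lowering_eV(field_v_per_m):
    """Image-force (Schottky) work-function lowering in eV for a surface
    electric field E:  delta_phi = sqrt(e * E / (4 * pi * eps0)).
    Lowerings of 0.3-0.43 eV correspond to fields on the order of
    1e8 V/m at the emitter surface (back-of-envelope, illustrative)."""
    return math.sqrt(E_CHARGE * field_v_per_m / (4 * math.pi * EPS0))
```

    Picometre-scale Debye lengths concentrate the sheath potential drop over a very short distance, which is what produces surface fields large enough for lowerings of this magnitude.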

    Performance Characterization of Multi-threaded Graph Processing Applications on Intel Many-Integrated-Core Architecture

    Intel Xeon Phi many-integrated-core (MIC) architectures usher in a new era of terascale integration. Among emerging killer applications, parallel graph processing has been a critical technique for analyzing connected data. In this paper, we empirically evaluate various computing platforms, including an Intel Xeon E5 CPU, an Nvidia GeForce GTX 1070 GPU and a Xeon Phi 7210 processor codenamed Knights Landing (KNL), in the domain of parallel graph processing. We show that the KNL achieves encouraging performance when processing graphs, making it a promising solution for accelerating multi-threaded graph applications. We further characterize the impact of KNL architectural enhancements on the performance of a state-of-the-art graph framework. We have four key observations: (1) different graph applications require distinctive numbers of threads to reach peak performance, and for the same application, different datasets need different numbers of threads to achieve the best performance; (2) only a few graph applications benefit from the high-bandwidth MCDRAM, while others favor the low-latency DDR4 DRAM; (3) the vector processing units executing AVX512 SIMD instructions on KNL are underutilized when running the state-of-the-art graph framework; (4) the sub-NUMA cache clustering mode, which offers the lowest local memory access latency, hurts the performance of graph benchmarks that lack NUMA awareness. Finally, we suggest future work, including system auto-tuning tools and graph framework optimizations, to fully exploit the potential of KNL for parallel graph processing.
    Comment: published as L. Jiang, L. Chen and J. Qiu, "Performance Characterization of Multi-threaded Graph Processing Applications on Many-Integrated-Core Architecture," 2018 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Belfast, United Kingdom, 2018, pp. 199-20
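    Observation (1) implies that the right thread count has to be measured per application and dataset rather than assumed, which is exactly what an auto-tuning sweep does. The harness below is an invented sketch of that idea (the placeholder workload stands in for a real BFS or PageRank pass, which is far too heavy to inline here):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def best_thread_count(run_workload, candidates=(1, 2, 4, 8)):
    """Sweep candidate thread counts and keep the fastest.  run_workload
    must execute one full pass of the application on the given pool;
    returns (best count, {count: elapsed seconds})."""
    timings = {}
    for n in candidates:
        with ThreadPoolExecutor(max_workers=n) as pool:
            start = time.perf_counter()
            run_workload(pool)
            timings[n] = time.perf_counter() - start
    return min(timings, key=timings.get), timings

# Placeholder workload; a real harness would run the graph framework's
# BFS/PageRank kernels on the target dataset instead.
def dummy_pass(pool):
    list(pool.map(lambda x: x * x, range(1000)))

best, timings = best_thread_count(dummy_pass)
```

    On KNL, such a sweep would also vary the memory binding (MCDRAM vs. DDR4) and clustering mode, since observations (2) and (4) show those interact with the application just as strongly as thread count does.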