21,019 research outputs found
Influence of compressor degradation on optimal operation of a compressor station
Normal practice in a compressor station with compressors in parallel is to allocate the mass flows equally. However, this strategy is not optimal if the compressors are not identical. A common reason why compressors become non-identical is that their performance degrades over time. Degradation increases the power needed to run the compressor station and changes the optimal allocation of mass flows. This paper presents a framework for optimal operation of a compressor station with degrading compressors. The proposed optimisation framework explicitly includes a model of degradation in the optimisation problem and analyses how the optimal load-sharing changes as the compressors degrade. The framework was applied in an industrial case study of a compressor station in which three parallel compressors are subject to degradation. The case study confirms that the extra power consumption due to degradation can be minimised by adjusting the operating conditions of the compressor station. The analysis also gives insight into the impact of degradation on the optimal solution when compressors operate at their limits.
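The load-sharing idea can be sketched with a deliberately simple model: assume each compressor's power draw is quadratic in its mass flow, P_i = c_i·m_i², with degradation raising c_i. Equating marginal powers gives a closed-form optimal split. The quadratic curve and the coefficients are hypothetical stand-ins for the paper's compressor and degradation models:

```python
def optimal_split(total_flow, coeffs):
    """Split total_flow across parallel machines to minimise sum(c_i * m_i**2).

    At the optimum all marginal powers are equal: 2*c_i*m_i = lam for every i,
    which yields a closed-form solution. Degradation is modelled here simply
    as a larger coefficient c_i (hypothetical, not the paper's model).
    """
    inv = [1.0 / c for c in coeffs]
    lam = 2.0 * total_flow / sum(inv)
    return [lam / (2.0 * c) for c in coeffs]

def power(flows, coeffs):
    """Total station power under the quadratic model."""
    return sum(c * m * m for c, m in zip(coeffs, flows))

# Three parallel compressors; the third is degraded (30% higher coefficient).
coeffs = [1.0, 1.0, 1.3]
opt = optimal_split(30.0, coeffs)
equal = [10.0, 10.0, 10.0]
```

Even in this toy setting the optimal split shifts flow away from the degraded machine and beats the equal-share baseline, which is the qualitative effect the paper quantifies.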
Performance evaluation of high speed compressors for high speed multipliers
This paper describes high-speed compressors for high-speed parallel multipliers, such as the Booth multiplier and the Wallace tree multiplier, in digital signal processing (DSP). It presents 4-3, 5-3, 6-3 and 7-3 compressors for high-speed multiplication. These compressors reduce the vertical critical path more rapidly than conventional compressors: a conventional 5-3 compressor can take four steps to reduce the bits from 5 to 3, whereas the proposed 5-3 compressor takes only two steps. The compressors are simulated with H-Spice at a temperature of 25°C and a supply voltage of 2.0 V using 90 nm MOSIS technology. The power, delay, power-delay product (PDP) and energy-delay product (EDP) of the compressors are calculated to analyze the total propagation delay and energy consumption. All the compressors are designed with half adders and full adders only.
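Functionally, an m-3 compressor counts the ones among m equally weighted input bits and emits the count in binary. A behavioural sketch of the 5-3 case (the gate- and transistor-level structure of the paper's design is not modelled here):

```python
def compressor_5_3(bits):
    """Behavioural model of a 5-3 compressor.

    Counts the ones among five equally weighted input bits (0..5) and emits
    the 3-bit binary count. In a multiplier's partial-product reduction tree,
    one such cell replaces several cascaded full-adder levels.
    """
    s = sum(bits)  # population count, 0..5
    return ((s >> 2) & 1, (s >> 1) & 1, s & 1)  # (weight-4, weight-2, weight-1)
```

The correctness condition is simply that the weighted sum of the three outputs equals the number of ones on the five inputs, for all 32 input combinations.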
Optimizing Lossy Compression Rate-Distortion from Automatic Online Selection between SZ and ZFP
With ever-increasing volumes of scientific data produced by HPC applications, significantly reducing data size is critical because of the limited capacity of storage space and potential bottlenecks on I/O or networks when writing/reading or transferring data. SZ and ZFP are the two leading lossy compressors available for scientific data sets. However, their performance is not consistent across different data sets, or even across different fields of the same data set: some fields are compressed better by SZ, while others are better compressed with ZFP. This situation raises the need for an automatic online (during compression) selection between SZ and ZFP with minimal overhead. In this paper, the automatic selection optimizes the rate-distortion, an important statistical quality metric based on the signal-to-noise ratio. To optimize for rate-distortion, we investigate the principles of SZ and ZFP. We then propose an efficient online, low-overhead selection algorithm that accurately predicts the compression quality of the two compressors in the early processing stages and selects the best-fit compressor for each data field. We implement the selection algorithm in an open-source library, and we evaluate the effectiveness of our proposed solution against plain SZ and ZFP in a parallel environment with 1,024 cores. Evaluation results on three data sets representing about 100 fields show that our selection algorithm improves the compression ratio by up to 70% at the same level of data distortion, owing to very accurate selection (around 99%) of the best-fit compressor, with little overhead (less than 7% in the experiments). Comment: 14 pages, 9 figures, first revision.
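The selection logic can be illustrated with two hypothetical lossy codecs (uniform quantisers with zlib as a stand-in entropy stage; SZ's and ZFP's real pipelines are far more elaborate) and a crude rate-distortion proxy, PSNR per compressed bit, evaluated on a sample of the field:

```python
import math
import struct
import zlib

def make_codec(step):
    """Hypothetical lossy codec: uniform quantisation + zlib entropy stage."""
    def codec(data):
        q = [round(x / step) for x in data]
        rec = [v * step for v in q]
        bits = 8 * len(zlib.compress(struct.pack(f"{len(q)}i", *q)))
        return bits, rec
    return codec

def psnr(orig, rec):
    """Peak signal-to-noise ratio in dB, using the data's value range as peak."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, rec)) / len(orig)
    vrange = max(orig) - min(orig)
    return float("inf") if mse == 0 else 20 * math.log10(vrange / math.sqrt(mse))

def select_codec(sample, codecs):
    """Pick the codec with the best rate-distortion proxy (PSNR per bit)
    on a small sample, mimicking the paper's online-selection idea."""
    def score(item):
        bits, rec = item[1](sample)
        return psnr(sample, rec) / bits
    return max(codecs.items(), key=score)[0]

# A sawtooth-like test field and two candidate codecs of different fidelity.
data = [((i % 100) - 50) * 0.02 for i in range(1000)]
codecs = {"coarse": make_codec(0.05), "fine": make_codec(0.001)}
choice = select_codec(data, codecs)
```

The per-field scoring is the key idea: the winner can differ from field to field, which is exactly why a fixed choice of SZ or ZFP is suboptimal.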
Two-dimensional DCT/IDCT architecture
A fully parallel architecture for the computation of the two-dimensional (2-D) discrete cosine transform (DCT), based on row-column decomposition, is presented. It uses the same one-dimensional (1-D) DCT unit for the row and column computations and (N² + N) registers to perform the transposition. It possesses features of regularity and modularity, and is thus well suited for VLSI implementation. It can be used to compute either the forward or the inverse 2-D DCT. Each 1-D DCT unit uses N fully parallel vector inner product (VIP) units. The design of the VIP units is based on a systematic design methodology using radix-2 arithmetic, which allows partitioning of the elements of each vector into small groups. Array multipliers without the final adder are used to produce the different partial-product terms. This allows a more efficient use of 4:2 compressors for the accumulation of the products in the intermediate stages and reduces the number of accumulators from N to one. Using this procedure, the 2-D DCT architecture requires less than N² multipliers (in terms of area occupied) and only 2N adders. It can compute an N × N-point DCT at a rate of one complete transform per N cycles after an appropriate initial delay.
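The row-column decomposition itself is easy to sketch: apply a 1-D DCT to every row, transpose, apply the same 1-D DCT again, and transpose back. The sketch below uses an unnormalised DCT-II (scale factors omitted) and plain list transposes in place of the register-based transposition stage:

```python
import math

def dct1d(x):
    """Unnormalised 1-D DCT-II (scale factors omitted for brevity)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def dct2d_rowcol(block):
    """Row-column 2-D DCT: 1-D DCT on each row, transpose, 1-D DCT again,
    transpose back. The two transposes stand in for the (N^2 + N)-register
    transposition stage; the same dct1d unit serves both passes."""
    rows = [dct1d(r) for r in block]
    cols = [list(c) for c in zip(*rows)]   # transpose
    out = [dct1d(c) for c in cols]
    return [list(c) for c in zip(*out)]    # transpose back
```

Because the 2-D DCT kernel is separable, this row-column result matches the direct double-sum definition term for term, which is what makes reusing a single 1-D unit legitimate.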
Optimization of a network of compressors in parallel: Operational and maintenance planning – The air separation plant case
A general mathematical framework is presented for the optimization of compressor operations in air separation plants that considers operating constraints for the compressors, several types of maintenance policies and managerial aspects. The proposed approach can be used in a rolling-horizon scheme. The operating status, the power consumption, the startup and shutdown costs of the compressors, the compressor-to-header assignments and the outlet mass flow rates for compressed air and distillation products are optimized under full demand satisfaction. The power consumption of the compressors is expressed by regression functions derived from technical and historical data. Several case studies of an industrial air separation plant are solved. The results demonstrate that the simultaneous optimization of maintenance and operational tasks of the compressors favors the generation of better solutions in terms of total costs.
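The core trade-off (running costs versus startup costs under a demand constraint) can be shown on a toy instance solved by brute force. This is a hypothetical stand-in for the paper's mathematical-programming formulation; the capacities and costs below are made up:

```python
from itertools import product

def schedule(demands, caps, run_cost, start_cost):
    """Brute-force on/off planning for parallel compressors.

    Each period, the on-compressors must cover the demand; the objective is
    running cost plus a startup cost for every off->on transition. A toy
    stand-in for the paper's MILP (only feasible for tiny instances).
    """
    n = len(caps)
    best_cost, best_plan = float("inf"), None
    for plan in product(product((0, 1), repeat=n), repeat=len(demands)):
        cost, prev, feasible = 0.0, (0,) * n, True
        for demand, on in zip(demands, plan):
            if sum(c for c, o in zip(caps, on) if o) < demand:
                feasible = False
                break
            cost += sum(o * rc for o, rc in zip(on, run_cost))
            cost += start_cost * sum(1 for p, o in zip(prev, on) if o and not p)
            prev = on
        if feasible and cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_cost, best_plan

# Two compressors, two periods: low demand, then high demand needing both.
cost, plan = schedule(demands=[5, 15], caps=[10, 10],
                      run_cost=[3, 4], start_cost=2)
```

Even here the startup cost couples the periods: it is cheapest to run the cheaper compressor first and bring the second online only when demand requires it, which is the kind of inter-temporal decision the rolling-horizon MILP makes at scale.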
Significantly Improving Lossy Compression for Scientific Data Sets Based on Multidimensional Prediction and Error-Controlled Quantization
Today's HPC applications are producing extremely large amounts of data, such that data storage and analysis are becoming more challenging for scientific research. In this work, we design a new error-controlled lossy compression algorithm for large-scale scientific data. Our key contribution is significantly improving the prediction hitting rate (or prediction accuracy) for each data point based on its nearby data values along multiple dimensions. We derive a series of multilayer prediction formulas and their unified formula in the context of data compression. One serious challenge is that the data prediction has to be performed on the preceding decompressed values during compression in order to guarantee the error bounds, which may in turn degrade the prediction accuracy. We explore the best layer for the prediction by considering the impact of compression errors on the prediction accuracy. Moreover, we propose an adaptive error-controlled quantization encoder, which can further improve the prediction hitting rate considerably. The data size can be reduced significantly by the subsequent variable-length encoding because of the uneven distribution produced by our quantization encoder. We evaluate the new compressor on production scientific data sets and compare it with many other state-of-the-art compressors: GZIP, FPZIP, ZFP, SZ-1.1 and ISABELA. Experiments show that our compressor is the best in class, especially with regard to compression factors (or bit-rates) and compression errors (including RMSE, NRMSE and PSNR). On average, our solution improves on the second-best solution by more than a 2x increase in compression factor and a 3.8x reduction in normalized root mean squared error, with reasonable error bounds and user-desired bit-rates. Comment: Accepted by IPDPS'17, 11 pages, 10 figures, double column.
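The decompressed-value prediction trick mentioned above can be shown in a 1-D sketch: predict each point, quantise the residual to a multiple of 2·eb, and, crucially, predict from the reconstruction the decoder will see, so the absolute error never exceeds eb. The real compressor uses multilayer multidimensional predictors plus variable-length coding; both are omitted here:

```python
def compress(data, eb):
    """1-D error-bounded predictive quantisation (previous-value predictor).

    Each residual is rounded to a multiple of 2*eb, so every reconstructed
    value is within eb of the original. Predicting from the decoder-visible
    reconstruction (not the raw data) is what guarantees the bound.
    """
    codes, prev = [], 0.0
    for x in data:
        q = round((x - prev) / (2 * eb))  # quantised prediction residual
        codes.append(q)
        prev += q * 2 * eb                # decoder-side reconstruction
    return codes

def decompress(codes, eb):
    """Inverse of compress: accumulate the dequantised residuals."""
    out, prev = [], 0.0
    for q in codes:
        prev += q * 2 * eb
        out.append(prev)
    return out
```

For smooth data most residuals fall into a few small quantisation codes, which is the uneven distribution that the variable-length encoding stage then exploits.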
Improvements in CO2 Booster Architectures with Different Economizer Arrangements.
CO2 transcritical booster architectures are widely analyzed for application in centralized commercial refrigeration plants, in consonance with the irrevocable phase-out of HFCs. Most of these analyses show the limitations of CO2 cycles in terms of energy efficiency, especially in warm countries. In the literature, several improvements have been proposed to raise booster efficiency at high ambient temperatures. The use of economizers is an interesting technique to reduce the temperature after the gas cooler and to improve the energy efficiency of transcritical CO2 cycles. The economizer cools down the high-pressure CO2 line by evaporating the same refrigerant extracted from another point of the facility; depending on the extraction point, several configurations are possible. In this work, different booster architectures with economizers have been analyzed and compared. The results show that combining the economizer with an additional compressor yields energy savings of up to 8.5% in warm countries and up to 4% in cold countries with respect to the flash-by-pass arrangement, and reduces the volumetric displacement required of the MT compressors by up to 37%.
Normal Rack Grid Generation Method for Screw Machines with Large Helix Angles
Improving the efficiency of screw machines is highly significant for industry, and numerical simulation is an important tool in developing these machines. 3D computational fluid dynamics simulation can give valuable insight into the flow parameters of screw machines. However, it is currently difficult to generate the high-quality computational grids required for screw rotors with large helix angles, mainly because of the excessively high cell skewness of such rotors, which introduces errors into the numerical simulation. This paper presents a novel grid generation algorithm for screw rotors with large helix angles. The method builds on the principles developed for grid generation in the transverse cross-section. The mesh is generated by SCORG™ using the normal rack grid generation method, meaning that the numerical meshes are generated in a plane normal to the pitch helix line. The mesh lines are then parallel to the helix line, so an orthogonal mesh is produced. The main flow and leakage flow directions are orthogonal to the mesh, potentially reducing numerical diffusion. The developed algorithm could also be employed for single screw machines.