Agglomeration during Fluidized-Bed Combustion of Biomass
Wheat stalk was tested to investigate the formation of bed agglomeration. The results show that defluidization time decreases as combustion temperature increases, and that the minimum fluidization velocity of the bed material increases after the test. Potassium, calcium, and silicon play the most important roles in bed defluidization.
Design and Operation of Biomass Circulating Fluidized Bed Boilers with High Steam Parameters
Two biomass-fired circulating fluidized bed (CFB) boilers, with capacities of 12 MWe and 25 MWe respectively, based on technology independently developed by the Institute of Engineering Thermophysics (IET), Chinese Academy of Sciences, have been in commercial operation in China since March 2010. This paper focuses on the design principles, design specifications, and operating results of the two CFB boilers.
A no-reference optical flow-based quality evaluator for stereoscopic videos in curvelet domain
Most of the existing 3D video quality assessment (3D-VQA/SVQA) methods consider only spatial information, by directly applying an image quality evaluation method, and only a few take the motion information of adjacent frames into consideration. In practice, a single data view is unlikely to be sufficient for effectively learning video quality, so integrating multi-view information is both valuable and necessary. In this paper, we propose an effective multi-view feature learning metric for blind stereoscopic video quality assessment (BSVQA) that jointly considers spatial, temporal, and inter-frame spatio-temporal information. In our study, a set of local binary pattern (LBP) statistical features extracted from a computed frame curvelet representation serves as the spatial and spatio-temporal description, and local flow statistical features based on optical flow estimation describe the temporal distortion. Subsequently, a support vector regression (SVR) is used to map the feature vector of each single view to subjective quality scores. Finally, the scores of the multiple views are pooled into the final score according to their contribution rates. Experimental results demonstrate that the proposed metric significantly outperforms existing metrics and achieves higher consistency with subjective quality assessment.
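The LBP descriptor step mentioned above can be sketched as follows. This is a minimal toy implementation of an 8-neighbour LBP histogram on a plain 2-D intensity grid; function names and the sample patch are illustrative, and the paper applies LBP statistics to curvelet coefficients rather than raw pixels.

```python
# Toy 8-neighbour local binary pattern (LBP): threshold each pixel's
# neighbours at the centre value and accumulate the resulting codes
# into a normalised 256-bin histogram, used as a texture feature.

def lbp_code(img, y, x):
    """LBP code of pixel (y, x): one bit per neighbour >= centre."""
    centre = img[y][x]
    # Clockwise neighbour offsets starting from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Normalised histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    total = sum(hist)
    return [h / total for h in hist]

patch = [[1, 2, 1],
         [4, 3, 2],
         [9, 7, 5]]
feature = lbp_histogram(patch)  # 256-dim feature vector for this patch
```

In a full pipeline, such per-patch histograms would be concatenated with the optical-flow statistics and fed to the SVR.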
Sparse representation based stereoscopic image quality assessment accounting for perceptual cognitive process
In this paper, we propose a sparse representation based reduced-reference image quality assessment (RR-IQA) index for stereoscopic images from two perspectives: 1) the human visual system (HVS) always tries to infer meaningful information and reduce uncertainty from visual stimuli, and the entropy of primitives (EoP) can describe this visual cognitive process when perceiving natural images; 2) ocular dominance (also known as binocularity), which represents the interaction between the two eyes, is quantified by the sparse representation coefficients. Inspired by previous research, the perception and understanding of an image is considered an active inference process driven by the level of "surprise", which can be described by the EoP. Therefore, primitives learnt from natural images can be used to evaluate visual information by computing entropy. Meanwhile, considering binocularity in stereo image quality assessment, a feasible way is proposed to characterize this binocular process from the sparse representation coefficients of each view. Experimental results on the LIVE 3D image databases and the MCL database further demonstrate that the proposed algorithm achieves high consistency with subjective evaluation.
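The entropy-of-primitives idea can be illustrated in a few lines. This sketch treats the normalised magnitudes of sparse-representation coefficients as a probability distribution over learnt primitives and measures its Shannon entropy; the function name and toy coefficients are assumptions, since the paper computes the EoP from primitives learnt on natural images.

```python
import math

def entropy_of_primitives(coeffs):
    """Shannon entropy (bits) of the normalised |coefficient| distribution."""
    mags = [abs(c) for c in coeffs]
    total = sum(mags)
    probs = [m / total for m in mags if m > 0]
    return -sum(p * math.log2(p) for p in probs)

# A patch whose energy concentrates on one primitive carries less
# "surprise" than one spreading energy across many primitives.
low = entropy_of_primitives([4.0, 0.0, 0.0, 0.0])   # single active primitive
high = entropy_of_primitives([1.0, 1.0, 1.0, 1.0])  # energy spread evenly
print(low < high)  # True
```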
Stereoscopic video quality assessment based on 3D convolutional neural networks
Research on stereoscopic video quality assessment (SVQA) plays an important role in promoting the development of stereoscopic video systems. Existing SVQA metrics rely on hand-crafted features, which are inaccurate and time-consuming to design given the diversity and complexity of stereoscopic video distortions. This paper introduces a 3D convolutional neural network (CNN) based SVQA framework that can model not only local spatio-temporal information but also global temporal information, taking cubic difference video patches as input. First, instead of using hand-crafted features, we design a 3D CNN architecture to automatically and effectively capture local spatio-temporal features. Then we employ a quality score fusion strategy that considers global temporal clues to obtain the final video-level predicted score. Extensive experiments on two public stereoscopic video quality datasets show that the proposed method correlates highly with human perception and outperforms state-of-the-art methods by a large margin. We also show that our 3D CNN features have more desirable properties for SVQA than the hand-crafted features of previous methods, and that combining our 3D CNN features with support vector regression (SVR) can further boost performance. In addition, requiring no complex preprocessing or GPU acceleration, the proposed method is computationally efficient and easy to use.
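The final pooling step in such a framework can be sketched as below. The abstract does not specify the exact fusion weighting, so this sketch stands in with plain two-level mean pooling (patch scores averaged within each temporal group, then across groups); all names are illustrative.

```python
# Two-level pooling of 3D CNN patch scores into one video-level score:
# local pooling within each temporal group of frames, then global
# pooling across groups. A stand-in for the paper's fusion strategy.

def fuse_video_score(patch_scores_per_group):
    """patch_scores_per_group: one list of patch-level scores per
    temporal group. Returns a single video-level quality score."""
    group_scores = [sum(g) / len(g) for g in patch_scores_per_group]
    return sum(group_scores) / len(group_scores)

scores = [[0.8, 0.9],   # patch scores from the first group of frames
          [0.7, 0.5],
          [0.6, 0.6]]
video_score = fuse_video_score(scores)
```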
No reference quality assessment of stereo video based on saliency and sparsity
With the popularity of video technology, stereoscopic video quality assessment (SVQA) has become increasingly important. Existing SVQA methods cannot achieve good performance because the information in the videos is not fully utilized. In this paper, we consider diverse information in the videos jointly and construct a simple model, based on saliency and sparsity, to combine and analyze the features. First, we use the 3-D saliency map of the sum map, which retains the basic information of the stereoscopic video, as a valid tool to evaluate video quality. Second, we apply sparse representation to decompose the sum map of the 3-D saliency into coefficients, and calculate features from the sparse coefficients to obtain an effective expression of the videos' information. Next, to reduce the correlation between the features, we feed them into a stacked auto-encoder, which maps the vectors to a higher-dimensional space under a sparsity constraint, and subsequently input them into a support vector machine to obtain the quality assessment scores. Throughout this process, we exploit saliency and sparsity to extract and simplify the features. Experiments show that the proposed method fits well with the subjective scores.
A deep evaluator for image retargeting quality by geometrical and contextual interaction
An image is compressed or stretched when displayed across multiple devices, which strongly affects perceptual quality. To address this, a variety of image retargeting methods have been proposed. However, evaluating the results of different image retargeting methods is a critical issue, and subjective evaluation cannot be applied at scale in practical application systems, so we frame the problem as accurate objective quality evaluation. Currently, most image retargeting quality assessment algorithms use simple regression methods as the last step to obtain the evaluation result, which does not correspond to perceptual processing in the human visual system (HVS). In this paper, a deep quality evaluator for image retargeting based on a segmented stacked auto-encoder (SAE) is proposed. With the help of regularization, the designed deep learning framework avoids overfitting. The main contribution of this framework is to simulate the perception of retargeted images in the HVS: it trains two separate SAE models based on geometrical shape and content matching, and then uses weighting schemes to combine the scores obtained from the two models. Experimental results on three well-known databases show that our method achieves better performance than traditional methods in evaluating different image retargeting results.
Quality index for stereoscopic images by jointly evaluating cyclopean amplitude and cyclopean phase
With the widespread application of three-dimensional (3-D) technology, measuring the quality of experience for 3-D multimedia content plays an increasingly important role. In this paper, we propose a full-reference stereo image quality assessment (SIQA) framework whose novelty lies in its treatment of binocular visual properties and its use of low-level features. On one hand, based on the fact that the human visual system understands an image mainly through its low-level features, local phase and local amplitude extracted from the phase congruency measurement are employed as primary features; since amplitude is a less reliable cue in IQA, visual saliency is applied to modify the amplitude. On the other hand, by fully considering binocular rivalry phenomena, we create a cyclopean amplitude map and a cyclopean phase map, so that image features and binocular visual properties are combined. Meanwhile, a novel binocular modulation function in the spatial domain is adopted in the overall quality prediction from amplitude and phase. Extensive experiments demonstrate that the proposed framework achieves higher consistency with subjective tests than relevant SIQA metrics.
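The cyclopean-map construction can be illustrated with a small sketch. Here corresponding left/right feature maps are fused element-wise with weights derived from local energy, so the stronger view dominates, mimicking binocular rivalry; the specific squared-value weighting rule is an assumption, and the paper builds its cyclopean amplitude and phase maps from phase-congruency features rather than raw values.

```python
# Energy-weighted fusion of two equally sized 2-D maps into a single
# "cyclopean" map: each element is a convex combination of the left and
# right values, weighted by their local energies.

def cyclopean_map(left, right, eps=1e-12):
    fused = []
    for lrow, rrow in zip(left, right):
        row = []
        for l, r in zip(lrow, rrow):
            wl = l * l + eps  # local energy of the left view
            wr = r * r + eps  # eps avoids division by zero
            row.append((wl * l + wr * r) / (wl + wr))
        fused.append(row)
    return fused

left = [[1.0, 0.0]]
right = [[1.0, 2.0]]
fused = cyclopean_map(left, right)  # stronger view dominates each element
```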
Stereoscopic image quality assessment method based on binocular combination saliency model
The objective quality assessment of stereoscopic images plays an important role in three-dimensional (3D) technologies. In this paper, we propose an effective method to evaluate the quality of stereoscopic images afflicted by symmetric distortions. The major technical contribution of this paper is that both binocular combination behaviours and human 3D visual saliency characteristics are considered. In particular, a new 3D saliency map is developed, which not only greatly reduces computational complexity by avoiding calculation of depth information, but also assigns appropriate weights to the image contents. Experimental results indicate that the proposed metric not only significantly outperforms conventional 2D quality metrics, but also achieves higher performance than existing 3D quality assessment models.
Blind assessment for stereo images considering binocular characteristics and deep perception map based on deep belief network
In recent years, blind image quality assessment for 2D images/video has gained popularity, but its application to 3D images/video remains to be generalized. In this paper, we propose an effective blind metric for evaluating stereo images via a deep belief network (DBN). The method is based on the wavelet transform, using 2D features from the monocular images as image content descriptors and 3D features from a novel depth perception map (DPM) as depth perception descriptors. In particular, the DPM is introduced to quantify longitudinal depth information in line with human stereo visual perception. More specifically, the 2D features are local histogram of oriented gradient (HoG) features from high-frequency wavelet coefficients and global statistical features including magnitude, variance, and entropy, while the global statistical features of the DPM serve as the 3D features. Subsequently, considering binocular characteristics, an effective binocular weight model based on multiscale energy estimation of the left and right images is adopted to obtain the content quality. In the training and testing stages, three DBN models, one for each of the three feature types, are used to obtain the final score. Experimental results demonstrate that the proposed stereo image quality evaluation model is clearly superior to existing methods and achieves higher consistency with subjective quality assessments.
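The binocular weight model above can be sketched as follows. This toy weights each view's quality score by its relative signal energy, using mean squared intensity as a crude single-scale stand-in for the paper's multiscale energy estimation; all function names are illustrative.

```python
# Energy-weighted combination of per-view quality scores: the view with
# more signal energy contributes more to the final binocular score.

def view_energy(img):
    """Mean squared intensity as a crude single-scale energy estimate."""
    values = [v for row in img for v in row]
    return sum(v * v for v in values) / len(values)

def binocular_score(left_img, right_img, q_left, q_right):
    """Convex combination of the two views' scores, weighted by energy."""
    e_l, e_r = view_energy(left_img), view_energy(right_img)
    w_l = e_l / (e_l + e_r)
    return w_l * q_left + (1.0 - w_l) * q_right

left = [[2.0, 2.0]]
right = [[2.0, 2.0]]
score = binocular_score(left, right, 0.8, 0.4)  # equal energy: plain average
```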