Visual Redundancy Removal for Composite Images: A Benchmark Dataset and a Multi-Visual-Effects Driven Incremental Method
Composite images (CIs) typically combine elements from different scenes, views, and styles, making them an important information carrier in the era of mixed media such as virtual reality, mixed reality, and the metaverse. However, the complexity of CI content poses a significant challenge for subsequent visual perception modeling and compression. In addition, the lack of benchmark CI databases hinders the use of recent advanced data-driven methods. To address these challenges, we first establish one of the earliest visual redundancy prediction (VRP) databases for CIs. We then propose a multi-visual-effect (MVE)-driven incremental learning method that combines the strengths of hand-crafted and data-driven approaches to achieve more accurate VRP modeling. Specifically, we design dedicated incremental rules to learn the visual knowledge flow of MVEs, and we develop a three-stage incremental learning approach for VRP based on an encoder-decoder network to effectively capture the associated MVE features. Extensive experimental results validate the superiority of the proposed method in subjective, objective, and compression experiments.
Real-time detection of moving objects in a video sequence by using data fusion algorithm
Moving object detection and tracking technology is widely deployed in visual surveillance for security, yet achieving real-time performance remains extremely challenging owing to environmental noise, background complexity, and illumination variation. This paper proposes a novel data fusion approach to this problem, which combines an entropy-based Canny (EC) operator with the local and global optical flow (LGOF) method, namely EC-LGOF. Its operation comprises four steps. The EC operator first computes the contours of moving objects in the video sequence, and the LGOF method then establishes the motion vector field. Third, the minimum error threshold selection (METS) method distinguishes moving objects from the background. Finally, edge information is fused with temporal information from the optical flow to label the moving objects. Experimental results demonstrate the feasibility and effectiveness of the proposed method.
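The third step names minimum error threshold selection, commonly attributed to Kittler and Illingworth: the histogram (here, of optical-flow magnitudes) is modeled as two Gaussian classes, and the threshold minimizing a classification-error criterion is chosen. A minimal NumPy sketch under that reading follows; the EC and LGOF stages are omitted, and the function name is illustrative, not from the paper:

```python
import numpy as np

def minimum_error_threshold(hist):
    """Kittler-Illingworth minimum-error threshold selection (METS).

    Models the histogram as a mixture of two Gaussians split at t and
    returns the t minimizing the criterion
    J(t) = 1 + P1*ln(v1) + P2*ln(v2) - 2*(P1*ln(P1) + P2*ln(P2)).
    """
    p = hist.astype(float) / hist.sum()      # normalize to probabilities
    levels = np.arange(len(p))
    best_t, best_j = 0, np.inf
    for t in range(1, len(p) - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()    # class priors
        if p1 == 0 or p2 == 0:
            continue                         # one class empty: no split
        m1 = (levels[:t] * p[:t]).sum() / p1
        m2 = (levels[t:] * p[t:]).sum() / p2
        v1 = (((levels[:t] - m1) ** 2) * p[:t]).sum() / p1
        v2 = (((levels[t:] - m2) ** 2) * p[t:]).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue                         # degenerate zero-variance class
        j = (1 + p1 * np.log(v1) + p2 * np.log(v2)
               - 2 * (p1 * np.log(p1) + p2 * np.log(p2)))
        if j < best_j:
            best_j, best_t = j, t
    return best_t
```

On a bimodal magnitude histogram (slow background pixels versus fast moving-object pixels), the returned threshold falls in the valley between the two modes, after which pixels above it can be labeled as moving.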
Investigation of the luminescence properties and thermal stability of dysprosium, terbium, and europium ions singly- and co-doped strontium yttrium borate phosphors
<p>A series of trivalent rare-earth ions (europium, terbium, dysprosium) singly- and co-doped strontium yttrium borate phosphors was synthesized via the sol–gel method. The phase formation, luminescence properties, decay times, energy transfer from terbium to europium ions, thermal stability, and Commission Internationale de l'Eclairage coordinates were investigated. Under ultraviolet excitation, the singly doped phosphors exhibited green emission from terbium ions, white emission from dysprosium ions, and red emission from europium ions, respectively. For the terbium and europium co-doped strontium yttrium borate samples, white emission can be realized by tuning the doping concentrations of terbium and europium ions. The critical distance between terbium and europium ions was calculated to be about 14.52 Å, and the energy transfer from terbium to europium occurred through the dipole–quadrupole interaction. At 150 °C, the emission intensities of terbium and europium in the 12 mol% terbium and 14 mol% europium co-doped sample were maintained at about 74% and 87% of their corresponding initial values, respectively, and the dysprosium singly doped sample retained about 70% of its room-temperature emission intensity. These results suggest that europium, terbium, and dysprosium singly- and co-doped strontium yttrium borate phosphors have potential applications as ultraviolet-convertible phosphors.</p>
Spectral-spatial classification of hyperspectral imagery based on random forests
Conference: 5th International Conference on Internet Multimedia Computing and Service (ICIMCS 2013), Huangshan, China, August 17–19, 2013. Sponsors: Hefei University of Technology; National Natural Science Foundation of China; SIGMM China Chapter.
The high dimensionality of hyperspectral images is usually coupled with the limited reference data available, which degrades the performance of supervised classification techniques such as random forests (RF). Commonly used pixel-wise classification also lacks information about the spatial structures of the image. To improve classification performance, spectral and spatial information must be incorporated. This paper proposes a novel scheme for accurate spectral-spatial classification of hyperspectral images. It is based on random forests, followed by majority voting within the superpixels obtained by oversegmentation through a graph-based technique; the scheme thus combines the result of a pixel-wise RF classification with the segmentation map obtained by oversegmentation. Our experimental results on two hyperspectral images show that the proposed framework, combining spectral information with spatial context, greatly improves the final result with respect to pixel-wise classification with random forests. © 2013 ACM
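The spatial-regularization step described above — majority voting within each superpixel of an oversegmentation — can be sketched in a few lines of NumPy, assuming the pixel-wise RF label map and the superpixel map have already been computed (the function name is ours, not from the paper):

```python
import numpy as np

def superpixel_majority_vote(pixel_labels, segments):
    """Refine a pixel-wise classification map by majority voting.

    pixel_labels: 2-D array of non-negative integer class labels
                  (e.g. the per-pixel output of a random forest).
    segments:     2-D array of the same shape assigning each pixel
                  a superpixel id from a graph-based oversegmentation.
    Returns a new label map where every pixel in a superpixel takes
    that superpixel's most frequent class.
    """
    refined = pixel_labels.copy()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        votes = np.bincount(pixel_labels[mask])  # class histogram in segment
        refined[mask] = votes.argmax()           # assign the majority class
    return refined
```

This relies on the oversegmentation respecting object boundaries: isolated pixel-wise misclassifications inside a homogeneous superpixel are outvoted by their neighbors, while segment borders (and hence class boundaries) are preserved.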