An Interactive Concave Volume Clipping Method Based on GPU Ray Casting with Boolean Operation
Volume clipping techniques can display inner structures and avoid the difficulty of specifying an appropriate transfer function. We present an interactive concave volume clipping method that implements both rendering and Boolean operations on the GPU. Common analytical convex objects, such as polyhedra and spheres, are defined by a small set of parameters, so concave volume clipping with Boolean operations consumes very little video memory on the GPU. The intersection, subtraction, and union operations are implemented on the GPU by converting the 3D Boolean operation into a 1D Boolean operation. To enhance the visual effect, a pseudo-color-based rendering model is proposed and the Phong illumination model is applied to the clipped surfaces. Users can select a color scheme from several pre-defined or user-specified schemes to obtain clear views of inner anatomical structures. Finally, several experiments were performed on a standard PC with a GeForce FX8600 graphics card. The results show that the three basic Boolean operations are performed correctly and that our approach can freely clip and visualize volumetric datasets at interactive frame rates.
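The reduction of a 3D Boolean operation to a 1D one can be read as interval arithmetic along each viewing ray: a convex analytic clip object intersects a ray in at most one parameter interval, and intersection, subtraction, and union with the ray's sampling range become interval operations. The sketch below illustrates that idea under my own assumptions (CPU-side NumPy rather than a GPU shader; `ray_sphere_interval` and `combine_1d` are hypothetical names, not the authors' code).

```python
import numpy as np

def ray_sphere_interval(origin, direction, center, radius):
    """Entry/exit parameters (t_in, t_out) of a unit-direction ray against an
    analytic sphere, or None if the ray misses.  A convex object always yields
    a single interval, which is what makes the 1D reduction possible."""
    oc = origin - center
    b = np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    s = np.sqrt(disc)
    return (-b - s, -b + s)

def combine_1d(vol_interval, clip_interval, op):
    """Apply the Boolean operation between the volume's sampling range and the
    clip interval along one ray; returns the visible [t0, t1] segments."""
    v0, v1 = vol_interval
    if clip_interval is None:
        return [] if op == "intersection" else [[v0, v1]]
    c0, c1 = clip_interval
    if op == "intersection":            # keep only the overlap
        lo, hi = max(v0, c0), min(v1, c1)
        return [[lo, hi]] if lo < hi else []
    if op == "subtraction":             # carve out the clipped span (may split the ray)
        segs = []
        if c0 > v0:
            segs.append([v0, min(c0, v1)])
        if c1 < v1:
            segs.append([max(c1, v0), v1])
        return [s for s in segs if s[0] < s[1]]
    if op == "union":                   # volume plus clip span (assumes the spans overlap)
        return [[min(v0, c0), max(v1, c1)]]
    raise ValueError(op)

# Example: a ray sampled on t in [0, 10], pierced by a clip sphere on t in [3, 5]
origin = np.array([0.0, 0.0, 0.0]); direction = np.array([1.0, 0.0, 0.0])
clip = ray_sphere_interval(origin, direction, np.array([4.0, 0.0, 0.0]), 1.0)
print(combine_1d((0.0, 10.0), clip, "subtraction"))   # [[0.0, 3.0], [5.0, 10.0]]
```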
A variational surface deformation and subdivision-based modeling framework for noisy and small n-furcated tube-like structures
It is challenging to construct an accurate and smooth mesh for noisy and small n-furcated tube-like structures, such as arteries, veins, and pathological vessels, because of the tiny vessel size, noise, n-furcations, and the irregular shapes of pathological vessels. We propose a framework that divides the modeling process into mesh construction and mesh refinement. In the first step, we focus on topological correctness and create only an initial rough mesh for the n-furcated tube-like structures. In the second step, we propose a variational surface deformation method that pushes the initial mesh to the structure boundaries to improve positional accuracy. By iteratively solving the Euler-Lagrange equations derived from minimizing the shell and distance energies, the initial mesh is gradually pushed to the boundaries. A mesh dilation method is proposed to prevent a strongly deviated initial mesh from moving toward the wrong boundaries. We combine deformation and subdivision into a coarse-to-fine modeling framework to improve efficiency and accuracy. Experiments show that our method constructs an accurate and smooth mesh for noisy and small n-furcated tube-like structures and is useful for hemodynamics and for the quantitative measurement and analysis of vessels.
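The refinement step balances a smoothness (shell) energy against a data (distance) energy. The following is a minimal sketch of that trade-off, with simplifications of my own: explicit gradient-descent-style updates using an umbrella Laplacian and nearest-boundary attraction, rather than the paper's Euler-Lagrange solve, and `deform_mesh` is a hypothetical name.

```python
import numpy as np

def deform_mesh(verts, neighbors, boundary_pts, iters=200,
                w_shell=0.5, w_dist=0.5, step=0.1):
    """Iteratively pull mesh vertices toward boundary points while keeping the
    surface smooth.  Simplified explicit stand-in for minimizing the shell and
    distance energies described above.

    verts        : (n, 3) vertex positions of the initial rough mesh
    neighbors    : list of index lists, one-ring neighbourhood per vertex
    boundary_pts : (m, 3) points sampled on the structure boundary
    """
    verts = verts.astype(float).copy()
    for _ in range(iters):
        # shell (smoothness) term: umbrella Laplacian toward the neighbour centroid
        lap = np.array([verts[nbrs].mean(axis=0) - v
                        for v, nbrs in zip(verts, neighbors)])
        # distance term: attraction toward the closest boundary sample
        d2 = ((verts[:, None, :] - boundary_pts[None, :, :]) ** 2).sum(axis=2)
        attract = boundary_pts[d2.argmin(axis=1)] - verts
        verts += step * (w_shell * lap + w_dist * attract)
    return verts
```

Raising w_shell favours a smoother surface; raising w_dist favours positional accuracy, which mirrors the role of the two energies in the variational formulation.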
DTI image segmentation algorithm based on Markov random field and fuzzy C-means clustering
The traditional fuzzy C-means clustering (FCM) algorithm considers only the gray-level information of an image and ignores its neighborhood information, which leads to poor anti-noise performance. To make full use of the spatial information of the image, this paper proposes an improved adaptive weighted FCM algorithm combined with Markov random fields (MRF). The discrete types of the pixels in the neighborhood window are estimated from the local density, and the weights of the MRF spatial constraint field and the membership field are adapted to these types, so that noise is suppressed while the details of the diffusion tensor imaging (DTI) image are preserved as much as possible. Experimental results show that the algorithm segments DTI images accurately, producing clear edges and preserving detail. Compared with the FCM algorithm and an existing MRF-FCM fusion algorithm, the segmentation coefficient is improved by at least 3% and the segmentation entropy is reduced by at least 2%. At the same time, the clustering quality of the segmentation is improved, and the segmentation coefficient and segmentation entropy are less affected by the noise amplitude.
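A minimal sketch of how a spatial prior can be blended into the standard FCM membership update is given below. It uses a fixed blending weight and a plain 3x3 neighbourhood mean as a stand-in for the paper's adaptive MRF constraint field, so it is a generic illustration rather than the proposed algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_fcm(img, n_clusters=3, m=2.0, alpha=0.4, iters=50, eps=1e-8):
    """Fuzzy C-means on a 2D scalar image with a simple spatial regulariser:
    each pixel's membership is blended with the mean membership of its 3x3
    neighbourhood before the cluster centers are updated."""
    h, w = img.shape
    x = img.ravel().astype(float)
    centers = np.linspace(x.min(), x.max(), n_clusters)
    for _ in range(iters):
        # standard FCM membership update
        d = np.abs(x[:, None] - centers[None, :]) + eps          # (N, C) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
        # spatial term: neighbourhood-averaged memberships
        u_img = u.reshape(h, w, n_clusters)
        u_nbr = np.stack([uniform_filter(u_img[..., k], size=3)
                          for k in range(n_clusters)], axis=-1).reshape(-1, n_clusters)
        u = (1.0 - alpha) * u + alpha * u_nbr
        u /= u.sum(axis=1, keepdims=True)
        # cluster-center update
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u.reshape(h, w, n_clusters), centers
```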
Deep Learning-Based CSI Feedback for RIS-Aided Massive MIMO Systems with Time Correlation
In this paper, we consider a reconfigurable intelligent surface (RIS)-aided frequency division duplex (FDD) massive multiple-input multiple-output (MIMO) downlink system. In FDD systems, the downlink channel state information (CSI) must be sent to the base station through a feedback link, and the overhead of this CSI feedback occupies substantial uplink bandwidth in RIS-aided communication systems. In this work, we propose a deep learning (DL)-based scheme that reduces the feedback overhead by compressing the cascaded CSI. In practical RIS-aided communication systems, the cascaded channel at adjacent slots inevitably exhibits time correlation. We use a long short-term memory network to learn this correlation, which helps the neural network improve the recovery quality of the compressed CSI. Moreover, an attention mechanism is introduced to further improve the CSI recovery quality. Simulation results demonstrate that the proposed DL-based scheme significantly outperforms other DL-based methods in terms of CSI recovery quality.
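A minimal PyTorch sketch of the kind of architecture the abstract describes follows. The layer sizes, the use of dense encoder/decoder blocks, self-attention over slots, and the flattened real-valued cascaded-CSI input are all my own assumptions for illustration, not the paper's network.

```python
import torch
import torch.nn as nn

class CsiFeedbackNet(nn.Module):
    """Toy CSI-feedback autoencoder: a dense encoder compresses the cascaded CSI
    of each slot into a short codeword; an LSTM exploits time correlation across
    slots; an attention layer re-weights the features before a dense decoder."""
    def __init__(self, csi_dim=512, code_dim=64, hidden=128, heads=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(csi_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.lstm = nn.LSTM(code_dim, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(),
                                     nn.Linear(256, csi_dim))

    def forward(self, csi_seq):                 # csi_seq: (batch, slots, csi_dim)
        code = self.encoder(csi_seq)            # compressed feedback per slot
        feat, _ = self.lstm(code)               # track time correlation
        feat, _ = self.attn(feat, feat, feat)   # self-attention over slots
        return self.decoder(feat)               # reconstructed cascaded CSI

# usage: 8 time slots of a 512-dimensional (real-valued) cascaded-CSI vector
net = CsiFeedbackNet()
recon = net(torch.randn(2, 8, 512))
loss = nn.functional.mse_loss(recon, torch.randn(2, 8, 512))
```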
Prediction of mobile image saliency and quality under cloud computing environment
DOI: 10.1016/j.dsp.2018.12.006, Digital Signal Processing
Learning visual saliency for stereoscopic images
Various saliency detection models have been proposed over the past decades for saliency prediction in 2D images and video. With the rapid development of stereoscopic display techniques, stereoscopic saliency detection is much needed for emerging stereoscopic applications. Compared with 2D saliency detection, the depth factor has to be considered. Inspired by the wide use of machine learning in 2D saliency detection, we propose to use machine learning for stereoscopic saliency detection in this paper. Contrast features from color, luminance, and texture in the 2D image are adopted in the proposed framework. For the depth factor, we consider both the depth contrast and the depth degree in the learned model. Additionally, the center-bias factor is used as an input feature for learning the model. Experimental results on a recent large-scale eye-tracking database show that the proposed model outperforms existing ones.
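The learning setup implied here can be sketched as hand-crafted per-patch features fed to a regressor trained on eye-tracking data. The feature extraction below is heavily simplified, and the choice of a scikit-learn random forest is mine, not the paper's learner; it only shows how color/luminance/texture contrast, depth contrast, depth degree, and center bias enter as inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def patch_features(rgb, depth, patch=16):
    """Crude per-patch features: colour/luminance contrast against the global
    mean, a texture proxy, depth contrast and depth degree, and a centre-bias term."""
    h, w, _ = rgb.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            pr = rgb[y:y+patch, x:x+patch]
            pd = depth[y:y+patch, x:x+patch]
            cy, cx = (y + patch / 2) / h - 0.5, (x + patch / 2) / w - 0.5
            feats.append([
                abs(pr[..., 0].mean() - rgb[..., 0].mean()),   # colour contrast (R)
                abs(pr[..., 1].mean() - rgb[..., 1].mean()),   # colour contrast (G)
                abs(pr[..., 2].mean() - rgb[..., 2].mean()),   # colour contrast (B)
                abs(pr.mean(axis=2).mean() - rgb.mean()),      # luminance contrast
                pr.std(),                                      # texture proxy
                abs(pd.mean() - depth.mean()),                 # depth contrast
                pd.mean(),                                     # depth degree
                np.hypot(cy, cx),                              # centre bias
            ])
    return np.array(feats)

# toy usage: random image/depth stand-ins and random fixation densities as targets
rgb, depth = np.random.rand(64, 64, 3), np.random.rand(64, 64)
X = patch_features(rgb, depth)
y = np.random.rand(len(X))                 # stand-in for per-patch fixation density
model = RandomForestRegressor(n_estimators=50).fit(X, y)
saliency = model.predict(X)                # predicted saliency per patch
```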