Vulnerable GPU Memory Management: Towards Recovering Raw Data from GPU
In this paper, we show that the security threats introduced by the existing
GPU memory management strategy have been overlooked, opening a back door for
adversaries to freely break memory isolation: they enable adversaries
without any privilege on a computer to directly recover the raw memory data
left by previous processes. More importantly, such attacks work not only on
ordinary multi-user operating systems, but also on cloud computing platforms.
To demonstrate the seriousness of these attacks, we recovered original data
directly from GPU memory residues left by exited commodity applications,
including Google Chrome, Adobe Reader, GIMP, and Matlab. The results show
that, because of the vulnerable memory management strategy, all commodity
applications in our experiments are affected.
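A minimal probe sketch of the residue channel the paper describes (not the authors' attack tool): on many platforms the GPU driver does not scrub device memory between allocations, so a freshly allocated, uninitialized buffer may still hold bytes written earlier. The sketch assumes a CUDA device and PyTorch; whether cross-process residues actually survive depends on the driver, the allocator, and the platform.

```python
# A probe sketch, not the paper's attack tool. Assumes a CUDA device and
# PyTorch; whether cross-process residues survive depends on the driver,
# the allocator, and the platform.
import torch

assert torch.cuda.is_available(), "requires a CUDA-capable GPU"

# torch.empty returns uninitialized device memory: the bytes are whatever
# previously occupied that region, unless something scrubbed it.
probe = torch.empty(64 * 1024 * 1024, dtype=torch.uint8, device="cuda")

data = probe.cpu().numpy()
print(f"fraction of nonzero bytes: {(data != 0).mean():.4f}")
# A large nonzero fraction in a "fresh" allocation indicates the region
# was not zeroed between allocations -- the residue channel the paper
# exploits to recover data left by exited applications.
```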
A CNN-based system for predicting implied volatility and option prices
The evaluation of option prices and implied volatility is critical for option risk management and trading. Common strategies in existing studies rely on parametric models, but these models rest on several idealistic assumptions. In addition, previous option-pricing research depends mainly on historical transaction records, without considering the performance of other concurrent options. To address these challenges, we propose a convolutional neural network (CNN) based system for predicting implied volatility and option prices. Specifically, a customized non-parametric learning approach is first used to estimate the implied volatility. Second, several traditional parametric models are also implemented to estimate these prices. A convolutional neural network is then utilized to obtain the predictions based on the estimated implied volatility. Our experiments on Chinese SSE 50ETF options demonstrate that the proposed framework outperforms traditional methods, with at least a 40.12% improvement in terms of RMSE.
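As an illustration only (the abstract does not specify the architecture), the sketch below shows the general shape of a CNN that regresses an implied-volatility value from a grid of concurrent option quotes, in the spirit of using concurrent options rather than only historical records. The grid size, layer widths, and the `VolCNN` name are all hypothetical.

```python
# A minimal sketch, not the paper's architecture: a small CNN that maps a
# moneyness x maturity grid of concurrent option quotes to an implied-vol
# estimate. Grid size, channels, and layer sizes are illustrative guesses.
import torch
import torch.nn as nn

class VolCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # predicted implied volatility

    def forward(self, surface):        # surface: (B, 1, H, W) quote grid
        z = self.features(surface).flatten(1)
        return self.head(z)

model = VolCNN()
fake_surface = torch.rand(8, 1, 16, 16)   # stand-in for quote grids
print(model(fake_surface).shape)          # torch.Size([8, 1])
```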
SSDRec: Self-Augmented Sequence Denoising for Sequential Recommendation
Traditional sequential recommendation methods assume that users' sequence
data is clean enough to learn accurate sequence representations to reflect user
preferences. In practice, users' sequences inevitably contain noise (e.g.,
accidental interactions), leading to incorrect reflections of user preferences.
Consequently, some pioneering studies have explored modeling sequentiality and
correlations in sequences to implicitly or explicitly reduce the influence of
noise.
However, relying on only available intra-sequence information (i.e.,
sequentiality and correlations in a sequence) is insufficient and may result in
over-denoising and under-denoising problems (OUPs), especially for short
sequences. To improve reliability, we propose to augment sequences by inserting
items before denoising. However, due to the data sparsity issue and
computational costs, it is challenging to select proper items from the entire
item universe to insert into proper positions in a target sequence. Motivated
by the above observation, we propose a novel framework--Self-augmented Sequence
Denoising for sequential Recommendation (SSDRec) with a three-stage learning
paradigm to solve the above challenges. In the first stage, we empower SSDRec
by a global relation encoder to learn multi-faceted inter-sequence relations in
a data-driven manner. These relations serve as prior knowledge to guide
subsequent stages. In the second stage, we devise a self-augmentation module to
augment sequences to alleviate OUPs. Finally, we employ a hierarchical
denoising module in the third stage to reduce the risk of false augmentations
and pinpoint all noise in raw sequences. Extensive experiments on five
real-world datasets demonstrate the superiority of SSDRec over
state-of-the-art denoising methods and its flexible application to mainstream
sequential recommendation models. The source code is available at
https://github.com/zc-97/SSDRec.
Comment: ICDE 202
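To make the three-stage idea concrete, here is a deliberately tiny toy (nothing like the actual SSDRec model): global co-occurrence counts stand in for the learned inter-sequence relations, one well-supported item is inserted into a short sequence (self-augmentation), and items with weak support are then flagged as noise (denoising). Every scoring rule and threshold below is illustrative.

```python
# A toy sketch of the augment-then-denoise idea, not the SSDRec model.
import numpy as np

def cooccurrence(sequences, n_items):
    # Stage 1 stand-in: global co-occurrence as "inter-sequence relations".
    C = np.zeros((n_items, n_items))
    for seq in sequences:
        for a in seq:
            for b in seq:
                if a != b:
                    C[a, b] += 1
    return C

def augment_then_denoise(seq, C, tau=1.0):
    support = C[seq].sum(axis=0)            # how well each item fits seq
    support[seq] = -np.inf                  # don't re-insert present items
    augmented = seq + [int(support.argmax())]   # stage 2: insert one item
    keep = [i for i in augmented                # stage 3: drop weak items
            if C[i, augmented].sum() >= tau]
    return augmented, keep

train = [[0, 1, 2], [0, 1, 3], [1, 2, 3], [0, 2, 3]]
C = cooccurrence(train, n_items=5)
print(augment_then_denoise([0, 4], C))   # item 4 has no support -> flagged
```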
Empowering Low-Light Image Enhancer through Customized Learnable Priors
Deep neural networks have achieved remarkable progress in enhancing low-light
images by improving their brightness and eliminating noise. However, most
existing methods construct end-to-end mapping networks heuristically,
neglecting the intrinsic prior of image enhancement task and lacking
transparency and interpretability. Although some unfolding solutions have been
proposed to relieve these issues, they rely on proximal operator networks that
deliver ambiguous and implicit priors. In this work, we propose a paradigm for
low-light image enhancement that explores the potential of customized learnable
priors to improve the transparency of the deep unfolding paradigm. Motivated by
the powerful feature representation capability of Masked Autoencoder (MAE), we
customize MAE-based illumination and noise priors and redevelop them from two
perspectives: 1) structure flow: we train the MAE from a normal-light
image to its illumination properties and then embed it into the proximal
operator design of the unfolding architecture; and 2) optimization flow: we
train the MAE from a normal-light image to its gradient representation and
then employ it as a regularization term to constrain noise in the model
output. These designs improve the interpretability and representation
capability of the model. Extensive experiments on multiple low-light image
enhancement datasets demonstrate the superiority of our proposed paradigm
over state-of-the-art methods. Code is available at
https://github.com/zheng980629/CUE.
Comment: Accepted by ICCV 202
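Schematically, the two flows can be read as two uses of a learned prior inside an unfolding iteration: as a proximal operator on the illumination update, and as a frozen regularizer on the output. The sketch below is a generic rendering of that pattern, not the paper's CUE architecture; `PriorNet` is a small stand-in for the customized MAE priors.

```python
# A schematic unfolding step with learned priors; all modules are stand-ins.
import torch
import torch.nn as nn

class PriorNet(nn.Module):                 # placeholder for an MAE prior
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def unfolding_step(low, illum, illum_prior, step=0.1):
    # Gradient step on an illustrative data-fidelity term, followed by the
    # learned proximal mapping (the "structure flow" role of the prior).
    illum = illum - step * (illum - low)
    return illum_prior(illum)

def prior_regularizer(pred, grad_prior):
    # "Optimization flow" role: penalize mismatch with the prior's output.
    return (grad_prior(pred) - pred).abs().mean()

low = torch.rand(1, 3, 64, 64)
illum_prior, grad_prior = PriorNet(), PriorNet()
illum = unfolding_step(low, low.clone(), illum_prior)
loss = prior_regularizer(illum, grad_prior)
print(illum.shape, float(loss))
```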
Random Weights Networks Work as Loss Prior Constraint for Image Restoration
In this paper, orthogonal to existing studies of data and models, we instead
direct our efforts to investigating the potential of the loss function from a
new perspective, and present our belief that ``Random Weights Networks Can
Act as a Loss Prior Constraint for Image Restoration''. Inspired by
functional theory, we provide several alternative solutions for implementing
this belief on strict mathematical manifolds, including Taylor's Unfolding
Network, Invertible Neural Network, Central Difference Convolution, and
Zero-order Filtering, as ``random weights network prototypes'' along the
following four axes: 1) different random-weights strategies; 2) different
network architectures, e.g., pure convolution layers or transformers; 3)
different network depths; 4) different numbers of combined random weights
networks. Furthermore, to enlarge the capability of the randomly initialized
manifolds, we devise two variants of the random-weights scheme: 1) the
weights are randomly initialized only once for the whole training procedure;
2) the weights are randomly re-initialized at each training epoch. Our
proposed belief can be directly applied to existing networks without any
additional training or testing computational cost. Extensive experiments
across multiple image restoration tasks, including image denoising,
low-light image enhancement, and guided image super-resolution, demonstrate
the consistent performance gains obtained by introducing our belief. To
emphasize, our main focus is to spark interest in the design of loss
functions and rescue them from their currently neglected status. Code will
be made publicly available.
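The core mechanism is straightforward to sketch: a frozen, randomly initialized network serves as a fixed feature extractor, and the restoration loss compares prediction and ground truth in its feature space, adding no trainable parameters or inference cost. The pure-convolution prototype and the once-per-training initialization below are just one point in the design space the paper enumerates.

```python
# A minimal sketch of a random-weights loss prior; architecture is arbitrary.
import torch
import torch.nn as nn

def make_random_prior(ch=3, depth=3, width=32, seed=0):
    torch.manual_seed(seed)              # "initialized only once" variant
    layers, c = [], ch
    for _ in range(depth):
        layers += [nn.Conv2d(c, width, 3, padding=1), nn.ReLU()]
        c = width
    net = nn.Sequential(*layers)
    for p in net.parameters():
        p.requires_grad_(False)          # frozen: a loss prior, not a model
    return net.eval()

prior = make_random_prior()

def random_weights_loss(pred, target, prior=prior):
    # Compare images in the random network's feature space.
    return (prior(pred) - prior(target)).abs().mean()

pred = torch.rand(2, 3, 32, 32, requires_grad=True)
target = torch.rand(2, 3, 32, 32)
loss = random_weights_loss(pred, target)
loss.backward()                          # gradients flow to pred only
print(float(loss))
```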
Over 10 Gbps VLC for long-distance applications using a GaN-based series-biased micro-LED array
By employing a GaN-based series-biased micro-light-emitting-diode (µLED) array and an orthogonal frequency division multiplexing (OFDM) modulation format, a high-speed free-space visible light communication system for long-distance applications has been demonstrated. The blue series-biased µLED array, which consists of 3×3, 20 µm-diameter µLED elements, presents promising performance, with an optical power of over 10 mW and a −6 dB electrical modulation bandwidth of over 980 MHz. Record data transmission rates have been successfully achieved at different free-space distances. Within a 5 m transmission distance, data rates of over 10 Gbps are accomplished at the forward error correction (FEC) floor of 3.8×10⁻³. Extending the transmission distance to 20 m, the data rates are maintained at the Gbps level at the FEC floor.
Gb/s Underwater Wireless Optical Communications Using Series-Connected GaN Micro-LED Arrays
High-speed wireless communications are highly desirable for many industrial and scientific underwater applications. Acoustic communications suffer from high latency and limited data rates, while radio-frequency communications are severely limited by attenuation in seawater. Optical communications are a promising alternative, offering high transmission rates (up to Gb/s), since water has relatively low attenuation at visible wavelengths. Here we demonstrate the use of series-connected micro-light-emitting-diode (μLED) arrays consisting of 6 μLED pixels, either 60 μm or 80 μm in diameter, operating at 450 nm. These devices increase the output power whilst maintaining a relatively high modulation bandwidth. Using orthogonal frequency division multiplexing (OFDM), we demonstrate underwater wireless data transmission at rates of up to 4.92 Gb/s, 3.22 Gb/s and 3.4 Gb/s over 1.5 m, 3 m and 4.5 m, respectively, with corresponding bit error ratios (BERs) of 1.5×10⁻³, 1.1×10⁻³ and 3.1×10⁻³, through clear tap water, and Mb/s rates through >5 attenuation lengths (ALs) in turbid waters.
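Both of the preceding µLED links rely on OFDM over an intensity-modulated optical channel, which requires a real-valued, non-negative drive signal. A common way to satisfy this is DCO-OFDM: load the subcarriers with Hermitian symmetry so the IFFT output is real, then add a DC bias. The toy link below (synthetic AWGN channel; illustrative FFT size, QAM order, and noise level; LED clipping ignored) shows the mechanics, not the experimental systems.

```python
# A toy DCO-OFDM link, not the experimental setup.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                     # FFT size (illustrative)
bits = rng.integers(0, 2, size=(N // 2 - 1) * 2)

# Gray-mapped 4-QAM symbols on subcarriers 1..N/2-1.
sym = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])
X = np.zeros(N, dtype=complex)
X[1:N // 2] = sym
X[N // 2 + 1:] = np.conj(sym[::-1])        # Hermitian symmetry -> real x

x = np.fft.ifft(X).real
tx = x + 2 * x.std()                       # DC bias keeps the LED drive positive

rx = tx + rng.normal(0, 0.05, N)           # AWGN stand-in for the channel
Y = np.fft.fft(rx - rx.mean())             # remove bias, back to subcarriers
est = Y[1:N // 2]
bits_hat = np.empty_like(bits)
bits_hat[0::2] = (est.real < 0).astype(int)
bits_hat[1::2] = (est.imag < 0).astype(int)
print("BER:", np.mean(bits != bits_hat))
```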
The synergized diagnostic value of VTQ with chemokine CXCL13 in lung tumors
Virtual Touch Tissue Quantification (VTQ) offers several advantages in the diagnosis of various lung diseases. The expression levels of chemokines such as CXCL13 play a vital role in the occurrence and development of tumors and aid in diagnosis. The purpose of this study was to evaluate the combined value of VTQ and changes in CXCL13 expression levels for the diagnosis of lung tumors. A total of 60 patients with thoracic nodules and pleural effusion were included, 30 of them with malignant pleural effusion (based on pathology) and the remaining 30 with benign thoracic nodules and pleural effusion. The relative expression level of CXCL13 in the collected pleural effusions was measured using an enzyme-linked immunosorbent assay (ELISA), and the relationship between CXCL13 expression levels and various clinical features was analyzed. A receiver operating characteristic (ROC) curve analysis was conducted on the VTQ results and the relative expression levels of CXCL13, and the areas under the curve, critical values, sensitivity, and specificity were calculated. Multivariate analysis incorporating multiple indicators was performed to determine the accuracy of lung tumor diagnosis. The results showed that the expression levels of CXCL13 and the VTQ values were significantly higher in the lung cancer group than in the control group (P < 0.05). In the non-small cell lung cancer (NSCLC) group, CXCL13 expression levels increased with more advanced TNM stage and poorer tumor differentiation, and the expression level of CXCL13 in adenocarcinoma was higher than that in squamous cell carcinoma. The ROC curve analysis showed that CXCL13 had an area under the curve (AUC) of 0.74 (0.61, 0.86) with an optimal cut-off value of 777.82 pg/ml for diagnosing lung tumors, while VTQ had an AUC of 0.67 (0.53, 0.82) with an optimal diagnostic cut-off of 3.33 m/s, a sensitivity of 60.0%, and a specificity of 83.3%. The combination of CXCL13 and VTQ for diagnosing thoracic tumors had an AUC of 0.842 (0.74, 0.94), significantly higher than either factor alone. These results demonstrate the strong potential of combining VTQ results with chemokine CXCL13 expression levels for lung tumor diagnosis. Additionally, the findings suggest that elevated relative expression of CXCL13 in cases of malignant pleural effusion caused by non-small cell lung cancer may indicate a poor prognosis, making CXCL13 a promising screening tool and prognostic indicator for patients with advanced lung cancer complicated by malignant pleural effusion.
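The multivariate combination step is standard ROC methodology; the sketch below reproduces its shape on synthetic data (cohort values, units, and effect sizes are invented, not the study's): fit a logistic regression on the two markers and compare single-marker AUCs with the AUC of the combined score.

```python
# Combining two diagnostic markers and comparing AUCs; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 60                                     # same cohort size as the study
y = np.repeat([0, 1], n // 2)              # 0: benign, 1: malignant
vtq = rng.normal(2.5 + 1.0 * y, 0.8)       # m/s, synthetic
cxcl13 = rng.normal(600 + 300 * y, 250)    # pg/ml, synthetic
X = np.column_stack([vtq, cxcl13])

clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
combined = clf.predict_proba(X)[:, 1]      # multivariate combined score

print("AUC VTQ     :", roc_auc_score(y, vtq))
print("AUC CXCL13  :", roc_auc_score(y, cxcl13))
print("AUC combined:", roc_auc_score(y, combined))
```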