154 research outputs found
Your Smart Home Can't Keep a Secret: Towards Automated Fingerprinting of IoT Traffic with Neural Networks
The IoT (Internet of Things) technology has been widely adopted in recent years and has profoundly changed people's daily lives. At the same time, this fast-growing technology has introduced new privacy issues, which need to be better understood and measured. In this work, we look into how private information can be leaked from the network traffic generated in a smart home. Although researchers have proposed techniques to infer IoT device types or user behaviors under clean experimental setups, the effectiveness of such approaches becomes questionable in complex but realistic network environments, where common techniques like Network Address and Port Translation (NAPT) and Virtual Private Network (VPN) are enabled. Traffic analysis using traditional methods (e.g., classical machine-learning models) is much less effective under those settings, as manually picked features are no longer distinctive. In this work, we propose a traffic analysis framework based on sequence-learning techniques such as LSTM, which leverages the temporal relations between packets for the device-identification attack. We evaluated it under different environment settings (e.g., pure-IoT, and noisy environments with multiple non-IoT devices). The results show that our framework differentiates device types with high accuracy. This suggests that IoT network communications pose prominent privacy challenges to users, even when traffic is protected by encryption and morphed by the network gateway. As such, new privacy protection methods for IoT traffic need to be developed to mitigate this new issue.
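A minimal sketch of the kind of sequence model the abstract describes, assuming per-packet features such as size, direction, and inter-arrival time; the feature choice, dimensions, and layer sizes are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch (not the paper's code): an LSTM that classifies a smart-home
# device type from a sequence of per-packet features. Feature choice (size,
# direction, inter-arrival time) and all dimensions are illustrative.
import torch
import torch.nn as nn

class DeviceFingerprintLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=128, n_device_types=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_device_types)

    def forward(self, x):
        # x: (batch, seq_len, n_features), one row per packet
        _, (h_n, _) = self.lstm(x)      # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])       # logits over device types

# Usage: a batch of 32 flows, 100 packets each
model = DeviceFingerprintLSTM()
flows = torch.randn(32, 100, 3)
logits = model(flows)                   # shape (32, 10)
```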
Energy Efficiency Optimization of Intelligent Reflective Surface-assisted Terahertz-RSMA System
This paper examines the energy efficiency optimization problem of intelligent reflective surface (IRS)-assisted multi-user rate-splitting multiple access (RSMA) downlink systems under terahertz propagation. The energy-efficiency objective function is optimized using the salp swarm algorithm (SSA) and compared with the successive convex approximation (SCA) technique. The SCA technique requires multiple iterations to solve the non-convex resource allocation problem, whereas SSA improves energy efficiency effectively in less time. The simulation results show that SSA outperforms SCA in improving system energy efficiency while significantly reducing the required time, thus improving the system's overall performance.
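For readers unfamiliar with SSA, here is a minimal NumPy sketch of its standard leader/follower updates; the objective below is a toy placeholder standing in for energy efficiency, not the paper's IRS/RSMA objective or constraints:

```python
# Minimal salp swarm algorithm (SSA) sketch. The objective is a placeholder;
# the paper's actual IRS/RSMA energy-efficiency problem is not reproduced.
import numpy as np

def ssa_maximize(objective, lb, ub, n_salps=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, size=(n_salps, dim))    # salp positions
    fitness = np.array([objective(x) for x in X])
    best = X[np.argmax(fitness)].copy()             # food source = best salp
    best_fit = fitness.max()
    for l in range(1, n_iter + 1):
        c1 = 2 * np.exp(-(4 * l / n_iter) ** 2)     # exploration -> exploitation
        for i in range(n_salps):
            if i < n_salps // 2:                    # leaders move around the food
                c2 = rng.uniform(size=dim)
                c3 = rng.uniform(size=dim)
                step = c1 * ((ub - lb) * c2 + lb)
                X[i] = np.where(c3 < 0.5, best + step, best - step)
            else:                                   # followers average with predecessor
                X[i] = (X[i] + X[i - 1]) / 2
            X[i] = np.clip(X[i], lb, ub)
            f = objective(X[i])
            if f > best_fit:
                best, best_fit = X[i].copy(), f
    return best, best_fit

# Toy stand-in objective (maximize)
ee = lambda x: -np.sum((x - 0.3) ** 2)
x_opt, ee_opt = ssa_maximize(ee, lb=np.zeros(4), ub=np.ones(4))
```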
Quantum Gaussian process regression
In this paper, a quantum algorithm based on the Gaussian process regression model is proposed. The proposed quantum algorithm consists of three sub-algorithms. The first quantum sub-algorithm efficiently generates the mean predictor. An improved HHL algorithm is proposed to obtain the sign of the outcomes, avoiding the ambiguity of results under the original HHL algorithm and making the whole algorithm clearer and more exact. The second sub-algorithm produces the covariance predictor with the same method. Thirdly, the squared-exponential covariance matrices are prepared: the annihilation and creation operators are simulated by unitary-linear-decomposition Hamiltonian simulation, and the kernel function vectors are generated with block-encoding techniques on the covariance matrices. In addition, it is shown that the proposed quantum Gaussian process regression algorithm can achieve a quadratic speedup over its classical counterpart.
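For context, here is a compact sketch of the classical counterpart: the mean and covariance predictors of Gaussian process regression with a squared-exponential kernel. The O(n^3) linear solves below are the steps that HHL-style quantum sub-algorithms aim to accelerate:

```python
# Classical GPR reference (not the quantum algorithm): mean and covariance
# predictors with a squared-exponential kernel.
import numpy as np

def sq_exp_kernel(A, B, length=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

def gpr_predict(X, y, X_star, noise=1e-2):
    K = sq_exp_kernel(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)                   # dominant O(n^3) cost
    K_s = sq_exp_kernel(X_star, X)
    mean = K_s @ alpha                              # mean predictor
    v = np.linalg.solve(K, K_s.T)
    cov = sq_exp_kernel(X_star, X_star) - K_s @ v   # covariance predictor
    return mean, cov

X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
mean, cov = gpr_predict(X, y, np.array([[0.5]]))
```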
Tanshinone IIA inhibits exosome-induced cardiomyocyte pyroptosis through NLRP3/caspase 1 pathway
Purpose: To investigate the effect of Salvia miltiorrhiza, a traditional Chinese medicinal plant, on exosome-induced cardiomyocyte pyroptosis.
Methods: Pyroptosis was induced in human AC cells using exosomes. The effect of Danshen (dried roots of S. miltiorrhiza) on exosome-induced pyroptosis was then determined using flow cytometry. The expression of pro-inflammatory cytokines was measured by enzyme-linked immunosorbent assay (ELISA), while their protein levels were assayed by Western blotting.
Results: Tanshinone IIA (Tan IIA), the bioactive molecule in Danshen, inhibited cardiomyocyte pyroptosis by significantly reducing the expression of pro-inflammatory cytokines (p < 0.001). Thus, Tan IIA reduced pyroptosis induced by cardiomyocyte-derived exosomes via inhibition of NLRP3 inflammasome expression in human AC cells.
Conclusion: This study has identified a potential mechanism through which Danshen functions to prevent cardiac diseases: it involves, at least in part, the inhibition of pyroptosis in cardiomyocytes. Thus, tanshinone IIA may be a pharmacologically beneficial cardioprotective compound, especially against heart failure.
VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
Text-to-video generation aims to produce a video based on a given prompt.
Recently, several commercial video models have been able to generate plausible
videos with minimal noise, excellent details, and high aesthetic scores.
However, these models rely on large-scale, well-filtered, high-quality videos
that are not accessible to the community. Many existing research works, which
train models using the low-quality WebVid-10M dataset, struggle to generate
high-quality videos because the models are optimized to fit WebVid-10M. In this
work, we explore the training scheme of video models extended from Stable
Diffusion and investigate the feasibility of leveraging low-quality videos and
synthesized high-quality images to obtain a high-quality video model. We first
analyze the connection between the spatial and temporal modules of video models
and the distribution shift to low-quality videos. We observe that full training
of all modules results in a stronger coupling between spatial and temporal
modules than only training temporal modules. Based on this stronger coupling,
we shift the distribution to higher quality without motion degradation by
finetuning spatial modules with high-quality images, resulting in a generic
high-quality video model. Evaluations are conducted to demonstrate the superiority of the proposed method, particularly in picture quality, motion, and concept composition.
Comment: Homepage: https://ailab-cvc.github.io/videocrafter; GitHub: https://github.com/AILab-CVC/VideoCrafte
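A hedged sketch of the finetuning scheme described above: freeze the temporal modules and update only the spatial modules on high-quality images. The name-based filter and the toy model are illustrative assumptions, not the released code:

```python
# Sketch, assuming temporal layers are identifiable by name; not VideoCrafter2's code.
import torch
import torch.nn as nn

def spatial_only_params(model: nn.Module):
    # Freeze anything whose name marks it as temporal; train the rest.
    for name, p in model.named_parameters():
        p.requires_grad = "temporal" not in name
    return [p for p in model.parameters() if p.requires_grad]

# Toy stand-in for a video diffusion UNet with named submodules
toy = nn.ModuleDict({
    "spatial_attn": nn.Linear(8, 8),
    "temporal_attn": nn.Linear(8, 8),
})
optimizer = torch.optim.AdamW(spatial_only_params(toy), lr=1e-5)
# ...then run standard diffusion finetuning on high-quality images,
# treating each image as a single-frame video.
```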
StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter
Text-to-video (T2V) models have shown remarkable capabilities in generating
diverse videos. However, they struggle to produce user-desired stylized videos
due to (i) the inherent clumsiness of text in expressing specific styles, and (ii) the generally degraded style fidelity. To address these challenges, we
introduce StyleCrafter, a generic method that enhances pre-trained T2V models
with a style control adapter, enabling video generation in any style by
providing a reference image. Considering the scarcity of stylized video
datasets, we propose to first train a style control adapter using style-rich
image datasets, then transfer the learned stylization ability to video
generation through a tailor-made finetuning paradigm. To promote content-style
disentanglement, we remove style descriptions from the text prompt and extract
style information solely from the reference image using a decoupling learning
strategy. Additionally, we design a scale-adaptive fusion module to balance the
influences of text-based content features and image-based style features, which
helps generalization across various text and style combinations. StyleCrafter
efficiently generates high-quality stylized videos that align with the content
of the texts and resemble the style of the reference images. Experiments demonstrate that our approach is more flexible and efficient than existing competitors.
Comment: Project page: https://gongyeliu.github.io/StyleCrafter.github.io/; GitHub repository: https://github.com/GongyeLiu/StyleCrafte
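A minimal illustration of the scale-adaptive fusion idea, assuming transformer-style token features; the module below is a hypothetical stand-in, not StyleCrafter's released implementation:

```python
# Hypothetical sketch: a learned, input-dependent scale balances text-based
# content features against image-based style features.
import torch
import torch.nn as nn

class ScaleAdaptiveFusion(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        # predict a fusion scale from the pooled content/style pair
        self.scale_net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.SiLU(),
            nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, content_feats, style_feats):
        # content_feats: (B, Tc, D) text tokens; style_feats: (B, Ts, D) image tokens
        pooled = torch.cat([content_feats.mean(1), style_feats.mean(1)], dim=-1)
        s = self.scale_net(pooled)[:, :, None]          # (B, 1, 1) fusion scale
        style_ctx = style_feats.mean(1, keepdim=True)   # (B, 1, D) pooled style
        return content_feats + s * style_ctx            # broadcast over Tc tokens

fusion = ScaleAdaptiveFusion()
out = fusion(torch.randn(2, 77, 768), torch.randn(2, 16, 768))  # (2, 77, 768)
```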
Preparation and Characterization of Nano-Cu/Polysaccharide Composite Antimicrobial Film and Its Control Effect on Black Spot Disease of Winter Jujube
In this study, a nano-Cu/polysaccharide composite film was prepared by the solution casting method using gelatin and sodium alginate as film-forming substrates. Before casting, green-synthesized nano-Cu was incorporated into the film-forming solution by a co-blending method. Field emission scanning electron microscopy (FE-SEM), Fourier transform infrared spectroscopy (FT-IR), thermogravimetric analysis (TGA), diffuse reflectance spectroscopy (DRS), texture analysis (TA) and inductively coupled plasma-mass spectrometry (ICP-MS) were used to characterize the structure, light transmittance and physicochemical properties of the nano-Cu and the composite film. The antifungal activity of the composite film was also evaluated, and the film was applied to the biological control of black spot disease of winter jujube. Finally, the migration of Cu2+ from the composite film was measured. The results showed that the particle size of the green-synthesized nano-Cu was approximately 44 nm, and that gelatin/sodium alginate could serve as an excellent carrier for nano-Cu. The composite film had good thermal stability, barrier properties and mechanical properties. In addition, the inhibition rates of the composite films with different concentrations of nano-Cu against Alternaria alternata, Fusarium and Botrytis cinerea were up to 87.80%, 77.73% and 81.96%, respectively, showing good, broad-spectrum antifungal properties. The half-maximal inhibitory concentration (IC50) of nano-Cu against A. alternata biomass was 0.25 g/L. After 10 days of storage, the composite film with nano-Cu at 0.25 g/L reduced the lesion diameter by 52.53% and the incidence of black spot disease by 53.16% compared with the control group, and the migration of Cu2+ was 0.0187 μg/mL. This study provides a new approach for the application of nano-Cu and a theoretical basis for the development of new antifungal materials.
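As a side note on the reported IC50, here is a short sketch of how a half-maximal inhibitory concentration can be estimated from inhibition-rate measurements via a standard four-parameter logistic fit; the data points are made-up placeholders, not the study's measurements:

```python
# Illustrative IC50 estimation from dose-response data; the data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(c, bottom, top, ic50, hill):
    # four-parameter logistic: inhibition rises from `bottom` to `top` with dose
    return bottom + (top - bottom) / (1 + (ic50 / c) ** hill)

conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8])       # nano-Cu, g/L (placeholder)
inhib = np.array([10.0, 25.0, 45.0, 70.0, 85.0])  # % inhibition (placeholder)
(bottom, top, ic50, hill), _ = curve_fit(logistic4, conc, inhib,
                                         p0=[0, 100, 0.25, 1])
print(f"estimated IC50 ~ {ic50:.2f} g/L")
```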
Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation
Generating videos for visual storytelling can be a tedious and complex
process that typically requires either live-action filming or graphics
animation rendering. To bypass these challenges, our key idea is to utilize the
abundance of existing video clips and synthesize a coherent storytelling video
by customizing their appearances. We achieve this by developing a framework composed of two functional modules: (i) Motion Structure Retrieval, which
provides video candidates with desired scene or motion context described by
query texts, and (ii) Structure-Guided Text-to-Video Synthesis, which generates
plot-aligned videos under the guidance of motion structure and text prompts.
For the first module, we leverage an off-the-shelf video retrieval system and
extract video depths as motion structure. For the second module, we propose a
controllable video generation model that offers flexible controls over
structure and characters. The videos are synthesized by following the
structural guidance and appearance instruction. To ensure visual consistency
across clips, we propose an effective concept personalization approach, which
allows the specification of the desired character identities through text
prompts. Extensive experiments demonstrate that our approach exhibits significant advantages over various existing baselines.
Comment: GitHub: https://github.com/VideoCrafter/Animate-A-Story; Project page: https://videocrafter.github.io/Animate-A-Stor
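A structural sketch of the two-module pipeline described above; every function is a hypothetical stand-in rather than the Animate-A-Story API:

```python
# Pipeline skeleton only: retrieval supplies motion structure (depth), which
# then guides text-to-video synthesis. All functions are stand-ins.
from typing import List, Tuple

def retrieve_clip(query: str) -> list:
    """Module 1 stand-in: off-the-shelf text-based video retrieval."""
    return []  # frames of the best-matching clip

def estimate_depth(frames: list) -> list:
    """Stand-in: per-frame depth maps used as motion structure."""
    return [None for _ in frames]

def generate_shot(prompt: str, structure: list, character: str) -> list:
    """Module 2 stand-in: structure-guided T2V synthesis; the character
    identity is specified through the text prompt (concept personalization)."""
    return []

def tell_story(shots: List[Tuple[str, str]]) -> list:
    video = []
    for query, prompt in shots:
        frames = retrieve_clip(query)        # desired scene / motion context
        structure = estimate_depth(frames)   # depth as structural guidance
        video += generate_shot(prompt, structure, character="<hero>")
    return video

storyboard = [("a man walks on the beach", "<hero> strolls along a sunset beach")]
video = tell_story(storyboard)
```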