FedFusion: Manifold Driven Federated Learning for Multi-satellite and Multi-modality Fusion
Multi-satellite, multi-modality in-orbit fusion is a challenging task because it must learn fusion representations of complex high-dimensional data under limited computational resources. Deep neural networks can reveal the underlying distribution of multi-modal remote sensing data, but in-orbit fusion of multimodal data is harder still because of the differing imaging characteristics of the sensors, especially when the multimodal data follow non-independent and identically distributed (non-IID) distributions. To address
this problem while maintaining classification performance, this paper proposes a manifold-driven multi-modality fusion framework, FedFusion, which randomly samples local data on each client to jointly estimate the dominant manifold structure of the client's shallow features and explicitly compresses the feature matrices into a low-rank subspace through cascading and additive approaches; the compressed features serve as input to the subsequent classifier.
Considering the physical space limitations of satellite constellations, we develop a multimodal federated learning module designed specifically for manifold data in a deep latent space. This module iteratively updates the sub-network parameters of each client through global weighted averaging, yielding a framework that learns a compact representation for each client. The proposed framework surpasses existing methods on three multimodal datasets, achieving an average classification accuracy of 94.35% while compressing communication costs by a factor of 4. Furthermore, extensive numerical evaluations on real-world satellite images were conducted on an orbital edge-computing architecture built on Jetson TX2 industrial modules, demonstrating that FedFusion reduces training time by 48.4 minutes (15.18%) while improving accuracy.
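To make the two mechanisms above concrete, here is a minimal sketch, not the authors' code: a random sample of each client's shallow features is used to estimate a low-rank subspace (truncated SVD is used here as a stand-in for the paper's manifold estimate), and client sub-network parameters are combined by global weighted averaging. Function names, ranks, and sample sizes are illustrative assumptions.

```python
# Hedged sketch of FedFusion's two core steps; names and sizes are assumptions.
import numpy as np

def low_rank_compress(features: np.ndarray, rank: int, sample_size: int,
                      rng: np.random.Generator) -> np.ndarray:
    """Estimate a rank-`rank` subspace from a random subset of feature rows
    and project all features onto it (low-rank compression of shallow features)."""
    idx = rng.choice(features.shape[0],
                     size=min(sample_size, features.shape[0]), replace=False)
    # Truncated SVD of the sampled features gives an orthonormal basis of the
    # dominant directions -- a simple proxy for the paper's manifold estimate.
    _, _, vt = np.linalg.svd(features[idx], full_matrices=False)
    basis = vt[:rank]                      # (rank, feature_dim)
    return features @ basis.T              # compressed features, (n, rank)

def weighted_average(client_params: list[dict], client_sizes: list[int]) -> dict:
    """Global weighted averaging of per-client sub-network parameters (FedAvg-style)."""
    total = float(sum(client_sizes))
    return {name: sum((s / total) * p[name]
                      for s, p in zip(client_sizes, client_params))
            for name in client_params[0]}

# Toy usage: three clients with 64-dim shallow features compressed to rank 16.
rng = np.random.default_rng(0)
clients = [rng.normal(size=(200, 64)) for _ in range(3)]
compressed = [low_rank_compress(f, rank=16, sample_size=50, rng=rng) for f in clients]
```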
MDFL: Multi-domain Diffusion-driven Feature Learning
High-dimensional images, known for their rich semantic information, are
widely applied in remote sensing and other fields. The spatial information in
these images reflects the object's texture features, while the spectral
information reveals the potential spectral representations across different
bands. Currently, the understanding of high-dimensional images remains largely limited to a single-domain perspective, which degrades performance. Motivated by the masking texture effect observed in the human visual system, we present a multi-domain diffusion-driven feature learning network (MDFL), a scheme that redefines the effective information domain on which the model really focuses.
This method employs diffusion-based posterior sampling to explicitly consider
joint information interactions between the high-dimensional manifold structures
in the spectral, spatial, and frequency domains, thereby eliminating the
influence of masking texture effects in visual models. Additionally, we
introduce a feature reuse mechanism to gather deep and raw features of
high-dimensional data. We demonstrate that MDFL significantly improves the
feature extraction performance of high-dimensional data, thereby providing a
powerful aid for revealing the intrinsic patterns and structures of such data.
The experimental results on three multi-modal remote sensing datasets show that
MDFL reaches an average overall accuracy of 98.25%, outperforming various
state-of-the-art baseline schemes. The code will be released to the computer vision community.
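A hedged sketch of the two ideas described above, assuming a PyTorch-style network rather than the released MDFL code: spatial, spectral, and frequency-domain views of a hyperspectral cube are extracted and fused, and a feature-reuse step concatenates the deep fused features with the raw input before the classification head. Layer choices and names are assumptions.

```python
# Illustrative multi-domain fusion with feature reuse; not the authors' implementation.
import torch
import torch.nn as nn

class MultiDomainFusion(nn.Module):
    def __init__(self, bands: int, hidden: int = 64, classes: int = 10):
        super().__init__()
        self.spatial = nn.Conv2d(bands, hidden, kernel_size=3, padding=1)   # texture view
        self.spectral = nn.Conv2d(bands, hidden, kernel_size=1)             # per-pixel spectra
        self.frequency = nn.Conv2d(2 * bands, hidden, kernel_size=1)        # FFT real/imag parts
        # Feature reuse: deep fused features are concatenated with the raw bands.
        self.head = nn.Conv2d(3 * hidden + bands, classes, kernel_size=1)

    def forward(self, x):                                  # x: (B, bands, H, W)
        freq = torch.fft.fft2(x, norm="ortho")
        freq = torch.cat([freq.real, freq.imag], dim=1)    # frequency-domain view
        fused = torch.cat([self.spatial(x), self.spectral(x), self.frequency(freq)], dim=1)
        return self.head(torch.cat([fused, x], dim=1))     # reuse raw features

# Toy usage: 30-band cube, per-pixel class logits of shape (2, 10, 16, 16).
logits = MultiDomainFusion(bands=30)(torch.randn(2, 30, 16, 16))
```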
FedDiff: Diffusion Model Driven Federated Learning for Multi-Modal and Multi-Clients
With the rapid development of imaging sensor technology in the field of
remote sensing, multi-modal remote sensing data fusion has emerged as a crucial
research direction for land cover classification tasks. While diffusion models have made great progress in generative modeling and image classification, existing models primarily focus on single-modality and single-client control; that is, the diffusion process is driven by a single modality on a single computing node. To facilitate the secure fusion of heterogeneous data from
clients, it is necessary to enable distributed multi-modal control, such as
merging the hyperspectral data of organization A and the LiDAR data of
organization B privately on each base station client. In this study, we propose
a multi-modal collaborative diffusion federated learning framework called
FedDiff. Our framework establishes a dual-branch diffusion-based feature extraction setup in which the data of the two modalities are fed into separate encoder branches. Our key insight is that diffusion models driven by different modalities are inherently complementary in their latent denoising steps, on which bilateral connections can be built. Considering the challenge of
private and efficient communication between multiple clients, we embed the
diffusion model into the federated learning communication structure, and
introduce a lightweight communication module. Qualitative and quantitative
experiments validate the superiority of our framework in terms of image quality
and conditional consistency.
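The following is an illustrative sketch, under assumed layer names and sizes rather than the authors' implementation, of a dual-branch extractor in which hyperspectral and LiDAR branches exchange information through bilateral connections, and only a small communication module's parameters are shared between clients to keep traffic light.

```python
# Hedged sketch of a dual-branch fusion model with a lightweight shared module.
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    def __init__(self, hsi_bands: int, lidar_bands: int, hidden: int = 32):
        super().__init__()
        self.hsi_enc = nn.Conv2d(hsi_bands, hidden, 3, padding=1)
        self.lidar_enc = nn.Conv2d(lidar_bands, hidden, 3, padding=1)
        # Bilateral connections: each branch receives a projection of the other.
        self.hsi_from_lidar = nn.Conv2d(hidden, hidden, 1)
        self.lidar_from_hsi = nn.Conv2d(hidden, hidden, 1)
        # Lightweight module whose weights are the only ones exchanged (assumption).
        self.comm = nn.Conv2d(2 * hidden, hidden, 1)

    def forward(self, hsi, lidar):
        h, l = self.hsi_enc(hsi), self.lidar_enc(lidar)
        h = h + self.hsi_from_lidar(l)          # bilateral connection
        l = l + self.lidar_from_hsi(h)
        return self.comm(torch.cat([h, l], dim=1))

def shared_payload(model: DualBranchFusion) -> dict:
    """Only the lightweight communication module is sent to the server."""
    return {k: v for k, v in model.state_dict().items() if k.startswith("comm.")}
```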
Sensitized Photooxygenation of Cholesterol and Pseudocholesterol Derivatives via Singlet Oxygen
3-Substituted cholesterols and 7-substituted pseudocholesterols undergo a facile photooxygenation sensitized by 9,10-dicyanoanthracene (DCA) and lumiflavin (LF) to give similar, oppositely positioned enol derivatives. Both steroids showed the same reaction pattern associated with their endocyclic 5- and 4-olefin units, respectively. The reaction is proposed to proceed via the ene reaction of singlet oxygen and subsequent rearrangement of the initially formed 5α-hydroperoxides.