
    Local convexity inspired low-complexity non-coherent signal detector for nano-scale molecular communications

    Molecular communications via diffusion (MCvD) is a relatively new form of wireless data transfer with especially attractive characteristics for nanoscale applications. Due to the nature of diffusive propagation, one of the key challenges is to mitigate the inter-symbol interference (ISI) caused by the long tail of the channel response. Traditional coherent detectors rely on accurate channel estimates and incur high computational complexity; both constraints make coherent detection unrealistic for MCvD systems. In this paper, we propose a low-complexity, non-coherent signal detector that essentially exploits the local convexity of the diffusive channel response. A threshold estimation mechanism is proposed to detect signals blindly and to adapt to channel variations. Compared with other non-coherent detectors, the proposed algorithm can operate at high data rates and suppress ISI from a large number of previous symbols. Numerical results demonstrate that the ISI is effectively suppressed while the complexity is kept low, since only summation operations are required. As a result, the proposed non-coherent scheme offers a path to low-complexity molecular communications, especially for nanoscale applications with limited computation and energy budgets.
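
    As a rough illustration of how a convexity-style statistic could drive a non-coherent decision, the sketch below uses the discrete second-order difference of three samples taken within one symbol slot. This is a generic sketch under assumed sampling and thresholding, not the detector or threshold-estimation mechanism proposed in the paper.

```python
import numpy as np

def convexity_statistic(samples):
    """Discrete second-order difference of three in-slot samples.

    Positive values mean the sampled concentration is locally convex,
    negative values mean it is locally concave.
    """
    s = np.asarray(samples, dtype=float)
    return s[0] - 2.0 * s[1] + s[2]

def detect_bit(samples, threshold):
    """Illustrative non-coherent decision rule (an assumption, not the
    paper's rule): declare '1' when the in-slot samples bend more sharply
    than a blindly estimated threshold."""
    return 1 if abs(convexity_statistic(samples)) > threshold else 0
```

    In practice the threshold would be estimated blindly from recent received samples, as the abstract describes, rather than fixed in advance.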

    Keypoint-Augmented Self-Supervised Learning for Medical Image Segmentation with Limited Annotation

    Pretraining CNN models (e.g., UNet) through self-supervision has become a powerful approach to facilitating medical image segmentation under low-annotation regimes. Recent contrastive learning methods encourage similar global representations when the same image undergoes different transformations, or enforce invariance across different image/patch features that are intrinsically correlated. However, CNN-extracted global and local features are limited in capturing the long-range spatial dependencies that are essential in biological anatomy. To this end, we present a keypoint-augmented fusion layer that extracts representations preserving both short- and long-range self-attention. In particular, we augment the CNN feature map at multiple scales with an additional input that learns long-range spatial self-attention among localized keypoint features. Further, we introduce both global and local self-supervised pretraining for the framework. At the global scale, we obtain global representations from the bottleneck of the UNet and by aggregating multiscale keypoint features; these global features are then regularized through image-level contrastive objectives. At the local scale, we define a distance-based criterion to first establish correspondences among keypoints and then encourage similarity between their features. Through extensive experiments on both MRI and CT segmentation tasks, we demonstrate the architectural advantages of the proposed method over both CNN- and Transformer-based UNets when all architectures are trained from randomly initialized weights. With the proposed pretraining strategy, our method further outperforms existing SSL methods, producing more robust self-attention and achieving state-of-the-art segmentation results. The code is available at https://github.com/zshyang/kaf.git. Comment: Camera-ready for NeurIPS 2023.
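
    The image-level contrastive regularization mentioned above can be illustrated with a standard InfoNCE-style loss between two augmented views of the same batch. This is a generic sketch of such an objective, not the paper's exact loss or keypoint-aggregation scheme.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Generic image-level contrastive (InfoNCE) loss.

    z1, z2: (N, D) global features of the same N images under two
    different augmentations; matching rows are treated as positives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)       # positives on the diagonal
```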

    Low-complexity non-coherent signal detection for nano-scale molecular communications

    Nano-scale molecular communication is a viable way of exchanging information between nano-machines. In this letter, a low-complexity, non-coherent signal detection technique is proposed to mitigate inter-symbol interference (ISI) and additive noise. In contrast to existing coherent detection methods of high complexity, the proposed non-coherent signal detector is more practical when the channel conditions are hard to acquire accurately or are hidden from the receiver. The proposed scheme employs the concentration difference to detect ISI-corrupted signals, and we demonstrate that it suppresses ISI effectively. The concentration difference is a stable characteristic, irrespective of the diffusion channel conditions. In terms of complexity, by excluding matrix operations and likelihood calculations, the new detection scheme is particularly suitable for nano-scale molecular communication systems with a small energy budget or limited computation resources.
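
    A minimal sketch of a difference-based decision, assuming the receiver counts molecules at consecutive sampling instants; the sampling scheme and threshold below are illustrative assumptions, not the letter's exact detector.

```python
def detect_by_difference(prev_count, curr_count, threshold=0):
    """Illustrative difference-based non-coherent decision.

    A rising molecule count between consecutive sampling instants suggests
    a freshly released pulse ('1'); a falling or flat count suggests '0'.
    `threshold` absorbs counting noise and is an assumed parameter.
    """
    return 1 if (curr_count - prev_count) > threshold else 0
```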

    The First Verification Test of Space-Ground Collaborative Intelligence via Cloud-Native Satellites

    Recent advancements in satellite technologies and the declining cost of access to space have led to the emergence of large satellite constellations in Low Earth Orbit. However, these constellations often rely on a bent-pipe architecture, resulting in high communication costs. Existing onboard inference architectures suffer from low accuracy and from inflexibility in the deployment and management of in-orbit applications. To address these challenges, we propose a cloud-native satellite design specifically tailored for Earth Observation tasks, enabling diverse computing paradigms. In this work, we present a case study of a satellite-ground collaborative inference system deployed in the Tiansuan constellation, demonstrating a 50% accuracy improvement and a 90% data reduction. Our work also sheds light on the in-orbit energy budget, showing that in-orbit computing accounts for 17% of the total onboard energy consumption. Our approach represents a significant step toward cloud-native satellites, aiming to improve the accuracy of in-orbit computing while simultaneously reducing communication cost. Comment: Accepted by China Communications.
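
    One way to picture the satellite-ground collaboration is an onboard screening step that decides which captured tiles are worth downlinking for full ground-side inference. The sketch below is purely illustrative; the scoring model, threshold, and interfaces are assumptions, not the system deployed on the Tiansuan constellation.

```python
def select_for_downlink(tiles, onboard_score, threshold=0.5):
    """Illustrative onboard screening for collaborative inference.

    `onboard_score` stands in for a lightweight in-orbit model; only tiles
    it judges interesting are downlinked, the rest are dropped to save
    bandwidth. Both the callable and the threshold are assumptions.
    """
    return [tile for tile in tiles if onboard_score(tile) >= threshold]
```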

    Federated NLP in Few-shot Scenarios

    Natural language processing (NLP) underpins a rich set of mobile applications. To support various language understanding tasks, a foundation NLP model is often fine-tuned in a federated, privacy-preserving setting (FL). This process currently relies on at least hundreds of thousands of labeled training samples from mobile clients; yet mobile users often lack the willingness or the knowledge to label their data. Such an inadequacy of data labels is known as a few-shot scenario, and it has become the key blocker for mobile NLP applications. For the first time, this work investigates federated NLP in the few-shot scenario (FedFSL). By retrofitting algorithmic advances in pseudo labeling and prompt learning, we first establish a training pipeline that delivers competitive accuracy when only 0.05% (fewer than 100) of the training samples are labeled and the rest are unlabeled. To instantiate the workflow, we further present a system, FFNLP, that addresses the high execution cost with novel designs: (1) curriculum pacing, which injects pseudo labels into the training workflow at a rate commensurate with the learning progress; (2) representational diversity, a mechanism that selects the most learnable data, so that pseudo labels are generated only for those samples; (3) co-planning of a model's training depth and layer capacity. Together, these designs reduce the training delay, client energy, and network traffic by up to 46.0×, 41.2×, and 3000.0×, respectively. Through algorithm/system co-design, FFNLP demonstrates that FL can apply to challenging settings where most training samples are unlabeled.
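
    The curriculum-pacing idea, admitting pseudo labels at a rate tied to learning progress, can be sketched as a confidence threshold that relaxes across training rounds. The schedule, threshold values, and interfaces below are assumptions, not FFNLP's actual design.

```python
import torch

def paced_pseudo_labels(model, unlabeled_batch, round_idx, total_rounds,
                        base_threshold=0.95, relax=0.15):
    """Illustrative curriculum pacing for pseudo labeling.

    Early rounds keep only highly confident predictions; later rounds relax
    the threshold as the model is presumed to improve. All constants here
    are assumptions for the sketch.
    """
    progress = round_idx / max(total_rounds, 1)
    threshold = base_threshold - relax * progress
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_batch), dim=-1)
    confidence, labels = probs.max(dim=-1)
    keep = confidence >= threshold
    return unlabeled_batch[keep], labels[keep]
```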

    Towards Practical Few-shot Federated NLP

    Transformer-based pre-trained models have emerged as the predominant solution for natural language processing (NLP). Fine-tuning such pre-trained models for downstream tasks often requires a considerable amount of labeled private data. In practice, private data is often distributed across heterogeneous mobile devices and may be prohibited from being uploaded. Moreover, well-curated labeled data is often scarce, presenting an additional challenge. To address these challenges, we first introduce a data generator for federated few-shot learning tasks, which captures both the quantity and the skewness of scarce labeled data in a realistic setting. Subsequently, we propose AUG-FedPrompt, a prompt-based federated learning system that exploits abundant unlabeled data for data augmentation. Our experiments indicate that AUG-FedPrompt can perform on par with full-set fine-tuning with a limited amount of labeled data; however, such competitive performance comes at a significant system cost. Comment: EuroSys '23 workshop.
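
    The federated few-shot setting above concerns both the quantity and the skew of scarce labels. A common way to emulate it, sketched below, is to split a tiny labeled budget across clients with a Dirichlet-skewed class mix; this is a generic construction, not the paper's actual data generator.

```python
import numpy as np

def skewed_fewshot_split(labels, num_clients, shots_per_client,
                         alpha=0.5, seed=0):
    """Illustrative skewed few-shot federated split.

    Each client receives a small labeled budget whose class mix follows a
    Dirichlet(alpha) prior, mimicking label scarcity and skew. All
    parameters are assumptions for the sketch.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    pools = {c: list(np.where(labels == c)[0]) for c in classes}
    clients = []
    for _ in range(num_clients):
        mix = rng.dirichlet(alpha * np.ones(len(classes)))
        counts = rng.multinomial(shots_per_client, mix)
        picked = []
        for c, k in zip(classes, counts):
            take = min(int(k), len(pools[c]))
            picked.extend(pools[c].pop() for _ in range(take))
        clients.append(picked)
    return clients
```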

    Accelerating Vertical Federated Learning

    Privacy, security, and data-governance constraints rule out brute-force integration of the cross-silo data that accumulates with the development of the Internet of Things. Federated learning is proposed so that all parties can collaboratively complete the training task while the data never leaves its local silo. Vertical federated learning is a specialization of federated learning for the setting where features are distributed across parties. To preserve privacy, homomorphic encryption is applied to enable computation on encrypted data without decryption. Nevertheless, alongside its strong security guarantee, homomorphic encryption brings extra communication and computation overhead. In this paper, we comprehensively and numerically analyze the current bottlenecks of vertical federated learning under homomorphic encryption. We propose a straggler-resilient and computation-efficient accelerating system that reduces the communication overhead in heterogeneous scenarios by up to 65.26% and the computation overhead caused by homomorphic encryption by up to 40.66%. Our system improves the robustness and efficiency of the current vertical federated learning framework without loss of security.
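
    To make the homomorphic-encryption overhead concrete, the snippet below shows additively homomorphic aggregation of partial results using the python-paillier (phe) library. It is a minimal sketch of the primitive that vertical federated learning builds on, not the accelerating system proposed in the paper; the party roles and values are assumptions.

```python
# Minimal sketch: additively homomorphic aggregation with python-paillier.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Party B encrypts its partial forward results (e.g., partial logits).
partial_b = [0.7, -1.2, 0.3]
encrypted_b = [public_key.encrypt(v) for v in partial_b]

# Party A adds its own partial results under encryption, never seeing B's values.
partial_a = [0.1, 0.4, -0.5]
encrypted_sum = [eb + a for eb, a in zip(encrypted_b, partial_a)]

# Only the private-key holder (e.g., a coordinator) decrypts the aggregate.
aggregate = [private_key.decrypt(c) for c in encrypted_sum]
print(aggregate)  # approximately [0.8, -0.8, -0.2]
```

    The overheads the paper targets arise because these encrypt, ciphertext-add, and decrypt operations are orders of magnitude more expensive than the corresponding plaintext arithmetic, and because ciphertexts are far larger than the values they encode.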