MLAN: Multi-Level Adversarial Network for Domain Adaptive Semantic Segmentation
Recent progress in domain adaptive semantic segmentation demonstrates the
effectiveness of adversarial learning (AL) in unsupervised domain adaptation.
However, most adversarial-learning-based methods align source and target
distributions at a global image level but neglect the inconsistency around
local image regions. This paper presents a novel multi-level adversarial
network (MLAN) that aims to address inter-domain inconsistency at both the
global image level and the local region level. MLAN has two novel designs,
namely, region-level adversarial learning (RL-AL) and co-regularized
adversarial learning (CR-AL). Specifically, RL-AL explicitly models
prototypical regional context-relations in the feature space of a labelled
source domain and transfers them to an unlabelled target domain via
adversarial learning. CR-AL fuses region-level AL and image-level AL
optimally via mutual regularization. In addition, we design a multi-level
consistency map that can effectively guide domain adaptation in both the
input space (i.e., image-to-image translation) and the output space (i.e.,
self-training). Extensive experiments show that MLAN consistently outperforms
the state-of-the-art by a large margin across multiple datasets.
Comment: Submitted to P
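As an illustrative sketch (not the paper's actual formulation), image-level and region-level adversarial losses might be combined as below; `lam` is a hypothetical balancing weight, and the mutual-regularization machinery of CR-AL is omitted:

```python
import math

def bce(p, y):
    # binary cross-entropy for one discriminator probability p and domain label y
    eps = 1e-7
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))

def multi_level_adv_loss(image_score, region_scores, lam=0.5, target=1):
    """Adversarial loss for the segmentation network: fool both the image-level
    discriminator and the per-region discriminators (label the target-domain
    prediction as 'source'). `lam` is a hypothetical weight, not from the paper."""
    image_loss = bce(image_score, target)
    region_loss = sum(bce(s, target) for s in region_scores) / len(region_scores)
    return image_loss + lam * region_loss
```

A confidently "source-like" prediction (scores near 1) yields a small loss, while a clearly "target-like" one is penalised at both levels.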
Privacy-Preserving Collaborative Sharing for Sharing Economy in Fog-Enhanced IoT
Fog-enhanced Internet of Things (IoT) has been widely deployed in the field of collaboration and sharing. However, participants are concerned about the fairness of cost-sharing and the privacy of their data, owing to complicated collaborative sharing and untrusted fog nodes in the network. In this paper, a novel privacy-preserving collaborative sharing protocol for fog-enhanced IoT is proposed. Based on the Paillier cryptosystem, the protocol guarantees that only a coarse aggregate of users’ requests is used to achieve fair cost-sharing, without any communication between users. In addition, with the proposed protocol, the data stored in a device can be accurately transmitted to a user in accordance with that user’s request, without prying into the user’s personal schedule. A thorough security analysis demonstrates the security of the protocol, and extensive experiments and comparisons with existing schemes indicate that the proposed protocol is feasible.
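The additive homomorphism of the Paillier cryptosystem is what allows an aggregator to combine encrypted requests without seeing individual values. A minimal toy sketch (insecure key sizes, for illustration only; the protocol's actual aggregation steps are not reproduced here):

```python
import math
import random

def paillier_keygen(p, q):
    # toy primes for illustration only; real deployments use >= 2048-bit moduli
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    # L(x) = (x - 1) // n, defined on values congruent to 1 mod n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)          # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

pub, priv = paillier_keygen(17, 19)
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
# multiplying ciphertexts adds plaintexts: Dec(c1 * c2 mod n^2) = 12 + 30
assert decrypt(priv, (c1 * c2) % (pub[0] ** 2)) == 42
```

The final assertion is the property the protocol relies on: the server can sum users' requests under encryption and learn only the aggregate.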
Prognostic Value of Soluble Suppression of Tumorigenicity 2 in Chronic Kidney Disease Patients: A Meta-Analysis
Objective. Previous studies have reported controversial results on the prognostic role of soluble suppression of tumorigenicity 2 (sST2) in chronic kidney disease (CKD). Therefore, we conducted this meta-analysis to assess the association between sST2 and all-cause mortality, cardiovascular disease (CVD) mortality, and CVD events in patients with CKD. Methods. Published studies on the association of sST2 with all-cause mortality, CVD mortality, and CVD events were searched in PubMed and Embase through August 2020. We pooled hazard ratios (HRs) comparing high versus low levels of sST2; subgroup analyses based on treatment, continent, diabetes mellitus (DM) proportion, and sample size were also performed. Results. Fifteen eligible studies with 11,063 CKD patients were included in our meta-analysis. An elevated level of sST2 was associated with an increased risk of all-cause mortality (HR 2.05; 95% confidence interval (CI), 1.51–2.78), CVD mortality (HR 1.68; 95% CI, 1.35–2.09), total CVD events (HR 1.88; 95% CI, 1.26–2.80), and heart failure (HF) (HR 1.35; 95% CI, 1.11–1.64). Subgroup analyses based on continent, DM percentage, and sample size showed that these factors did not influence the prognostic value of sST2 levels for all-cause mortality. Conclusions. Our results show that high levels of sST2 predict all-cause mortality, CVD mortality, and CVD events in CKD patients.
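The HR pooling described in Methods can be sketched with standard fixed-effect inverse-variance weighting, recovering each study's standard error from its 95% CI. The study values below are hypothetical, not data from this meta-analysis:

```python
import math

def pooled_hr(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of hazard ratios.
    studies: list of (hr, ci_lower, ci_upper) tuples from individual studies.
    Returns (pooled HR, pooled 95% CI lower, pooled 95% CI upper)."""
    num = den = 0.0
    for hr, lo, hi in studies:
        log_hr = math.log(hr)
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # SE of log-HR from the CI
        w = 1.0 / se ** 2                             # inverse-variance weight
        num += w * log_hr
        den += w
    m = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(m), math.exp(m - z * se_pooled), math.exp(m + z * se_pooled))
```

Random-effects pooling (as commonly used when heterogeneity is present) would additionally inflate each study's variance by a between-study component.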
Semantic-aware visual consistency network for fused image harmonisation
With a focus on integrated sensing, communication, and computation (ISCC) systems, multiple sensor devices collect information about different objects and upload it to data processing servers for fusion. Appearance gaps in composite images caused by distinct capture conditions can degrade visual quality and affect the accuracy of downstream image processing and analysis. The authors propose a fused-image harmonisation method that aims to eliminate appearance gaps among different objects. First, the authors modify a lightweight image harmonisation backbone and combine it with a pretrained segmentation model, in which the extracted semantic features are fed to both the encoder and the decoder. Then the authors implement a semantic-related background-to-foreground style transfer by leveraging spatial separation adaptive instance normalisation (SAIN). To better preserve the input semantic information, the authors design a simple and effective semantic-aware adaptive denormalisation (SADE) module. Experimental results demonstrate that the proposed method achieves competitive performance on the iHarmony4 dataset and benefits the harmonisation of fused images with incompatible appearance gaps.
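The background-to-foreground style transfer rests on adaptive instance normalisation, which re-scales foreground features to match background statistics. A minimal single-channel sketch (the spatial separation and learned components of SAIN are omitted):

```python
import statistics

def adain(content, style, eps=1e-5):
    """Adaptive instance normalisation on one flattened feature channel:
    normalise `content` to zero mean / unit std, then shift and scale it
    to the mean and std of `style` (e.g. foreground -> background stats)."""
    c_mean, c_std = statistics.fmean(content), statistics.pstdev(content)
    s_mean, s_std = statistics.fmean(style), statistics.pstdev(style)
    return [s_std * (x - c_mean) / (c_std + eps) + s_mean for x in content]
```

After the transform, the content channel carries the style channel's first- and second-order statistics while keeping its own spatial structure.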
Scale variance minimization for unsupervised domain adaptation in image segmentation
We focus on unsupervised domain adaptation (UDA) in image segmentation. Existing works address this challenge largely by aligning inter-domain representations, which may lead to over-alignment that impairs the semantic structures of images and, in turn, target-domain segmentation performance. We design a scale variance minimization (SVMin) method that enforces intra-image semantic structure consistency in the target domain. Specifically, SVMin leverages the intrinsic property that a simple scale transformation has little effect on the semantic structures of images. It thus introduces a form of supervision in the target domain by imposing a scale-invariance constraint while learning to segment an image and its scale transformation concurrently. Additionally, SVMin is complementary to most existing UDA techniques and can be easily incorporated with a consistent performance boost but few extra parameters. Extensive experiments show that our method achieves superior domain adaptive segmentation performance compared with the state-of-the-art. Preliminary studies show that SVMin can be easily adapted for UDA-based image classification.
Submitted/Accepted version. This research was conducted at Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU), which is a collaboration between Singapore Telecommunications Limited (Singtel) and Nanyang Technological University (NTU) that is funded by the Singapore Government through the Industry Alignment Fund - Industry Collaboration Projects Grant.
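The scale-invariance constraint can be sketched as a consistency loss between the prediction for an image and the prediction recovered from its rescaled copy. This toy 1-D version (not the paper's implementation) uses nearest-neighbour rescaling:

```python
def rescale(pred, factor):
    """Nearest-neighbour down- then up-sampling of a 1-D prediction map,
    standing in for the spatial scale transformation of an image."""
    down = pred[::factor]
    return [down[min(i // factor, len(down) - 1)] for i in range(len(pred))]

def scale_consistency_loss(pred, pred_from_scaled):
    """Mean absolute difference between the original prediction and the
    prediction obtained from the scale-transformed input: zero when the
    segmenter is perfectly scale-invariant."""
    assert len(pred) == len(pred_from_scaled)
    return sum(abs(a - b) for a, b in zip(pred, pred_from_scaled)) / len(pred)
```

In practice the second argument would be the network's prediction on the rescaled image (resized back), so minimising this loss supervises the target domain without labels.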
CAPNet: Context and Attribute Perception for Pedestrian Detection
With a focus on practical applications in the real world, a number of challenges impede the progress of pedestrian detection. Scale variance, cluttered backgrounds, and ambiguous pedestrian features are the main culprits behind detection failures. According to existing studies, consistent feature fusion, semantic context mining, and inherent pedestrian attributes appear to be feasible solutions. In this paper, to tackle these prevalent problems of pedestrian detection, we propose an anchor-free pedestrian detector named context and attribute perception network (CAPNet). In particular, we first generate features with consistent, well-defined semantics and local details by introducing a feature extraction module with a multi-stage, parallel-stream structure. Then, a global feature mining and aggregation (GFMA) network is proposed to implicitly reconfigure, reassign, and aggregate features so as to suppress irrelevant features in the background. Finally, to bring more heuristic rules to the network, we improve the detection head with an attribute-guided multiple receptive field (AMRF) module, leveraging pedestrian shape as an attribute to guide learning. Experimental results demonstrate that introducing context and attribute perception greatly facilitates detection. As a result, CAPNet achieves new state-of-the-art performance on the Caltech and CityPersons datasets.
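The idea behind multiple receptive fields in the AMRF head can be illustrated with parallel 1-D convolution branches of different kernel sizes whose outputs are averaged. This is a hypothetical sketch of the general technique, not CAPNet's architecture:

```python
def conv1d(x, k):
    """Zero-padded 1-D convolution of signal x with an odd-length kernel k."""
    pad = len(k) // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(k[j] * xp[i + j] for j in range(len(k))) for i in range(len(x))]

def multi_receptive_field(x, kernels):
    """Run parallel branches with different receptive fields (kernel sizes)
    over the same features and average them, so the output mixes fine local
    detail with wider context."""
    outs = [conv1d(x, k) for k in kernels]
    return [sum(vals) / len(outs) for vals in zip(*outs)]
```

A 1x1-style branch preserves local evidence while a wider averaging branch contributes context; in a real detector the branch outputs would be learned convolutions fused by the head.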