Kernel Code Integrity Protection Based on a Virtualized Memory Architecture
Kernel rootkits pose significant challenges to defensive techniques because they run at the highest privilege level, alongside the protection systems themselves. Modern architectural approaches such as NX protection have been used to mitigate attacks; however, determined attackers can still bypass these defenses with specially crafted payloads. In this paper, we propose a virtualized Harvard memory architecture to address the kernel code integrity problem: it virtually separates code fetches from data accesses to kernel code, thereby preventing kernel code modifications. We have implemented the proposed mechanism in a commodity operating system, and the experimental results show that our approach is effective and incurs very low overhead.
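The split between code fetch and data access can be illustrated with a minimal toy model, assuming a hypervisor that keeps two views of each kernel page (one served to instruction fetches, one to data accesses). All class and method names below are illustrative, not from the paper.

```python
# Toy model of a virtualized Harvard architecture: instruction fetches are
# served from a pristine snapshot, while data reads/writes go to a separate
# copy. A rootkit's write through the data path never reaches executed code.

class HarvardSplitMemory:
    def __init__(self, kernel_code: bytes):
        self._fetch_view = bytes(kernel_code)     # immutable fetch view
        self._data_view = bytearray(kernel_code)  # mutable data view

    def fetch(self, addr: int, size: int) -> bytes:
        """Instruction-fetch path: always the pristine copy."""
        return self._fetch_view[addr:addr + size]

    def read(self, addr: int, size: int) -> bytes:
        """Data-read path."""
        return bytes(self._data_view[addr:addr + size])

    def write(self, addr: int, payload: bytes) -> None:
        """Data-write path: only mutates the data view."""
        self._data_view[addr:addr + len(payload)] = payload


mem = HarvardSplitMemory(b"\x90\x90\xc3")   # original kernel code
mem.write(0, b"\xcc\xcc\xcc")               # attacker patches via data access
assert mem.fetch(0, 3) == b"\x90\x90\xc3"   # executed code is unchanged
assert mem.read(0, 3) == b"\xcc\xcc\xcc"    # the write is visible only as data
```

In a real system this separation would be enforced by hypervisor-managed shadow page tables rather than Python objects; the toy only captures the invariant.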
Anatomy-specific classification of medical images using deep convolutional nets
Automated classification of human anatomy is an important prerequisite for
many computer-aided diagnosis systems. The spatial complexity and variability
of anatomy throughout the human body makes classification difficult. "Deep
learning" methods such as convolutional networks (ConvNets) outperform other
state-of-the-art methods in image classification tasks. In this work, we
present a method for organ- or body-part-specific anatomical classification of
medical images acquired using computed tomography (CT) with ConvNets. We train
a ConvNet, using 4,298 separate axial 2D key-images to learn 5 anatomical
classes. Key-images were mined from a hospital PACS archive, using a set of
1,675 patients. We show that a data augmentation approach can help to enrich
the data set and improve classification performance. Using ConvNets and data
augmentation, we achieve anatomy-specific classification error of 5.9 % and
area-under-the-curve (AUC) values of an average of 0.998 in testing. We
demonstrate that deep learning can be used to train very reliable and accurate
classifiers that could initialize further computer-aided diagnosis.Comment: Presented at: 2015 IEEE International Symposium on Biomedical
Imaging, April 16-19, 2015, New York Marriott at Brooklyn Bridge, NY, US
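The data-augmentation idea can be sketched as follows: random crops and horizontal flips of each 2D key-image multiply the effective training set. The crop size and variant count below are hypothetical parameters; the paper's exact augmentation pipeline may differ.

```python
# Minimal data-augmentation sketch for 2D key-images (pure Python, seeded
# for reproducibility). Each call yields several randomly cropped and
# optionally flipped variants of one image.
import random

def augment(image, crop_size, n_variants, seed=0):
    """Return n_variants randomly cropped / flipped copies of a 2D image."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    out = []
    for _ in range(n_variants):
        top = rng.randint(0, h - crop_size)
        left = rng.randint(0, w - crop_size)
        crop = [row[left:left + crop_size]
                for row in image[top:top + crop_size]]
        if rng.random() < 0.5:               # horizontal flip half the time
            crop = [row[::-1] for row in crop]
        out.append(crop)
    return out

ct_slice = [[r * 16 + c for c in range(16)] for r in range(16)]  # fake slice
variants = augment(ct_slice, crop_size=12, n_variants=8)
assert len(variants) == 8
assert all(len(v) == 12 and len(v[0]) == 12 for v in variants)
```

Each original key-image thus contributes several training samples, which is one way augmentation enriches a modest data set before ConvNet training.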
Towards Runtime Customizable Trusted Execution Environment on FPGA-SoC
Processing sensitive data and deploying well-designed Intellectual Property
(IP) cores on remote Field Programmable Gate Array (FPGA) are prone to private
data leakage and IP theft. One effective solution is constructing Trusted
Execution Environment (TEE) on FPGA-SoCs (FPGA System on Chips). Researchers
have integrated this type of TEE with Trusted Platform Module (TPM)-based
trusted boot, denoted FPGA-SoC tbTEE. However, there has been no effort toward
secure and trusted runtime customization of the FPGA-SoC TEE. This paper
extends FPGA-SoC tbTEE to build a Runtime Customizable TEE (RCTEE) on
FPGA-SoC by adding three major components: 1) CrloadIP, which can load an IP
core at runtime such
that RCTEE can be adjusted dynamically and securely; 2) CexecIP, which can not
only execute an IP core without modifying the operating system of FPGA-SoC TEE,
but also prevent insider attacks from executing IPs deployed in RCTEE; 3)
CremoAT, which can provide the newly measured RCTEE state and establish a
secure and trusted communication path between remote verifiers and RCTEE. We
conduct a security analysis of RCTEE and its performance evaluation on Xilinx
Zynq UltraScale+ XCZU15EG 2FFVB1156 MPSoC.
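The measurement idea behind attesting a runtime-customizable TEE (the role CremoAT plays above) can be sketched as a TPM-style hash chain: each IP core loaded at runtime extends a measurement register, so a remote verifier can replay the expected load sequence and detect tampering. The function name and register layout below are illustrative assumptions, not the paper's design.

```python
# PCR-style measurement chain: new_state = H(old_state || H(loaded_image)).
# Any change to any loaded IP core changes the final measurement.
import hashlib

def extend(measurement: bytes, ip_core_image: bytes) -> bytes:
    """Extend a measurement register with the hash of a loaded IP core."""
    digest = hashlib.sha256(ip_core_image).digest()
    return hashlib.sha256(measurement + digest).digest()

boot_state = b"\x00" * 32                      # register after trusted boot
state = extend(boot_state, b"bitstream: crypto accelerator v1")
state = extend(state, b"bitstream: dma engine v2")

# Verifier replays the expected sequence and compares.
expected = extend(extend(boot_state, b"bitstream: crypto accelerator v1"),
                  b"bitstream: dma engine v2")
assert state == expected

# A patched IP core yields a different final measurement.
tampered = extend(extend(boot_state, b"bitstream: crypto accelerator v1"),
                  b"bitstream: dma engine v2 (patched)")
assert tampered != state
```

The order-sensitivity of the chain is deliberate: it binds the verifier's view to the exact sequence of runtime customizations, not just their set.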
CRS-FL: Conditional Random Sampling for Communication-Efficient and Privacy-Preserving Federated Learning
Federated Learning (FL), a privacy-oriented distributed ML paradigm, has been
gaining great interest in the Internet of Things because of its capability to
protect participants' data privacy. Studies have been conducted to address
challenges in standard FL, including communication efficiency and privacy
preservation, but they cannot trade off communication efficiency against model
accuracy while guaranteeing privacy.
This paper proposes a Conditional Random Sampling (CRS) method and implements
it into the standard FL settings (CRS-FL) to tackle the above-mentioned
challenges. CRS uses a stochastic coefficient based on Poisson sampling to
increase the probability of obtaining zero gradients without introducing bias,
and thereby reduces communication overhead effectively without degrading model
accuracy. Moreover, we theoretically derive relaxed Local Differential Privacy
(LDP) guarantee conditions for CRS. Extensive experimental results
indicate that: (1) for communication efficiency, CRS-FL outperforms existing
methods in accuracy per transmitted byte, with no loss of model accuracy at
sampling ratios (# sampling size / # model size) above 7%; (2) for privacy
preservation, CRS-FL suffers no accuracy reduction relative to LDP baselines
while maintaining this efficiency, and even exceeds them in model accuracy
under higher sampling ratios.
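The core sampling idea can be sketched as follows: each gradient coordinate is kept with probability p (Poisson sampling) and rescaled by 1/p, so the sparsified gradient is unbiased in expectation while the zeroed coordinates need not be transmitted. The flat Bernoulli rate below is an assumption; the paper's stochastic coefficient is more elaborate.

```python
# Unbiased gradient sparsification via per-coordinate Poisson sampling:
# keep each coordinate with probability p, scale survivors by 1/p, so
# E[output_i] = p * (g_i / p) + (1 - p) * 0 = g_i.
import random

def poisson_sample(gradient, p, seed=0):
    """Return a sparsified, inverse-probability-scaled copy of gradient."""
    rng = random.Random(seed)
    return [g / p if rng.random() < p else 0.0 for g in gradient]

grad = [0.5, -1.2, 0.3, 0.0, 2.0, -0.7]
sparse = poisson_sample(grad, p=0.5)
assert len(sparse) == len(grad)
# Every surviving coordinate is the original value scaled by 1/p = 2.
assert all(s == 0.0 or abs(s - g / 0.5) < 1e-12
           for s, g in zip(sparse, grad))
```

Only the nonzero coordinates (indices plus scaled values) need to be sent, which is where the communication savings come from.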
PA-iMFL: Communication-Efficient Privacy Amplification Method against Data Reconstruction Attack in Improved Multi-Layer Federated Learning
Recently, big data has seen explosive growth in the Internet of Things (IoT).
Multi-layer Federated Learning (MFL) based on a cloud-edge-end architecture can promote model
training efficiency and model accuracy while preserving IoT data privacy. This
paper considers an improved MFL (iMFL), where edge-layer devices own private
data and can join the training process. iMFL can improve edge resource
utilization and
also alleviate the strict requirement of end devices, but suffers from the
issues of Data Reconstruction Attack (DRA) and unacceptable communication
overhead. This paper aims to address these issues with iMFL. We propose a
Privacy Amplification scheme on iMFL (PA-iMFL). Differing from standard MFL, we
design privacy operations in end and edge devices after local training,
comprising three sequential components: local differential privacy with the
Laplace mechanism, privacy amplification by subsampling, and gradient sign
reset.
Benefiting from these privacy operations, PA-iMFL reduces communication
overhead while preserving privacy. Extensive results demonstrate that, against
State-Of-The-Art (SOTA) DRAs, PA-iMFL can effectively mitigate private data
leakage and reach the same level of protection capability as the SOTA defense
model. Moreover, by adopting privacy operations in edge devices, PA-iMFL
improves communication efficiency by up to 2.8x over the SOTA compression
method without compromising model accuracy.
Comment: 12 pages, 11 figures
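The three sequential privacy operations described above can be sketched per gradient coordinate: Laplace-mechanism LDP noise, amplification by subsampling, and gradient sign reset (only signs are transmitted). Parameter names, the sensitivity choice, and the exact ordering details are assumptions for illustration.

```python
# Sketch of a three-stage privacy pipeline: (1) subsample coordinates,
# (2) add Laplace noise for LDP, (3) reset each kept coordinate to its sign.
import math
import random

def laplace_noise(scale, rng):
    """Laplace(0, scale) as the difference of two exponentials."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def privatize(gradient, epsilon, sample_rate, seed=0):
    rng = random.Random(seed)
    scale = 2.0 / epsilon                       # sensitivity 2 for grads in [-1, 1]
    out = []
    for g in gradient:
        if rng.random() >= sample_rate:         # privacy amplification subsample
            out.append(0.0)
            continue
        noisy = g + laplace_noise(scale, rng)   # Laplace-mechanism LDP
        out.append(math.copysign(1.0, noisy))   # gradient sign reset
    return out

priv = privatize([0.4, -0.9, 0.1, 0.7], epsilon=1.0, sample_rate=0.75)
assert len(priv) == 4
assert all(v in (-1.0, 0.0, 1.0) for v in priv)
```

The sign reset doubles as compression: each transmitted coordinate carries at most two bits, which is consistent with the communication savings the abstract reports.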