13 research outputs found
Blockchain-based Charging Coordination Mechanism for Smart Grid Energy Storage Units
Energy storage units (ESUs) enable several attractive features of modern
smart grids such as enhanced grid resilience, effective demand response, and
reduced bills. However, uncoordinated charging of ESUs stresses the power
system and can lead to a blackout. On the other hand, existing charging
coordination mechanisms suffer from several limitations. First, the need for a
central charging coordinator (CC) presents a single point of failure that
jeopardizes the effectiveness of the charging coordination. Second, a
transparent charging coordination mechanism does not exist, so users cannot
verify whether the CC coordinates charging requests among them honestly and
fairly. Third, existing mechanisms overlook the privacy concerns of
the involved customers. To address these limitations, in this paper, we
leverage the blockchain and smart contracts to build a decentralized charging
coordination mechanism without the need for a centralized charging coordinator.
First, ESUs use tokens to authenticate themselves anonymously to the
blockchain. Then each ESU sends a charging request that contains its
State-of-Charge (SoC), Time-to-complete-charge (TCC) and amount of required
charging to the smart contract address on the blockchain. The smart contract
will then run the charging coordination mechanism in a self-executed manner
such that ESUs with the highest priorities are charged in the present time slot
while charging requests of lower priority ESUs are deferred to future time
slots. In this way, each ESU can make sure that charging schedules are computed
correctly. Finally, we have implemented the proposed mechanism on the Ethereum
test-bed blockchain, and our analysis shows that execution cost can be
acceptable in terms of gas consumption while enabling decentralized charging
coordination with increased transparency, reliability, and privacy preservation.
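The priority-based scheduling step can be sketched as follows. The abstract does not give the paper's exact priority function or field layout, so the formula below (lower SoC and tighter TCC imply higher priority) and all names are assumptions for illustration only:

```python
# Hypothetical sketch of the self-executing charging coordination: ESUs with
# the highest assumed priority are charged in the current slot, the rest are
# deferred. The priority formula is an assumption, not the paper's.

def schedule_charging(requests, slot_capacity):
    """requests: list of (esu_id, soc, tcc, amount), soc in [0, 1] is the
    State-of-Charge, tcc the Time-to-Complete-Charge in slots."""
    # Assumed priority: (1 - SoC) / TCC, i.e. emptier and more urgent first
    ranked = sorted(requests,
                    key=lambda r: (1.0 - r[1]) / max(r[2], 1),
                    reverse=True)
    charged, deferred, used = [], [], 0.0
    for esu_id, soc, tcc, amount in ranked:
        if used + amount <= slot_capacity:   # capacity left in this slot
            charged.append(esu_id)
            used += amount
        else:                                # defer to a future slot
            deferred.append(esu_id)
    return charged, deferred

reqs = [("esu1", 0.2, 2, 10.0), ("esu2", 0.8, 5, 10.0), ("esu3", 0.1, 1, 10.0)]
charged, deferred = schedule_charging(reqs, slot_capacity=20.0)
```

In a smart-contract deployment this logic would run deterministically on-chain, which is what lets every ESU check that the schedule was computed correctly.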
Privacy-Preserving Smart Parking System Using Blockchain and Private Information Retrieval
Searching for available parking spaces is a major problem for drivers
especially in big crowded cities, causing traffic congestion and air pollution,
and wasting drivers' time. Smart parking systems are a novel solution to enable
drivers to have real-time parking information for pre-booking. However, current
smart parking requires drivers to disclose their private information, such as
desired destinations. Moreover, the existing schemes are centralized and
vulnerable to the bottleneck of the single point of failure and data breaches.
In this paper, we propose a distributed privacy-preserving smart parking system
using blockchain. We propose a consortium blockchain, created by different
parking-lot owners, to store their parking offers and ensure security,
transparency, and availability. To preserve drivers' location privacy,
we adopt a private information retrieval (PIR) technique to enable drivers to
retrieve parking offers from blockchain nodes privately, without revealing
which parking offers are retrieved. Furthermore, a short randomizable signature
is used to enable drivers to reserve available parking slots in an anonymous
manner. Besides, we introduce an anonymous payment system that cannot link
drivers to specific parking locations. Finally, our performance evaluations
demonstrate that the proposed scheme can preserve drivers' privacy with low
communication and computation overhead.
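The abstract does not name the specific PIR construction used, but the principle can be illustrated with a minimal two-server XOR-based PIR sketch (in the style of Chor et al.): each server sees a query that is individually uniformly random, so neither learns which parking offer the driver retrieves:

```python
import secrets

# Minimal two-server XOR PIR sketch (an illustration, not the paper's exact
# scheme). The driver sends q1 and q2 to two non-colluding servers; each
# query alone is a uniformly random bit vector, hiding the target index.

def pir_query(n, index):
    """Build two queries whose bit vectors differ only at `index`."""
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = list(q1)
    q2[index] ^= 1              # flip only the secret position
    return q1, q2

def pir_answer(db, query):
    """Server side: XOR together the records the query bits select."""
    out = 0
    for rec, bit in zip(db, query):
        if bit:
            out ^= rec
    return out

db = [0x11, 0x22, 0x33, 0x44]   # encoded parking offers on a node
q1, q2 = pir_query(len(db), 2)
# XORing the two answers cancels every record except db[2]
offer = pir_answer(db, q1) ^ pir_answer(db, q2)
```

Communication here is linear in the database size per server, which is why practical systems layer further optimizations on top of this basic idea.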
A Stealthy Hardware Trojan Exploiting the Architectural Vulnerability of Deep Learning Architectures: Input Interception Attack (IIA)
Deep learning architectures (DLA) have shown impressive performance in
computer vision, natural language processing and so on. Many DLA make use of
cloud computing to achieve classification due to the high computation and
memory requirements. Privacy and latency concerns resulting from cloud
computing has inspired the deployment of DLA on embedded hardware accelerators.
To achieve short time-to-market and have access to global experts,
state-of-the-art techniques of DLA deployment on hardware accelerators are
outsourced to untrusted third parties. This outsourcing raises security
concerns as hardware Trojans can be inserted into the hardware design of the
mapped DLA on the hardware accelerator. We argue that existing hardware Trojan
attacks highlighted in the literature have no rigorous means of establishing
how definite the triggering of the Trojan is. Also, most inserted Trojans show an obvious
spike in the number of hardware resources utilized on the accelerator at the
time of triggering the Trojan or when the payload is active. In this paper, we
introduce a hardware Trojan attack called Input Interception Attack (IIA). In
this attack, we make use of the statistical properties of layer-by-layer
outputs so that, aside from being stealthy, our IIA is able to trigger with
some measure of definiteness. Moreover, this IIA attack is tested on DLA used to
classify the MNIST and CIFAR-10 data sets. The attacked designs utilize up to
approximately 2% more LUTs compared to the uncompromised
designs. Finally, this paper discusses potential defensive mechanisms that
could be used to combat such hardware-Trojan-based attacks in hardware
accelerators for DLA.
A Scalable Multilabel Classification to Deploy Deep Learning Architectures For Edge Devices
Convolution Neural Networks (CNN) have performed well in many applications
such as object detection, pattern recognition, video surveillance and so on.
CNNs carry out feature extraction on labelled data to perform classification.
Multi-label classification assigns more than one label to a particular data
sample in a data set. In multi-label classification, properties of a data point
that are considered to be mutually exclusive are classified. However, existing
multi-label classification requires some form of data pre-processing that
involves image training data cropping or image tiling. The computation and
memory requirement of these multi-label CNN models makes their deployment on
edge devices challenging. In this paper, we propose a methodology that solves
this problem by extending the capability of existing multi-label classification
and providing models with lower latency and smaller memory size when
deployed on edge devices. We make use of a single CNN model designed with
multiple loss layers and multiple accuracy layers. This methodology is tested
on state-of-the-art deep learning algorithms such as AlexNet, GoogleNet and
SqueezeNet using the Stanford Cars Dataset and deployed on a Raspberry Pi 3.
The results show that the proposed methodology achieves comparable accuracy with 1.8x
less MACC operation, 0.97x reduction in latency and 0.5x, 0.84x and 0.97x
reduction in size for the generated AlexNet, GoogleNet and SqueezeNet CNN
models respectively when compared to conventional ways of achieving multi-label
classification like hard-coding multi-label instances into single labels. The
methodology also yields CNN models that achieve 50% fewer MACC operations, 50%
reduction in latency and size of generated versions of AlexNet, GoogleNet and
SqueezeNet respectively when compared to conventional ways using 2 different
single-labelled models to achieve multi-label classification.
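The single-model, multiple-loss-layer idea can be sketched in plain NumPy. The shapes, head names, and class counts below are invented for illustration; the point is that one shared feature extractor is computed once and feeds several classification heads, each with its own softmax loss:

```python
import numpy as np

# Hypothetical sketch of one shared backbone with multiple loss heads,
# standing in for the paper's multi-loss-layer CNN design.

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

x = rng.normal(size=(4, 32))              # batch of input features
W_shared = rng.normal(size=(32, 16))      # shared backbone (one matrix here)
feats = np.maximum(x @ W_shared, 0.0)     # ReLU features, computed once

# Two heads, e.g. car make (5 classes) and car type (3 classes)
W_make = rng.normal(size=(16, 5))
W_type = rng.normal(size=(16, 3))
p_make = softmax(feats @ W_make)
p_type = softmax(feats @ W_type)

# Joint training objective: sum of per-head cross-entropies
y_make, y_type = np.array([0, 1, 2, 3]), np.array([0, 1, 2, 0])
loss = (-np.log(p_make[np.arange(4), y_make]).mean()
        - np.log(p_type[np.arange(4), y_type]).mean())
```

Sharing the backbone is what yields the memory and MACC savings over running two independent single-label models on the edge device.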
Practical Fast Gradient Sign Attack against Mammographic Image Classifier
Artificial intelligence (AI) has been a topic of major research for many
years. Especially, with the emergence of deep neural network (DNN), these
studies have been tremendously successful. Today, machines are capable of
making faster and more accurate decisions than humans. Thanks to the great
development of machine learning (ML) techniques, ML has been used in many
different fields such as education, medicine, malware detection, autonomous cars, etc. In spite of
having this degree of interest and much successful research, ML models are
still vulnerable to adversarial attacks. Attackers can manipulate clean data in
order to fool the ML classifiers and achieve their desired target. For
instance, a benign sample can be modified into a malicious sample, or a
malicious one can be altered to appear benign, while this modification cannot
be recognized by a human observer. This can lead to financial losses, serious injuries, and even
deaths. The motivation behind this paper is that we emphasize this issue and
want to raise awareness. Therefore, the security gap of mammographic image
classifier against adversarial attacks is demonstrated. We use mammographic
images to train our model, then evaluate the model's performance in terms of
accuracy. Later on, we poison the original dataset and generate adversarial
samples that are misclassified by the model. We then use the structural
similarity index (SSIM) to analyze the similarity between clean and
adversarial images. Finally, we show how successful the attack is under
different poisoning factors.
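The fast gradient sign perturbation rule itself is well known: x_adv = x + ε · sign(∇x L). The paper applies it to a mammography DNN; as a minimal illustration, the sketch below applies the same rule to a toy logistic-regression classifier, whose input gradient has a closed form (all weights and data are synthetic):

```python
import numpy as np

# Minimal FGSM sketch on a logistic-regression stand-in for the image
# classifier: one signed-gradient step pushes the score away from the
# true label.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM for binary cross-entropy on p = sigmoid(w.x + b)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w            # dL/dx for a linear logit
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.0      # toy "trained" classifier
x, y = rng.normal(size=8), 1.0      # a clean sample with true label 1
x_adv = fgsm(x, y, w, b, eps=0.3)

p_clean = sigmoid(x @ w + b)
p_adv = sigmoid(x_adv @ w + b)      # score drops toward misclassification
```

The ε (poisoning factor) controls the trade-off the paper measures with SSIM: larger ε misleads the model more but makes the perturbation more visible.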
A Multi-Authority Attribute-Based Signcryption Scheme with Efficient Revocation for Smart Grid Downlink Communication
In this paper, we propose a multi-authority attribute-based signcryption
scheme with efficient revocation for smart grid downlink communications. In the
proposed scheme, grid operators and electricity vendors can send multicast
messages securely to different groups of consumers which is required in
different applications such as firmware update distribution and sending direct
load control messages. Our scheme can ensure the confidentiality and the
integrity of the multicasted messages, allows consumers to authenticate the
source of the multicasted messages, achieves the non-repudiation property, and
allows prompt revocation simultaneously, all of which are required for the smart grid
downlink communications. Our security analysis demonstrates that the proposed
scheme can thwart various security threats to the smart grid. Our experiments
conducted on an advanced metering infrastructure (AMI) testbed confirm that the
proposed scheme has low computational overhead.
Expansion of Cyber Attack Data From Unbalanced Datasets Using Generative Techniques
Machine learning techniques help to understand patterns of a dataset to
create a defense mechanism against cyber attacks. However, it is difficult to
construct a theoretical model for discriminating attacks when the dataset is
imbalanced. A Multilayer Perceptron (MLP) can improve accuracy and the
performance of detecting attack and benign data when trained on a balanced dataset. We have worked on
the publicly available UGR'16 dataset for this work. Data wrangling has been
done to prepare a test set from the original set. We fed the neural network
classifier increasingly larger inputs
(i.e., 10000, 50000, 1 million) to see the distribution of features over the
accuracy. We have implemented a GAN model that can produce samples of different
attack labels (e.g. blacklist, anomaly spam, ssh scan). We have been able to
generate as many samples as necessary based on the data sample we have taken
from the UGR'16. We tested the accuracy of our model with the imbalanced
dataset initially and then with the increased attack samples, and found
improved classification performance for the latter.
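The balancing loop is straightforward once a conditional generator is trained. Since the trained GAN is not available here, the sketch below uses a hypothetical stand-in generator (Gaussian noise around each class's feature mean) purely to show how minority attack classes are topped up to a target size:

```python
import numpy as np

# Sketch of the class-balancing loop. `fake_generator` is a placeholder for
# a trained conditional GAN generator G(z | label); everything about it is
# an assumption for illustration.

rng = np.random.default_rng(0)

def fake_generator(class_mean, n):
    """Stand-in for the GAN: noise around the class's feature mean."""
    return class_mean + rng.normal(scale=0.1, size=(n, class_mean.size))

def balance(dataset, target_per_class):
    """dataset: {label: feature array}. Synthesize until each class
    reaches target_per_class samples; majority classes are untouched."""
    out = {}
    for label, feats in dataset.items():
        deficit = target_per_class - len(feats)
        if deficit > 0:
            synth = fake_generator(feats.mean(axis=0), deficit)
            feats = np.vstack([feats, synth])
        out[label] = feats
    return out

data = {"benign": rng.normal(size=(100, 4)),
        "ssh_scan": rng.normal(size=(5, 4)),    # minority attack labels
        "spam": rng.normal(size=(8, 4))}
balanced = balance(data, target_per_class=100)
```

With a real GAN in place of the stub, this is the step that lets the classifier see as many attack samples per label as needed.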
On Sharing Models Instead of Data using Mimic learning for Smart Health Applications
Electronic health records (EHR) systems contain vast amounts of medical
information about patients. These data can be used to train machine learning
models that can predict health status, as well as to help prevent future
diseases or disabilities. However, getting patients' medical data to obtain
well-trained machine learning models is a challenging task. This is because
sharing the patients' medical records is prohibited by law in most countries
due to patients' privacy concerns. In this paper, we tackle this problem by
sharing the models instead of the original sensitive data by using the mimic
learning approach. The idea is first to train a model on the original sensitive
data, called the teacher model. Then, using this model, we can transfer its
knowledge to another model, called the student model, without the need to learn
the original data used in training the teacher model. The student model is then
shared to the public and can be used to make accurate predictions. To assess
the mimic learning approach, we have evaluated our scheme using different
medical datasets. The results indicate that the student model mimics the
teacher model's performance in terms of prediction accuracy without the need
to access the patients' original data records.
Comment: This paper is accepted in the IEEE International Conference on
Informatics, IoT, and Enabling Technologies (ICIoT'20).
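The teacher-student transfer can be sketched end to end with a toy logistic-regression model standing in for both networks (the paper's actual models and datasets are not reproduced here; the synthetic data and training setup are assumptions):

```python
import numpy as np

# Hypothetical sketch of mimic learning: fit a teacher on sensitive records,
# then fit a student only on the teacher's soft predictions over a public
# transfer set, so the student never sees the original labels or data.

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def fit(x, y, steps=500, lr=0.5):
    """Logistic regression by gradient descent; y may be soft labels."""
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        w -= lr * x.T @ (sigmoid(x @ w) - y) / len(y)
    return w

# "Sensitive" data: the label depends on the first feature
x_priv = rng.normal(size=(200, 3))
y_priv = (x_priv[:, 0] > 0).astype(float)
w_teacher = fit(x_priv, y_priv)

# Public transfer set: unlabeled; the teacher supplies soft labels
x_pub = rng.normal(size=(200, 3))
w_student = fit(x_pub, sigmoid(x_pub @ w_teacher))

# The shareable student mimics the teacher on fresh data
x_test = rng.normal(size=(500, 3))
agree = np.mean((sigmoid(x_test @ w_teacher) > 0.5)
                == (sigmoid(x_test @ w_student) > 0.5))
```

Only `w_student` would be released; the sensitive records and the teacher stay private, which is the privacy argument the paper makes.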
Optimizing Joint Probabilistic Caching and Channel Access for Clustered D2D Networks
Caching at mobile devices and leveraging device-to-device (D2D) communication
are two promising approaches to support massive content delivery over wireless
networks. Analysis of such D2D caching networks based on a physical
interference model is usually carried out by assuming uniformly distributed
devices. However, this approach does not capture the notion of device
clustering. In this regard, this paper proposes a joint communication and
caching optimization framework for clustered D2D networks. Devices are
spatially distributed into disjoint clusters and are assumed to have a surplus
memory that is utilized to proactively cache files, following a random
probabilistic caching scheme. The cache offloading gain is maximized by jointly
optimizing channel access and caching scheme. A closed-form caching solution is
obtained, and a bisection search method is adopted to heuristically obtain the
optimal channel access probability. Results show significant improvement in the
offloading gain, reaching up to 10% compared to the Zipf caching baseline.
Comment: arXiv admin note: substantial text overlap with arXiv:1810.0551
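The bisection step can be sketched generically. The abstract does not give the closed-form offloading-gain objective, so a toy concave function of the channel-access probability p stands in for it; the search logic itself is what the block illustrates:

```python
# Bisection on the sign of the derivative locates the maximizer of a
# concave objective, as used for the channel-access probability. The
# objective below is a placeholder, not the paper's offloading gain.

def bisect_argmax(df, lo=0.0, hi=1.0, tol=1e-6):
    """Maximize a concave objective on [lo, hi] given its derivative df."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if df(mid) > 0:
            lo = mid        # objective still increasing at mid
        else:
            hi = mid        # past the peak
    return 0.5 * (lo + hi)

# Toy concave objective g(p) = p - 2*p**2, derivative g'(p) = 1 - 4*p,
# so the optimal access probability here is p* = 0.25
p_star = bisect_argmax(lambda p: 1.0 - 4.0 * p)
```

Concavity is what makes the heuristic sound: the derivative changes sign exactly once on the interval, so bisection converges to the unique optimum.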
Performance Analysis of Mobile Cellular-Connected Drones under Practical Antenna Configurations
Providing seamless connectivity to unmanned aerial vehicle user equipments
(UAV-UEs) is very challenging due to the encountered line-of-sight interference
and reduced gains of down-tilted base station (BS) antennas. For instance, as
the altitude of UAV-UEs increases, their cell association and handover
procedure become driven by the side-lobes of the BS antennas. In this paper,
the performance of cellular-connected UAV-UEs is studied under 3D practical
antenna configurations. Two scenarios are studied: scenarios with static,
hovering UAV-UEs and scenarios with mobile UAV-UEs. For both scenarios, the
UAV-UE coverage probability is characterized as a function of the system
parameters. The effects of the number of antenna elements on the UAV-UE
coverage probability and handover rate of mobile UAV-UEs are then investigated.
Results reveal that the UAV-UE coverage probability under a practical antenna
pattern is worse than that under a simple antenna model. Moreover,
vertically-mobile UAV-UEs are susceptible to altitude handover due to
consecutive crossings of the nulls and peaks of the antenna side-lobes.
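The role of down-tilt can be illustrated with a 3GPP-style vertical element pattern: gain falls off quadratically away from boresight and floors at the side-lobe attenuation level, which is roughly what serves high-altitude UAV-UEs. The parameter values below are illustrative assumptions, not the paper's configuration:

```python
# Simplified 3GPP-style vertical antenna element pattern (in dB relative to
# boresight). Tilt, beamwidth, and side-lobe attenuation values are assumed
# for illustration only.

def vertical_gain_db(theta_deg, tilt_deg=-10.0, theta_3db=10.0, sla_db=30.0):
    """Element gain at elevation angle theta: quadratic roll-off from the
    tilt direction, floored at the side-lobe attenuation sla_db."""
    return -min(12.0 * ((theta_deg - tilt_deg) / theta_3db) ** 2, sla_db)

# A ground user near the tilt direction vs. a UAV well above the horizon
g_ground = vertical_gain_db(-10.0)   # at boresight: 0 dB
g_uav = vertical_gain_db(40.0)       # far off boresight: side-lobe floor
```

A full array pattern additionally has discrete nulls and side-lobe peaks between these extremes, which is what drives the altitude-handover behavior the abstract reports for vertically mobile UAV-UEs.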