Memory-Based High-Level Synthesis Optimizations Security Exploration on the Power Side-Channel
High-level synthesis (HLS) allows hardware designers to think algorithmically without worrying about low-level, cycle-by-cycle details. This makes it possible to quickly explore the architectural design space and the tradeoffs between resource utilization and performance. Unfortunately, security evaluation is not a standard part of the HLS design flow. In this article, we aim to understand the effects of memory-based HLS optimizations on power side-channel leakage. We use Xilinx Vivado HLS to develop different cryptographic cores, implement them on a Spartan-6 FPGA, and collect power traces. We evaluate the designs with respect to resource utilization, performance, and information leakage through power consumption. We make two important observations and contributions. First, the choice of resource optimization directive results in different levels of side-channel vulnerability. Second, the partitioning optimization directive can greatly compromise a hardware cryptographic system through power side-channel leakage due to the deployment of memory control logic. We describe an evaluation procedure for power side-channel leakage and use it to make best-effort recommendations about how to design more secure architectures in the cryptographic domain.
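The abstract does not spell out the leakage metric; a common choice for this kind of trace-based evaluation is a fixed-vs-random Welch's t-test (TVLA-style), which the authors may or may not use. A minimal sketch on synthetic traces, assuming `fixed` and `random` are arrays of power traces (one row per trace); the injected leak at sample 40 and the |t| > 4.5 threshold are illustrative conventions, not taken from the article:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t-statistic at each sample point of two trace sets.
    By common convention, |t| > 4.5 at any point flags first-order leakage."""
    va = a.var(axis=0, ddof=1) / len(a)
    vb = b.var(axis=0, ddof=1) / len(b)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va + vb)

rng = np.random.default_rng(1)
# synthetic power traces: 500 traces x 100 sample points per group
fixed = rng.normal(0.0, 1.0, (500, 100))
fixed[:, 40] += 1.0                      # inject a data-dependent leak at sample 40
random = rng.normal(0.0, 1.0, (500, 100))

t = welch_t(fixed, random)
print(int(np.abs(t).argmax()))           # the leaky sample point stands out
```

A real evaluation would compare traces captured from the FPGA under fixed and random plaintexts; here the "leak" is injected so the statistic's behavior is visible.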
Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions
Generative Adversarial Networks (GANs) are a novel class of deep generative models that has recently gained significant attention. GANs implicitly learn complex, high-dimensional distributions over data such as images and audio. However, there are major challenges in training GANs, namely mode collapse, non-convergence, and instability, caused by inappropriate network architecture design, choice of objective function, and selection of optimization algorithm. Recently, to address these challenges, several solutions for better design and optimization of GANs have been investigated, based on re-engineered network architectures, new objective functions, and alternative optimization algorithms. To the best of our knowledge, no existing survey has focused specifically on the broad and systematic development of these solutions. In this study, we perform a comprehensive survey of the advancements in GAN design and optimization proposed to handle these challenges. We first identify key research issues within each design and optimization technique and then propose a new taxonomy that structures the solutions by those issues. Following the taxonomy, we provide a detailed discussion of the GAN variants proposed within each solution and their relationships. Finally, based on the insights gained, we present promising research directions in this rapidly growing field.
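One concrete source of the instability mentioned above is the original saturating minimax generator loss log(1 − D(G(z))): when the discriminator confidently rejects fakes, its gradient vanishes, which is why the non-saturating −log D(G(z)) objective is a standard alternative. A minimal numeric sketch of the two gradients (the function names are illustrative, not from the survey):

```python
# d_fake = D(G(z)): the discriminator's probability that a generated sample is real

def grad_saturating(d_fake):
    # derivative of log(1 - d_fake) w.r.t. d_fake: the original minimax generator loss
    return -1.0 / (1.0 - d_fake)

def grad_nonsaturating(d_fake):
    # derivative of -log(d_fake) w.r.t. d_fake: the non-saturating alternative
    return -1.0 / d_fake

# early in training, D easily rejects fakes, so d_fake is near 0
d_fake = 1e-3
print(abs(grad_saturating(d_fake)))     # ~1: almost no learning signal for G
print(abs(grad_nonsaturating(d_fake)))  # ~1000: strong learning signal for G
```

The non-saturating form gives the generator large gradients precisely where the saturating form flattens out, illustrating why "new objective functions" are one of the solution families the survey covers.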
Neural Networks for Information Retrieval
Machine learning plays a role in many aspects of modern IR systems, and deep learning is applied to all of them. The fast pace of modern-day research has given rise to many different approaches to many different IR problems. The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. Additionally, it is interesting to see what key insights into IR problems the new technologies are able to give us. The aim of this full-day SIGIR tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR research. It covers key architectures as well as the most promising future directions.
Sub-nanosecond signal propagation in anisotropy engineered nanomagnetic logic chains
Energy-efficient nanomagnetic logic (NML) computing architectures propagate and process binary information by relying on dipolar field coupling to reorient closely spaced nanoscale magnets. Signal propagation in nanomagnet chains of various sizes, shapes, and magnetic orientations has previously been characterized by static magnetic imaging experiments with low-speed adiabatic operation; however, the mechanisms that determine the final state, and their reproducibility over millions of cycles in high-speed operation (sub-ns time scale), have yet to be experimentally investigated. Monitoring NML operation at its ultimate intrinsic speed reveals features undetectable by conventional static imaging, including individual nanomagnetic switching events and systematic error nucleation during signal propagation. Here, we present a new study of NML operation in a high-speed regime at fast repetition rates. We perform direct imaging of digital signal propagation in permalloy nanomagnet chains with varying degrees of shape-engineered biaxial anisotropy using full-field magnetic soft x-ray transmission microscopy after applying single nanosecond magnetic field pulses. Further, we use time-resolved magnetic photo-emission electron microscopy with 100 ps time resolution to evaluate the sub-nanosecond dipolar-coupling signal propagation dynamics in optimized chains as they are cycled with nanosecond field pulses at a rate of 3 MHz. An intrinsic switching time of 100 ps per magnet is observed. These experiments, and accompanying macro-spin and micromagnetic simulations, reveal the underlying physics of NML architectures repetitively operated on nanosecond timescales and identify the relevant engineering parameters for optimizing performance and reliability.
Generative Mixture of Networks
A generative model based on training deep architectures is proposed. The model consists of K networks that are trained together to learn the underlying distribution of a given data set. The process starts by dividing the input data into K clusters and feeding each cluster into a separate network. After a few iterations of training the networks separately, we use an EM-like algorithm to train the networks together and update the cluster assignments of the data. We call this model Mixture of Networks. The model is a platform that can be used with any deep structure and trained with any conventional objective function for distribution modeling. Because the components of the model are neural networks, it is highly capable of characterizing complicated data distributions as well as clustering data. We apply the algorithm to the MNIST hand-written digits and Yale face datasets. We also demonstrate the clustering ability of the model on some real-world and toy examples.
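The alternation described above, training each component on its cluster and then reassigning points, can be sketched with simple Gaussian fits standing in for the K networks. The toy data, K = 2, and the likelihood-based reassignment rule are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data set: a mixture of two well-separated 1-D Gaussians
data = np.concatenate([rng.normal(-4.0, 1.0, 200), rng.normal(4.0, 1.0, 200)])

K = 2
# initial clustering: a crude stand-in for the paper's initial K-way split
assign = (data > np.median(data)).astype(int)

for _ in range(10):
    # "training" step: fit each component to its current cluster; a Gaussian
    # fit stands in for training the k-th network on cluster k
    mus = np.array([data[assign == k].mean() for k in range(K)])
    sigmas = np.array([data[assign == k].std() + 1e-6 for k in range(K)])
    # EM-like reassignment: each point moves to the component scoring it highest
    log_lik = -0.5 * ((data[:, None] - mus) / sigmas) ** 2 - np.log(sigmas)
    assign = log_lik.argmax(axis=1)

print(np.sort(mus).round(1))  # the component means recover the two modes
```

With neural networks in place of the Gaussian fits, the per-component likelihood (or any conventional distribution-modeling objective) plays the same role in the reassignment step.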