Generate To Adapt: Aligning Domains using Generative Adversarial Networks
Domain Adaptation is an actively researched problem in Computer Vision. In
this work, we propose an approach that leverages unsupervised data to bring the
source and target distributions closer in a learned joint feature space. We
accomplish this by inducing a symbiotic relationship between the learned
embedding and a generative adversarial network. This is in contrast to methods
which use the adversarial framework for realistic data generation and
retraining deep models with such data. We demonstrate the strength and
generality of our approach by performing experiments on three different tasks
with varying levels of difficulty: (1) Digit classification (MNIST, SVHN and
USPS datasets), (2) Object recognition using the OFFICE dataset, and (3) Domain
adaptation from synthetic to real data. Our method achieves state-of-the-art
performance in most experimental settings and is the only GAN-based method
that has been shown to work well across different datasets such as OFFICE and
DIGITS.
Comment: Accepted as a spotlight talk at CVPR 2018. Code available here:
https://github.com/yogeshbalaji/Generate_To_Adap
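
To make the alignment idea concrete, here is a minimal PyTorch-style sketch of adversarial feature alignment: a shared embedding is trained to classify labeled source data while a domain discriminator pushes unlabeled target embeddings toward the source feature distribution. This is a generic illustration rather than the paper's specific embedding/GAN coupling; every module, dimension, and hyper-parameter below is an assumption made for the example.

    import torch
    import torch.nn as nn

    # Illustrative sizes only; the paper's actual architecture differs.
    feat_dim, n_classes = 128, 10

    embed = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, feat_dim), nn.ReLU())  # shared embedding
    clf = nn.Linear(feat_dim, n_classes)                                           # source classifier
    disc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))     # domain discriminator

    opt_embed = torch.optim.Adam(list(embed.parameters()) + list(clf.parameters()), lr=1e-4)
    opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

    def step(x_src, y_src, x_tgt):
        # 1) Discriminator learns to separate source embeddings from target embeddings.
        f_src, f_tgt = embed(x_src).detach(), embed(x_tgt).detach()
        d_loss = bce(disc(f_src), torch.ones(len(x_src), 1)) + \
                 bce(disc(f_tgt), torch.zeros(len(x_tgt), 1))
        opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

        # 2) Embedding learns to classify source data while making target
        #    embeddings indistinguishable from source ones.
        g_loss = ce(clf(embed(x_src)), y_src) + \
                 bce(disc(embed(x_tgt)), torch.ones(len(x_tgt), 1))
        opt_embed.zero_grad(); g_loss.backward(); opt_embed.step()

    # Example call with random stand-in batches shaped like MNIST digits.
    step(torch.randn(32, 1, 28, 28),
         torch.randint(0, n_classes, (32,)),
         torch.randn(32, 1, 28, 28))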
Single Field Baryogenesis
We propose a new variant of the Affleck-Dine baryogenesis mechanism in which
a rolling scalar field couples directly to left- and right-handed neutrinos,
generating a Dirac mass term through neutrino Yukawa interactions. In this
setup, there are no explicitly CP-violating couplings in the Lagrangian. The
rolling scalar field is also taken to be uncharged under the relevant quantum
numbers. During the phase of rolling, scalar field decays generate a
non-vanishing number density of left-handed neutrinos, which then induce a net
baryon number density via electroweak sphaleron transitions.
Comment: 4 pages, LaTeX
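
As a rough illustration of the kind of coupling described (one schematic realization consistent with the abstract's wording, not necessarily the paper's exact operator, with lambda an illustrative coupling constant):

    \mathcal{L} \;\supset\; -\lambda\,\phi\,\bar{\nu}_L \nu_R + \mathrm{h.c.}
    \quad\Longrightarrow\quad
    m_D(t) \;=\; \lambda\,\phi(t),

so the effective Dirac mass tracks the rolling field value; the net left-handed neutrino number density produced while the field rolls and decays is then converted into a baryon asymmetry by electroweak sphalerons, as described above.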
PyCARL: A PyNN Interface for Hardware-Software Co-Simulation of Spiking Neural Networks
We present PyCARL, a PyNN-based common Python programming interface for
hardware-software co-simulation of spiking neural networks (SNNs). Through
PyCARL, we make the following two key contributions. First, we provide an
interface of PyNN to CARLsim, a computationally-efficient, GPU-accelerated and
biophysically-detailed SNN simulator. PyCARL facilitates joint development of
machine learning models and code sharing between CARLsim and PyNN users,
promoting an integrated and larger neuromorphic community. Second, we integrate
cycle-accurate models of state-of-the-art neuromorphic hardware such as
TrueNorth, Loihi, and DynapSE in PyCARL, to accurately model hardware latencies
that delay spikes between communicating neurons and degrade performance. PyCARL
allows users to analyze and optimize the performance difference between
software-only simulation and hardware-software co-simulation of their machine
learning models. We show that system designers can also use PyCARL to perform
design-space exploration early in the product development stage, facilitating
faster time-to-deployment of neuromorphic products. We evaluate the memory
usage and simulation time of PyCARL using functionality tests, synthetic SNNs,
and realistic applications. Our results demonstrate that for large SNNs, PyCARL
does not lead to any significant overhead compared to CARLsim. We also use
PyCARL to analyze these SNNs on state-of-the-art neuromorphic hardware and
demonstrate a significant performance deviation from software-only simulations.
PyCARL allows users to evaluate and minimize such differences early during model
development.
Comment: 10 pages, 25 figures. Accepted for publication at International Joint
Conference on Neural Networks (IJCNN) 202
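
Since PyCARL exposes the standard PyNN programming interface, a model description looks like an ordinary backend-agnostic PyNN script. The sketch below is a minimal example of that style; the backend module name pyNN.carlsim is assumed here for illustration (check the PyCARL repository for the actual import), and the population sizes, weights, and simulation time are arbitrary.

    # Minimal PyNN-style network description; "pyNN.carlsim" is a hypothetical
    # backend name used for illustration -- the actual PyCARL import may differ.
    import pyNN.carlsim as sim

    sim.setup(timestep=1.0)  # simulation timestep in ms

    # A Poisson spike source driving a small population of LIF neurons.
    source = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
    neurons = sim.Population(10, sim.IF_curr_exp())

    sim.Projection(source, neurons, sim.AllToAllConnector(),
                   synapse_type=sim.StaticSynapse(weight=0.05, delay=1.0))

    neurons.record("spikes")
    sim.run(1000.0)  # simulate 1 s

    spikes = neurons.get_data().segments[0].spiketrains
    sim.end()

Because the same script runs on any PyNN backend, swapping the import is, in principle, all that is needed to move between a software-only simulation and the hardware-software co-simulation described above.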
