Hyper-parameter optimization of Deep Convolutional Networks for object recognition
Recently, sequential model-based optimization (SMBO) has emerged as a
promising hyper-parameter optimization strategy in machine learning. In this
work, we investigate SMBO to identify architecture hyper-parameters of deep
convolutional networks (DCNs) for object recognition. We propose a simple SMBO
strategy that starts from a set of random initial DCN architectures and
generates new architectures which, on training, perform well on a given
dataset. Using the proposed SMBO strategy, we are able to identify a number of
DCN architectures that produce results comparable to state-of-the-art results
on object recognition benchmarks.

Comment: 4 pages, 1 figure, 3 tables, Submitted to ICIP 201
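The strategy described above is, at its core, the standard SMBO loop: evaluate random initial configurations, fit a cheap surrogate model to the observed (configuration, score) pairs, and use the surrogate to rank candidate configurations before spending an expensive evaluation. A minimal sketch of that loop follows; the toy 1-D objective and the nearest-neighbour surrogate are illustrative stand-ins for the paper's DCN training and its surrogate model, not the authors' actual method.

```python
import random

# Toy objective standing in for validation accuracy of a trained DCN;
# the real objective in the paper is expensive to evaluate.
def objective(x):
    return -(x - 0.3) ** 2  # peak at x = 0.3

def surrogate_predict(history, x):
    # Simplest possible surrogate: predicted score is the score of the
    # nearest already-evaluated configuration.
    nearest = min(history, key=lambda h: abs(h[0] - x))
    return nearest[1]

def smbo(n_init=5, n_iter=20, n_candidates=100, seed=0):
    rng = random.Random(seed)
    # 1. Start from a set of random initial configurations.
    history = [(x, objective(x)) for x in (rng.random() for _ in range(n_init))]
    for _ in range(n_iter):
        # 2. Draw candidates and rank them with the surrogate (pure
        #    exploitation here; practical SMBO uses an acquisition function
        #    such as Expected Improvement to balance exploration).
        cands = [rng.random() for _ in range(n_candidates)]
        x_next = max(cands, key=lambda x: surrogate_predict(history, x))
        # 3. Evaluate the expensive objective and grow the surrogate's data.
        history.append((x_next, objective(x_next)))
    return max(history, key=lambda h: h[1])

best_x, best_y = smbo()
```

Because each new evaluation is spent near configurations the surrogate already rates highly, the loop concentrates expensive training runs on promising architectures rather than sampling uniformly at random.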
Computational Modeling of Channelrhodopsin-2 Photocurrent Characteristics in Relation to Neural Signaling
Channelrhodopsins-2 (ChR2) are a class of light-sensitive proteins that offer
the ability to use light stimulation to regulate neural activity with
millisecond precision. In order to address the limitations in the efficacy of
the wild-type ChR2 (ChRwt) in achieving this objective, new variants of ChR2
that exhibit fast mono-exponential photocurrent decay characteristics have
recently been developed and validated. In this paper, we investigate whether
the framework of a 4-state transition rate model, primarily developed to mimic
the bi-exponential photocurrent decay kinetics of ChRwt, is warranted, as
opposed to the lower-complexity 3-state model, to mimic the mono-exponential
photocurrent decay kinetics of the newly developed fast ChR2 variants: ChETA
(Gunaydin et al., Nature Neurosci, 13:387-392, 2010) and ChRET/TC (Berndt et
al., PNAS, 108:7595-7600, 2011). We begin by estimating the parameters of the
3-state and 4-state models from experimental data on the photocurrent kinetics
of ChRwt, ChETA and ChRET/TC. We then incorporate these models into a
fast-spiking interneuron model (Wang and Buzsaki, J Neurosci, 16:6402-6413,
1996) and a hippocampal pyramidal cell model (Golomb et al., J Neurophysiol,
96:1912-1926, 2006) and investigate the extent to which the experimentally
observed neural response to various optostimulation protocols can be captured
by these models. We demonstrate that, for all ChR2 variants investigated, the
4-state model implementation better captures neural responses consistent with
experiments across a wide range of optostimulation protocols. We conclude by
analytically investigating the conditions under which the characteristic
specific to the 3-state model, namely the mono-exponential photocurrent decay
exhibited by the newly developed ChR2 variants, can occur within the framework
of the 4-state model.

Comment: 10 figures
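The 4-state framework referred to above couples two closed states (C1, C2) and two open states (O1, O2) through light-driven and spontaneous transition rates, with the photocurrent proportional to a weighted open fraction; the second, less-conductive open state is what produces bi-exponential decay. A forward-Euler sketch of a generic model of this shape is below; all rate constants and the conductance ratio are illustrative placeholders, not the values fitted in the paper.

```python
# Forward-Euler integration of a generic 4-state ChR2 transition-rate model
# (closed states C1, C2 and open states O1, O2). Rates are per second and
# purely illustrative, NOT the parameters estimated from ChRwt/ChETA/ChRET/TC
# data in the paper.

def simulate_4state(t_on, t_off, dt=1e-5):
    ka1, ka2 = 5000.0, 1000.0   # light-driven activation C1->O1, C2->O2
    Gd1, Gd2 = 110.0, 25.0      # closures O1->C1, O2->C2
    e12, e21 = 20.0, 10.0       # interconversion O1<->O2
    Gr = 0.4                    # slow recovery C2->C1
    gamma = 0.1                 # relative conductance of O2 (gamma < 1)
    C1, O1, O2, C2 = 1.0, 0.0, 0.0, 0.0  # all channels start closed
    trace = []
    steps = int((t_on + t_off) / dt)
    for i in range(steps):
        light = 1.0 if i * dt < t_on else 0.0  # square optostimulation pulse
        dC1 = Gd1 * O1 + Gr * C2 - light * ka1 * C1
        dO1 = light * ka1 * C1 + e21 * O2 - (Gd1 + e12) * O1
        dO2 = e12 * O1 + light * ka2 * C2 - (Gd2 + e21) * O2
        dC2 = Gd2 * O2 - (Gr + light * ka2) * C2   # derivatives sum to zero
        C1 += dC1 * dt; O1 += dO1 * dt; O2 += dO2 * dt; C2 += dC2 * dt
        # Photocurrent is proportional to the weighted open fraction; after
        # light-off the two open states decay at different rates, giving the
        # bi-exponential decay the 4-state model was built to capture.
        trace.append(O1 + gamma * O2)
    return trace

trace = simulate_4state(t_on=0.002, t_off=0.02)
```

A 3-state model collapses O1 and O2 into a single open state, which forces mono-exponential decay; the paper's analytical question is when the 4-state model above reduces to that behaviour.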
Fast SVM training using approximate extreme points
The application of non-linear kernel Support Vector Machines (SVMs) to large
datasets is seriously hampered by excessive training time. We propose a
modification, called the approximate extreme points support vector machine
(AESVM), that is aimed at overcoming this burden. Our approach relies on
conducting the SVM optimization over a carefully selected subset of the
training dataset, called the representative set. We present analytical results
that indicate the similarity of the AESVM and SVM solutions. A linear-time
algorithm based on convex hulls and extreme points is used to compute the
representative set in kernel space. Extensive computational experiments on
nine datasets compared AESVM to LIBSVM \citep{LIBSVM}, CVM \citep{Tsang05},
BVM \citep{Tsang07}, LASVM \citep{Bordes05}, \citep{Joachims09}, and the
random features method \citep{rahimi07}. Our AESVM implementation was found to
train much faster than the other methods, while its classification accuracy
was similar to that of LIBSVM in all cases. In particular, for a seizure
detection dataset, AESVM training was almost times faster than LIBSVM and
LASVM and more than forty times faster than CVM and BVM. Additionally, AESVM
also gave competitively fast classification times.

Comment: The manuscript in revised form has been submitted to J. Machine
Learning Research
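The geometric idea behind the representative set can be illustrated with exact extreme points in 2-D: the vertices of the convex hull are the extreme points of a dataset, and interior points contribute nothing to the hull. This is only an intuition-building sketch; AESVM itself computes approximate extreme points in kernel space with a linear-time algorithm, whereas the exact hull below is quadratic-free only because the data is 2-D.

```python
# Toy illustration of shrinking a training set to its extreme points.
# Andrew's monotone chain computes the exact convex hull of 2-D points;
# the hull vertices play the role of a "representative set", a tiny
# fraction of the original data.
import random

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

rng = random.Random(42)
cloud = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(1000)]
rep = convex_hull(cloud)
# rep is a small subset of cloud; an optimizer run over it sees far
# fewer points than the full dataset.
```

For 1000 Gaussian points the hull typically has only on the order of a dozen vertices, which is the source of the training-time savings: the SVM optimization cost scales with the size of the representative set rather than the full dataset.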
- …