Classification of multidimensional inflationary models
We define under which circumstances two multi-warped product spacetimes can
be considered equivalent and then we classify the spaces of constant curvature
in the Euclidean and Lorentzian signature. For dimension D=2, we get
essentially twelve representations, and for D=3 exactly eighteen. More generally, for
every even D there are 5D+2 cases, whereas for every odd D there are 5D+3 cases. For
every D, exactly one half of them has the Euclidean signature. Our definition
is well suited for the discussion of multidimensional cosmological models, and
our results give a simple algorithm to decide whether a given metric represents
the inflationary de Sitter spacetime (in unusual coordinates) or not. Comment: 21 pages, LaTeX, no figures, J. Math. Phys., in print
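In this context, a multi-warped product metric generically takes the following form (an illustrative generic expression, not a formula quoted from the paper):
\[
ds^2 \;=\; -dt^2 \;+\; \sum_{i=1}^{m} a_i(t)^2 \, d\Sigma_i^2 ,
\]
where each \(d\Sigma_i^2\) is the metric of a factor space of constant curvature and the \(a_i(t)\) are warping (scale) factors; the de Sitter case corresponds to the total metric having constant positive curvature.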
A Classification of Models for Concurrency
Models for concurrency can be classified with respect to three relevant parameters: behaviour/system, interleaving/noninterleaving, and linear/branching time. When modelling a process, a choice concerning such parameters corresponds to choosing the level of abstraction of the resulting semantics. The classifications are formalised through the medium of category theory.
Continual Classification Learning Using Generative Models
Continual learning is the ability to sequentially learn over time by
accommodating knowledge while retaining previously learned experiences. Neural
networks can learn multiple tasks when trained on them jointly, but cannot
maintain performance on previously learned tasks when tasks are presented one
at a time. This problem is called catastrophic forgetting. In this work, we
propose a classification model that learns continuously from sequentially
observed tasks, while preventing catastrophic forgetting. We build on the
lifelong generative capabilities of [10] and extend them to the classification
setting by deriving a new variational bound on the joint log likelihood. Comment: 5 pages, 4 figures, under review at the Continual Learning Workshop, NIPS 201
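A hedged illustration of the kind of bound involved (assuming a VAE-style latent variable \(z\); the exact bound derived in the paper may differ) is the standard joint bound
\[
\log p(x, y) \;\ge\; \mathbb{E}_{q(z \mid x)}\!\left[ \log p(x \mid z) + \log p(y \mid z) \right] \;-\; \mathrm{KL}\!\left( q(z \mid x) \,\|\, p(z) \right),
\]
which couples reconstruction of the input \(x\) with prediction of the label \(y\) through a shared latent code.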
A Classification of Toroidal Orientifold Models
We develop the general tools for model building with orientifolds, including
Scherk-Schwarz (SS) supersymmetry breaking. In this paper, we work out the general
formulae of the tadpole conditions for a class of non-supersymmetric orientifold models of
type IIB string theory compactified on , based on the general properties
of the orientifold group elements. By solving the tadpoles we obtain the
general anomaly-free massless spectrum. Comment: 35 pages, 3 eps figures, Latex2
Using Word Embeddings in Twitter Election Classification
Word embeddings and convolutional neural networks (CNN)
have attracted extensive attention in various classification
tasks for Twitter, e.g. sentiment classification. However,
the effect of the configuration used to train and generate
the word embeddings on the classification performance has
not been studied in the existing literature. In this paper,
using a Twitter election classification task that aims to detect
election-related tweets, we investigate the impact of
the background dataset used to train the embedding models,
the context window size and the dimensionality of word
embeddings on the classification performance. By comparing
the classification results of two word embedding models,
which are trained using different background corpora
(e.g. Wikipedia articles and Twitter microposts), we show
that the background data type should align with the Twitter
classification dataset to achieve better performance. Moreover,
by evaluating the results of word embeddings models
trained using various context window sizes and dimensionalities,
we find that larger context window and dimension
sizes are preferable for improving performance. Our experimental
results also show that using word embeddings and
CNN leads to statistically significant improvements over various
baselines such as a random classifier, SVM with TF-IDF, and SVM
with word embeddings.
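As a minimal sketch of the kind of configuration sweep described above, assuming the embeddings are trained with a word2vec-style model via gensim (the paper's exact tooling, corpora, and parameter grid are not reproduced here), varying the context window and dimensionality could look like:

```python
from gensim.models import Word2Vec

# Background corpus: an iterable of tokenised documents (tweets or Wikipedia
# sentences); here just a tiny hypothetical placeholder.
background_corpus = [
    ["vote", "early", "in", "the", "election"],
    ["new", "poll", "results", "released", "today"],
]

# Sweep the two hyperparameters studied in the paper: context window size
# and embedding dimensionality.
for window in (1, 5, 10):
    for dim in (100, 300, 500):
        model = Word2Vec(
            sentences=background_corpus,
            vector_size=dim,   # dimensionality of the word vectors
            window=window,     # context window size
            min_count=1,
            workers=4,
        )
        # model.wv can then initialise the embedding layer of a CNN tweet
        # classifier and be evaluated on the election classification task.
        print(window, dim, model.wv["election"].shape)
```

Each trained model would then feed the same downstream CNN classifier, so that differences in classification performance can be attributed to the embedding configuration.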
Reducing Spatial Data Complexity for Classification Models
Intelligent data analytics is gradually becoming a day-to-day reality for today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. Using the Parzen Labelled Data Compressor (PLDC) as an example, we demonstrate a simulated data condensation process directly inspired by electrostatic field interaction, in which the data are moved and merged following attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions while allowing data to rearrange and merge, joining together their soft class partitions. As a result, we obtain a model that reduces labelled datasets much further than competing approaches, yet with maximum retention of the original class densities and hence of the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights, allowing for efficient online updates, and, as shown in a series of experiments, when coupled with the Parzen Density Classifier (PDC) it significantly outperforms competing data condensation methods in terms of classification performance at comparable compression levels.
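The following toy numpy sketch illustrates the general idea only, not the actual PLDC algorithm: same-class attraction and other-class repulsion through Gaussian (Parzen) kernels, followed by merging of nearby same-class points with accumulation of soft weights. The kernel width h, step size, and merge threshold are hypothetical parameters.

```python
import numpy as np

def gaussian_pull(x, points, h):
    """Sum of Gaussian-kernel-weighted displacement vectors pulling x towards points."""
    diff = points - x                                   # (n, d)
    w = np.exp(-(diff ** 2).sum(axis=1) / (2 * h * h))  # Parzen-style kernel weights
    return (w[:, None] * diff).sum(axis=0)

def condense_step(X, y, h=0.5, step=0.05):
    """One illustrative step: each point is attracted to same-class mass and
    repelled from other-class mass, loosely mimicking an electrostatic field."""
    X_new = X.copy()
    for i, (x, c) in enumerate(zip(X, y)):
        force = gaussian_pull(x, X[y == c], h)          # attraction to own class
        if np.any(y != c):
            force -= gaussian_pull(x, X[y != c], h)     # repulsion from other classes
        X_new[i] = x + step * force
    return X_new

def merge_close(X, y, w, eps=0.1):
    """Greedily merge same-class points closer than eps, accumulating soft weights."""
    kX, ky, kw = [], [], []
    used = np.zeros(len(X), dtype=bool)
    for i in range(len(X)):
        if used[i]:
            continue
        close = (~used) & (y == y[i]) & (np.linalg.norm(X - X[i], axis=1) < eps)
        used |= close
        kX.append(np.average(X[close], axis=0, weights=w[close]))
        ky.append(y[i])
        kw.append(w[close].sum())
    return np.array(kX), np.array(ky), np.array(kw)
```

Iterating condense_step and merge_close shrinks the labelled dataset while the accumulated weights preserve a soft summary of the original class mass, which is the property a Parzen-type classifier can exploit downstream.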
Generative and Discriminative Text Classification with Recurrent Neural Networks
We empirically characterize the performance of discriminative and generative
LSTM models for text classification. We find that although RNN-based generative
models are more powerful than their bag-of-words ancestors (e.g., they account
for conditional dependencies across words in a document), they have higher
asymptotic error rates than discriminatively trained RNN models. However, we
also find that generative models approach their asymptotic error rate more
rapidly than their discriminative counterparts---the same pattern that Ng &
Jordan (2001) proved holds for linear classification models that make more
naive conditional independence assumptions. Building on this finding, we
hypothesize that RNN-based generative classification models will be more robust
to shifts in the data distribution. This hypothesis is confirmed in a series of
experiments in zero-shot and continual learning settings that show that
generative models substantially outperform discriminative models.
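A hedged PyTorch sketch of the generative decision rule involved is shown below; the paper's architecture and conditioning scheme may differ, and the label-embedding conditioning used here is just one common choice.

```python
import torch
import torch.nn as nn

class ClassConditionalLM(nn.Module):
    """Toy class-conditional LSTM language model: a label embedding is fed as
    the first input token so that a single network models p(x | y)."""
    def __init__(self, vocab_size, n_classes, dim=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        self.lab = nn.Embedding(n_classes, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def log_likelihood(self, x, y):
        """log p(x | y) for a single document x (1-D LongTensor of token ids)."""
        inp = torch.cat([self.lab(y).unsqueeze(0), self.tok(x[:-1])]).unsqueeze(0)
        h, _ = self.lstm(inp)
        logits = self.out(h.squeeze(0))                  # (T, vocab)
        return -nn.functional.cross_entropy(logits, x, reduction="sum")

def generative_classify(model, x, log_prior):
    """Bayes rule: predict argmax_y [ log p(x | y) + log p(y) ]."""
    scores = [model.log_likelihood(x, torch.tensor(c)) + log_prior[c]
              for c in range(len(log_prior))]
    return int(torch.stack(scores).argmax())
```

The discriminative counterpart instead trains the LSTM to model p(y | x) directly; the abstract's claim is about how quickly each approach reaches its asymptotic error and how robustly it transfers under distribution shift.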
