Optimizing graph layout by t-SNE perplexity estimation
Abstract: Perplexity is one of the key parameters of the dimensionality reduction algorithm t-distributed stochastic neighbor embedding (t-SNE). In this paper, we investigated the relationship between t-SNE perplexity and graph layout evaluation metrics, including graph stress, preserved neighborhood information, and visual inspection. Having found that a small perplexity is correlated with relatively higher normalized stress, preserving neighborhood information with higher precision but less global structure, we propose a method to estimate an appropriate perplexity based either on a modified standard t-SNE or on scikit-learn's Barnes-Hut TSNE. Experimental results demonstrate the effectiveness and ease of use of our approach when tested on a set of benchmark datasets.
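The trade-off this abstract describes can be probed with off-the-shelf tools. The sketch below is not the authors' estimator; it simply sweeps perplexity with scikit-learn's Barnes-Hut TSNE and measures the two layout metrics named above (normalized stress and k-nearest-neighbor preservation). The dataset, neighborhood size k, and perplexity grid are illustrative choices.

```python
# Sweep t-SNE perplexity and report the two quality metrics discussed in the
# abstract. Dataset, k, and perplexity values are illustrative, not the paper's.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def normalized_stress(X_high, X_low):
    # Sum of squared differences of pairwise distances, normalized by the
    # squared high-dimensional distances.
    d_h, d_l = pdist(X_high), pdist(X_low)
    return float(np.sum((d_h - d_l) ** 2) / np.sum(d_h ** 2))

def neighborhood_preservation(X_high, X_low, k=7):
    # Average overlap between each point's k nearest neighbors in the
    # high-dimensional space and in the embedding (1.0 = perfectly preserved).
    nn_h = NearestNeighbors(n_neighbors=k).fit(X_high).kneighbors(return_distance=False)
    nn_l = NearestNeighbors(n_neighbors=k).fit(X_low).kneighbors(return_distance=False)
    return float(np.mean([len(set(a) & set(b)) / k for a, b in zip(nn_h, nn_l)]))

X = load_digits().data[:250]
for perplexity in (5, 30, 50):
    Y = TSNE(perplexity=perplexity, init="pca", random_state=0).fit_transform(X)
    print(f"perplexity={perplexity}: stress={normalized_stress(X, Y):.3f}, "
          f"knn-preservation={neighborhood_preservation(X, Y):.3f}")
```

On data like this, small perplexities tend to score higher on neighborhood preservation while larger ones lower the normalized stress, which is the pattern the abstract reports.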
Approximated and User Steerable tSNE for Progressive Visual Analytics
Progressive Visual Analytics aims at improving the interactivity in existing
analytics techniques by means of visualization as well as interaction with
intermediate results. One key method for data analysis is dimensionality
reduction, for example, to produce 2D embeddings that can be visualized and
analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a
well-suited technique for the visualization of several high-dimensional data.
tSNE can create meaningful intermediate results but suffers from a slow
initialization that constrains its application in Progressive Visual Analytics.
We introduce a controllable tSNE approximation (A-tSNE), which trades off speed
and accuracy, to enable interactive data exploration. We offer real-time
visualization techniques, including a density-based solution and a Magic Lens
to inspect the degree of approximation. With this feedback, the user can decide
on local refinements and steer the approximation level during the analysis. We
demonstrate our technique with several datasets, in a real-world research
scenario and for the real-time analysis of high-dimensional streams to
illustrate its effectiveness for interactive data analysis.
Deep Metric Learning via Lifted Structured Feature Embedding
Learning the distance metric between pairs of examples is of great importance
for learning and visual recognition. With the remarkable success of
state-of-the-art convolutional neural networks, recent works have shown promising
results on discriminatively training the networks to learn semantic feature
embeddings where similar examples are mapped close to each other and dissimilar
examples are mapped farther apart. In this paper, we describe an algorithm for
taking full advantage of the training batches in the neural network training by
lifting the vector of pairwise distances within the batch to the matrix of
pairwise distances. This step enables the algorithm to learn state-of-the-art
feature embeddings by optimizing a novel structured prediction objective on
the lifted problem. Additionally, we collected the Online Products dataset: 120k
images of 23k classes of online products for metric learning. Our experiments
on the CUB-200-2011, CARS196, and Online Products datasets demonstrate
significant improvement over existing deep feature embedding methods at all
embedding sizes tested with the GoogLeNet network.
Comment: 11 pages
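The "lifting" step this abstract describes, turning a batch's vector of pairwise distances into a full distance matrix and penalizing each positive pair against all in-batch negatives, can be sketched in NumPy. This is a hedged reading of the published loss, not the authors' implementation; the margin value and toy batch are illustrative.

```python
# Sketch of the lifted structured loss: for every positive pair (i, j), a
# log-sum-exp over the negatives of BOTH anchors is added to their distance,
# hinged at zero and squared. Margin alpha is an illustrative choice.
import numpy as np

def lifted_structured_loss(emb, labels, alpha=1.0):
    # Full pairwise Euclidean distance matrix over the batch ("lifting").
    diff = emb[:, None, :] - emb[None, :, :]
    D = np.sqrt(np.maximum((diff ** 2).sum(-1), 1e-12))
    same = labels[:, None] == labels[None, :]
    n = len(labels)
    losses = []
    for i in range(n):
        for j in range(i + 1, n):
            if not same[i, j]:
                continue  # only positive pairs contribute a term
            # Smooth max over all negatives of both anchors i and j.
            neg = np.concatenate([D[i, ~same[i]], D[j, ~same[j]]])
            J_ij = np.log(np.exp(alpha - neg).sum()) + D[i, j]
            losses.append(max(0.0, J_ij) ** 2)
    return 0.5 * float(np.mean(losses))
```

Using the whole distance matrix is what lets one batch yield O(n^2) pair terms instead of the O(n) terms a fixed pairing would give.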
Conditional t-SNE: Complementary t-SNE embeddings through factoring out prior information
Dimensionality reduction and manifold learning methods such as t-Distributed
Stochastic Neighbor Embedding (t-SNE) are routinely used to map
high-dimensional data into a 2-dimensional space to visualize and explore the
data. However, two dimensions are typically insufficient to capture all
structure in the data, the salient structure is often already known, and it is
not obvious how to extract the remaining information in a similarly effective
manner. To fill this gap, we introduce \emph{conditional t-SNE} (ct-SNE), a
generalization of t-SNE that discounts prior information from the embedding in
the form of labels. To achieve this, we propose a conditioned version of the
t-SNE objective, obtaining a single, integrated, and elegant method. ct-SNE has
one extra parameter over t-SNE; we investigate its effects and show how to
efficiently optimize the objective. Factoring out prior knowledge allows
complementary structure to be captured in the embedding, providing new
insights. Qualitative and quantitative empirical results on synthetic and
(large) real data show that ct-SNE is effective and achieves its goal.
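The core idea described above, discounting label-explained structure from the embedding objective, can be illustrated with a small sketch. This is a loose reading, not the paper's formulation: it rescales the Student-t similarities by a label-dependent factor (standing in for ct-SNE's one extra parameter), with placeholder values.

```python
# Illustrative sketch: rescale t-SNE's low-dimensional similarities q_ij by a
# label-dependent factor so that same-label proximity is "expected" and thus
# discounted. beta_same/beta_diff are placeholders, not the paper's values.
import numpy as np

def conditional_q(Y, labels, beta_same=0.1, beta_diff=1.0):
    diff = Y[:, None, :] - Y[None, :, :]
    q = 1.0 / (1.0 + (diff ** 2).sum(-1))   # Student-t kernel, as in t-SNE
    np.fill_diagonal(q, 0.0)
    same = labels[:, None] == labels[None, :]
    # Down-weight same-label pairs: the embedding need not keep them close
    # to explain their similarity, freeing it to show complementary structure.
    q = q * np.where(same, beta_same, beta_diff)
    return q / q.sum()                       # normalize to a joint distribution
```

Minimizing KL divergence against such a conditioned distribution is what lets structure already captured by the labels be factored out of the layout.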
FAST: A Fully Asynchronous Split Time-Integrator for Self-Gravitating Fluid
We describe a new algorithm for the integration of self-gravitating fluid
systems using the SPH method. We split the Hamiltonian of a self-gravitating fluid
system to the gravitational potential and others (kinetic and internal
energies) and use different time-steps for their integrations. The time
integration is done in a way similar to that used in mixed-variable or
multiple-step-size symplectic schemes. We performed three test calculations. One
was a spherical collapse and another was an explosion. We also performed a
realistic test, in which the initial model was taken from a simulation of
merging galaxies. In all test calculations, we found that the number of
time-steps for the gravitational interaction was reduced by nearly an order of
magnitude when we adopted our integration method. In the case of the realistic
test, in which the dark matter potential dominates the total system, the total
calculation time was significantly reduced. Simulation results were almost the
same as those of simulations with the ordinary individual time-step method.
Our new method achieves good performance without sacrificing the accuracy of
the time integration.
Comment: 14 pages, 8 figures, accepted for publication in PAS
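The splitting idea this abstract describes, different time steps for different parts of the Hamiltonian, can be sketched on a toy system. This is not the paper's SPH code: it is a generic multiple-time-step leapfrog in which a "slow" force (standing in for gravity) is kicked once per outer step while a "fast" force (standing in for the hydrodynamics) is subcycled in between.

```python
# Toy multiple-time-step leapfrog: the slow force gets one kick per outer
# step of size dt; the fast force is integrated with n_sub leapfrog substeps
# inside each outer step. Force split and parameters are illustrative.
def split_leapfrog(x, v, slow_acc, fast_acc, dt, n_sub, n_steps):
    for _ in range(n_steps):
        v += 0.5 * dt * slow_acc(x)      # opening half kick from the slow force
        h = dt / n_sub
        for _ in range(n_sub):           # subcycle the fast force symplectically
            v += 0.5 * h * fast_acc(x)
            x += h * v
            v += 0.5 * h * fast_acc(x)
        v += 0.5 * dt * slow_acc(x)      # closing half kick from the slow force
    return x, v

# Demo: harmonic oscillator with the spring split into a weak slow part and a
# strong fast part; the slow force is evaluated n_sub times less often.
x, v = split_leapfrog(1.0, 0.0,
                      slow_acc=lambda x: -0.1 * x,
                      fast_acc=lambda x: -1.0 * x,
                      dt=0.05, n_sub=5, n_steps=2000)
```

Because the scheme is a composition of symplectic maps, the energy error stays bounded rather than drifting, which is why the step count for the expensive force can be cut without sacrificing the accuracy of the integration.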