Natural Visualizations
This paper demonstrates the prevalence of a shared characteristic between visualizations and images of nature. We have analyzed visualization competitions and user studies of visualizations and found that the more preferred, better-performing visualizations exhibit more natural characteristics. Because the human visual system is tuned to perceive natural images [SO01], testing a visualization for properties similar to those of natural images can indicate how readily our brains absorb the data it presents. In turn, a metric that measures a visualization's similarity to a natural image may help determine that visualization's effectiveness. We have found that comparing the sizes and distribution of the objects in a visualization against natural-image statistics yields results that correlate strongly with viewers' preference for that visualization.
Subsonic investigations of vortex interaction control for enhanced high-alpha aerodynamics of a chine forebody/Delta wing configuration
A proposed concept to alleviate high-alpha asymmetry and lateral/directional instability by decoupling the forebody and wing vortices was studied on a generic chine forebody/60 deg delta wing configuration in the NASA Langley 7- by 10-Foot High-Speed Tunnel. The decoupling technique involved inboard leading-edge flaps of varying span and deflection angle. Six-component force/moment characteristics, surface pressure distributions, and vapor-screen flow visualizations were acquired on the basic wing-body configuration and with both single and twin vertical tails at M sub infinity = 0.1 and 0.4, over the ranges alpha = 0 to 50 deg and beta = -10 to +10 deg. Results are presented which highlight the potential of vortex decoupling via leading-edge flaps for enhanced high-alpha lateral/directional characteristics.
Visualizing and Understanding Sum-Product Networks
Sum-Product Networks (SPNs) are recently introduced deep tractable probabilistic models by which several kinds of inference queries can be answered exactly and in tractable time. Up to now, they have been largely used as black-box density estimators, assessed only by comparing their likelihood scores. In this paper we explore and exploit the inner representations learned by SPNs. We do this with a threefold aim: first, we want to get a better understanding of the inner workings of SPNs; secondly, we seek additional ways to evaluate one SPN model and compare it against other probabilistic models, providing diagnostic tools to practitioners; lastly, we want to empirically evaluate how good and meaningful the extracted representations are, as in a classic Representation Learning framework. In order to do so we revise their interpretation as deep neural networks and we propose to exploit several visualization techniques on their node activations and network outputs under different types of inference queries. To investigate these models as feature extractors, we plug some SPNs, learned in a greedy unsupervised fashion on image datasets, into supervised classification learning tasks. We extract several embedding types from node activations by filtering nodes by their type, by their associated feature abstraction level, and by their scope. In a thorough empirical comparison we prove them to be competitive against those generated from popular feature extractors such as Restricted Boltzmann Machines. Finally, we investigate embeddings generated from random probabilistic marginal queries as a means to compare other tractable probabilistic models on a common ground, extending our experiments to Mixtures of Trees.
Comment: Machine Learning Journal paper (First Online), 24 pages
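The embedding-extraction idea described in this abstract, reading off node activations bottom-up and filtering them by node type to form feature vectors, can be illustrated with a minimal hand-built SPN. The structure, mixture weights, and function names below are illustrative assumptions for a two-variable toy model, not the paper's learned networks:

```python
import numpy as np

def leaf(p, x):
    """Bernoulli leaf density for one binary variable."""
    return p if x == 1 else 1.0 - p

def spn_activations(x1, x2):
    """Evaluate a toy SPN bottom-up, returning every node's activation.

    Leaves are Bernoulli indicators; product nodes combine disjoint
    scopes {X1} and {X2}; the root sum node mixes the products with
    weights that sum to 1, so the root is a valid joint probability.
    """
    # Leaf layer: two candidate distributions per variable
    l1a, l1b = leaf(0.8, x1), leaf(0.3, x1)
    l2a, l2b = leaf(0.6, x2), leaf(0.1, x2)
    # Product layer: factorizations over disjoint scopes
    p1 = l1a * l2a
    p2 = l1b * l2b
    # Sum node (root): convex mixture of the product nodes
    root = 0.7 * p1 + 0.3 * p2
    return {"leaves": [l1a, l1b, l2a, l2b],
            "products": [p1, p2],
            "root": [root]}

def embedding(x1, x2, node_type="products"):
    """Filter activations by node type to form an embedding vector,
    in the spirit of the type-based filtering the abstract describes."""
    return np.array(spn_activations(x1, x2)[node_type])

emb = embedding(1, 0)  # product-node embedding for the input (1, 0)
```

Because the mixture weights are convex and each leaf is normalized, the root activation sums to 1 over all four input assignments, which is the tractability property that distinguishes SPNs from unnormalized deep models.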