Approximations from Anywhere and General Rough Sets
Not all approximations arise from information systems. The problem of fitting
approximations, subjected to some rules (and related data), to information
systems in a rough scheme of things is known as the \emph{inverse problem}. The
inverse problem is more general than the duality (or abstract representation)
problems and was introduced by the present author in her earlier papers. From
the practical perspective, a few (as opposed to one) theoretical frameworks may
be suitable for formulating the problem itself. \emph{Granular operator spaces}
have been introduced and investigated by the present author in her recent work
in the context of antichain-based and dialectical semantics for
general rough sets. The nature of the inverse problem is examined from
number-theoretic and combinatorial perspectives in a higher order variant of
granular operator spaces and some necessary conditions are proved. The results
and the novel approach would be useful in a number of unsupervised and
semi-supervised learning contexts and algorithms.
Comment: 20 pages. Scheduled to appear in IJCRS'2017 LNCS Proceedings, Springer.
Visual and interactive exploration of point data
Point data, such as Unit Postcodes (UPC), can provide very detailed information at fine
scales of resolution. For instance, socio-economic attributes are commonly assigned to
UPC. Hence, they can be represented as points and observable at the postcode level.
Using UPC as a common field allows the concatenation of variables from disparate data
sources that can potentially support sophisticated spatial analysis. However, visualising
UPC in urban areas has at least three limitations. First, at small scales UPC occurrences
can be very dense, making their visualisation as points difficult; conversely, patterns
in the associated attribute values are often hardly recognisable at large scales.
Second, UPC can be used as a common field to allow the concatenation of highly
multivariate data sets with an associated postcode. Finally, socio-economic variables
assigned to UPC (such as the ones used here) can be non-Normal in their distributions
as a result of a large presence of zero values and high variances, which constrain their
analysis using traditional statistics.
This paper discusses a Point Visualisation Tool (PVT), a proof-of-concept system
developed to visually explore point data. Various well-known visualisation techniques
were implemented to enable their interactive and dynamic interrogation. PVT provides
multiple representations of point data to facilitate the understanding of the relations
between attributes or variables as well as their spatial characteristics. Brushing between
alternative views is used to link several representations of a single attribute, as well as
to simultaneously explore more than one variable. PVTâs functionality shows how the
use of visual techniques embedded in an interactive environment enable the exploration
of large amounts of multivariate point data
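The linked-brushing idea described above reduces to a shared selection model that every registered view observes: brushing in one view updates the highlighted subset in all of them. The following is a minimal conceptual sketch; the class and method names (and the example postcodes) are hypothetical, not PVT's actual design:

```python
class View:
    """A hypothetical view that records which point IDs are highlighted."""

    def __init__(self, name):
        self.name = name
        self.highlighted = set()

    def highlight(self, ids):
        self.highlighted = set(ids)


class BrushModel:
    """Shared selection: brushing in any view updates all registered views."""

    def __init__(self):
        self.views = []
        self.selected = set()

    def register(self, view):
        self.views.append(view)

    def brush(self, ids):
        self.selected = set(ids)
        for view in self.views:
            view.highlight(self.selected)


# Two linked representations of the same point data set.
scatter, histogram = View("scatter"), View("histogram")
model = BrushModel()
model.register(scatter)
model.register(histogram)

# Selecting postcodes in one view highlights them in every view.
model.brush({"EH1 1AA", "EH1 2BB"})
print(scatter.highlighted == histogram.highlighted)  # the views now agree
```

In a real system the `highlight` calls would trigger redraws; the point of the pattern is that each representation stays a passive observer of one selection state.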
Conservation Laws and the Multiplicity Evolution of Spectra at the Relativistic Heavy Ion Collider
Transverse momentum distributions in ultra-relativistic heavy ion collisions
carry considerable information about the dynamics of the hot system produced.
Direct comparison with the same spectra from p+p collisions has proven
invaluable to identify novel features associated with the larger system, in
particular, the "jet quenching" at high momentum and apparently much stronger
collective flow dominating the spectral shape at low momentum. We point out
possible hazards of ignoring conservation laws in the comparison of high- and
low-multiplicity final states. We argue that the effects of energy and momentum
conservation actually dominate many of the observed systematics, and that
p+p collisions may be much more similar to heavy ion collisions than generally
thought.
Comment: 15 pages, 14 figures, submitted to PRC; Figures 2, 4, 5, 6, 12 updated,
Tables 1 and 3 added, typo in Tab. V fixed, appendix B partially rephrased,
minor typo in Eq. B1 fixed, minor wording; references added.
On Hilberg's Law and Its Links with Guiraud's Law
Hilberg (1990) supposed that finite-order excess entropy of a random human
text is proportional to the square root of the text length. Assuming that
Hilberg's hypothesis is true, we derive Guiraud's law, which states that the
number of word types in a text grows at least proportionally to the square root
of the text length. Our derivation is based on a mathematical conjecture in
coding theory and on several experiments suggesting that words can be defined
approximately as the nonterminals of the shortest context-free grammar for the
text. Such an operational definition of words can be applied even to texts
deprived of spaces, which do not allow for Mandelbrot's ``intermittent
silence'' explanation of Zipf's and Guiraud's laws. In contrast to
Mandelbrot's, our model assumes some probabilistic long-memory effects in human
narration and might be capable of explaining Menzerath's law.
Comment: To appear in Journal of Quantitative Linguistics.
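The two scaling laws that the abstract connects can be stated compactly. The notation below is assumed for illustration ($n$ is the text length, $E(n)$ the finite-order excess entropy, $V(n)$ the number of word types, $A, B > 0$ constants):

```latex
% Hilberg's hypothesis: the finite-order excess entropy of a text of
% length n grows as the square root of n:
E(n) \approx A\, n^{1/2}.

% Guiraud's law, as derived from it: the number of word types grows at
% least proportionally to the square root of the text length:
V(n) \gtrsim B\, \sqrt{n}.
```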
Stochastic efficiency analysis with risk aversion bounds: a simplified approach
A method of stochastic dominance analysis with respect to a function (SDRF) is described and illustrated. The method, called stochastic efficiency with respect to a function (SERF), orders a set of risky alternatives in terms of certainty equivalents for a specified range of attitudes to risk. It can be applied for conforming utility functions with risk attitudes defined by corresponding ranges of absolute, relative or partial risk aversion coefficients. Unlike conventional SDRF, SERF involves comparing each alternative with all the other alternatives simultaneously, not pairwise, and hence can produce a smaller efficient set than that found by simple pairwise SDRF over the same range of risk attitudes. Moreover, the method can be implemented in a simple spreadsheet with no special software needed.
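As a rough illustration of the SERF idea, the sketch below ranks two risky alternatives by certainty equivalents under negative-exponential (CARA) utility across a grid of absolute risk aversion coefficients, keeping every alternative that is best at some level. All names, the payoff vectors, and the grid are hypothetical; the paper's method also covers relative and partial risk aversion, which this sketch omits:

```python
import math


def certainty_equivalent(outcomes, r):
    """Certainty equivalent under negative-exponential (CARA) utility
    U(w) = -exp(-r*w), with absolute risk aversion coefficient r > 0,
    for equally likely outcomes."""
    mean_utility = sum(math.exp(-r * w) for w in outcomes) / len(outcomes)
    return -math.log(mean_utility) / r


def serf_efficient_set(alternatives, r_grid):
    """Compare all alternatives simultaneously: keep every alternative
    whose certainty equivalent is highest at some risk-aversion level."""
    efficient = set()
    for r in r_grid:
        ces = {name: certainty_equivalent(o, r) for name, o in alternatives.items()}
        efficient.add(max(ces, key=ces.get))
    return efficient


alts = {
    "A": [90, 100, 110],   # lower mean, low spread
    "B": [40, 100, 190],   # higher mean, high spread
}
r_grid = [0.001, 0.01, 0.05, 0.1]
print(serf_efficient_set(alts, r_grid))
```

Here a nearly risk-neutral decision maker (small r) prefers B's higher mean, while a more risk-averse one prefers A's low spread, so both alternatives survive in the efficient set; in a spreadsheet the same computation is a CE column per risk-aversion level and a row-wise maximum.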