Pragmatic Nonsense
Inspired by the early Wittgenstein's concept of nonsense (meaning that which
lies beyond the limits of language), we define two different, yet
complementary, types of nonsense: formal nonsense and pragmatic nonsense. The
simpler notion of formal nonsense is initially defined within Tarski's semantic
theory of truth; the notion of pragmatic nonsense, in turn, is formulated
within the context of the theory of pragmatic truth, also known as quasi-truth,
as formalized by da Costa and his collaborators. While an expression will be
considered formally nonsensical if the formal criteria required for assigning
it any truth-value (whether true, false, pragmatically true, or pragmatically
false) are not met, a (well-formed) formula will be considered pragmatically
nonsensical if the pragmatic criteria (inscribed within the context of
scientific practice) required for assigning it any truth-value are not met.
Thus, in the context of the theory of
pragmatic truth, any (well-formed) formula of a formal language interpreted on
a simple pragmatic structure will be considered pragmatically nonsensical if
the set of primary sentences of such structure is not well-built, that is, if
it does not include the relevant observational data and/or theoretical results,
or if it does include sentences that are inconsistent with such data.
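As a minimal sketch of the framework this abstract presupposes (the notation
follows da Costa's quasi-truth setting; the symbols and the gloss below are
our reconstruction, not part of the output itself): a simple pragmatic
structure is a triple

    \mathfrak{A} = \langle A, (R_i)_{i \in I}, P \rangle

where A is a nonempty domain, each R_i is a partial relation on A, and P is
the set of primary sentences (the accepted observational data and theoretical
results). A sentence \varphi is then quasi-true in \mathfrak{A} iff there is
a total structure \mathfrak{B} extending \mathfrak{A} such that
\mathfrak{B} \models P and \mathfrak{B} \models \varphi; when P is not
well-built, this clause cannot be evaluated at all, which is what the
abstract calls pragmatic nonsense.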
The simplicity bubble effect as a zemblanitous phenomenon in learning systems
The ubiquity of Big Data and machine learning in society evinces the need
for further investigation of their fundamental limitations. In this paper, we
extend the "too-much-information-tends-to-behave-like-very-little-information"
phenomenon to formal knowledge about lawlike universes and arbitrary
collections of computably generated datasets. This gives rise to the simplicity
bubble problem, which refers to a learning algorithm equipped with a formal
theory that can be deceived by a dataset into finding a locally optimal model
that it deems to be the global one, even though the actual high-complexity
globally optimal model unpredictably diverges from this low-complexity local
optimum. Zemblanity is defined as an undesirable but expected finding that
reveals an underlying problem or negative consequence in a given model or
theory, one that is in principle predictable if the formal theory contains
sufficient information. Therefore, we argue that there is a ceiling above which
formal knowledge cannot further decrease the probability of zemblanitous
findings, should the randomly generated data made available to the learning
algorithm and formal theory be sufficiently large in comparison to their joint
complexity.
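One hedged way to render the ceiling claim in symbols (a paraphrase of the
abstract's own wording; K(\cdot) for prefix algorithmic complexity,
\mathbf{F} for the formal theory, L for the learning algorithm, and D for the
dataset are labels we introduce here, not the paper's):

    K(D) \gg K(\langle \mathbf{F}, L \rangle)
      \;\Longrightarrow\;
    \Pr[\text{zemblanitous finding}] \geq c > 0

That is, once the randomly generated data is sufficiently large relative to
the joint complexity of theory and learner, no amount of additional formal
knowledge can push the probability of zemblanitous findings below some fixed
floor c.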