Algorithmic Randomness as Foundation of Inductive Reasoning and Artificial Intelligence
This article is a brief personal account of the past, present, and future of
algorithmic randomness, emphasizing its role in inductive inference and
artificial intelligence. It is written for a general audience interested in
science and philosophy. Intuitively, randomness is a lack of order or
predictability. If randomness is the opposite of determinism, then algorithmic
randomness is the opposite of computability. Besides many other things, these
concepts have been used to quantify Ockham's razor, solve the induction
problem, and define intelligence.
Comment: 9 LaTeX pages
Algorithmic Fairness from a Non-ideal Perspective
Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world, and offered a variety of algorithms in attempts to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to fair machine learning to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and the perfectly just world. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of proposed policies, naive applications of ideal thinking can lead to misguided interventions. In this paper, we demonstrate a connection between the fair machine learning literature and the ideal approach in political philosophy, and argue that the increasingly apparent shortcomings of proposed fair machine learning algorithms reflect broader troubles
faced by the ideal approach. We conclude with a critical discussion of the harms of misguided solutions, a
reinterpretation of impossibility results, and directions for future research.
Numerical Investigation of Graph Spectra and Information Interpretability of Eigenvalues
We undertake an extensive numerical investigation of the graph spectra of
thousands of regular graphs, a set of random Erd\"os-R\'enyi graphs, the two
most popular types of complex networks, and an evolving genetic network, by
using novel conceptual and experimental tools. Our objective in so doing is to
contribute to an understanding of the meaning of the eigenvalues of a graph
relative to its topological and information-theoretic properties. We introduce
a technique for identifying the most informative eigenvalues of evolving
networks by comparing graph spectra behavior to their algorithmic complexity.
We suggest that these techniques can be extended to further investigate the
behavior of evolving biological networks. In the extended version of this paper
we apply these techniques to seven tissue-specific regulatory networks as a
static example, and to the network of a na\"ive pluripotent immune cell in the
process of differentiating towards a Th17 cell as an evolving example, finding
the most and least informative eigenvalues at every stage.
Comment: Forthcoming in 3rd International Work-Conference on Bioinformatics and Biomedical Engineering (IWBBIO), Lecture Notes in Bioinformatics, 201
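The spectra the abstract studies can be reproduced on a small scale. A minimal sketch, assuming NumPy and a G(n, p) Erd\"os-R\'enyi model (the paper's own tooling and graph families are not shown here):

```python
import numpy as np

def erdos_renyi_adjacency(n, p, rng):
    """Symmetric 0/1 adjacency matrix of a G(n, p) random graph."""
    upper = rng.random((n, n)) < p        # independent edge coin flips
    a = np.triu(upper, k=1)               # keep strict upper triangle
    return (a | a.T).astype(float)        # mirror to make it undirected

rng = np.random.default_rng(0)
a = erdos_renyi_adjacency(200, 0.05, rng)

# The graph spectrum: real eigenvalues of the symmetric adjacency matrix,
# returned in ascending order. For G(n, p) the largest eigenvalue
# concentrates near the mean degree n*p, while the bulk forms a semicircle.
spectrum = np.linalg.eigvalsh(a)
print(len(spectrum), round(spectrum[-1], 2))
```

Comparing such spectra across graph families (regular, random, complex-network) is the starting point for the eigenvalue-informativeness analysis the abstract describes.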
Approximations of Algorithmic and Structural Complexity Validate Cognitive-behavioural Experimental Results
We apply methods for estimating the algorithmic complexity of sequences to
behavioural sequences from three landmark studies of animal behaviour, each of
increasing sophistication: foraging communication by ants, flight patterns of
fruit flies, and tactical deception and competition strategies in rodents. In
each case, we demonstrate that approximations of Logical Depth and
Kolmogorov-Chaitin complexity capture and validate previously reported results,
in contrast to other measures such as Shannon entropy, compression length, or ad hoc measures.
Our method is practically useful when dealing with short sequences, such as
those often encountered in cognitive-behavioural research. Our analysis
supports and reveals non-random behaviour (by Logical Depth and Kolmogorov
complexity) in flies even in the absence of external stimuli, and confirms the
"stochastic" behaviour of transgenic rats when faced with a competitor that
they cannot defeat by counter-prediction. The method constitutes a formal
approach for testing hypotheses about the mechanisms underlying animal
behaviour.
Comment: 28 pages, 7 figures and 2 tables
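The contrast the abstract draws between Shannon entropy and algorithmic-complexity approximations can be illustrated with a toy example. This is not the paper's method (which uses Logical Depth and Coding-Theorem-style estimates of Kolmogorov-Chaitin complexity); it is a sketch of the compression baseline, with two hypothetical binary choice sequences:

```python
import math
import zlib
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Empirical Shannon entropy in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def compressed_length(s: str) -> int:
    """zlib-compressed size in bytes: a crude upper bound on
    Kolmogorov complexity (unreliable for very short strings)."""
    return len(zlib.compress(s.encode(), 9))

# Hypothetical left/right choice sequences with identical symbol
# frequencies, so per-symbol entropy cannot distinguish them,
# while compressed length is sensitive to the periodic structure.
periodic = "LR" * 32
irregular = "LLRLRRLLRRRLRLLRRLRLLRRRLLRLRRLL" * 2

print(shannon_entropy(periodic), shannon_entropy(irregular))
print(compressed_length(periodic), compressed_length(irregular))
```

Both sequences have entropy of exactly 1 bit per symbol, yet the periodic one compresses to fewer bytes, which is the kind of structural difference that entropy-style measures miss and algorithmic measures aim to capture.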
On the Complexity and Behaviour of Cryptocurrencies Compared to Other Markets
We show that the behaviour of Bitcoin has interesting similarities to stock
and precious metal markets, such as gold and silver. We report that whilst
Litecoin, the second largest cryptocurrency, closely follows Bitcoin's
behaviour, it does not show all the reported properties of Bitcoin. Agreements
between apparently disparate complexity measures have been found, and it is
shown that statistical, information-theoretic, algorithmic and fractal measures
have different but interesting capabilities of clustering families of markets
by type. The report is particularly interesting because of the range and novel
use of some measures of complexity to characterize price behaviour, because of
the IRS designation of Bitcoin as an investment property and not a currency,
and the announcement of the Canadian government's own electronic currency
MintChip.
Comment: 16 pages, 11 figures, 4 tables
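One of the families of measures the abstract mentions can be sketched on synthetic data. The following is a toy illustration only, not the paper's actual measures or datasets: it binarizes price moves into an up/down sequence and uses compression ratio as a crude algorithmic-complexity feature for comparing (hypothetical) market series:

```python
import zlib
import numpy as np

def binarize_returns(prices):
    """'1' for an up move, '0' otherwise, over a price series."""
    diffs = np.diff(np.asarray(prices, dtype=float))
    return "".join("1" if d > 0 else "0" for d in diffs)

def compression_ratio(prices):
    """Compressed size over raw size of the up/down sequence: lower
    values mean more regular (more compressible) price behaviour."""
    bits = binarize_returns(prices).encode()
    return len(zlib.compress(bits, 9)) / len(bits)

rng = np.random.default_rng(1)
# Hypothetical series: a random walk vs. a strongly trending price.
random_walk = np.cumsum(rng.standard_normal(2000)) + 100.0
trending = np.linspace(100.0, 200.0, 2000) + 0.01 * rng.standard_normal(2000)

print(compression_ratio(random_walk), compression_ratio(trending))
```

The trending series yields an almost-constant up/down sequence and compresses far better than the random walk; clustering markets by vectors of such features (statistical, information-theoretic, algorithmic, fractal) is the kind of comparison the abstract reports.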