Information Theory for Complex Systems Scientists
In the 21st century, many of the crucial scientific and technical issues
facing humanity can be understood as problems associated with understanding,
modelling, and ultimately controlling complex systems: systems composed of a
large number of non-trivially interacting components whose collective behaviour
can be difficult to predict. Information theory, a branch of mathematics
historically associated with questions about encoding and decoding messages,
has emerged as something of a lingua franca for those studying complex systems,
far exceeding its original narrow domain of communication systems engineering.
In the context of complexity science, information theory provides a set of
tools that allow researchers to uncover the statistical and effective
dependencies between interacting components, relationships between systems and
their environment, and mereological whole-part relationships, while remaining
sensitive to non-linearities missed by commonly used parametric statistical models.
In this review, we provide an accessible introduction to the core of modern
information theory, aimed specifically at aspiring (and established) complex
systems scientists. We begin with standard measures, such as Shannon entropy,
relative entropy, and mutual information, before building to more advanced
topics, including information dynamics, measures of statistical complexity,
information decomposition, and effective network inference. In addition to
detailing the formal definitions, we make an effort to discuss how information
theory can be interpreted and to develop the intuition behind abstract concepts
like "entropy," in the hope that this will enable interested readers to
understand what information is, and how it is used, at a more fundamental level.
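As a concrete illustration of the standard measures discussed above, the following minimal sketch computes the Shannon entropy and mutual information of a toy joint distribution over two binary variables; the distribution itself is invented for illustration.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Toy joint distribution of two correlated binary variables (invented values).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)   # marginal distribution of X
p_y = p_xy.sum(axis=0)   # marginal distribution of Y

h_x = entropy(p_x)                    # H(X) = 1 bit (uniform marginal)
h_joint = entropy(p_xy.ravel())       # joint entropy H(X, Y)
mi = h_x + entropy(p_y) - h_joint     # I(X;Y) = H(X) + H(Y) - H(X,Y)
print(f"H(X) = {h_x:.2f} bits, I(X;Y) = {mi:.3f} bits")
```

Because the off-diagonal entries are small, knowing X reduces uncertainty about Y, which shows up as a positive mutual information.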
Generalized Decomposition of Multivariate Information
Since its introduction, the partial information decomposition (PID) has
emerged as a powerful information-theoretic technique for studying the
structure of (potentially higher-order) interactions in complex systems.
Despite its utility, the applicability of the PID is restricted by the need to
assign elements as either inputs or targets, as well as the specific structure
of the mutual information itself. Here, we introduce a generalized information
decomposition that relaxes the source/target distinction while still satisfying
the basic intuitions about information. This approach is based on the
decomposition of the Kullback-Leibler divergence, and consequently allows for
the analysis of any information gained when updating from an arbitrary prior to
an arbitrary posterior. As a result, any information-theoretic measure that
can be written as a Kullback-Leibler divergence admits a decomposition in
the style of Williams and Beer, including the total correlation, the
negentropy, and the mutual information as special cases. In this paper, we
explore how the generalized information decomposition can reveal novel insights
into existing measures, as well as the nature of higher-order synergies. We
show that synergistic information is intimately related to the well-known
Tononi-Sporns-Edelman (TSE) complexity, and that synergistic information
requires an integration/segregation balance similar to that of high TSE complexity.
Finally, we end with a discussion of how this approach fits into other attempts
to generalize the PID and the possibilities for empirical applications.
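To make the prior-to-posterior framing concrete: for two variables, the KL divergence from the product of marginals (the "prior") to the joint (the "posterior") is the total correlation, which coincides with the mutual information I(X;Y). A minimal numerical sketch with an invented toy distribution:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Toy joint distribution over two binary variables (invented for illustration).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

# Updating from the product-of-marginals prior to the joint posterior:
# for two variables this KL divergence is the total correlation, which
# equals the mutual information I(X;Y).
tc = kl(p_xy.ravel(), np.outer(p_x, p_y).ravel())
print(f"TC = I(X;Y) = {tc:.3f} bits")
```

With more than two variables the same divergence becomes the full total correlation, one of the special cases the decomposition covers.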
Evolving higher-order synergies reveals a trade-off between stability and information integration capacity in complex systems
There has recently been an explosion of interest in how "higher-order"
structures emerge in complex systems. This "emergent" organization has been
found in a variety of natural and artificial systems, although at present the
field lacks a unified understanding of what the consequences of higher-order
synergies and redundancies are for systems. Typical studies treat the presence
(or absence) of synergistic information as a dependent variable and report
changes in the level of synergy in response to some change in the system. Here,
we attempt to flip the script: rather than treating higher-order information as
a dependent variable, we use evolutionary optimization to evolve Boolean
networks with significant higher-order redundancies, synergies, or statistical
complexity. We then analyse these evolved populations of networks using
established tools for characterizing discrete dynamics: the number of
attractors, average transient length, and Derrida coefficient. We also assess
the capacity of the systems to integrate information. We find that high-synergy
systems are unstable and chaotic, but with a high capacity to integrate
information. In contrast, evolved redundant systems are extremely stable, but
have negligible capacity to integrate information. Finally, the complex systems
that balance integration and segregation (known as Tononi-Sporns-Edelman
complexity) show features of both chaoticity and stability, with a greater
capacity to integrate information than the redundant systems while being more
stable than the random and synergistic systems. We conclude that there may be a
fundamental trade-off between the robustness of a system's dynamics and its
capacity to integrate information (which inherently requires flexibility and
sensitivity), and that certain kinds of complexity naturally balance this
trade-off.
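The dynamical measures named above (number of attractors, transient length, and a Derrida-style sensitivity coefficient) can be computed exhaustively for small Boolean networks. The sketch below uses a hypothetical three-node network with hand-picked update rules, not the evolved networks from the study:

```python
import itertools

# Hypothetical 3-node Boolean network with hand-picked synchronous update rules.
def step(state):
    a, b, c = state
    return (b and c, a or c, not a)

def trajectory(state):
    """Iterate until a state repeats; return (transient states, attractor cycle)."""
    seen, seq = {}, []
    while state not in seen:
        seen[state] = len(seq)
        seq.append(state)
        state = step(state)
    start = seen[state]
    return seq[:start], seq[start:]

states = list(itertools.product([False, True], repeat=3))

# Attractors (as canonically sorted cycles) and average transient length.
attractors = {tuple(sorted(att)) for _, att in map(trajectory, states)}
mean_transient = sum(len(tr) for tr, _ in map(trajectory, states)) / len(states)

# One-step Derrida-style sensitivity: average Hamming distance between the
# updates of a state and of its single-bit-flipped neighbour.
def hamming(u, v):
    return sum(x != y for x, y in zip(u, v))

dists = []
for s in states:
    for i in range(3):
        flipped = list(s)
        flipped[i] = not flipped[i]
        dists.append(hamming(step(s), step(tuple(flipped))))
derrida = sum(dists) / len(dists)

print(len(attractors), mean_transient, round(derrida, 2))
```

For networks of this size the full state space (2^3 states) can be enumerated; the evolved networks in the study require the same measures but at larger scale.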
Next-to-Minimal Supersymmetric Model Higgs Scenarios for Partially Universal GUT Scale Boundary Conditions
We examine the extent to which it is possible to realize the NMSSM "ideal
Higgs" models espoused in several papers by Gunion et al. in the context of
partially universal GUT scale boundary conditions. To this end we use the
powerful methodology of nested sampling. We pay particular attention to whether
ideal-Higgs-like points not only pass LEP constraints but are also acceptable
in terms of the numerous constraints now available, including those from the
Tevatron and B-factory data, and the relic density Ωh².
In general for this particular methodology and range of parameters chosen, very
few points corresponding to said previous studies were found, and those that
were found were, at best, displaced from the preferred relic density value.
Instead, there exists a class of points, which combine a mostly singlet-like
Higgs with a mostly singlino-like neutralino coannihilating with the lightest
stau, that are able to pass all of the implemented constraints. It seems that
the spin-independent direct detection cross
section acts as a key discriminator between ideal Higgs points and the hard to
detect singlino-like points.
Simulating Charged Defects at Database Scale
Point defects have a strong influence on the physical properties of
materials, often dominating the electronic and optical behavior in
semiconductors and insulators. The simulation and analysis of point defects is
therefore crucial for understanding the growth and operation of materials,
especially for optoelectronic applications. In this work, we present a
general-purpose Python framework for the analysis of point defects in
crystalline materials, as well as a generalized workflow for their treatment
with high-throughput simulations. The distinguishing feature of our approach is
an emphasis on a unique, unit-cell, structure-only definition of point defects
that decouples the defect definition from the specific supercell representation
used to simulate the defect. This allows the results of first-principles
calculations to be aggregated into a database without extensive provenance
information and is a crucial step in building a persistent database of point
defects that can grow over time, a key component towards realizing the idea of
a "defect genome" that can yield more complex relationships governing the
behavior of defects in materials. We demonstrate several examples of the
approach for three technologically relevant materials and highlight current
pitfalls that must be considered when employing these methodologies, as well as
their potential solutions.
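The decoupling idea described above can be sketched as follows: a defect's identity lives in the unit cell (species, site, charge state), while any supercell representation is derived data. All names here are hypothetical and illustrative, not the framework's actual API:

```python
from dataclasses import dataclass

# Hypothetical sketch: a defect is defined once, against the bulk unit cell;
# supercell construction is a separate, reproducible transformation.
@dataclass(frozen=True)
class VacancyDefect:
    bulk_formula: str   # bulk material, e.g. "GaN"
    site_species: str   # species removed, e.g. "N"
    site_index: int     # symmetry-inequivalent site in the unit cell
    charge: int         # charge state of the defect

    def supercell_label(self, size):
        # The supercell representation is derived data, not part of the identity,
        # so results from different supercell sizes can share one defect entry.
        return (f"V_{self.site_species}^{self.charge:+d} "
                f"in {size}x{size}x{size} {self.bulk_formula}")

v_n = VacancyDefect("GaN", "N", 0, +1)
print(v_n.supercell_label(3))  # V_N^+1 in 3x3x3 GaN
```

Because the identity is structure-only and supercell-independent, calculations from different groups or different supercell sizes can be aggregated under the same database key, which is the property the abstract emphasizes.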
Genome-wide association analysis and functional annotation of positional candidate genes for feed conversion efficiency and growth rate in pigs
This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration as part of the ECO-FCE project under grant agreement No. 311794.
Feed conversion efficiency is a measure of how well an animal converts feed into live weight, and it is typically expressed as the feed conversion ratio (FCR). FCR and related traits such as growth rate (e.g. days to 110 kg, D110) are of high interest to animal breeders, farmers, and society due to their implications for animal performance, feeding costs, and environmental sustainability. The objective of this study was to identify genomic regions associated with FCR and D110 in pigs. A total of 952 terminal-line boars, showing individual variation in FCR, were genotyped using 60K SNP chips. Markers were tested for associations with estimated breeding values (EBV) for FCR and D110. For FCR, the largest number of associated SNPs was located on chromosomes 4 (30 SNPs), 1 (25 SNPs), X (15 SNPs) and 6 (12 SNPs). The most prominent genomic regions for D110 were identified on chromosomes 15 (10 SNPs), 1 and 4 (both 9 SNPs). The most significantly associated SNPs for FCR and D110 mapped 129.8 Kb from METTL11B (chromosome 4) and 32 Kb from MBD5 (chromosome 15), respectively. A list of positional genes closest to significantly associated SNPs was used to identify enriched pathways and biological functions related to the QTL for both traits. A number of candidate genes were significantly overrepresented in pathways of immune cell trafficking, lymphoid tissue structure, organ morphology, endocrine system function, lipid metabolism, and energy production. After resequencing the coding region of selected positional and functional candidate genes, six SNPs were genotyped in a subset of boars. SNPs in PRKDC, SELL, NR2E1 and AKR1C3 showed significant associations with EBVs for FCR/D110.
The study revealed a number of chromosomal regions and candidate genes affecting FCR/D110 and pointed to corresponding biological pathways related to lipid metabolism, olfactory reception, and immunological status.
Variations in hypoxia impairs muscle oxygenation and performance during simulated team-sport running
Purpose: To quantify the effect of acute hypoxia on muscle oxygenation and power during simulated team-sport running. Methods: Seven individuals performed repeated and single sprint efforts, embedded in a simulated team-sport running protocol, on a non-motorized treadmill in normoxia (sea level) and acute normobaric hypoxia (simulated altitudes of 2,000 and 3,000 m). Mean and peak power were quantified during all sprints and repeated sprints. Mean total work, heart rate, blood oxygen saturation, and quadriceps muscle deoxyhaemoglobin concentration (assessed via near-infrared spectroscopy) were measured over the entire protocol. A linear mixed model was used to estimate performance and physiological effects across each half of the protocol. Changes were expressed in standardized units for assessment of magnitude. Uncertainty in the changes was expressed as a 90% confidence interval and interpreted via non-clinical magnitude-based inference. Results: Mean total work was reduced at 2,000 m (−10%, 90% confidence limits ±6%) and 3,000 m (−15%, ±5%) compared with sea level. Mean heart rate was reduced at 3,000 m compared with 2,000 m (−3, ±3 min(−1)) and sea level (−3, ±3 min(−1)). Blood oxygen saturation was lower at 2,000 m (−8, ±3%) and 3,000 m (−15, ±2%) compared with sea level. Sprint mean power across the entire protocol was reduced at 3,000 m compared with 2,000 m (−12%, ±3%) and sea level (−14%, ±4%). In the second half of the protocol, sprint mean power was reduced at 3,000 m compared to 2,000 m (−6%, ±4%). Sprint mean peak power across the entire protocol was lowered at 2,000 m (−10%, ±6%) and 3,000 m (−16%, ±6%) compared with sea level. During repeated sprints, mean peak power was lower at 2,000 m (−8%, ±7%) and 3,000 m (−8%, ±7%) compared with sea level. In the second half of the protocol, repeated sprint mean power was reduced at 3,000 m compared to 2,000 m (−7%, ±5%) and sea level (−9%, ±5%).
Quadriceps muscle deoxyhaemoglobin concentration was lowered at 3,000 m compared to 2,000 m (−10, ±12%) and sea level (−11, ±12%). Conclusions: Simulated team-sport running is impaired at 3,000 m compared to 2,000 m and sea level, likely due to higher muscle deoxygenation.
Hybrid Adaptive Filter development for the minimisation of transient fluctuations superimposed on electrotelluric field recordings mainly by magnetic storms
The method of Hybrid Adaptive Filtering (HAF) aims to recover the recorded electric field signals from anomalies of magnetotelluric origin induced mainly by magnetic storms. An adaptive filter incorporating neuro-fuzzy technology has been developed to remove any significant distortions from the equivalent magnetic field signal, as retrieved from the original electric field signal by reversing the magnetotelluric method. Testing with further unseen data verifies the reliability of the model and demonstrates the effectiveness of the HAF method.
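The general principle of adaptively removing an induced component by exploiting a reference signal can be illustrated with a plain LMS adaptive filter on synthetic data. This is a simplified stand-in for the neuro-fuzzy HAF described above; all signals, tap counts, and coupling coefficients are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: the recorded "electric field" is a slow signal of
# interest plus a component causally induced by a measured "magnetic"
# disturbance (invented FIR coupling).
n = 2000
t = np.arange(n)
reference = rng.standard_normal(n)
induced = np.convolve(reference, [0.5, -0.3, 0.2])[:n]   # causal coupling
signal = np.sin(2 * np.pi * t / 400)                     # field of interest
recorded = signal + induced

# LMS adaptive filter: learn the reference-to-induced coupling, subtract it.
taps, mu = 5, 0.01
w = np.zeros(taps)
cleaned = np.zeros(n)
for i in range(taps - 1, n):
    x = reference[i - taps + 1:i + 1][::-1]   # most recent samples first
    e = recorded[i] - w @ x                   # error doubles as cleaned output
    w += mu * e * x                           # LMS weight update
    cleaned[i] = e

# After adaptation, the cleaned trace should track the signal of interest
# more closely than the raw recording does (compare the last 500 samples).
err_raw = np.mean((recorded[-500:] - signal[-500:]) ** 2)
err_cln = np.mean((cleaned[-500:] - signal[-500:]) ** 2)
print(err_cln < err_raw)
```

The HAF method replaces this fixed linear update with a neuro-fuzzy model, but the filtering objective, predicting and subtracting the storm-induced distortion, is the same.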
Serotonergic psychedelics LSD & psilocybin increase the fractal dimension of cortical brain activity in spatial and temporal domains.
Psychedelic drugs, such as psilocybin and LSD, represent unique tools for researchers investigating the neural origins of consciousness. Currently, one of the most compelling theories of how psychedelics exert their effects holds that they increase the complexity of brain activity and move the system towards a critical point between order and disorder, creating more dynamic and complex patterns of neural activity. While the concept of criticality is of central importance to this theory, few of the published studies on psychedelics investigate it directly, testing instead related measures such as algorithmic complexity or Shannon entropy. We propose using the fractal dimension of functional activity in the brain as a measure of complexity, since findings from physics suggest that as a system organizes towards criticality, it tends to take on a fractal structure. We tested two different measures of fractal dimension, one spatial and one temporal, using fMRI data from volunteers under the influence of both LSD and psilocybin. The first was the fractal dimension of cortical functional connectivity networks and the second was the fractal dimension of BOLD time-series. In addition to the fractal measures, we used a well-established, non-fractal measure of signal complexity and show that they behave similarly. We were able to show that both psychedelic drugs significantly increased the fractal dimension of functional connectivity networks, and that LSD significantly increased the fractal dimension of BOLD signals, with psilocybin showing a non-significant trend in the same direction. With both LSD and psilocybin, we were able to localize changes in the fractal dimension of BOLD signals to brain areas assigned to the dorsal-attention network. These results show that psychedelic drugs increase the fractal dimension of activity in the brain, and we see this as an indicator that the changes in consciousness triggered by psychedelics are associated with evolution towards a critical zone.
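The temporal measure can be illustrated with Higuchi's method, a standard estimator of the fractal dimension of a time series. This is a generic sketch of the technique, not necessarily the study's exact implementation:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D time series (Higuchi's method)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            # Higuchi normalization of the curve length at scale k
            lk.append(dist * (n - 1) / ((len(idx) - 1) * k) / k)
        lengths.append(np.mean(lk))
    # The fractal dimension is the slope of log L(k) against log(1/k).
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

# A straight line has fractal dimension 1; white noise approaches 2.
print(round(higuchi_fd(np.arange(1000.0)), 2))  # 1.0
```

Applied to BOLD time-series, higher values of this estimate indicate rougher, more complex signals, which is the direction of change the abstract reports under LSD.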
Funding: NIHR; Wellcome; NSF-NRT; MRC; Beckley Foundation; Alex Mosley Charitable Trust; Ad Astra Chandaria Foundation; Neuro-psychoanalysis Foundation; Multidisciplinary Association for Psychedelic Studies; The Heffter Research Institute.