341 research outputs found
Chains of rotational tori and filamentary structures close to high multiplicity periodic orbits in a 3D galactic potential
This paper discusses phase space structures encountered in the neighborhood
of periodic orbits with high order multiplicity in a 3D autonomous Hamiltonian
system with a potential of galactic type. We consider 4D spaces of section and
we use the method of color and rotation [Patsis and Zachilas 1994] in order to
visualize them. As examples we use two orbits, one 2-periodic and one 7-periodic. We investigate the structure of multiple tori around them in the 4D surface of section and, in addition, study the orbital behavior in the neighborhood of the corresponding simple unstable periodic orbits. By initially considering a few consequents in the neighborhood of the orbits in both cases, we find a structure in the space of section which is in direct
correspondence with what is observed in a resonance zone of a 2D autonomous
Hamiltonian system. However, in our 3D case rotational tori appear in place of the islands of stability, while the chaotic zone connecting the points of the unstable periodic orbit is replaced by filaments extending into 4D with a smooth color variation. As the number of intersections increases, the consequents of an orbit started in the neighborhood of the unstable periodic orbit diffuse in phase space and form a cloud that occupies a large volume surrounding the region containing the rotational tori. In this cloud the colors of the points
are mixed. The same structures have been observed in the neighborhood of all
m-periodic orbits we have examined in the system. This indicates a generic
behavior.
Comment: 12 pages, 22 figures. Accepted for publication in the International Journal of Bifurcation and Chaos
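The construction described above (a 4D space of section for a 3D autonomous Hamiltonian system, visualized by color and rotation) can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual model: it uses a simple triaxial logarithmic potential with made-up axis ratios, integrates one orbit with a fixed-step RK4 scheme, and records the 4D consequents (x, y, px, py) at upward crossings of the z = 0 plane. In the color-and-rotation method one would plot three of these coordinates in 3D and map the fourth to color.

```python
import numpy as np

# Hypothetical axis ratios and core radius for a stand-in triaxial
# logarithmic potential (not the paper's galactic model).
Q2, R2, C2 = 0.81, 0.64, 0.1

def accel(r):
    x, y, z = r
    d = x*x + y*y/Q2 + z*z/R2 + C2
    return -np.array([x, y/Q2, z/R2]) / d

def energy(r, v):
    x, y, z = r
    return 0.5*(v @ v) + 0.5*np.log(x*x + y*y/Q2 + z*z/R2 + C2)

def rk4_step(r, v, dt):
    # Classical fixed-step RK4 on the coupled (position, velocity) system.
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5*dt*k1v, accel(r + 0.5*dt*k1r)
    k3r, k3v = v + 0.5*dt*k2v, accel(r + 0.5*dt*k2r)
    k4r, k4v = v + dt*k3v, accel(r + dt*k3r)
    return (r + dt/6*(k1r + 2*k2r + 2*k3r + k4r),
            v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

def surface_of_section(r0, v0, dt=0.01, n_steps=50_000):
    """Collect upward (pz > 0) crossings of the z = 0 plane.
    Each consequent is a 4D point (x, y, px, py); one plots (x, y, px)
    in 3D and maps py to color."""
    r, v = np.asarray(r0, float), np.asarray(v0, float)
    pts = []
    for _ in range(n_steps):
        rn, vn = rk4_step(r, v, dt)
        if r[2] < 0.0 <= rn[2] and vn[2] > 0.0:
            f = -r[2] / (rn[2] - r[2])            # linear interpolation to z = 0
            p, u = r + f*(rn - r), v + f*(vn - v)
            pts.append([p[0], p[1], u[0], u[1]])
        r, v = rn, vn
    return np.array(pts)

pts = surface_of_section([0.5, 0.0, 0.1], [0.0, 0.5, 0.0])  # consequents of one orbit
```

The initial condition and step size are arbitrary; a symplectic integrator would be preferable for very long integrations, but RK4 conserves energy well over the span used here.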
Structure of the WipA protein reveals a novel tyrosine protein phosphatase effector from Legionella pneumophila
Legionnaires' disease is a severe form of pneumonia caused by the bacterium Legionella pneumophila. L. pneumophila pathogenicity relies on the secretion of more than 300 effector proteins by a type IVb secretion system. Among these Legionella effectors, WipA has been studied primarily because of its dependence on a chaperone complex, IcmSW, for translocation through the secretion system, but its role in pathogenicity has remained unknown. In this study, we present the crystal structure of a large fragment of WipA, WipA435. Surprisingly, this structure revealed a serine/threonine phosphatase fold that targets tyrosine-phosphorylated peptides. The structure also revealed a sequence insertion that folds into an α-helical hairpin, the tip of which adopts a canonical coiled-coil structure. The purified protein was a dimer whose interface involves interactions between the coiled coil of one WipA molecule and the phosphatase domain of another. Given the ubiquity of protein–protein interactions mediated by coiled coils, we hypothesize that WipA can thereby transition from a homodimeric state to a heterodimeric state in which the coiled-coil region of WipA engages in a protein–protein interaction with a tyrosine-phosphorylated host target. In conclusion, these findings advance our understanding of the molecular mechanisms of an effector involved in Legionella virulence and may inform approaches to elucidate the function of other effectors
Towards a legal definition of machine intelligence: the argument for artificial personhood in the age of deep learning
The paper dissects the intricacies of Automated Decision Making (ADM) and calls for refining the current legal definition of AI when pinpointing the role of algorithms in the advent of ubiquitous computing, data analytics and deep learning. ADM relies upon a plethora of algorithmic approaches and has already found a wide range of applications in marketing automation, social networks, computational neuroscience, robotics, and other fields. Our main aim here is to explain how a thorough understanding of the layers of ADM could be a good first step in this direction: AI operates on a formula based on several degrees of automation employed in the interaction between the programmer, the user, and the algorithm; this can take various shapes and thus yield different answers to key issues regarding agency. The paper offers a fresh look at the concept of “Machine Intelligence”, which exposes certain vulnerabilities in its current legal interpretation. Most importantly, it helps us explore whether the argument for “artificial personhood” holds any water. To highlight this argument, the analysis proceeds in two parts: Part 1 strives to provide a taxonomy of the various levels of automation that reflects distinct degrees of human–machine interaction and can thus serve as a point of reference for outlining distinct rights and obligations of the programmer and the consumer; driverless cars are used as a case study to explore the several layers of human and machine interaction. These different degrees of automation reflect various levels of complexity in the underlying algorithms, and pose very interesting questions in terms of agency and dynamic tasks carried out by software agents. Part 2 further discusses the intricate nature of the underlying algorithms and artificial neural networks (ANNs) that implement them and considers how one can interpret and utilize observed patterns in acquired data.
Is “artificial personhood” a sufficient legal response to highly sophisticated machine learning techniques employed in decision making that successfully emulate or even enhance human cognitive abilities?
New approaches for studying cortical representations
We review two new approaches for studying cortical representations of sensory stimuli. These exploit optimization algorithms and autoencoders from machine learning, together with high-resolution electrophysiology data. We show how these approaches can shed new light on the information processing and maintenance taking place in neuronal populations. They allow us to study: (1) changes in the precision of error representations as a result of neuromodulation, and (2) differences in the cortical connectivity underlying memory representations for different stimuli
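As a toy illustration of the autoencoder side of these approaches, the sketch below trains a linear autoencoder by gradient descent on synthetic low-rank "population activity". All dimensions, learning rates, and data are invented for illustration and bear no relation to the electrophysiology data reviewed here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_units, n_latent = 200, 8, 2

# Synthetic rank-2 "population activity": latent causes mixed into 8 units.
Z = rng.normal(size=(n_obs, n_latent))
W = rng.normal(size=(n_latent, n_units))
X = Z @ W
X /= X.std()                                   # normalize overall scale

We = 0.1*rng.normal(size=(n_units, n_latent))  # encoder weights
Wd = 0.1*rng.normal(size=(n_latent, n_units))  # decoder weights
lr = 0.01
losses = []
for _ in range(3000):
    H = X @ We                  # encode to the latent space
    R = H @ Wd - X              # reconstruction residual
    losses.append(np.mean(R**2))
    gWd = (2.0/R.size) * H.T @ R            # gradient of mean squared error
    gWe = (2.0/R.size) * X.T @ R @ Wd.T
    We -= lr*gWe
    Wd -= lr*gWd
```

Because the data are exactly rank 2 and the model is linear, the reconstruction error should fall well below its initial value; a nonlinear autoencoder would follow the same training loop with activation functions inserted in the encode/decode steps.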
A study into the layers of automated decision-making: emergent normative and legal aspects of deep learning
The paper dissects the intricacies of automated decision making (ADM) and calls for refining the current legal definition of artificial intelligence (AI) when pinpointing the role of algorithms in the advent of ubiquitous computing, data analytics and deep learning. Whilst coming up with a toolkit to measure algorithmic determination in automated/semi-automated tasks may prove a tedious task for the legislator, our main aim here is to explain how a thorough understanding of the layers of ADM could be a good first step in this direction: AI operates on a formula based on several degrees of automation employed in the interaction between the programmer, the user, and the algorithm. The paper offers a fresh look at AI, which exposes certain vulnerabilities in its current legal interpretation. To highlight this argument, the analysis proceeds in two parts: Part 1 strives to provide a taxonomy of the various levels of automation that reflects distinct degrees of human–machine interaction. Part 2 further discusses the intricate nature of AI algorithms and considers how one can utilize observed patterns in acquired data. Finally, the paper explores the legal challenges that result from user empowerment and the requirement for data transparency
Neural masses and fields in dynamic causal modeling
Dynamic causal modeling (DCM) provides a framework for the analysis of effective connectivity among neuronal subpopulations that subtend invasive (electrocorticograms and local field potentials) and non-invasive (electroencephalography and magnetoencephalography) electrophysiological responses. This paper reviews the suite of neuronal population models including neural masses, fields and conductance-based models that are used in DCM. These models are expressed in terms of sets of differential equations that allow one to model the synaptic underpinnings of connectivity. We describe early developments using neural mass models, where convolution-based dynamics are used to generate responses in laminar-specific populations of excitatory and inhibitory cells. We show that these models, though resting on only two simple transforms, can recapitulate the characteristics of both evoked and spectral responses observed empirically. Using an identical neuronal architecture, we show that a set of conductance based models—that consider the dynamics of specific ion-channels—present a richer space of responses; owing to non-linear interactions between conductances and membrane potentials. We propose that conductance-based models may be more appropriate when spectra present with multiple resonances. Finally, we outline a third class of models, where each neuronal subpopulation is treated as a field; in other words, as a manifold on the cortical surface. By explicitly accounting for the spatial propagation of cortical activity through partial differential equations (PDEs), we show that the topology of connectivity—through local lateral interactions among cortical layers—may be inferred, even in the absence of spatially resolved data. We also show that these models allow for a detailed analysis of structure–function relationships in the cortex. 
Our review highlights the relationships among these models and how the hypothesis asked of empirical data suggests an appropriate model class.
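The "two simple transforms" underlying convolution-based neural mass models are (1) a synaptic convolution with an alpha-type kernel, equivalent to a second-order differential equation, and (2) a static sigmoid mapping membrane potential to firing rate. The sketch below illustrates both for a single population, with generic parameter values that are not those of any specific DCM implementation: the impulse (evoked) response of the kernel peaks at 1/kappa, as the analytic form H*kappa*t*exp(-kappa*t) predicts.

```python
import numpy as np

H, kappa = 3.25, 100.0        # synaptic gain and rate constant (illustrative)

def sigmoid(v, r=0.56):
    """Transform 2: static potential-to-firing-rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-r*v))

# Transform 1: alpha-kernel synaptic convolution, written as the
# equivalent second-order ODE  v'' = H*kappa*u - 2*kappa*v' - kappa**2*v.
dt, T = 1e-5, 0.1
n = int(T/dt)
v = np.zeros(n)
dv = 0.0
for i in range(n - 1):
    u = 1.0/dt if i == 0 else 0.0           # unit impulse input at t = 0
    ddv = H*kappa*u - 2.0*kappa*dv - kappa**2*v[i]
    dv += dt*ddv                             # semi-implicit Euler update
    v[i+1] = v[i] + dt*dv

rate = sigmoid(v)                            # population firing rate
t_peak = np.argmax(v)*dt                     # analytic kernel peaks at 1/kappa
```

Coupled populations (excitatory and inhibitory, across laminae) are built by feeding each population's sigmoid-transformed potential as the input u of the others' kernels.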
Neural masses and fields: modeling the dynamics of brain activity
Working Memory Load Modulates Neuronal Coupling
There is a severe limitation in the number of items that can be held in working memory, yet the neurophysiological basis of this limit remains unknown. We asked whether the capacity limit might be explained by differences in neuronal coupling. We developed a theoretical model based on predictive coding and used it to analyze cross-spectral density data from the prefrontal cortex (PFC), frontal eye fields (FEF), and lateral intraparietal area (LIP). Monkeys performed a change detection task in which the number of objects that had to be remembered (memory load) was varied (1–3 objects in the same visual hemifield). Changes in memory load changed the connectivity in the PFC–FEF–LIP network. Feedback (top-down) coupling broke down when the number of objects exceeded cognitive capacity. Thus, impaired behavioral performance coincided with a breakdown of prediction signals. This provides new insights into the neuronal underpinnings of cognitive capacity and how coupling in a distributed working memory network is affected by memory load
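Cross-spectral density, and the coherence derived from it, can be estimated by averaging periodograms over data segments. The sketch below applies this Welch-style estimator to two synthetic signals sharing a 10 Hz component; the frequencies, delay, and noise level are illustrative, and this is not the paper's LFP data or its modeling analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, nseg, nwin = 256, 20, 256                  # sampling rate, segments, window
t = np.arange(nseg*nwin) / fs
s = np.sin(2*np.pi*10.0*t)                     # shared 10 Hz drive
x = s + 0.5*rng.normal(size=t.size)
y = np.roll(s, 5) + 0.5*rng.normal(size=t.size)  # delayed copy plus noise

# Segment-averaged spectra (Welch-style, rectangular window for brevity).
X = np.fft.rfft(x.reshape(nseg, nwin), axis=1)
Y = np.fft.rfft(y.reshape(nseg, nwin), axis=1)
Sxy = np.mean(X*np.conj(Y), axis=0)            # cross-spectral density
Sxx = np.mean(np.abs(X)**2, axis=0)
Syy = np.mean(np.abs(Y)**2, axis=0)
coh = np.abs(Sxy)**2 / (Sxx*Syy)               # magnitude-squared coherence
freqs = np.fft.rfftfreq(nwin, 1.0/fs)          # bin i corresponds to i Hz here
```

The phase of Sxy at the shared frequency also encodes the delay between the signals, which is the kind of directed information a DCM of coupled areas exploits.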
On conductance-based neural field models
This technical note introduces a conductance-based neural field model that combines biologically realistic synaptic dynamics—based on transmembrane currents—with neural field equations describing the propagation of spikes over the cortical surface. This model allows for fairly realistic inter- and intra-laminar intrinsic connections that underlie spatiotemporal neuronal dynamics. We focus on the response functions of expected neuronal states (such as depolarization) that generate observed electrophysiological signals (like LFP recordings and EEG). These response functions characterize the model's transfer functions and implicit spectral responses to (uncorrelated) input. Our main finding is that both the evoked responses (impulse response functions) and induced responses (transfer functions) show qualitative differences depending upon whether one uses a neural mass or field model. Furthermore, there are differences between the equivalent convolution and conductance models. Overall, all models reproduce a characteristic increase in frequency when inhibition is increased by increasing the rate constants of inhibitory populations. However, convolution and conductance-based models show qualitatively different changes in power, with convolution models showing decreases with increasing inhibition, while conductance models show the opposite effect. These differences suggest that conductance-based field models may be important in empirical studies of cortical gain control or pharmacological manipulations
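The qualitative effect of inhibitory rate constants can be illustrated on a linearized excitatory-inhibitory loop, with second-order (alpha-kernel) transfer functions standing in for the synaptic dynamics. All gains and rate constants below are invented for illustration; the point is only the claim in the text, namely that raising the inhibitory rate constant shifts the spectral peak of the closed-loop response to higher frequencies.

```python
import numpy as np

def spectral_peak(kappa_i, kappa_e=100.0, He=3.25, Hi=22.0, g=503.0):
    """Frequency of the peak of the closed-loop transfer function
    G = He / (1 + g*He*Hi) for an E population under inhibitory feedback.
    All parameters are illustrative (arbitrary units)."""
    w = np.linspace(1.0, 200.0, 4000)          # angular frequency grid
    s = 1j*w
    He_s = He*kappa_e / (s + kappa_e)**2       # excitatory alpha kernel
    Hi_s = Hi*kappa_i / (s + kappa_i)**2       # inhibitory alpha kernel
    G = He_s / (1.0 + g*He_s*Hi_s)             # negative-feedback closed loop
    return w[np.argmax(np.abs(G))]

f_slow = spectral_peak(kappa_i=50.0)           # slower inhibition
f_fast = spectral_peak(kappa_i=100.0)          # faster inhibition
```

The resonance sits near the frequency where the loop's phase lag reaches pi, i.e. where 2*atan(w/kappa_e) + 2*atan(w/kappa_i) = pi, which gives w = sqrt(kappa_e*kappa_i); this is why the peak moves up with kappa_i.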
Linking canonical microcircuits and neuronal activity: Dynamic causal modelling of laminar recordings
Neural models describe brain activity at different scales, ranging from single cells to whole brain networks. Here, we attempt to reconcile models operating at the microscopic (compartmental) and mesoscopic (neural mass) scales to analyse data from microelectrode recordings of intralaminar neural activity. Although these two classes of models operate at different scales, it is relatively straightforward to create neural mass models of ensemble activity that are equipped with priors obtained after fitting data generated by detailed microscopic models. This provides generative (forward) models of measured neuronal responses that retain construct validity in relation to compartmental models. We illustrate our approach using cross spectral responses obtained from V1 during a visual perception paradigm that involved optogenetic manipulation of the basal forebrain. We find that the resulting neural mass model can distinguish between activity in distinct cortical layers – both with and without optogenetic activation – and that cholinergic input appears to enhance (disinhibit) superficial layer activity relative to deep layers. This is particularly interesting from the perspective of predictive coding, where neuromodulators are thought to boost prediction errors that ascend the cortical hierarchy
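The idea of equipping a reduced model with empirical priors obtained from a detailed one can be caricatured in a few lines: fit the reduced model to output generated by the detailed model, then reuse the fitted parameter as a prior mean when fitting measured data. The models and numbers below are toy stand-ins (single- and double-exponential decays), not compartmental or neural mass models.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)

# "Detailed" model (stand-in for a compartmental model): two-exponential decay.
y_detailed = 0.6*np.exp(-3.0*t) + 0.4*np.exp(-8.0*t)

def fit_rate(y, k_grid, prior=None):
    """Fit y ~ a*exp(-k*t) by grid search over k (a solved in closed form).
    If prior=(mean, weight), add a quadratic penalty on k (a MAP estimate)."""
    best_k, best_cost = None, np.inf
    for k in k_grid:
        e = np.exp(-k*t)
        a = (y @ e) / (e @ e)          # least-squares amplitude given k
        cost = np.sum((y - a*e)**2)
        if prior is not None:
            mu, lam = prior
            cost += lam*(k - mu)**2
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

k_grid = np.linspace(1.0, 12.0, 1101)
k_prior = fit_rate(y_detailed, k_grid)          # empirical prior mean

# "Measured" response: reduced model with k = 5 plus observation noise.
y_data = 0.5*np.exp(-5.0*t) + rng.normal(0.0, 0.01, t.size)
k_map = fit_rate(y_data, k_grid, prior=(k_prior, 0.01))
```

In the paper's setting the same logic applies at scale: posteriors from fits to compartmental-model output become priors that keep the neural mass model's parameters in a biophysically plausible regime.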
