
    Monoid automata for displacement context-free languages

    In 2007, Kambites presented an algebraic interpretation of the Chomsky-Schützenberger theorem for context-free languages. We give an interpretation of the corresponding theorem for the class of displacement context-free languages, which are equivalent to well-nested multiple context-free languages. We also obtain a characterization of k-displacement context-free languages in terms of monoid automata and show how such automata can be simulated on two stacks. We introduce simultaneous two-stack automata and compare different variants of their definition. All the definitions considered are shown to be equivalent, based on the geometric interpretation of the memory operations of these automata. Comment: Revised version of a selected paper from the ESSLLI Student Session 2013.
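
    The simulation on two stacks mentioned above can be illustrated with a toy example. Below is a minimal Python sketch (our illustration, not the paper's construction) of a deterministic two-stack recognizer for the non-context-free language a^n b^n c^n, showing the kind of push/pop memory operations such automata perform; all names in the code are hypothetical.

```python
def accepts(word: str) -> bool:
    """Recognize {a^n b^n c^n : n >= 0} with two stacks.

    Stack s1 counts the a's; each b moves one symbol from s1 to s2;
    each c pops s2. The word is accepted iff the letters appear in
    the order a* b* c* and both stacks end empty.
    """
    s1, s2 = [], []
    phase = "a"  # enforce the order a* b* c*
    for ch in word:
        if ch == "a" and phase == "a":
            s1.append("A")          # push one counter per 'a'
        elif ch == "b" and phase in ("a", "b"):
            phase = "b"
            if not s1:
                return False
            s1.pop()                # match a 'b' against an 'a'
            s2.append("B")          # remember it for the 'c' phase
        elif ch == "c" and phase in ("b", "c"):
            phase = "c"
            if not s2:
                return False
            s2.pop()                # match a 'c' against a 'b'
        else:
            return False            # wrong letter or wrong order
    return not s1 and not s2

assert accepts("aabbcc") and accepts("")
assert not accepts("aabbc") and not accepts("abcabc")
```

    A single pushdown stack cannot recognize this language, which is why the interplay of the two stacks is precisely what the geometric interpretation of the memory operations has to capture.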

    Cerebral differences in explicit and implicit emotional processing - An fMRI study

    The processing of emotional facial expressions is a major part of social communication and understanding. In addition to explicit processing, facial expressions are also processed rapidly and automatically in the absence of explicit awareness. We investigated 12 healthy subjects by presenting them with an implicit and an explicit emotional paradigm. The subjects reacted significantly faster in implicit than in explicit trials but did not differ in their error ratio. For the implicit condition, increased signals were observed in particular in the thalami, the hippocampi, the inferior frontal gyri and the right middle temporal region. The analysis of the explicit condition showed increased blood-oxygen-level-dependent signals especially in the caudate nucleus, the cingulum and the right prefrontal cortex. The direct comparison of these two different processes revealed increased activity for explicit trials in the inferior, superior and middle frontal gyri, the middle cingulum and left parietal regions. Additional signal increases were detected in occipital regions, the cerebellum, and the right angular and lingual gyri. Our data partially confirm the hypothesis of different neural substrates for the processing of implicit and explicit emotional stimuli. Copyright (c) 2007 S. Karger AG, Basel.

    The changing nature of ‘Regulation by Information’: Towards real-time regulation?

    The concept of ‘Regulation by Information’ is changing. Past approaches consisted mainly of signalling regulatory intent and indirectly guiding how and when regulatory discretion should be exercised. We suggest that this conceptual understanding must be revisited in view of developing regulatory technologies (RegTech) that allow for a far more proactive integration of data flows into regulatory processes. RegTech is thereby changing the conditions of Regulation by Information. This article uses financial regulation, an information-intensive and highly regulated policy field, to illustrate and analyse RegTech-induced changes to the conditions of Regulation by Information. It finds that the rise of near real-time information flows between market participants and regulatory bodies and, consequently, the need for near real-time regulatory responses at the European Union level have led to an ever higher degree of integration of regulatory software into market data flows. Regulatory software now increasingly shapes the definitions of reporting standards and formats, which in turn shape regulatory choices by influencing information flows. The article shows how this development is likely to extend to other data- and information-dense policy areas outside of financial markets. Critics of Regulation by Information argue that it can lead to a lack of accountability and transparency, increasing the democratic deficit within the European Union. This article scrutinises both continuities and changes in the role and significance of legal principles and procedures used in regulatory oversight, following the evolution of this new form of Regulation by Information within the EU.
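
    To make concrete how regulatory software can encode a reporting standard (a toy sketch under invented field names, not any actual EU reporting format), consider a validator that accepts or rejects trade reports against a machine-readable schema; whatever the schema rejects simply never enters the regulatory data flow.

```python
from datetime import datetime

def _is_iso8601(value: str) -> bool:
    try:
        datetime.fromisoformat(value)
        return True
    except ValueError:
        return False

# Hypothetical reporting schema: field name -> validation rule.
# Real reporting regimes are far richer; this only illustrates how
# software comes to define what counts as a well-formed report.
SCHEMA = {
    "trade_id":  lambda v: isinstance(v, str) and len(v) > 0,
    "isin":      lambda v: isinstance(v, str) and len(v) == 12,
    "quantity":  lambda v: isinstance(v, int) and v > 0,
    "price":     lambda v: isinstance(v, float) and v > 0,
    "timestamp": lambda v: isinstance(v, str) and _is_iso8601(v),
}

def validate_report(report: dict) -> list[str]:
    """Return a list of violations; an empty list means the report conforms."""
    errors = [f"missing field: {f}" for f in SCHEMA if f not in report]
    errors += [f"invalid field: {f}" for f, rule in SCHEMA.items()
               if f in report and not rule(report[f])]
    errors += [f"unknown field: {f}" for f in report if f not in SCHEMA]
    return errors

report = {"trade_id": "T-1", "isin": "DE0001234567", "quantity": 100,
          "price": 101.25, "timestamp": "2024-03-01T14:30:00"}
assert validate_report(report) == []
```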

    The Changing Nature of ‘Regulation by Information’

    Regulation by Information is a concept that has long shaped regulatory practices. Significant transformations in today’s technological landscape are radically changing it. The integration of regulatory technologies, commonly known as RegTech, is revolutionising how data flows are incorporated into regulatory processes. Our article, forthcoming in the European Law Journal, explores the impact of RegTech on Regulation by Information, focusing on financial regulation as a prime example. The evolution of Regulation by Information can be understood through four distinct dimensions. Initially, it involved gathering information to support decision-making; in a second phase, it extended to guiding private actors through decision-making practices. Over time, imposing information and publication standards became crucial for regulatory decision-making. Today, the advent of RegTech has ushered in the fourth dimension, characterised by the integration of regulatory software into market data flows. In the context of financial regulation, the need for near real-time information flows between market participants and regulatory bodies has intensified. Consequently, regulatory software has become instrumental in shaping reporting standards and formats, thus influencing information flows and regulatory choices. This development is not confined to financial markets alone; it is expected to affect other data- and information-intensive areas. We argue that RegTech marks a fundamental change in how Regulation by Information may take place.

    Natural Image Coding in V1: How Much Use is Orientation Selectivity?

    Orientation selectivity is the most striking feature of simple cell coding in V1, and it has been shown to emerge from the reduction of higher-order correlations in natural images in a large variety of statistical image models. The most parsimonious among these models is linear Independent Component Analysis (ICA), whereas second-order decorrelation transformations such as Principal Component Analysis (PCA) do not yield oriented filters. Because of this finding, it has been suggested that the emergence of orientation selectivity may be explained by higher-order redundancy reduction. In order to assess the tenability of this hypothesis, it is an important empirical question how much more redundancy can be removed with ICA in comparison to PCA or other second-order decorrelation methods. This question has not yet been settled, as over the last ten years contradictory results have been reported, ranging from less than five to more than one hundred percent extra gain for ICA. Here, we aim at resolving this conflict by presenting a very careful and comprehensive analysis using three evaluation criteria related to redundancy reduction: in addition to the multi-information and the average log-loss, we compute, for the first time, complete rate-distortion curves for ICA in comparison with PCA. Without exception, we find that the advantage of the ICA filters is surprisingly small. Furthermore, we show that a simple spherically symmetric distribution with only two parameters can fit the data even better than the probabilistic model underlying ICA. Since spherically symmetric models are agnostic with respect to the specific filter shapes, we conclude that orientation selectivity is unlikely to play a critical role for redundancy reduction.
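
    As a rough illustration of the kind of comparison described here (a sketch, not the paper's evaluation pipeline), one can whiten data with PCA and then measure how much an ICA rotation lowers the sum of marginal entropies: because both bases are rotations of the same whitened data, their joint entropies agree, so a lower marginal-entropy sum means lower multi-information. The histogram-based entropy estimator, the synthetic stand-in data, and the scikit-learn usage below are all assumptions of this sketch.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def sum_marginal_entropies(x, bins=64):
    """Histogram estimate of the summed marginal entropies (in nats)."""
    total = 0.0
    for col in x.T:
        p, edges = np.histogram(col, bins=bins, density=True)
        width = edges[1] - edges[0]
        p = p[p > 0]
        total += -np.sum(p * width * np.log(p))  # differential entropy
    return total

rng = np.random.default_rng(0)
# Stand-in for image-patch data: sparse (Laplacian) sources mixed
# linearly, so that ICA has higher-order structure to find.
sources = rng.laplace(size=(20000, 8))
data = sources @ rng.normal(size=(8, 8))

white = PCA(whiten=True).fit_transform(data)          # second-order only
ica = FastICA(whiten="unit-variance",
              random_state=0).fit_transform(data)     # + higher order

# At equal joint entropy, a lower summed marginal entropy means
# more redundancy (multi-information) has been removed.
print("PCA-whitened:", sum_marginal_entropies(white))
print("ICA:         ", sum_marginal_entropies(ica))
```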

    Cortical Surround Interactions and Perceptual Salience via Natural Scene Statistics

    Spatial context in images induces perceptual phenomena associated with salience and modulates the responses of neurons in primary visual cortex (V1). However, the computational and ecological principles underlying contextual effects are incompletely understood. We introduce a model of natural images that includes grouping and segmentation of neighboring features based on their joint statistics, and we interpret the firing rates of V1 neurons as performing optimal inference in this model. We show that this leads to a substantial generalization of divisive normalization, a computation that is ubiquitous in many neural areas and systems. A main novelty in our model is that the influence of the context on a target stimulus is determined by their degree of statistical dependence. We optimized the parameters of the model on natural image patches and then simulated neural and perceptual responses on stimuli used in classical experiments. The model reproduces some rich and complex response patterns observed in V1, such as the contrast dependence, orientation tuning and spatial asymmetry of surround suppression, while also allowing for surround facilitation under conditions of weak stimulation. It also mimics the perceptual salience produced by simple displays and leads to readily testable predictions. Our results provide a principled account of orientation-based contextual modulation in early vision and its sensitivity to the homogeneity and spatial arrangement of inputs, and lend statistical support to the theory that V1 computes visual salience.
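
    The divisive normalization the abstract generalizes can be illustrated with its textbook form (a generic sketch, not the paper's statistically gated model): a neuron's driven response is divided by a constant plus the pooled activity of its surround, and a surround weight can stand in for the model's dependence on inferred statistical homogeneity. All parameter names below are hypothetical.

```python
import numpy as np

def divisive_normalization(drive, surround, sigma=1.0, w=1.0, n=2.0):
    """Standard divisive normalization of a target response.

    drive    : feedforward drive of the target neuron
    surround : drives of the contextual (surround) pool
    sigma    : semi-saturation constant
    w        : surround weight; in the spirit of the paper this would
               be gated by the inferred statistical dependence between
               centre and surround (w -> 0 when they are judged to be
               independent segments, weakening suppression)
    n        : response exponent
    """
    surround = np.asarray(surround, dtype=float)
    return drive**n / (sigma**n + w * np.sum(surround**n))

center = 2.0
homogeneous = [2.0, 2.0, 2.0, 2.0]   # surround similar to the centre
# Strong suppression when centre and surround are grouped together:
print(divisive_normalization(center, homogeneous, w=1.0))   # ~0.24
# Weaker suppression when they are treated as independent segments:
print(divisive_normalization(center, homogeneous, w=0.1))   # ~1.54
```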

    Effects of Anacetrapib in Patients with Atherosclerotic Vascular Disease

    BACKGROUND: Patients with atherosclerotic vascular disease remain at high risk for cardiovascular events despite effective statin-based treatment of low-density lipoprotein (LDL) cholesterol levels. The inhibition of cholesteryl ester transfer protein (CETP) by anacetrapib reduces LDL cholesterol levels and increases high-density lipoprotein (HDL) cholesterol levels. However, trials of other CETP inhibitors have shown neutral or adverse effects on cardiovascular outcomes. METHODS: We conducted a randomized, double-blind, placebo-controlled trial involving 30,449 adults with atherosclerotic vascular disease who were receiving intensive atorvastatin therapy and who had a mean LDL cholesterol level of 61 mg per deciliter (1.58 mmol per liter), a mean non-HDL cholesterol level of 92 mg per deciliter (2.38 mmol per liter), and a mean HDL cholesterol level of 40 mg per deciliter (1.03 mmol per liter). The patients were assigned to receive either 100 mg of anacetrapib once daily (15,225 patients) or matching placebo (15,224 patients). The primary outcome was the first major coronary event, a composite of coronary death, myocardial infarction, or coronary revascularization. RESULTS: During the median follow-up period of 4.1 years, the primary outcome occurred in significantly fewer patients in the anacetrapib group than in the placebo group (1640 of 15,225 patients [10.8%] vs. 1803 of 15,224 patients [11.8%]; rate ratio, 0.91; 95% confidence interval, 0.85 to 0.97; P=0.004). The relative difference in risk was similar across multiple prespecified subgroups. At the trial midpoint, the mean level of HDL cholesterol was higher by 43 mg per deciliter (1.12 mmol per liter) in the anacetrapib group than in the placebo group (a relative difference of 104%), and the mean level of non-HDL cholesterol was lower by 17 mg per deciliter (0.44 mmol per liter), a relative difference of -18%. There were no significant between-group differences in the risk of death, cancer, or other serious adverse events. CONCLUSIONS: Among patients with atherosclerotic vascular disease who were receiving intensive statin therapy, the use of anacetrapib resulted in a lower incidence of major coronary events than the use of placebo. (Funded by Merck and others; Current Controlled Trials number, ISRCTN48678192; ClinicalTrials.gov number, NCT01252953; and EudraCT number, 2010-023467-18.)
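
    As a quick plausibility check on the headline numbers (a back-of-the-envelope sketch; the published rate ratio comes from a time-to-event analysis, not from this crude calculation), the reported event counts already imply a risk ratio close to the stated 0.91:

```python
# Crude event proportions from the reported counts.
events_anacetrapib, n_anacetrapib = 1640, 15225
events_placebo, n_placebo = 1803, 15224

risk_a = events_anacetrapib / n_anacetrapib   # ~0.1077 -> "10.8%"
risk_p = events_placebo / n_placebo           # ~0.1184 -> "11.8%"

# Naive risk ratio; the trial's rate ratio (0.91) comes from a
# time-to-event model, but the crude ratio lands in the same place.
print(f"{risk_a:.3f} / {risk_p:.3f} = {risk_a / risk_p:.2f}")
```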