541 research outputs found

    Model of Visual Contrast Gain Control and Pattern and Noise Masking

    The first stage of the model can be subdivided into a global contrast sensitivity function (a 2-D log-parabolic filter of spatial frequency), followed by an array of sensors having Gabor-pattern receptive fields. The second stage is contrast gain control. At this stage, sensor outputs are subjected to an expansive transformation. The outputs are then pooled and used to inhibit (or “normalize”) each other. Inhibition is strongest between sensors with similar preferences for orientation, spatial frequency, and spatial location. In the final stage of the model, the normalized sensor outputs for each image are subjected to Minkowski pooling. Two-alternative forced-choice detection accuracy is determined by the probability that the difference between pooled outputs exceeds a random sample from the standard normal distribution.
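
    As a rough illustration of the gain-control and pooling stages described here, the sketch below implements divisive normalization, Minkowski pooling, and the standard-normal decision rule in Python. The exponents, saturation constant, sensor count, and inhibition weights are illustrative assumptions, not the paper's fitted values.

```python
# Minimal sketch of divisive gain control, Minkowski pooling, and the 2AFC
# decision rule described in the abstract.  All parameter values are
# illustrative assumptions.
import numpy as np
from scipy.stats import norm

def normalized_responses(excitation, inhibition_weights, p=2.4, q=2.0, z=0.01):
    """Divisive gain control: each sensor's expansive response is divided by a
    weighted pool of other sensors' responses (weights would be strongest
    between sensors with similar orientation, frequency, and position)."""
    drive = excitation ** p
    pool = inhibition_weights @ (excitation ** q)   # pooled inhibitory signal
    return drive / (z + pool)

def detection_accuracy(resp_target, resp_blank, m=4.0):
    """2AFC accuracy: Minkowski-pool the normalized responses to each image and
    pass the pooled difference through a standard-normal decision rule."""
    pooled_t = np.sum(np.abs(resp_target) ** m) ** (1.0 / m)
    pooled_b = np.sum(np.abs(resp_blank) ** m) ** (1.0 / m)
    return norm.cdf(pooled_t - pooled_b)   # P(difference exceeds an N(0, 1) sample)

# Toy usage: 50 sensors, uniform inhibition weights, target-present vs. blank image.
rng = np.random.default_rng(0)
weights = np.full((50, 50), 1.0 / 50)
target = normalized_responses(rng.gamma(2.0, 0.05, 50) + 0.1, weights)
blank = normalized_responses(rng.gamma(2.0, 0.05, 50), weights)
print(f"Predicted 2AFC accuracy: {detection_accuracy(target, blank):.3f}")
```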

    Connecting psychophysical performance to neuronal response properties II: Contrast decoding and detection

    The purpose of this article is to provide mathematical insights into the results of some Monte Carlo simulations published by Tolhurst and colleagues (Clatworthy, Chirimuuta, Lauritzen, & Tolhurst, 2003; Chirimuuta & Tolhurst, 2005a). In these simulations, the contrast of a visual stimulus was encoded by a model spiking neuron or a set of such neurons. The mean spike count of each neuron was given by a sigmoidal function of contrast, the Naka-Rushton function. The actual number of spikes generated on each trial was determined by a doubly stochastic Poisson process. The spike counts were decoded using a Bayesian decoder to give an estimate of the stimulus contrast. Tolhurst and colleagues used the estimated contrast values to assess the model's performance in a number of ways, and they uncovered several relationships between properties of the neurons and characteristics of performance. Although this work made a substantial contribution to our understanding of the links between physiology and perceptual performance, the Monte Carlo simulations provided little insight into why the obtained patterns of results arose or how general they are. We overcame these problems by deriving equations that predict the model's performance. We derived an approximation of the model's decoding precision using Fisher information. We also analyzed the model's contrast detection performance and discovered a previously unknown theoretical connection between the Naka-Rushton contrast-response function and the Weibull psychometric function. Our equations give many insights into the theoretical relationships between physiology and perceptual performance reported by Tolhurst and colleagues, explaining how they arise and how they generalize across the neuronal parameter space.
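
    The encoding/decoding scheme can be illustrated in simplified form: a Naka-Rushton function sets the mean spike count, a plain Poisson draw stands in for the paper's doubly stochastic process, and a maximum-likelihood search over a contrast grid stands in for the Bayesian decoder (equivalent to a flat-prior MAP estimate). All parameter values below are illustrative assumptions.

```python
# Minimal sketch: Naka-Rushton contrast-response function, Poisson spiking,
# and maximum-likelihood contrast decoding.  Parameter values are illustrative.
import numpy as np

def naka_rushton(c, r_max=30.0, c50=0.2, q=2.0):
    """Mean spike count as a sigmoidal function of contrast c in [0, 1]."""
    return r_max * c**q / (c**q + c50**q)

def decode_contrast(spike_count, grid=np.linspace(1e-3, 1.0, 500)):
    """Maximum-likelihood estimate of contrast from one observed spike count."""
    rates = naka_rushton(grid)
    log_lik = spike_count * np.log(rates) - rates   # Poisson log-likelihood (up to a constant)
    return grid[np.argmax(log_lik)]

rng = np.random.default_rng(1)
true_contrast = 0.25
counts = rng.poisson(naka_rushton(true_contrast), size=5)
estimates = [decode_contrast(k) for k in counts]
print("spike counts:", counts, "-> contrast estimates:", np.round(estimates, 3))
```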

    Serial integration of sensory evidence for perceptual decisions and oculomotor responses

    Perceptual decisions often require the integration of noisy sensory evidence over time. This process is formalized with sequential sampling models, where evidence is accumulated up to a decision threshold before a choice is made. Although classical accounts grounded in cognitive psychology tend to consider the process of decision formation and the preparation of the motor response as occurring serially, neurophysiological studies have proposed that decision formation and response preparation occur in parallel and are inseparable (Cisek, 2007; Shadlen et al., 2008). To address this serial vs. parallel debate, we developed a behavioural reverse-correlation protocol in which the stimuli that influence perceptual decisions can be distinguished from the stimuli that influence motor responses. We show that the temporal integration windows supporting these two processes are distinct and largely non-overlapping, suggesting that they proceed in a serial or cascaded fashion.
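
    A minimal sketch of the sequential-sampling idea referred to here: noisy evidence is accumulated until one of two decision bounds is reached. The drift, noise, and threshold values are illustrative assumptions, not estimates from the study.

```python
# Minimal sketch of evidence accumulation to a bound (sequential sampling).
# Drift, noise, and threshold values are illustrative assumptions.
import numpy as np

def accumulate_to_bound(drift=0.1, noise=1.0, threshold=10.0, dt=1.0,
                        max_steps=5000, rng=np.random.default_rng(2)):
    """Accumulate noisy evidence samples until either decision bound is hit.
    Returns (choice, decision_time): choice is +1/-1, or 0 if no bound is reached."""
    evidence = 0.0
    for t in range(1, max_steps + 1):
        evidence += drift * dt + rng.normal(0.0, noise * np.sqrt(dt))
        if abs(evidence) >= threshold:
            return np.sign(evidence), t
    return 0, max_steps

choices, times = zip(*[accumulate_to_bound() for _ in range(200)])
print(f"proportion correct: {np.mean(np.array(choices) == 1):.2f}, "
      f"mean decision time: {np.mean(times):.1f} samples")
```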

    Concept-Guided Chain-of-Thought Prompting for Pairwise Comparison Scaling of Texts with Large Language Models

    Existing text scaling methods often require a large corpus, struggle with short texts, or require labeled data. We develop a text scaling method that leverages the pattern recognition capabilities of generative large language models (LLMs). Specifically, we propose concept-guided chain-of-thought (CGCoT), which uses prompts designed to summarize ideas and identify target parties in texts to generate concept-specific breakdowns, in many ways similar to guidance for human coder content analysis. CGCoT effectively shifts pairwise text comparisons from a reasoning problem to a pattern recognition problem. We then pairwise compare concept-specific breakdowns using an LLM. We use the results of these pairwise comparisons to estimate a scale using the Bradley-Terry model. We use this approach to scale affective speech on Twitter. Our measures correlate more strongly with human judgments than alternative approaches like Wordfish. Besides a small set of pilot data to develop the CGCoT prompts, our measures require no additional labeled data and produce binary predictions comparable to a RoBERTa-Large model fine-tuned on thousands of human-labeled tweets. We demonstrate how combining substantive knowledge with LLMs can create state-of-the-art measures of abstract concepts.
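
    The final scaling step can be illustrated with a small Bradley-Terry fit. The win matrix below is toy data standing in for the LLM's pairwise judgments of concept-specific breakdowns; it is not drawn from the paper.

```python
# Minimal sketch of Bradley-Terry scaling from pairwise "wins" between texts,
# using the standard MM update.  The win matrix is toy data.
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Estimate latent scores s_i such that P(i beats j) = s_i / (s_i + s_j).
    wins[i, j] is the number of times text i was preferred over text j."""
    n = wins.shape[0]
    s = np.ones(n)
    totals = wins + wins.T                           # comparisons between each pair
    for _ in range(n_iter):                          # MM (minorize-maximize) update
        denom = (totals / (s[:, None] + s[None, :])).sum(axis=1)
        s = wins.sum(axis=1) / denom
        s /= s.sum()                                 # fix the overall scale
    return s

# Toy data: text 0 is preferred most often, text 2 least.
wins = np.array([[0, 4, 5],
                 [1, 0, 4],
                 [0, 1, 0]])
print("Bradley-Terry scores:", np.round(bradley_terry(wins), 3))
```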

    Balancing health and financial protection in health benefit package design

    Policymakers face difficult choices over which health interventions to publicly finance. We developed an approach to health benefits package design that accommodates explicit tradeoffs between improvements in health and provision of financial risk protection (FRP). We designed a mathematical optimization model to balance gains in health and FRP across candidate interventions when publicly financed. The optimal subset of interventions selected for inclusion was determined with bi-criterion integer programming conditional on a budget constraint. The optimal set of interventions to publicly finance in a health benefits package varied according to whether the objective for optimization was population health benefits or FRP. When both objectives were considered jointly, the resulting optimal essential benefits package depended on the weights placed on the two objectives. In the Sustainable Development Goals era, smart spending toward universal health coverage is essential. Mathematical optimization provides a quantitative framework for policymakers to design health policies and select interventions that jointly prioritize multiple objectives with explicit financial constraints.
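
    The selection problem can be sketched as a scalarized bi-criterion knapsack: each candidate intervention has a cost, a health benefit, and an FRP benefit, and the package maximizing a weighted sum of the two objectives is chosen subject to a budget. In the sketch below, exhaustive search over a handful of toy interventions stands in for the paper's integer programming solver, and every number is an illustrative assumption.

```python
# Minimal sketch of weighted bi-criterion package selection under a budget.
# Exhaustive search replaces an integer programming solver; all data are toy values.
from itertools import combinations

# (name, cost, health benefit, FRP benefit) -- toy candidate interventions
interventions = [("A", 40, 9.0, 2.0), ("B", 30, 4.0, 7.0),
                 ("C", 20, 5.0, 1.0), ("D", 25, 2.0, 6.0)]

def best_package(budget=70, health_weight=0.5):
    """Return the affordable subset maximizing a weighted sum of the two objectives."""
    frp_weight = 1.0 - health_weight
    best, best_value = (), float("-inf")
    for r in range(len(interventions) + 1):
        for subset in combinations(interventions, r):
            cost = sum(item[1] for item in subset)
            if cost > budget:
                continue
            value = sum(health_weight * item[2] + frp_weight * item[3] for item in subset)
            if value > best_value:
                best, best_value = subset, value
    return [item[0] for item in best], best_value

# The optimal package shifts as the weight moves between health and FRP.
for w in (1.0, 0.5, 0.0):
    package, value = best_package(health_weight=w)
    print(f"health weight {w:.1f}: package {package}, objective {value:.1f}")
```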

    Essential role for ALCAM gene silencing in megakaryocytic differentiation of K562 cells

    Background: Activated leukocyte cell adhesion molecule (ALCAM/CD166) is expressed by hematopoietic stem cells. However, its role in hematopoietic differentiation has not previously been defined.
    Results: In this study, we show that ALCAM expression is silenced in erythromegakaryocytic progenitor cell lines. In agreement with this finding, the ALCAM promoter is occupied by GATA-1 in vivo, and a cognate motif at -850 inhibited promoter activity in K562 and MEG-01 cells. Gain-of-function studies showed that ALCAM clusters K562 cells in a process that requires PKC. Induction of megakaryocytic differentiation in K562 clones expressing ALCAM activated PKC-δ and triggered apoptosis.
    Conclusions: There is a lineage-specific silencing of ALCAM in bi-potential erythromegakaryocytic progenitor cell lines. Marked apoptosis of ALCAM-expressing K562 clones treated with PMA suggests that aberrant ALCAM expression in erythromegakaryocytic progenitors may contribute to megakaryocytopenia.

    Anticorrosion Behaviour of Zinc Oxide on Aluminum in 2 M of Hydrochloric Acidic Solution

    The inhibitive effect of zinc oxide on the corrosion of aluminum in 2 M HCl solution was studied using gravimetric analysis. Different concentrations of ZnO were tested for their anticorrosion effect on the metal. Results showed that zinc oxide performed very well, reducing the corrosion of aluminum in the acidic chloride environment with an inhibition efficiency of up to 89%.
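
    For reference, inhibition efficiency in a gravimetric study is computed from the weight loss of coupons in uninhibited versus inhibited solution. The sketch below shows that calculation with toy weight-loss values, not data from the study.

```python
# Minimal sketch of the gravimetric inhibition-efficiency calculation.
# Weight-loss values are illustrative assumptions.
def inhibition_efficiency(weight_loss_blank, weight_loss_inhibited):
    """IE (%) = (W_blank - W_inhibited) / W_blank * 100, from coupon weight loss
    in the uninhibited vs. inhibited acid solution."""
    return (weight_loss_blank - weight_loss_inhibited) / weight_loss_blank * 100.0

# Toy data: aluminum coupon weight loss (g) in 2 M HCl without and with ZnO.
print(f"IE = {inhibition_efficiency(0.45, 0.05):.0f}%")
```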

    Analyses of the effects of Avocado oil on mild steel corrosion in 1 M of sulphuric acidic solution

    This study investigates the effect of avocado oil on mild steel corrosion in 1 M H2SO4 solution using the gravimetric method and its associated analyses. The concentration of the oil in the acidic environment was varied to assess its anticorrosion effectiveness on the metal. Results show that the organic inhibitor maintained excellent inhibition efficiency at 5% avocado oil concentration, while lower concentrations showed an increasing trend of effectiveness against mild steel corrosion. The Langmuir adsorption isotherm modelling presented in the study suggests that this behaviour is due to adsorption of the avocado oil onto the surface of the mild steel samples.
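
    The Langmuir test mentioned here is typically run on the linearized isotherm, plotting C/θ against concentration C (with surface coverage θ taken as IE/100) and checking for a near-unity slope. The sketch below does this with illustrative concentrations and efficiencies, not the study's data.

```python
# Minimal sketch of the linearized Langmuir isotherm check:
# C/theta = C + 1/K_ads, where theta = IE/100 is fractional surface coverage.
# Concentrations and efficiencies below are illustrative assumptions.
import numpy as np

conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # % v/v avocado oil (toy values)
ie = np.array([62.0, 74.0, 81.0, 86.0, 90.0])       # inhibition efficiency, %
theta = ie / 100.0                                    # fractional surface coverage

slope, intercept = np.polyfit(conc, conc / theta, 1)  # fit the linearized isotherm
print(f"slope = {slope:.2f} (Langmuir predicts ~1), K_ads = {1.0 / intercept:.2f}")
```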

    Attentional modulation of crowding

    Outside the fovea, the visual system pools features of adjacent stimuli. Left or right of fixation, the tilt of an almost horizontal Gabor pattern becomes difficult to classify when horizontal Gabors appear above and below it. Classification is even harder when the flankers are to the left and right of the target. With all four flankers present, observers were required both to classify the target’s tilt and to perform a spatial-frequency task on two of the four flankers. This dual task proved significantly more difficult when attention was directed to the horizontally aligned flankers. We suggest that covert attention to stimuli can increase the weights of their pooled features.

    Robust averaging protects decisions from noise in neural computations

    An ideal observer will give equivalent weight to sources of information that are equally reliable. However, when averaging visual information, human observers tend to downweight or discount features that are relatively outlying or deviant (‘robust averaging’). Why humans adopt an integration policy that discards important decision information remains unknown. Here, observers were asked to judge the average tilt in a circular array of high-contrast gratings, relative to an orientation boundary defined by a central reference grating. Observers showed robust averaging of orientation, but the extent to which they did so was a positive predictor of their overall performance. Using computational simulations, we show that although robust averaging is suboptimal for a perfect integrator, it paradoxically enhances performance in the presence of “late” noise, i.e., noise that corrupts decisions during integration. In other words, robust decision strategies increase the brain’s resilience to noise arising in neural computations during decision-making.
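
    The simulation logic can be sketched roughly as follows: tilts are mapped into a bounded decision variable either linearly or compressively (the compressive rule downweights outlying tilts), and late noise of fixed size corrupts that variable after integration. The array size, tilt distributions, transducer shapes, and noise levels below are illustrative assumptions, not the paper's fitted model.

```python
# Minimal sketch: linear vs. compressive ("robust") integration of tilts into a
# bounded decision variable, with late noise added after integration.
# All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def accuracy(transducer, late_sd, n_trials=50000, n_items=8, mean_tilt=3.0, tilt_sd=8.0):
    """Proportion of trials on which the sign of the noisy decision variable
    matches the sign of the true mean tilt."""
    signs = rng.choice([-1.0, 1.0], n_trials)
    tilts = rng.normal(signs[:, None] * mean_tilt, tilt_sd, (n_trials, n_items))
    dv = transducer(tilts).mean(axis=1) + rng.normal(0.0, late_sd, n_trials)
    return np.mean(np.sign(dv) == signs)

linear = lambda x: np.clip(x / 24.0, -1.0, 1.0)   # equal weight across the full tilt range
robust = lambda x: np.tanh(x / 4.0)               # compressive: outlying tilts are downweighted

# Without late noise the linear rule wins; with late noise the robust rule wins.
for late_sd in (0.0, 0.3):
    print(f"late noise SD {late_sd}: "
          f"linear {accuracy(linear, late_sd):.3f}, robust {accuracy(robust, late_sd):.3f}")
```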