75 research outputs found

    Generating Stimuli for Neuroscience Using PsychoPy

    PsychoPy is a software library written in Python that uses OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible with the least effort. By writing scripts in standard Python syntax, users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG, etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and understanding a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.
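
    As an illustration of the scripting style described above, here is a minimal sketch of a PsychoPy script that draws a drifting Gabor patch (the monitor name 'testMonitor' is a placeholder calibration profile, and the exact stimulus parameters are arbitrary):

        from psychopy import visual, core

        # Open a window; 'testMonitor' is a placeholder monitor profile
        win = visual.Window(size=(800, 600), units='deg', monitor='testMonitor')

        # A sinusoidal grating in a Gaussian envelope (a Gabor patch)
        gabor = visual.GratingStim(win, tex='sin', mask='gauss', sf=3, size=4)

        clock = core.Clock()
        while clock.getTime() < 2.0:   # present for two seconds
            gabor.phase += 0.01        # drift the carrier a little each frame
            gabor.draw()
            win.flip()                 # swap buffers at the monitor refresh

        win.close()
        core.quit()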

    Luminance cues constrain chromatic blur discrimination in natural scene stimuli

    Introducing blur into the color components of a natural scene has very little effect on its percept, whereas blur introduced into the luminance component is very noticeable. Here we quantify the dominance of luminance information in blur detection and examine a number of potential causes. We show that the interaction between chromatic and luminance information is not explained by reduced acuity or spatial resolution limitations for chromatic cues, by the effective contrast of the luminance cue, or by chromatic and achromatic statistical regularities in the images. Regardless of the quality of chromatic information, the visual system gives primacy to luminance signals when determining edge location. In natural viewing, luminance information appears to be specialized for detecting object boundaries, while chromatic information may be used to determine surface properties.
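
    The asymmetry described here lends itself to a simple demonstration. Below is a sketch, assuming NumPy/SciPy and a YCbCr-style luminance/chroma split (a stand-in for the cone-opponent decomposition used in the study), that blurs only the chromatic planes of an RGB image:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def blur_chroma(img, sigma=4.0):
            """Blur only the chromatic channels of an RGB image (floats in [0, 1])."""
            r, g, b = img[..., 0], img[..., 1], img[..., 2]
            y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
            cb = (b - y) * 0.564                    # blue-yellow chroma
            cr = (r - y) * 0.713                    # red-green chroma

            # Blur the chromatic planes only; leave luminance untouched
            cb = gaussian_filter(cb, sigma)
            cr = gaussian_filter(cr, sigma)

            # Invert the transform back to RGB
            r2 = y + 1.403 * cr
            b2 = y + 1.773 * cb
            g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
            return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)

    Applying a large sigma to the chroma planes this way is barely visible, whereas the same gaussian_filter applied to the luminance plane is immediately obvious.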

    Measuring nonlinear signal combination using EEG

    Relatively little is known about the processes, both linear and nonlinear, by which signals are combined beyond V1. By presenting two stimulus components simultaneously, flickering at different temporal frequencies (frequency tagging), while measuring steady-state visual evoked potentials, we can assess responses to the individual components, including direct measurements of their mutual suppression, as well as various nonlinear responses to their combination found at intermodulation frequencies. The result is a rich dataset of frequencies at which responses can be found. We presented pairs of sinusoidal gratings at different temporal frequencies, forming plaid patterns that were "coherent" (looking like a checkerboard) or "noncoherent" (looking like a pair of transparently overlaid gratings), and found clear intermodulation responses to the compound stimuli, indicating nonlinear summation. This might have been attributed to cross-orientation suppression, except that the pattern of intermodulation responses differed for coherent and noncoherent patterns, whereas the effects of suppression (measured at the component frequencies) did not. A two-stage model of nonlinear summation involving conjunction detection with a logical AND gate described the data well, capturing the difference between coherent and noncoherent plaids over a wide array of possible response frequencies. Multi-stimulus frequency-tagged EEG combined with computational modeling may therefore be a valuable tool for studying the conjunction of such signals. In the current study, the results suggest a second-order mechanism that responds selectively to coherent plaid patterns.
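
    The logic of frequency tagging can be illustrated numerically: a linear response to two tagged inputs contains power only at the tag frequencies, while a nonlinear response also produces intermodulation terms. A sketch with arbitrary tag frequencies of 5 and 7 Hz (not those used in the study):

        import numpy as np

        fs = 1000.0                            # sample rate (Hz)
        t = np.arange(0, 10, 1 / fs)           # 10 s of signal
        f1, f2 = 5.0, 7.0                      # tag frequencies (arbitrary)

        s1 = np.sin(2 * np.pi * f1 * t)
        s2 = np.sin(2 * np.pi * f2 * t)

        linear = s1 + s2                       # linear summation
        nonlin = np.maximum(s1 + s2, 0) ** 2   # a simple rectifying nonlinearity

        for name, resp in [("linear", linear), ("nonlinear", nonlin)]:
            spectrum = np.abs(np.fft.rfft(resp)) / len(t)
            freqs = np.fft.rfftfreq(len(t), 1 / fs)
            for f in (f1, f2, f2 - f1, f1 + f2):
                amp = spectrum[np.argmin(np.abs(freqs - f))]
                print(f"{name}: {f:4.0f} Hz -> {amp:.3f}")

    The linear sum shows power only at 5 and 7 Hz, whereas the rectified, squared response also shows power at 2 Hz (f2 − f1) and 12 Hz (f1 + f2), which is exactly the intermodulation signature described above.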

    Cue combination of conflicting color and luminance edges

    Abrupt changes in the color or luminance of a visual image potentially indicate object boundaries. Here, we consider how these cues to the location of a visual "edge" are combined when they conflict. We measured the extent to which localization of a compound edge can be predicted from a simple maximum likelihood estimation model using the reliability of the chromatic (L−M) and luminance signals alone. Maximum likelihood estimation accurately predicted the pattern of results across a range of contrasts. Predictions consistently overestimated the relative influence of the luminance cue; although L−M is often considered a poor cue for localization, it was used more than expected. This need not indicate that the visual system is suboptimal, but rather that its priors about which cue is more useful are not flat. This may be because, although strong changes in chromaticity typically represent object boundaries, changes in luminance can be caused by either a boundary or a shadow.
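
    For reference, the maximum likelihood estimation model mentioned here has a standard closed form: each cue is weighted in proportion to its reliability (inverse variance). A sketch with made-up numbers (in practice the sigmas would come from single-cue localization thresholds):

        import numpy as np

        def mle_combine(estimates, sigmas):
            """Reliability-weighted cue combination (standard MLE model)."""
            estimates = np.asarray(estimates, dtype=float)
            reliability = 1.0 / np.asarray(sigmas, dtype=float) ** 2
            weights = reliability / reliability.sum()
            combined = np.dot(weights, estimates)
            combined_sigma = np.sqrt(1.0 / reliability.sum())
            return combined, weights, combined_sigma

        # Hypothetical values: a luminance edge at 0 and a conflicting
        # chromatic (L-M) edge at 1, with the chromatic cue twice as noisy
        loc, w, sd = mle_combine([0.0, 1.0], [0.1, 0.2])
        print(loc, w, sd)   # location 0.2: the luminance cue gets weight 0.8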

    Using Evolutionary Algorithms for Fitting High-Dimensional Models to Neuronal Data

    In the study of neuroscience, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient-following (GF) methods and evolutionary algorithms (EA), and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to local minima in the error surface and produced relatively poor fits unless the initial parameter estimates were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases, and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention required by the evolutionary algorithm, and its ability to generate model fits that could be trusted as being close to optimal, made it far superior to the gradient-following methods in this particular application. The same is likely to hold for many other complex systems of the kind often found in neuroscience.
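
    The trade-off described above can be reproduced with off-the-shelf optimizers. The sketch below uses a standard multimodal test function in place of the 9-parameter neuronal model, comparing a gradient-following local search with SciPy's evolutionary differential_evolution:

        import numpy as np
        from scipy.optimize import minimize, differential_evolution

        def loss(p):
            """A multimodal error surface standing in for the model-fitting cost."""
            return np.sum(p ** 2 + 10 * (1 - np.cos(2 * np.pi * p)))  # Rastrigin

        bounds = [(-5.12, 5.12)] * 9           # nine parameters, as in the study
        rng = np.random.default_rng(0)
        x0 = rng.uniform(-5, 5, size=9)        # a poor starting guess

        gf = minimize(loss, x0, method="L-BFGS-B", bounds=bounds)
        ea = differential_evolution(loss, bounds, seed=0)

        print("gradient following:", gf.fun)   # typically stuck in a local minimum
        print("evolutionary     :", ea.fun)    # typically near the global optimum (0)

    From a poor starting point the local method usually stalls in a nearby minimum, while the evolutionary search approaches the global optimum regardless of initialization, mirroring the pattern reported above.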

    At the Biological Modeling and Simulation Frontier

    We provide a rationale for, and describe examples of, synthetic modeling and simulation (M&S) of biological systems. We explain how synthetic methods are distinct from familiar inductive methods. Synthetic M&S is a means to better understand the mechanisms that generate normal and disease-related phenomena observed in research, and how compounds of interest interact with them to alter those phenomena. An objective is to build better, working hypotheses of plausible mechanisms. A synthetic model is an extant hypothesis: its execution produces an observable mechanism and phenomena. Mobile objects representing compounds carry information that enables components to distinguish between them and react accordingly when different compounds are studied simultaneously. We argue that the familiar inductive approaches contribute to the general inefficiencies being experienced by pharmaceutical R&D, and that use of synthetic approaches accelerates and improves R&D decision-making and thus the drug development process. One reason is that synthetic models encourage and facilitate abductive scientific reasoning, a primary means of knowledge creation and creative cognition. When synthetic models are executed, we observe different aspects of knowledge in action from different perspectives. These models can be tuned to reflect differences in experimental conditions and individuals, making translational research more concrete while moving us closer to personalized medicine.
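
    As a loose illustration of the "mobile objects carrying information" idea, here is a generic agent-based sketch (not the authors' framework; all names are hypothetical) in which a component reacts differently to two compounds studied simultaneously:

        from dataclasses import dataclass
        import random

        @dataclass
        class Compound:
            """A mobile object; its attributes let components tell compounds apart."""
            name: str
            binds_target: bool

        class Cell:
            """A model component that reacts according to the compound it meets."""
            def __init__(self):
                self.activated = 0

            def encounter(self, compound: Compound):
                if compound.binds_target:
                    self.activated += 1   # the mechanism observably changes state

        # Execute the model: two compounds studied simultaneously
        cells = [Cell() for _ in range(100)]
        compounds = [Compound("drugA", True), Compound("drugB", False)]
        for _ in range(1000):
            random.choice(cells).encounter(random.choice(compounds))

        print(sum(c.activated for c in cells))  # an emergent, observable phenomenon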

    Dissection of a QTL Hotspot on Mouse Distal Chromosome 1 that Modulates Neurobehavioral Phenotypes and Gene Expression

    A remarkably diverse set of traits maps to a region on mouse distal chromosome 1 (Chr 1) that corresponds to human Chr 1q21–q23. This region is highly enriched in quantitative trait loci (QTLs) that control neural and behavioral phenotypes, including motor behavior, escape latency, emotionality, seizure susceptibility (Szs1), and responses to ethanol, caffeine, pentobarbital, and haloperidol. This region also controls the expression of a remarkably large number of genes, including genes associated with some of the classical traits that map to distal Chr 1 (e.g., seizure susceptibility). Here, we ask whether this QTL-rich region on Chr 1 (Qrr1) consists of a single master locus or a mixture of linked, but functionally unrelated, QTLs. To answer this question and to evaluate candidate genes, we generated and analyzed several gene expression, haplotype, and sequence datasets. We exploited six complementary mouse crosses and combed through 18 expression datasets to determine class membership of genes modulated by Qrr1. Qrr1 can be broadly divided into a proximal part (Qrr1p) and a distal part (Qrr1d), each associated with the expression of distinct subsets of genes. Qrr1d controls RNA metabolism and protein synthesis, including the expression of ∼20 aminoacyl-tRNA synthetases. Qrr1d contains a tRNA cluster, which is a functionally pertinent candidate regulator of the tRNA synthetases. Rgs7 and Fmn2 are other strong candidates in Qrr1d. FMN2 protein is expressed strongly in neurons, including in dendrites, and deletion of Fmn2 had a strong effect on the expression of a few of the genes modulated by Qrr1d. Our analysis revealed a highly complex gene expression regulatory interval in Qrr1, composed of multiple loci that modulate the expression of functionally cognate sets of genes.
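
    For readers unfamiliar with expression QTL mapping, a minimal single-marker association sketch on synthetic data (a generic illustration, not the authors' pipeline) conveys how a transcript is linked to a locus such as Qrr1:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200                                  # animals in a cross
        genotype = rng.integers(0, 2, size=n)    # allele at a Qrr1 marker (0/1)

        # Synthetic expression trait with a genotype effect plus noise
        expression = 0.8 * genotype + rng.normal(0, 1, size=n)

        # Single-marker association: difference in means scaled by pooled SE
        g0, g1 = expression[genotype == 0], expression[genotype == 1]
        se = np.sqrt(g0.var(ddof=1) / len(g0) + g1.var(ddof=1) / len(g1))
        t = (g1.mean() - g0.mean()) / se
        print(f"t = {t:.2f}")   # a large |t| links this transcript to the marker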

    The Classification of Economic Activity

    The Business Cycle Dating Committee (BCDC) of the National Bureau of Economic Research provides a historical chronology of business cycle turning points. This paper investigates three central questions about this chronology: (1) How skillful is the BCDC in classifying economic activity into expansions and recessions? (2) Which indices of business conditions best capture the current but unobservable state of the business cycle? (3) Which indicators best predict future turning points, and at what horizons? We answer each of these questions in detail using methods, novel to economics, designed to assess classification ability. In the process, we clarify several important features of business cycle phenomena.
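
    Classification skill of this kind is commonly summarized with ROC analysis. A sketch on synthetic data (a stand-in indicator and recession labels, not the paper's series) computing the area under the ROC curve via the rank formulation:

        import numpy as np

        def auc(labels, scores):
            """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
            order = np.argsort(scores)
            ranks = np.empty(len(scores))
            ranks[order] = np.arange(1, len(scores) + 1)
            pos = labels == 1
            n_pos, n_neg = pos.sum(), (~pos).sum()
            return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

        rng = np.random.default_rng(0)
        recession = rng.integers(0, 2, size=300)            # stand-in NBER labels
        indicator = recession + rng.normal(0, 1, size=300)  # a noisy conditions index

        # 0.5 = no classification skill, 1.0 = perfect classification
        print(f"AUC = {auc(recession, indicator):.2f}")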

    The development and validation of a scoring tool to predict the operative duration of elective laparoscopic cholecystectomy

    Background: The ability to accurately predict operative duration has the potential to optimise theatre efficiency and utilisation, thus reducing costs and increasing staff and patient satisfaction. With laparoscopic cholecystectomy being one of the most commonly performed procedures worldwide, a tool to predict operative duration could be extremely beneficial to healthcare organisations. Methods: Data collected from the CholeS study on patients undergoing cholecystectomy in UK and Irish hospitals between 04/2014 and 05/2014 were used to study operative duration. A multivariable binary logistic regression model was produced to identify significant independent predictors of long (> 90 min) operations. The resulting model was converted to a risk score, which was subsequently validated on a second cohort of patients using ROC curves. Results: After exclusions, data were available for 7227 patients in the derivation (CholeS) cohort. The median operative duration was 60 min (interquartile range 45–85), with 17.7% of operations lasting longer than 90 min. Ten factors were found to be significant independent predictors of operative durations > 90 min, including ASA, age, previous surgical admissions, BMI, gallbladder wall thickness and CBD diameter. A risk score was then produced from these factors and applied to a cohort of 2405 patients from a tertiary centre for external validation. This returned an area under the ROC curve of 0.708 (SE = 0.013, p < 0.001), with the probability of an operation lasting > 90 min increasing more than eightfold, from 5.1 to 41.8%, between the extremes of the score. Conclusion: The scoring tool produced in this study was found to be significantly predictive of long operative durations on validation in an external cohort. As such, the tool may enable organisations to better organise theatre lists and deliver greater efficiencies in care.
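
    The derive-then-validate design reported here follows a standard pattern: fit a logistic model on one cohort, collapse the coefficients into an additive point score, and check discrimination on a held-out cohort. A sketch on synthetic data (scikit-learn; the placeholder predictors stand in for ASA, age, BMI, and the rest):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 7227
        # Placeholder predictors standing in for ASA, age, BMI, wall thickness, ...
        X = rng.normal(size=(n, 5))
        logit = X @ np.array([0.6, 0.4, 0.3, 0.5, 0.2]) - 1.5
        y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))   # operation > 90 min?

        # Derive the model on the first cohort
        model = LogisticRegression().fit(X[:5000], y[:5000])

        # Convert coefficients to integer points, as for a bedside risk score
        points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
        score = X @ points

        # External-style validation on the held-out patients
        print(f"AUC = {roc_auc_score(y[5000:], score[5000:]):.3f}")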