Dostoevsky’s Ideal Man
This paper aimed to provide a comprehensive examination of Dostoevsky's ideal human being. Through a comparison of various characters and concepts found in his texts, the kenotic individual, one who is undifferentiated in their love for all of God's creation, was found to be the ultimate state to which Dostoevsky believed man could ascend.
Democratic population decisions result in robust policy-gradient learning: A parametric study with GPU simulations
High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether investing effort in this direction is worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which this architecture and learning rule demonstrate the best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that they provide a speed improvement of 5x to 42x over optimised Python code. The higher speed-ups are achieved when we exploit the parallelism of the GPU in the search for learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated. © 2011 Richmond et al.
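The contrast between bump-based decision making and "democratic" population-vector readout lends itself to a small illustration. The following is a minimal sketch of the readout idea only, not the authors' GPU routines; the network size, parameter values, and input statistics are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of a "democratic" readout: a population of leaky
# integrate-and-fire neurons driven by feed-forward input, with the
# decision read out as a population vector. All values are illustrative.
N = 100                      # output-layer neurons
dt, tau_m = 1.0, 20.0        # time step and membrane time constant (ms)
v_reset, v_thresh = 0.0, 1.0

rng = np.random.default_rng(0)
preferred = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred directions

def lif_step(v, drive):
    """One Euler step of the leaky integrate-and-fire dynamics."""
    v = v + dt / tau_m * (-v + drive)
    spikes = v >= v_thresh
    v = np.where(spikes, v_reset, v)
    return v, spikes

def population_vector(spike_counts):
    """Every neuron 'votes' with its preferred direction, weighted by activity."""
    x = np.sum(spike_counts * np.cos(preferred))
    y = np.sum(spike_counts * np.sin(preferred))
    return np.arctan2(y, x)

v, counts = np.zeros(N), np.zeros(N)
for _ in range(200):                          # one short trial
    drive = rng.normal(1.2, 0.3, size=N)      # placeholder feed-forward input
    v, spikes = lif_step(v, drive)
    counts += spikes

print("decoded action (rad):", population_vector(counts))
```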
The Vulnerability Cube: A Multi-Dimensional Framework for Assessing Relative Vulnerability
The diversity and abundance of information available for vulnerability assessments can present a challenge to decision-makers. Here we propose a framework to aggregate and present socioeconomic and environmental data in a visual vulnerability assessment that will help prioritize management options for communities vulnerable to environmental change. Socioeconomic and environmental data are aggregated into distinct categorical indices across three dimensions and arranged in a cube, so that individual communities can be plotted in a three-dimensional space to assess the type and relative magnitude of the communities’ vulnerabilities based on their position in the cube. We present an example assessment using a subset of the USEPA National Estuary Program (NEP) estuaries: coastal communities vulnerable to the effects of environmental change on ecosystem health and water quality. Using three categorical indices created from a pool of publicly available data (socioeconomic index, land use index, estuary condition index), the estuaries were ranked based on their normalized averaged scores and then plotted along the three axes to form a vulnerability cube. The position of each community within the three-dimensional space both communicates the types of vulnerability endemic to each estuary and allows estuaries with like vulnerabilities to be clustered into typologies. The typologies highlight specific vulnerability descriptions that may be helpful in creating targeted management strategies. The data used to create the categorical indices are flexible depending on the goals of the decision-makers, as different data should be chosen based on availability or importance to the system. Therefore, the analysis can be tailored to specific types of communities, allowing a data-rich process to inform decision-making.
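To make the construction concrete, the sketch below shows how three categorical index scores could be normalized and treated as coordinates in a unit cube. The community names and scores are invented for illustration; in the actual assessment each index is built from many public datasets.

```python
import numpy as np

# Illustrative only: one score per axis of the cube (socioeconomic,
# land use, estuary condition) for a handful of hypothetical communities.
communities = {
    "Estuary A": [0.8, 0.4, 0.6],
    "Estuary B": [0.3, 0.9, 0.2],
    "Estuary C": [0.5, 0.5, 0.7],
    "Estuary D": [0.9, 0.2, 0.4],
}

scores = np.array(list(communities.values()), dtype=float)

# Normalize each index to [0, 1] so the three axes are comparable.
mins, maxs = scores.min(axis=0), scores.max(axis=0)
normalized = (scores - mins) / (maxs - mins)

# Each community is now a point in the unit cube; its position indicates which
# kinds of vulnerability dominate, and nearby points suggest shared typologies.
for name, point in zip(communities, normalized):
    print(name, np.round(point, 2))
```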
Population pharmacokinetics and dosing implications for cobimetinib in patients with solid tumors.
Effect of biocontrol agent, plant extracts and safe chemicals in suppression of Mungbean Yellow Mosaic Virus
Code Generation in Computational Neuroscience: A Review of Tools and Techniques
Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the spectrum, the ever-growing variety of point neuron models increases the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of the model complexity, all modeling methods crucially depend on an efficient and accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the mathematical equations underlying them. However, actually simulating them requires that they be translated into code. This can cause problems because errors may be introduced if this process is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code might be generated for different hardware platforms or operating system variants, or even written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high-level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages. In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, which differ considerably in aim, scope and functionality. This article provides an overview of existing pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them.
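As a toy illustration of the code-generation idea (not a description of any specific pipeline reviewed in the article), the sketch below takes a symbolic description of a leaky integrate-and-fire membrane update and derives both a callable function and C source from it using SymPy; the equation and symbol names are assumptions made for the example.

```python
import sympy as sp

# A model is described by its defining equation; executable code is then
# generated from that description instead of being written by hand.
v, I, tau, dt = sp.symbols("v I tau dt")

# Leaky integrate-and-fire membrane update as a symbolic expression.
update_expr = v + dt / tau * (-v + I)

# Generate a callable directly from the description...
update = sp.lambdify((v, I, tau, dt), update_expr)

# ...or emit C source for a compiled simulator from the same description.
print(sp.ccode(update_expr))          # e.g. "v + dt*(-v + I)/tau"
print(update(0.0, 1.5, 20.0, 1.0))    # 0.075
```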