Structural coupling and magnetic tuning in Mn2–xCoxP magnetocalorics for thermomagnetic power generation
25th annual computational neuroscience meeting: CNS-2016
The same neuron may play different functional roles in the neural circuits to which it belongs. For example, neurons in the Tritonia pedal ganglia may participate in variable phases of the swim motor rhythms [1]. While such neuronal functional variability is likely to play a major role in the delivery of the functionality of neural systems, it is difficult to study in most nervous systems. We work on the pyloric rhythm network of the crustacean stomatogastric ganglion (STG) [2]. Typically, network models of the STG treat neurons of the same functional type (e.g. PD neurons) as a single model neuron, assuming the same conductance parameters for these neurons and implying their synchronous firing [3, 4]. However, simultaneous recordings of PD neurons show differences between the timings of spikes of these neurons, which may indicate functional variability of these neurons. Here we modelled the two PD neurons of the STG separately in a multi-neuron model of the pyloric network. Our neuron models comply with known correlations between conductance parameters of ionic currents. Our results reproduce the experimental finding of increasing spike time distance between spikes originating from the two model PD neurons during their synchronised burst phase. The PD neuron with the larger calcium conductance generates its spikes before the other PD neuron, and larger potassium conductance values in the follower neuron imply longer delays between spikes (see Fig. 17). Neuromodulators change the conductance parameters of neurons while maintaining the ratios of these parameters [5]. Our results show that such changes may shift the individual contributions of the two PD neurons to the PD phase of the pyloric rhythm, altering their functionality within this rhythm. Our work paves the way towards an accessible experimental and computational framework for the analysis of the mechanisms and impact of functional variability of neurons within the neural circuits to which they belong.
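The mechanism described in this abstract can be illustrated with a toy simulation. The sketch below is not the authors' model: it uses two uncoupled Morris-Lecar oscillators as stand-ins for the two PD neurons, gives one a slightly larger calcium conductance and the other a slightly larger potassium conductance, and compares their spike times. All parameter values are illustrative assumptions, and the real model additionally includes correlated conductances, synaptic coupling, and the rest of the pyloric network.

```python
# Minimal sketch (not the authors' model): two uncoupled Morris-Lecar
# oscillators stand in for the two PD neurons.  "PD-A" gets a slightly larger
# calcium conductance, "PD-B" a slightly larger potassium conductance, and
# their spike times are compared.  All parameter values are illustrative.
import numpy as np

# Shared Morris-Lecar constants (classic Hopf-regime parameter set)
C, g_L, V_L = 20.0, 2.0, -60.0
V_Ca, V_K = 120.0, -84.0
V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0
phi, I_ext = 0.04, 100.0

def simulate(g_Ca, g_K, t_max=3000.0, dt=0.05, threshold=-20.0):
    """Euler-integrate one neuron; return spike times (upward threshold crossings)."""
    n = int(t_max / dt)
    V, w = -40.0, 0.0
    spikes = []
    for i in range(n):
        m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
        w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
        tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
        dV = (I_ext - g_L * (V - V_L) - g_Ca * m_inf * (V - V_Ca)
              - g_K * w * (V - V_K)) / C
        dw = phi * (w_inf - w) / tau_w
        prev_V = V
        V += dt * dV
        w += dt * dw
        if prev_V < threshold <= V:      # spike detected at upward crossing
            spikes.append(i * dt)
    return np.array(spikes)

# "PD-A": larger Ca conductance; "PD-B": larger K conductance (follower)
spikes_a = simulate(g_Ca=4.6, g_K=8.0)
spikes_b = simulate(g_Ca=4.4, g_K=8.4)

# Compare matching spikes after discarding the initial transient
k = min(len(spikes_a), len(spikes_b))
offsets = spikes_b[5:k] - spikes_a[5:k]
print("spike-time offsets of B relative to A (ms):", np.round(offsets[:5], 2))
```

Because the two cells here are uncoupled, the growing offset only illustrates how a conductance difference shifts spike timing; it does not reproduce the synchronised bursting of the full pyloric circuit.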
Accelerating Dental Adhesive Innovations Through Active Learning and Bayesian Optimization
The discovery of new dental materials is typically a slow process due to the high dimensionality of the formulation space and the multiple competing objectives that must be optimized for a given application. Here, we lay out a strategy using active learning and Bayesian optimization that has led to the discovery of three new high-performing formulations for dental adhesives within 29 experiments. We utilize curated data from 91 experiments with 43 different components to reduce the design space and incorporate domain knowledge into our search. This machine learning approach can be adapted to a multitude of dental materials, allowing for the fast and efficient discovery of optimal new formulations and leading to enhanced performance, reduced development times, and ultimately more cost-effective and innovative solutions in dental healthcare.
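As a rough illustration of the kind of suggest-and-measure loop described above (not the authors' pipeline), the sketch below fits a Gaussian-process surrogate to previously measured formulations and uses an expected-improvement acquisition to choose the next candidate. The four-component mixture, the single "bond strength" objective, and the toy measurement function are assumptions for illustration; the actual study optimizes multiple competing objectives over 43 components.

```python
# Minimal active-learning / Bayesian-optimization sketch (illustrative only).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical historical data: component fractions -> measured objective
X_obs = rng.dirichlet(np.ones(4), size=20)                 # 20 past formulations, 4 components
y_obs = 30 + 10 * X_obs[:, 0] - 5 * X_obs[:, 2] + rng.normal(0, 1, 20)  # toy "bond strength"

# Candidate formulations (mixture constraint: fractions sum to 1)
X_cand = rng.dirichlet(np.ones(4), size=500)

for it in range(5):                                        # 5 suggest-measure cycles
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)

    mu, sigma = gp.predict(X_cand, return_std=True)
    best = y_obs.max()
    imp = mu - best
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)           # expected improvement

    x_next = X_cand[int(np.argmax(ei))]
    # In practice the formulation would be mixed and tested in the lab here;
    # this sketch fakes the measurement with the same toy function.
    y_next = 30 + 10 * x_next[0] - 5 * x_next[2] + rng.normal(0, 1)
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, y_next)
    print(f"iteration {it}: suggested {np.round(x_next, 2)}, measured {y_next:.1f}")
```

In a multi-objective setting like the one in the abstract, the single expected-improvement criterion would typically be replaced by a scalarized or Pareto-based acquisition.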
Machine learning predictions of low thermal conductivity: comparing TaVO5 and GdTaO4
Advancements in materials discovery tend to rely disproportionately on happenstance and luck rather than a systematic approach. Recently, advances in computational power have allowed researchers to build models that predict the material properties of any chemical formula. From energy minimization techniques to machine learning based models, these algorithms have unique strengths and weaknesses. However, a computational model is only as good as its accuracy when compared with real-world measurements. In this work, we take two recommendations from a thermoelectric machine learning model, TaVO5 and GdTaO4, and measure their thermoelectric properties: Seebeck coefficient, thermal conductivity, and electrical conductivity. The predictions are mixed: thermal conductivities are correctly predicted, while electrical conductivities and Seebeck coefficients are not. Furthermore, we identify a possible new avenue of research into a family of low thermal conductivity oxides.
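For context, the three measured quantities named above are exactly the inputs to the standard thermoelectric figure of merit, zT = S²σT/κ. The short helper below simply combines them; the numerical values are placeholders, not data from the paper.

```python
# Thermoelectric figure of merit from measured properties (placeholder values).
def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K):
    """zT = S^2 * sigma * T / kappa (dimensionless)."""
    return seebeck_V_per_K ** 2 * sigma_S_per_m * T_K / kappa_W_per_mK

# Example with illustrative numbers: S = 200 uV/K, sigma = 1e4 S/m,
# kappa = 1.5 W/(m K), T = 600 K  ->  zT ~ 0.16
print(figure_of_merit(200e-6, 1e4, 1.5, 600.0))
```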
Materials Science Optimization Benchmark Dataset for Multi-Objective, Multi-Fidelity Optimization of Hard-Sphere Packing Simulations
In scientific disciplines, benchmarks play a vital role in driving progress forward. For a benchmark to be effective, it must closely resemble real-world tasks. If the level of difficulty or relevance is inadequate, it can impede progress in the field. Moreover, benchmarks should have low computational overhead to ensure accessibility and repeatability. The objective is to achieve a kind of "Turing test" by creating a surrogate model that is practically indistinguishable from the ground truth observation, at least within the dataset's explored boundaries. This objective necessitates a large quantity of data. This study encompasses numerous features that are characteristic of chemistry and materials science optimization tasks relevant to industry, including high levels of noise, multiple fidelities, multiple objectives, linear constraints, non-linear correlations, and failure regions. We performed 494,498 random hard-sphere packing simulations, representing 206 CPU days' worth of computational overhead. Simulations required nine input parameters with linear constraints and two discrete fidelities, each with continuous fidelity parameters. The results were logged in a free-tier shared MongoDB Atlas database, producing two core tabular datasets: a failure probability dataset and a regression dataset. The failure probability dataset maps unique input parameter sets to the estimated probabilities that the simulation will fail. The regression dataset maps input parameter sets (including repeats) to particle packing fractions and computational runtimes for each of the two simulation steps. These two datasets were used to create a surrogate model that is as close as possible to running the actual simulations by incorporating simulation failure and heteroskedastic noise. In the regression dataset, percentile ranks were calculated for each group of identical parameter sets to account for heteroskedastic noise, thereby ensuring reliable and accurate results. This differs from the conventional approach, which imposes a priori assumptions, such as Gaussian noise, by specifying a mean and standard deviation. This technique can be extended to other benchmark datasets to bridge the gap between optimization benchmarks with low computational overhead and the complex optimization scenarios encountered in the real world.
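The percentile-rank treatment of heteroskedastic noise can be sketched in a few lines of pandas. The snippet below is illustrative only, not the released dataset or code; the column names and toy groups are assumptions. Each observation is converted to its empirical percentile rank within its group of identical parameter sets, and a surrogate can then reproduce the group-specific noise by sampling empirical quantiles rather than assuming Gaussian noise with a fixed mean and standard deviation.

```python
# Illustrative sketch of per-group percentile ranks and empirical-quantile
# sampling for a heteroskedastic surrogate (column names are assumptions).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Toy "regression dataset": repeated simulations of three parameter sets,
# each group with a different noise level (heteroskedastic).
df = pd.DataFrame({
    "param_id": np.repeat(["A", "B", "C"], 50),
    "packing_fraction": np.concatenate([
        rng.normal(0.62, 0.005, 50),   # low-noise group
        rng.normal(0.58, 0.020, 50),   # high-noise group
        rng.normal(0.60, 0.010, 50),
    ]),
})

# Percentile rank of every observation within its own parameter group
df["pct_rank"] = df.groupby("param_id")["packing_fraction"].rank(pct=True)

def sample_surrogate(param_id, n=5):
    """Draw surrogate observations by sampling empirical quantiles of the group."""
    group = df.loc[df["param_id"] == param_id, "packing_fraction"]
    u = rng.uniform(0, 1, n)           # random percentiles
    return np.quantile(group, u)       # interpolated empirical quantiles

print(sample_surrogate("B"))           # noisier draws than group "A"
```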