Minimal Interspecies Interaction Adjustment (MIIA): Inference of Neighbor-Dependent Interactions in Microbial Communities
An intriguing aspect of microbial communities is that pairwise interactions can be influenced by neighboring species. This creates context dependencies for microbial interactions that are based on the functional composition of the community. Context-dependent interactions are ecologically important and clearly present in nature, yet firmly established theoretical methods for them are lacking from many modern computational investigations. Here, we propose a novel network inference method that enables predictions of interspecies interactions affected by shifts in community composition and species populations. Our approach first identifies interspecies interactions in binary communities, which are subsequently used as a basis to infer their modulation in more complex multi-species communities, based on the assumption that microbes minimize adjustments of pairwise interactions in response to neighbor species. We term this rule-based inference minimal interspecies interaction adjustment (MIIA). Our critical assessment of MIIA has produced reliable predictions of shifting interspecies interactions that depend on the functional role of neighbor organisms. We also show how MIIA has been applied to a microbial community composed of competing soil bacteria to elucidate a new finding that, in many cases, adding fewer competitors can impose a more significant impact on binary interactions. The ability to predict membership-dependent community behavior is expected to help deepen our understanding of how microbiomes are organized in nature and how they may be designed and/or controlled in the future.
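The "minimal adjustment" idea can be illustrated with a toy sketch. The model form (generalized Lotka–Volterra steady state), the function name, and all numbers below are assumptions for illustration, not the authors' implementation: starting from interaction coefficients fitted in binary cocultures, we compute the smallest (least-norm) correction that makes a species' steady-state balance hold in the full community.

```python
import numpy as np

# Hypothetical illustration of minimal interaction adjustment.
# Assumed model: generalized Lotka-Volterra steady state for species i,
#   r_i + sum_j a_ij * x_j = 0,
# where a_ij are interaction coefficients and x_j community abundances.
def minimal_adjustment(a_i, r_i, x):
    """Minimum-norm delta such that (a_i + delta) @ x = -r_i.

    For a single linear constraint, the least-norm correction is
    delta = residual * x / (x @ x), i.e. proportional to x itself.
    """
    residual = -r_i - a_i @ x        # how far the binary coefficients miss
    delta = residual * x / (x @ x)   # smallest adjustment closing the gap
    return a_i + delta

a_i = np.array([-0.5, 0.2, -0.1])    # toy coefficients from binary cocultures
r_i = 0.4                            # toy intrinsic growth rate
x = np.array([1.0, 0.5, 2.0])        # toy community abundances
adjusted = minimal_adjustment(a_i, r_i, x)
```

The design choice mirrors the stated assumption: among all coefficient sets consistent with the observed community, the one closest to the binary-community estimates is chosen.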
Modelling of Diesel fuel properties through its surrogates using Perturbed-Chain, Statistical Associating Fluid Theory
The Perturbed-Chain, Statistical Associating Fluid Theory equation of state is utilised to model the effect of pressure and temperature on the density, volatility and viscosity of four Diesel surrogates; these calculated properties are then compared to the properties of several Diesel fuels. Perturbed-Chain, Statistical Associating Fluid Theory calculations are performed using different sources for the pure component parameters. One source utilises literature values obtained from fitting vapour pressure and saturated liquid density data or from correlations based on these parameters. The second source utilises a group contribution method based on the chemical structure of each compound. Both modelling methods deliver similar estimations for surrogate density and volatility that are in close agreement with experimental results obtained at ambient pressure. Surrogate viscosity is calculated using the entropy scaling model with a new mixing rule for calculating mixture model parameters. The closest match of the surrogates to Diesel fuel properties provides mean deviations of 1.7% in density, 2.9% in volatility and 8.3% in viscosity. The Perturbed-Chain, Statistical Associating Fluid Theory results are compared to calculations using the Peng–Robinson equation of state; the superior performance of the Perturbed-Chain, Statistical Associating Fluid Theory approach for calculating fluid properties is demonstrated. Finally, an eight-component surrogate, with properties at high pressure and temperature predicted with the group contribution Perturbed-Chain, Statistical Associating Fluid Theory method, yields the best match for Diesel properties with a combined mean absolute deviation of 7.1% from experimental data found in the literature for conditions up to 373 K and 500 MPa. These results demonstrate the predictive capability of a state-of-the-art equation of state for Diesel fuels at extreme engine operating conditions.
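A full PC-SAFT implementation is beyond a short sketch, but the Peng–Robinson baseline the abstract compares against is compact enough to show. The sketch below solves the standard Peng–Robinson cubic for the compressibility factor; the function name and the test substance are choices made here for illustration, not taken from the paper.

```python
import numpy as np

R = 8.314462618  # universal gas constant, J/(mol K)

def peng_robinson_Z(T, P, Tc, Pc, omega):
    """Compressibility factor from the Peng-Robinson equation of state.

    Solves Z^3 - (1-B)Z^2 + (A - 3B^2 - 2B)Z - (AB - B^2 - B^3) = 0
    and returns the largest real root (the vapour-phase root).
    """
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha   # attraction parameter
    b = 0.07780 * R * Tc / Pc                 # co-volume parameter
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    coeffs = [1.0, -(1.0 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return roots[np.isreal(roots)].real.max()
```

From Z, molar density follows as rho = P / (Z * R * T); mixture calculations would additionally need mixing rules for a and b, which are omitted here.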
A general guide to applying machine learning to computer architecture
The resurgence of machine learning since the late 1990s has been enabled by significant advances in computing performance and the growth of big data. The ability of these algorithms to detect complex patterns in data that are extremely difficult to find manually helps to produce effective predictive models. Whilst computer architects have been accelerating the performance of machine learning algorithms with GPUs and custom hardware, there have been few implementations leveraging these algorithms to improve computer system performance. The work that has been conducted, however, has produced considerably promising results.
The purpose of this paper is to serve as a foundational base and guide for future computer architecture research seeking to make use of machine learning models to improve system efficiency. We describe a method that highlights when, why, and how to utilize machine learning models for improving system performance, and provide a relevant example showcasing the effectiveness of applying machine learning in computer architecture. We describe a process of data generation at every execution quantum and of parameter engineering. This is followed by a survey of a set of popular machine learning models. We discuss their strengths and weaknesses and provide an evaluation of implementations for the purpose of creating a workload performance predictor for different core types in an x86 processor. The predictions can then be exploited by a scheduler for heterogeneous processors to improve the system throughput. The algorithms of focus are stochastic gradient descent based linear regression, decision trees, random forests, artificial neural networks, and k-nearest neighbors.

This work has been supported by the European Research Council (ERC) Advanced Grant RoMoL (Grant Agreement 321253) and by the Spanish Ministry of Science and Innovation (contract TIN 2015-65316P).
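One of the surveyed model families, k-nearest neighbors, can be sketched in a few lines as a workload performance predictor. Everything below (feature choice, the counter values, the IPC targets) is invented for illustration; the paper's actual features and data are not reproduced here.

```python
import numpy as np

# Toy k-nearest-neighbours regressor predicting a performance metric
# (e.g. IPC on a given core type) from hardware-counter-style features
# collected each execution quantum.
def knn_predict(X_train, y_train, x, k=3):
    """Average the targets of the k closest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of k neighbours
    return y_train[nearest].mean()

# Invented counters: [cache miss rate, branch mispredict rate]
X = np.array([[0.01, 0.02], [0.30, 0.05], [0.02, 0.01], [0.25, 0.04]])
y = np.array([2.1, 0.8, 2.0, 0.9])  # invented IPC measured on a big core
pred = knn_predict(X, y, np.array([0.02, 0.02]), k=2)
```

A heterogeneous scheduler could query such a predictor once per core type and place each workload on the core with the best predicted throughput.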
Is Structure Necessary for Modeling Argument Expectations in Distributional Semantics?
Despite the number of NLP studies dedicated to thematic fit estimation, little attention has been paid to the related task of composing and updating verb argument expectations. The few exceptions have mostly modeled this phenomenon with structured distributional models, implicitly assuming a similarly structured representation of events. Recent experimental evidence, however, suggests that the human processing system could also exploit an unstructured "bag-of-arguments" type of event representation to predict upcoming input. In this paper, we re-implement a traditional structured model and adapt it to compare the different hypotheses concerning the degree of structure in our event knowledge, evaluating their relative performance in the task of updating argument expectations.

Comment: conference paper, IWC
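The unstructured hypothesis can be sketched concretely. In this toy version, which is an assumption-laden stand-in rather than the paper's model, the expectation is the centroid of the vectors seen so far, updated simply by recomputing it as arguments arrive, and a candidate filler is scored by cosine similarity; the 3-d vectors replace real distributional embeddings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def bag_of_arguments_score(context_vecs, candidate):
    """Score a candidate argument against an unstructured event context.

    The "bag-of-arguments" expectation ignores syntactic roles: it is
    just the centroid of verb/argument vectors seen so far, and the
    update on each new argument is a centroid recomputation.
    """
    centroid = np.mean(context_vecs, axis=0)
    return cosine(centroid, candidate)

ctx = np.array([[1.0, 0.0, 0.0],    # invented vector, e.g. the verb
                [0.0, 1.0, 0.0]])   # invented vector, e.g. its subject
score = bag_of_arguments_score(ctx, np.array([1.0, 1.0, 0.0]))
```

A structured alternative would instead keep role-specific expectations (one per syntactic slot), which is exactly the contrast the paper evaluates.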