Development of a novel and rapid phenotype-based screening method to assess rice seedling growth
Background: Rice (Oryza sativa) is one of the most important model crops in plant research. Despite its considerable advantages, phenotypic bioassays for rice are not as well developed as for Arabidopsis thaliana. Here, we present a phenotype-based screening method to study shoot-related parameters of rice seedlings via automated computer analysis.
Results: The phenotype-based screening method was validated in pharmacological experiments with several compounds that interfere with hormone homeostasis, confirming that the assay reproduced the anticipated growth responses and demonstrating the reproducibility of the set-up. Moreover, abiotic stress tests using NaCl and DCMU, a blocker of electron transport in the light-dependent reactions of photosynthesis, confirmed the validity of the new method for a wide range of applications. Next, this method was used to screen the impact of semi-purified fractions of marine invertebrates on the initial stages of rice seedling growth. Certain fractions clearly stimulated growth, whereas others inhibited it, especially in the root, illustrating the possible applications of this novel, robust, and fast phenotype-based screening method for rice.
Conclusions: The validated, phenotype-based, and cost-efficient screening method allows quick and proper analysis of shoot growth and requires only small volumes of compounds and media. As a result, this method could potentially be used for a whole range of applications, from the discovery of novel biostimulants, plant growth regulators, and plant growth-promoting bacteria to the analysis of CRISPR knockouts, molecular plant breeding, genome-wide association studies, and phytotoxicity studies. The assay system described here can contribute to a better understanding of plant development in general.
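As an illustration of the kind of readout such an assay yields, here is a minimal sketch of comparing shoot lengths between a treated and a control group with a two-sample t-test; the measurements and the compound are invented, not taken from the paper.

```python
# Hypothetical downstream analysis of assay output: compare shoot lengths of
# treated vs. mock-treated seedlings. All numbers are invented stand-ins.
import numpy as np
from scipy.stats import ttest_ind

control = np.array([5.1, 4.8, 5.3, 5.0, 4.9, 5.2])  # shoot length (cm), mock
treated = np.array([3.9, 4.1, 3.7, 4.0, 4.2, 3.8])  # shoot length (cm), compound

t_stat, p_value = ttest_ind(treated, control)
pct_change = 100 * (treated.mean() - control.mean()) / control.mean()
print(f"shoot growth change: {pct_change:+.1f}% (p = {p_value:.4f})")
```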
Computationally Efficient Methods for High-Dimensional Statistical Problems
As the computational power available to researchers grows, so broadens the horizon of statistical problems that can be tackled. However, many practitioners have only an ordinary personal computer on which to do their work. The need for computationally efficient methodology is as pressing as ever, and some questions still lack a confident answer for a practitioner working under tight computational constraints. This thesis develops methods for three such problems. The first, introductory, chapter provides an overview of the area and an accessible preamble to the problems these methods address.
In the second chapter we address the problem of modelling a high-dimensional linear regression with categorical predictor variables.
The natural sparsity assumption in this setting is on the number of unique values the coefficients within each categorical variable can take. Under this assumption, we introduce a new form of penalty function for this problem. While the number of combinations of levels can grow extremely fast in the number of levels, the structure of the penalty enables fast optimisation: a novel and intricate dynamic programming algorithm computes the exact global optimum over each variable and is embedded within a block coordinate descent algorithm. This allows such models to be fitted quickly, and in a memory-efficient manner, on a laptop computer. The scaling requirements sufficient for this method to recover the correct groups cannot be relaxed for any estimator; this strong performance is validated by a range of experiments using both simulated and real data.
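A minimal sketch of the outer loop described above, assuming squared-error loss on simulated data: the per-variable update here is plain least squares on the partial residual, standing in for the thesis's dynamic-programming proximal step, which is not reproduced.

```python
# Block coordinate descent over categorical variables (one-hot encoded).
import numpy as np

rng = np.random.default_rng(0)
n, levels = 500, [4, 6, 3]                       # three categorical predictors
cats = [rng.integers(0, L, size=n) for L in levels]
X_blocks = [np.eye(L)[c] for L, c in zip(levels, cats)]  # one-hot design blocks
beta_true = [rng.normal(size=L) for L in levels]
y = sum(Xb @ b for Xb, b in zip(X_blocks, beta_true)) + 0.1 * rng.normal(size=n)

beta = [np.zeros(L) for L in levels]
for sweep in range(50):                          # cycle through the blocks
    for j, Xb in enumerate(X_blocks):
        # Partial residual: remove the fit of every other block.
        r = y - sum(X_blocks[k] @ beta[k] for k in range(len(beta)) if k != j)
        # Exact block minimiser for squared loss; the penalised method would
        # instead apply its dynamic-programming proximal operator here.
        beta[j] = np.linalg.lstsq(Xb, r, rcond=None)[0]
```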
In the third chapter we explore the possibility that a practitioner has some a priori belief as to which variables are most likely to be important, expressed as a permutation of the columns. Our approach takes this ordering and efficiently computes a grid of solution paths by sequentially removing groups of variables without unnecessary recomputation of coefficients. Typical examples of such orderings include the column norms of the (unscaled) design matrix, or the recency of observations in time series data. This procedure, combined with selecting the size of the support set by validation on a test set, performs similarly to fitting the oracular submodel.
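A minimal sketch of the idea, not the thesis's path algorithm: given a fixed a priori ordering, fit nested least-squares models of growing support and choose the support size by validation error. The data and the norm-based ordering are invented, and the loop refits naively at each size, whereas the chapter's contribution is precisely to avoid that recomputation.

```python
# Nested models along a given column ordering, support size chosen by validation.
import numpy as np

rng = np.random.default_rng(1)
n, p, s = 200, 50, 5
X = rng.normal(size=(n, p))
X[:, :s] *= 3.0                                  # give signal columns larger norms
beta = np.zeros(p); beta[:s] = 2.0
y = X @ beta + rng.normal(size=n)

order = np.argsort(-np.linalg.norm(X, axis=0))   # a priori ordering: column norms
X_tr, y_tr = X[:150], y[:150]
X_va, y_va = X[150:], y[150:]

errors = []
for k in range(1, p + 1):                        # nested supports along the ordering
    cols = order[:k]
    coef = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)[0]
    errors.append(np.mean((y_va - X_va[:, cols] @ coef) ** 2))
best_k = int(np.argmin(errors)) + 1              # support size chosen by validation
print("chosen support size:", best_k)
```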
The fourth chapter concerns the efficient estimation of conditional independence graphs in Gaussian graphical models. Neighbourhood selection is practical, popular, and enjoys good performance, but in large-scale settings it can still have computational demands exceeding the resources available to many practitioners. Screening approaches promise large improvements in speed at only a small price in estimation performance. Although it is well known that nodes adjacent in the conditional independence graph may be uncorrelated, a minimum absolute correlation between adjacent nodes is often tacitly or explicitly assumed for screening procedures to be effective. We make use of recent work in covariance estimation and high-dimensional variable screening to develop a fast, two-stage screening procedure specifically for use within neighbourhood selection that avoids this restrictive assumption. Provided that a weaker version of a minimum edge strength requirement holds over most of the graph, the performance of the post-screening nodewise regressions is not compromised, while being substantially faster than the full procedure. The method is robust to the presence of latent confounders, as well as to other scenarios that typically impede variable screening. Experiments show that our approach strikes a favourable balance between edge detection and computational efficiency.
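For orientation, here is a generic sketch of screened neighbourhood selection. The screening stage shown is plain marginal correlation, i.e. the simple baseline that relies on the minimum-correlation assumption the chapter's two-stage procedure is designed to avoid; the data are a random stand-in.

```python
# Nodewise lasso regressions restricted to a screened candidate set per node.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p, k = 300, 40, 10                  # keep k candidate neighbours per node
X = rng.normal(size=(n, p))            # random stand-in for GGM data

corr = np.abs(np.corrcoef(X, rowvar=False))
edges = set()
for j in range(p):
    scores = corr[j].copy()
    scores[j] = 0.0                    # a node cannot be its own neighbour
    cand = np.argsort(-scores)[:k]     # stage 1: marginal-correlation screen
    fit = Lasso(alpha=0.1).fit(X[:, cand], X[:, j])  # stage 2: nodewise lasso
    for i in cand[np.abs(fit.coef_) > 1e-8]:
        edges.add(frozenset((int(i), j)))            # "OR" rule across fits
print(len(edges), "edges selected")
```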
Design of Experiments for Screening
The aim of this paper is to review methods of designing screening experiments, ranging from designs originally developed for physical experiments to those especially tailored to experiments on numerical models. The strengths and weaknesses of the various designs for screening variables in numerical models are discussed. First, classes of factorial designs for experiments to estimate main effects and interactions through a linear statistical model are described, specifically regular and nonregular fractional factorial designs, supersaturated designs, and systematic fractional replicate designs. Generic issues of aliasing, bias, and cancellation of factorial effects are discussed. Second, group screening experiments are considered, including factorial group screening and sequential bifurcation. Third, random sampling plans are discussed, including Latin hypercube sampling and sampling plans to estimate elementary effects. Fourth, a variety of modelling methods commonly employed with screening designs are briefly described. Finally, a novel study demonstrates six screening methods on two frequently used exemplars, and their performances are compared.
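As a concrete instance of the elementary-effects idea mentioned above, here is a minimal sketch using one-at-a-time perturbations from random base points (a simplified design, not a full Morris trajectory scheme); the test function is an arbitrary stand-in.

```python
# Elementary-effects screening: finite differences from random base points.
import numpy as np

def f(x):                                  # hypothetical numerical model
    return 4 * x[0] + 2 * x[1] ** 2 + 0.1 * x[2] + x[0] * x[1]

rng = np.random.default_rng(3)
d, r, delta = 3, 20, 0.1                   # inputs, repetitions, step size
ee = np.zeros((r, d))
for t in range(r):
    base = rng.uniform(0, 1 - delta, size=d)
    y0 = f(base)
    for i in range(d):
        x = base.copy()
        x[i] += delta                      # perturb one factor at a time
        ee[t, i] = (f(x) - y0) / delta     # elementary effect of factor i

mu_star = np.abs(ee).mean(axis=0)          # mean |EE|, the usual screening measure
print("importance ranking (most to least):", np.argsort(-mu_star))
```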
Inductive queries for a drug designing robot scientist
It is increasingly clear that machine learning algorithms need to be integrated into an iterative scientific discovery loop, in which data is queried repeatedly by means of inductive queries and the computer provides guidance on the experiments to be performed. In this chapter, we summarise several key challenges in achieving this integration of machine learning and data mining algorithms in methods for the discovery of Quantitative Structure Activity Relationships (QSARs). We introduce the concept of a robot scientist, in which all steps of the discovery process are automated; we discuss the representation of molecular data such that knowledge discovery tools can analyse it; and we discuss the adaptation of machine learning and data mining algorithms to guide QSAR experiments.
Ligand-based virtual screening using binary kernel discrimination
This paper discusses the use of a machine-learning technique called binary kernel discrimination (BKD) for virtual screening in drug- and pesticide-discovery programmes. BKD is compared with several other ligand-based tools for virtual screening in databases of 2D structures represented by fragment bit-strings, and is shown to provide an effective, and reasonably efficient, way of prioritising compounds for biological screening.
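A minimal sketch of the scoring rule commonly used in BKD, assuming the standard binomial-style kernel on fragment bit-strings: each library compound is scored by the ratio of its summed kernel values to known actives versus known inactives. The fingerprints below are random stand-ins, and the smoothing parameter `lam` would normally be tuned by cross-validation.

```python
# BKD-style ranking with the kernel k(i, j) = lam**(n - d) * (1 - lam)**d,
# where d is the Hamming distance between two n-bit fingerprints.
import numpy as np

rng = np.random.default_rng(4)
n_bits, lam = 166, 0.9                        # MACCS-sized bit-strings, smoothing
actives = rng.integers(0, 2, (20, n_bits))    # hypothetical training actives
inactives = rng.integers(0, 2, (80, n_bits))  # hypothetical training inactives
library = rng.integers(0, 2, (1000, n_bits))  # compounds to prioritise

def kernel_sums(queries, refs):
    # Hamming distances between every query and every reference structure.
    d = (queries[:, None, :] != refs[None, :, :]).sum(axis=-1)
    return (lam ** (n_bits - d) * (1 - lam) ** d).sum(axis=-1)

scores = kernel_sums(library, actives) / kernel_sums(library, inactives)
ranking = np.argsort(-scores)                 # screen compounds in this order
print("top 5 candidates:", ranking[:5])
```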
Finding the Important Factors in Large Discrete-Event Simulation: Sequential Bifurcation and its Applications
This contribution discusses experiments with many factors: the case study includes a simulation model with 92 factors. The experiments are guided by sequential bifurcation. This method is most efficient and effective if the true input/output behavior of the simulation model can be approximated by a first-order polynomial, possibly augmented with two-factor interactions. The method is explained and illustrated through three related discrete-event simulation models, representing three supply chain configurations studied for an Ericsson factory in Sweden. After simulating 21 scenarios (factor combinations), each replicated five times to account for noise, a shortlist of the 11 most important factors is identified for the biggest of the three simulation models.
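A toy sketch of the bifurcation step under the method's standard assumptions (first-order polynomial, non-negative effects; no replication for noise here): the aggregated effect of a group of factors is estimated by switching the whole group from low to high, and only groups with a non-negligible estimated effect are split further. The simulator and threshold are invented.

```python
# Sequential bifurcation: y(j) runs the model with factors 1..j high and the
# rest low; the aggregated effect of group (lo, hi] is (y(hi) - y(lo)) / 2.
import numpy as np

true_beta = np.array([0, 4, 0, 0, 3, 0, 0, 0])   # hypothetical main effects
K = len(true_beta)

def simulate(x):                                 # black-box model, x in {-1, +1}^K
    return float(true_beta @ x)

cache = {}
def y(j):                                        # factors 1..j high, the rest low
    if j not in cache:
        x = -np.ones(K)
        x[:j] = 1.0
        cache[j] = simulate(x)
    return cache[j]

important = []
def bifurcate(lo, hi, threshold=1.0):
    effect = (y(hi) - y(lo)) / 2                 # aggregated effect of (lo, hi]
    if effect <= threshold:
        return                                   # whole group screened out at once
    if hi - lo == 1:
        important.append(hi)                     # single factor (1-based index)
        return
    mid = (lo + hi) // 2
    bifurcate(lo, mid, threshold)                # split the group and recurse
    bifurcate(mid, hi, threshold)

bifurcate(0, K)
print("important factors:", important, "| model runs used:", len(cache))
```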
Sensitivity analysis of expensive black-box systems using metamodeling
Simulations are becoming ever more common as a tool for designing complex products. Sensitivity analysis techniques can be applied to these simulations to gain insight, or to reduce the complexity of the problem at hand. However, these simulators are often expensive to evaluate, and sensitivity analysis typically requires a large number of evaluations. Metamodeling has been successfully applied in the past to reduce the number of evaluations required for design tasks such as optimization and design space exploration. In this paper, we propose a novel sensitivity analysis algorithm for variance- and derivative-based indices using sequential sampling and metamodeling. Several stopping criteria are proposed and investigated to keep the total number of evaluations minimal. The results show that both variance- and derivative-based indices can be accurately computed with a minimal number of evaluations using fast metamodels and FLOLA-Voronoi or density sequential sampling algorithms.
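A minimal sketch of the surrogate idea, assuming a Gaussian-process metamodel and a static Monte Carlo design rather than the paper's FLOLA-Voronoi sequential sampling: fit the metamodel to a handful of expensive runs, then estimate first-order Sobol (variance-based) indices cheaply on it. The test function is a stand-in.

```python
# Variance-based sensitivity indices via a cheap metamodel of an expensive model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_model(X):                        # stand-in for the black box
    return np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 + 0.05 * X[:, 2]

rng = np.random.default_rng(5)
d, n_train, n_mc = 3, 60, 20000
X_train = rng.uniform(-np.pi, np.pi, (n_train, d))    # the only expensive runs
gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, expensive_model(X_train))

A = rng.uniform(-np.pi, np.pi, (n_mc, d))      # pick-freeze Monte Carlo matrices
B = rng.uniform(-np.pi, np.pi, (n_mc, d))
fA, fB = gp.predict(A), gp.predict(B)          # all further evaluations are cheap
var = np.var(np.concatenate([fA, fB]))
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                         # resample only input i
    S_i = np.mean(fB * (gp.predict(AB) - fA)) / var
    print(f"first-order Sobol index S_{i + 1} ≈ {S_i:.2f}")
```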