Evolutionary multiobjective optimization of the multi-location transshipment problem
We consider a multi-location inventory system where inventory choices at each
location are centrally coordinated. Lateral transshipments are allowed as
recourse actions within the same echelon in the inventory system to reduce
costs and improve service level. However, this transshipment process usually
causes undesirable lead times. In this paper, we propose a multiobjective model
of the multi-location transshipment problem which addresses optimizing three
conflicting objectives: (1) minimizing the aggregate expected cost, (2)
maximizing the expected fill rate, and (3) minimizing the expected
transshipment lead times. We apply an evolutionary multiobjective optimization
approach, using the Strength Pareto Evolutionary Algorithm 2 (SPEA2), to
approximate the optimal Pareto front. Simulations over a wide range of model
parameters illustrate the different trade-offs between the conflicting objectives.
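The Pareto front that SPEA2 approximates is defined by the dominance relation between objective vectors. A minimal sketch of that relation (generic, not the paper's implementation; objectives are arranged so every component is minimized, with fill rate negated, and the data are illustrative):

```python
# Hypothetical sketch of Pareto dominance and non-dominated filtering for
# the three objectives named above: cost, fill rate (negated so lower is
# better), and transshipment lead time. Values are made up.

def dominates(a, b):
    """True if vector a dominates b: no worse everywhere, strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# (cost, -fill_rate, lead_time): lower is better in every component
candidates = [(10.0, -0.95, 3.0), (12.0, -0.99, 2.0), (11.0, -0.90, 4.0)]
front = pareto_front(candidates)
# the third candidate is dominated by the first and drops out
```

Evolutionary algorithms such as SPEA2 build on this relation by assigning fitness from dominance counts and maintaining an archive of non-dominated solutions.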
Density Estimation with Imprecise Kernels: Application to Classification
In this paper, we explore the problem of estimating lower and upper densities from imprecisely defined families of parametric kernels. Such estimates make it possible to rely on a single bandwidth value, and we show that this provides good results on classification tasks when extending the naive Bayesian classifier.
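One simple way to picture lower and upper densities is to bracket a kernel density estimate pointwise over a family of kernels sharing one bandwidth. This is an illustrative sketch of that idea, not the paper's exact construction; the kernel family and data are assumptions:

```python
import math

# Illustrative: pointwise min/max of KDEs over a small kernel family,
# all sharing a single bandwidth h, as a crude [lower, upper] density pair.

def gaussian(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def epanechnikov(u):
    return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0

def triangular(u):
    return 1 - abs(u) if abs(u) < 1 else 0.0

def kde(x, data, h, kernel):
    """Standard kernel density estimate at point x."""
    return sum(kernel((x - xi) / h) for xi in data) / (len(data) * h)

def density_bounds(x, data, h, family=(gaussian, epanechnikov, triangular)):
    """Lower/upper density at x over the (assumed) kernel family."""
    estimates = [kde(x, data, h, k) for k in family]
    return min(estimates), max(estimates)

data = [0.1, 0.4, 0.5, 0.9, 1.2]
lo, hi = density_bounds(0.5, data, h=0.5)
```

The interval [lo, hi] plays the role of an imprecise density value; a classifier extended to imprecise probabilities can then compare such intervals rather than single numbers.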
Semi-parametric estimation of the Wilshire creep life prediction model: an application to 2.25Cr-1Mo steel
The Wilshire equation is a recent addition to the literature on safe life prediction. While the effect of temperature on creep life is reasonably well understood, the effect of stress is not. The Wilshire equation deals with this by partitioning over sub-ranges of stress, but this approximation can lead to poor lifetime predictions. This paper introduces a semi-parametric procedure that allows the data itself to identify the stress relationship. When applied to 2.25Cr-1Mo steel, the stress relationship was found to be non-linear, and this semi-parametric version of the Wilshire model had better predictive performance than any partitioned Wilshire model. The approach imposes a limit on valid extrapolation, and the isothermal predictions for creep life show a more realistic pattern of behaviour.
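For readers unfamiliar with the model, the standard Wilshire equation relates normalised stress to temperature-compensated creep life and can be inverted for the failure time. A numeric sketch with entirely illustrative parameter values (not fitted 2.25Cr-1Mo constants):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def wilshire_life(stress, sigma_ts, k1, u, Q, T):
    """Creep life t_f from the standard Wilshire form
    sigma/sigma_TS = exp(-k1 * (t_f * exp(-Q/(R*T)))**u),
    inverted for t_f. All parameter values used below are illustrative."""
    theta = (-math.log(stress / sigma_ts) / k1) ** (1.0 / u)
    return theta * math.exp(Q / (R * T))

# Hypothetical constants: k1, u, activation energy Q, temperature T in kelvin
t_f = wilshire_life(stress=100.0, sigma_ts=450.0, k1=30.0, u=0.15, Q=300e3, T=823.0)
# predicted life shrinks as applied stress rises toward the tensile strength
```

The paper's semi-parametric variant replaces the fixed functional form of the stress dependence (the `k1`, `u` pair per stress sub-range) with a relationship estimated from the data itself.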
A computational framework to emulate the human perspective in flow cytometric data analysis
Background: In recent years, intense research efforts have focused on developing methods for automated flow cytometric data analysis. However, while designing such applications, little or no attention has been paid to the human perspective that is absolutely central to the manual gating process of identifying and characterizing cell populations. In particular, the assumption of many common techniques that cell populations could be modeled reliably with pre-specified distributions may not hold true in real-life samples, which can have populations of arbitrary shapes and considerable inter-sample variation.
Results: To address this, we developed a new framework, flowScape, for emulating certain key aspects of the human perspective in analyzing flow data, which we implemented in multiple steps. First, flowScape creates a mathematically rigorous map of the high-dimensional flow data landscape based on dense and sparse regions defined by relative concentrations of events around modes. Second, these modal clusters are connected with a global hierarchical structure. This representation allows flowScape to perform ridgeline analysis, both for traversing the landscape and for isolating cell populations at different levels of resolution. Finally, we extended manual gating with a new capacity for constructing templates that identify target populations in terms of their relative parameters, as opposed to the more commonly used absolute or physical parameters. This allows flowScape to apply such templates in batch mode for detecting the corresponding populations in a flexible, sample-specific manner. We also demonstrate different applications of our framework to flow data analysis and show its superiority over other analytical methods.
Conclusions: The human perspective, built on intuition and experience, is a very important component of flow cytometric data analysis. By emulating some of its approaches and extending them with automation and rigor, flowScape provides a flexible and robust framework for computational cytomics.
Intergenerational change and familial aggregation of body mass index
The relationship between parental BMI and that of their adult offspring, when increased adiposity can become a clinical issue, is unknown. We investigated the intergenerational change in body mass index (BMI) distribution, and examined the sex-specific relationship between parental and adult offspring BMI. Intergenerational change in the distribution of adjusted BMI in 1,443 complete families (both parents and at least one offspring) with 2,286 offspring (1,263 daughters and 1,023 sons) from the west of Scotland, UK, was investigated using quantile regression. Familial correlations were estimated from linear mixed effects regression models. The distribution of BMI showed little intergenerational change in the normal range (<25 kg/m2), decreasing overweight (25–<30 kg/m2) and increasing obesity (≥30 kg/m2). Median BMI was static across generations in males and decreased in females by 0.4 (95% CI: 0.0, 0.7) kg/m2; the 95th percentile increased by 2.2 (1.1, 3.2) kg/m2 in males and 2.7 (1.4, 3.9) kg/m2 in females. Mothers' BMI was more strongly associated with daughters' BMI than was fathers' (correlation coefficient (95% CI): mothers 0.31 (0.27, 0.36), fathers 0.19 (0.14, 0.25); P = 0.001). Mothers' and fathers' BMI were equally correlated with sons' BMI (correlation coefficient: mothers 0.28 (0.22, 0.33), fathers 0.27 (0.22, 0.33)). The increase in BMI between generations was concentrated at the upper end of the distribution. This, alongside the strong parent-offspring correlation, suggests that the increase in BMI is disproportionately greater among offspring of heavier parents. Familial influences on BMI among middle-aged women appear significantly stronger from mothers than fathers.
Bayesian Optimization Approaches for Massively Multi-modal Problems
The optimization of massively multi-modal functions is a challenging task, particularly for problems where the search space can lead the optimization process to local optima. While evolutionary algorithms have been extensively investigated for these optimization problems, Bayesian Optimization algorithms have not been explored to the same extent. In this paper, we study the behavior of Bayesian Optimization as part of a hybrid approach for solving several massively multi-modal functions. We use well-known benchmarks and metrics to evaluate how different variants of Bayesian Optimization deal with multi-modality.
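At the core of most Bayesian Optimization variants is an acquisition function that scores candidate points from the surrogate model's posterior. A generic sketch of the common expected-improvement acquisition for minimisation (not the paper's specific hybrid; all values are illustrative):

```python
import math

# Sketch of expected improvement (EI): given the surrogate's posterior
# mean mu and standard deviation sigma at a candidate point, score how
# much it is expected to improve on the best observed value (minimisation).

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, best):
    """Closed-form EI under a Gaussian posterior at one candidate point."""
    if sigma <= 0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    return (best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

# a candidate predicted below the incumbent scores far higher than one above it
ei_good = expected_improvement(mu=0.5, sigma=0.2, best=1.0)
ei_bad = expected_improvement(mu=1.5, sigma=0.2, best=1.0)
```

On massively multi-modal landscapes the acquisition surface itself is multi-modal, which is one reason hybrid schemes pair the Bayesian model with an evolutionary or restart-based search over candidates.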
Asymptotic normality of the Parzen-Rosenblatt density estimator for strongly mixing random fields
We prove the asymptotic normality of the kernel density estimator (introduced
by Rosenblatt (1956) and Parzen (1962)) in the context of stationary strongly
mixing random fields. Our approach is based on Lindeberg's method rather than on Bernstein's small-block/large-block technique and the coupling arguments widely used in previous works on nonparametric estimation for spatial processes. Our method allows us to consider only minimal conditions on the bandwidth parameter and provides a simple criterion on the (non-uniform) strong mixing coefficients which does not depend on the bandwidth.
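For reference, the estimator and the shape of the limit theorem take the standard form below; the notation here is generic (observations indexed over a region $\Lambda_n$ of $\mathbb{Z}^N$ with values in $\mathbb{R}^d$) and is not copied from the paper:

```latex
% Parzen--Rosenblatt estimator over a sampling region \Lambda_n of size n^N:
\[
  f_n(x) \;=\; \frac{1}{n^N h_n^d} \sum_{i \in \Lambda_n}
      K\!\left(\frac{x - X_i}{h_n}\right),
\]
% and asymptotic normality results for it typically take the usual form
\[
  \sqrt{n^N h_n^d}\,\bigl(f_n(x) - \mathbb{E} f_n(x)\bigr)
  \;\xrightarrow{\;d\;}\;
  \mathcal{N}\!\Bigl(0,\; f(x) \int K^2(t)\,dt\Bigr).
\]
```

The contribution described above lies in the conditions under which this convergence holds for strongly mixing fields, not in the form of the limit itself.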
A Novel Approach for Ellipsoidal Outer-Approximation of the Intersection Region of Ellipses in the Plane
In this paper, a novel technique for tight outer-approximation of the
intersection region of a finite number of ellipses in 2-dimensional (2D) space
is proposed. First, the vertices of a tight polygon that contains the convex
intersection of the ellipses are found in an efficient manner. To do so, the
intersection points of the ellipses that fall on the boundary of the
intersection region are determined, and a set of points is generated on the
elliptic arcs connecting every two neighbouring intersection points. By finding
the tangent lines to the ellipses at the extended set of points, a set of
half-planes is obtained, whose intersection forms a polygon. To find the
polygon more efficiently, the points are given an order and the intersection of
the half-planes corresponding to every two neighbouring points is calculated.
If the polygon is convex and bounded, these calculated points together with the
initially obtained intersection points will form its vertices. If the polygon
is non-convex or unbounded, we can detect this situation and then generate
additional discrete points only on the elliptical arc segment causing the
issue, and restart the algorithm to obtain a bounded and convex polygon.
Finally, the smallest area ellipse that contains the vertices of the polygon is
obtained by solving a convex optimization problem. Through numerical
experiments, it is illustrated that the proposed technique returns a tighter
outer-approximation of the intersection of multiple ellipses, compared to
conventional techniques, with only a slightly higher computational cost.
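The tangent-line step described above has a simple closed form for an axis-aligned ellipse, which the following sketch illustrates (generic geometry, not the paper's code; names are illustrative):

```python
import math

# For an axis-aligned ellipse x^2/a^2 + y^2/b^2 = 1, the tangent line at a
# boundary point (x0, y0) is x*x0/a^2 + y*y0/b^2 = 1, and the ellipse's
# interior lies in the half-plane x*x0/a^2 + y*y0/b^2 <= 1.

def boundary_point(a, b, theta):
    """Point on the ellipse at parameter angle theta."""
    return (a * math.cos(theta), b * math.sin(theta))

def tangent_halfplane(a, b, point):
    """Return (cx, cy, d) such that cx*x + cy*y <= d supports the ellipse
    at the given boundary point."""
    x0, y0 = point
    return (x0 / a**2, y0 / b**2, 1.0)

def inside(halfplane, p, tol=1e-9):
    cx, cy, d = halfplane
    return cx * p[0] + cy * p[1] <= d + tol

a, b = 2.0, 1.0
hp = tangent_halfplane(a, b, boundary_point(a, b, math.pi / 4))
# the centre lies inside the supporting half-plane; a far outside point does not
```

Intersecting many such half-planes, taken at the sampled boundary points of all the ellipses, yields the polygon whose vertices then feed the minimum-area-ellipse problem.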
A Fast Gradient Approximation for Nonlinear Blind Signal Processing
When dealing with nonlinear blind processing algorithms (deconvolution or post-nonlinear source separation), complex mathematical estimations must be performed, resulting in very slow algorithms. This is the case, for example, in speech processing, spike-signal deconvolution, or microarray data analysis. In this paper, we propose a simple method to reduce the computational time for the inversion of Wiener systems or the separation of post-nonlinear mixtures, by using a linear approximation in a minimum mutual information algorithm. Simulation results demonstrate that linear spline interpolation is fast and accurate, obtaining very good results (similar to those obtained without approximation) while computational time is dramatically decreased. Cubic spline interpolation also obtains similarly good results, but due to its intrinsic complexity the global algorithm is much slower and hence not useful for our purpose.
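The speed-up idea can be sketched generically: tabulate an expensive nonlinearity on a coarse grid once, then evaluate it everywhere else by piecewise-linear interpolation. The function below is a stand-in, not the paper's score-function estimate:

```python
import numpy as np

# Sketch: precompute a costly nonlinearity on a coarse grid and replace
# per-sample evaluation with numpy.interp (piecewise-linear lookup), as
# would be done inside an iterative minimum-mutual-information loop.

def expensive_nonlinearity(x):
    # placeholder for a costly per-iteration estimate
    return np.tanh(x) + 0.1 * x**3

grid = np.linspace(-3, 3, 64)             # coarse grid, computed once
table = expensive_nonlinearity(grid)

samples = np.linspace(-2.5, 2.5, 10_000)  # signal samples to process
approx = np.interp(samples, grid, table)  # fast linear-spline lookup
exact = expensive_nonlinearity(samples)

max_err = float(np.max(np.abs(approx - exact)))  # small for a smooth function
```

For smooth nonlinearities the interpolation error shrinks quadratically with the grid spacing, which is why the linear-spline variant can match the exact algorithm's separation quality at a fraction of the cost.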