Variation Modeling of Lean Manufacturing Performance Using Fuzzy Logic Based Quantitative Lean Index
The lean index is a weighted sum of scores on performance variables that describe the lean manufacturing characteristics of a system. Various quantitative lean index models have been proposed for assessing lean manufacturing performance, but these models are built from deterministic variables and do not account for variation in manufacturing systems. In this article, variation is modeled in a quantitative fuzzy-logic-based lean index and compared with traditional deterministic modeling. Simulating the lean index model for a manufacturing case shows that the deterministic model tends to under- or overestimate performance, whereas the variation model provides a more robust lean assessment.
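As a rough sketch of the contrast described above, the weighted-sum lean index can be evaluated once at nominal scores (the deterministic model) and then under simulated variation of the scores. The variable names, weights, and triangular spread below are illustrative assumptions, not values from the article:

```python
import random

# Hypothetical performance variables (scores in [0, 1]) and weights;
# names and numbers are illustrative, not taken from the article.
weights = {"flow": 0.4, "quality": 0.35, "delivery": 0.25}
nominal = {"flow": 0.7, "quality": 0.8, "delivery": 0.6}
spread = 0.1  # assumed +/- variation around each nominal score

def lean_index(scores):
    # Lean index as the weighted sum of performance-variable scores.
    return sum(weights[k] * scores[k] for k in weights)

# Deterministic model: a single point estimate.
deterministic = lean_index(nominal)

# Variation model: sample each score from a triangular distribution
# (a common carrier for triangular fuzzy numbers) and simulate.
random.seed(0)
samples = []
for _ in range(10_000):
    scores = {k: random.triangular(v - spread, v + spread, v)
              for k, v in nominal.items()}
    samples.append(lean_index(scores))

mean_index = sum(samples) / len(samples)
print(f"deterministic: {deterministic:.3f}")
print(f"simulated mean: {mean_index:.3f}, "
      f"range: [{min(samples):.3f}, {max(samples):.3f}]")
```

A point estimate reports only one number, while the simulation exposes the spread of attainable lean index values, which is the kind of information the variation model adds.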
Basic statistics for probabilistic symbolic variables: a novel metric-based approach
In data mining, it is usual to describe a set of individuals by summaries
(means, standard deviations, histograms, confidence intervals) that
generalize individual descriptions into a typology description. In this case,
data can be described by several values. In this paper, we propose an approach
for computing basic statistics for such data, in particular for data
described by numerical multi-valued variables (intervals, histograms, discrete
multi-valued descriptions). We propose to treat all numerical multi-valued
variables as distributional data, i.e. as individuals described by
distributions. To obtain new basic statistics for measuring the variability of,
and the association between, such variables, we extend the classic measure of
inertia, calculated with the Euclidean distance, using the squared Wasserstein
distance defined between probability measures; this distance can be expressed
as a distance between the quantile functions of the two distributions. Some
properties of the distance are shown; among them, we prove the Huygens theorem
of decomposition of the inertia. We illustrate the use of the Wasserstein
distance and of the basic statistics by presenting a k-means-like clustering
algorithm for a set of data described by modal numerical variables
(distributional variables), applied to a real data set. Keywords:
Wasserstein distance, inertia, dependence, distributional data, modal
variables. Comment: 19 pages, 3 figures
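For one-dimensional distributions, the squared 2-Wasserstein distance can be written as the integral over [0, 1] of the squared difference of the two quantile functions. A minimal sketch of this quantile-based computation (the sample data and grid size are illustrative assumptions):

```python
import numpy as np

def wasserstein2_sq(x, y, n_quantiles=1000):
    """Squared 2-Wasserstein distance between two 1-D empirical
    distributions, computed from their quantile functions:
    W2^2 = integral_0^1 (Qx(t) - Qy(t))^2 dt, approximated on a grid."""
    t = (np.arange(n_quantiles) + 0.5) / n_quantiles  # midpoint grid on (0, 1)
    qx = np.quantile(x, t)
    qy = np.quantile(y, t)
    return float(np.mean((qx - qy) ** 2))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 5000)
b = rng.normal(2.0, 1.0, 5000)

# For two Gaussians with equal variance, W2^2 equals the squared
# difference of the means, here approximately 4.
print(wasserstein2_sq(a, b))
```

In a k-means-like scheme over distributional data, the same quantile representation is convenient because the barycenter of a cluster under this distance is obtained by averaging the quantile functions of its members.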
Uncertainty Analysis of the Adequacy Assessment Model of a Distributed Generation System
Due to the inherent aleatory uncertainties in renewable generators, the
reliability/adequacy assessments of distributed generation (DG) systems have
been particularly focused on the probabilistic modeling of random behaviors,
given sufficient informative data. However, a second type of uncertainty
(epistemic uncertainty) must also be accounted for in the modeling, due to
incomplete knowledge of the phenomena and imprecise evaluation of the related
characteristic parameters. When informative data are scarce, this type
of uncertainty calls for alternative methods of representation, propagation,
analysis and interpretation. In this study, we make a first attempt to
identify, model, and jointly propagate aleatory and epistemic uncertainties in
the context of DG systems modeling for adequacy assessment. Probability and
possibility distributions are used to model the aleatory and epistemic
uncertainties, respectively. Evidence theory is used to incorporate the two
uncertainties under a single framework. Based on the plausibility and belief
functions of evidence theory, the hybrid propagation approach is introduced. A
demonstration is given on a DG system adapted from the IEEE 34-node
distribution test feeder. Compared to the pure probabilistic approach, the
hybrid propagation is shown to explicitly express the imprecision in the
knowledge of the DG parameters in the final assessed adequacy values. It also
effectively captures the growth of uncertainty at higher DG penetration
levels.
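The hybrid propagation idea (Monte Carlo sampling for the aleatory part, interval propagation of an imprecise parameter for the epistemic part, and belief/plausibility bounds on the output) can be sketched as follows. The toy margin model, parameter values, and distributions are assumptions for illustration, not the paper's DG model:

```python
import random

# Toy adequacy-style model: margin = capacity - load.
# Aleatory: load varies randomly. Epistemic: capacity is known only as
# an interval (e.g. the support of a possibility distribution).
# All numbers are illustrative.
random.seed(1)

capacity_interval = (8.0, 12.0)  # epistemic: imprecise parameter
n_mc = 5000

lower_curve, upper_curve = [], []
for _ in range(n_mc):
    load = random.gauss(9.0, 1.0)  # aleatory sample
    # Propagate the epistemic interval through the model: each aleatory
    # sample yields an interval of possible margins, not a point.
    margins = [c - load for c in capacity_interval]
    lower_curve.append(min(margins))
    upper_curve.append(max(margins))

# Lower/upper probabilities of the event "margin >= 0" (adequate system):
belief = sum(m >= 0 for m in lower_curve) / n_mc
plausibility = sum(m >= 0 for m in upper_curve) / n_mc
print(f"[belief, plausibility] of adequacy: [{belief:.3f}, {plausibility:.3f}]")
```

The width of the [belief, plausibility] interval reflects the epistemic imprecision in the parameter, which a single probability number from a pure probabilistic analysis would hide.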
Absorptive capacity and the growth and investment effects of regional transfers: a regression discontinuity design with heterogeneous treatment effects
Researchers often estimate average treatment effects of programs without investigating heterogeneity across units. Yet individuals, firms, regions, and countries vary in their ability, e.g., to utilize transfers. We analyze Objective 1 Structural Funds transfers of the European Commission to regions of EU member states below a certain income level by way of a regression discontinuity
design with systematically heterogeneous treatment effects. Only about 30% and 21% of the regions - those with sufficient human capital and good-enough institutions, respectively - are able to turn transfers into faster per-capita
income growth and per-capita investment. In general, the variance of the treatment effect is much larger than its mean.
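A regression discontinuity design in which the treatment effect varies with an observed moderator can be sketched on synthetic data: interacting the treatment indicator with the moderator lets the estimated effect differ across units. All variable names, numbers, and the data-generating process below are illustrative, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
# Forcing variable: income relative to the eligibility threshold
# (centered at 0); regions below the cutoff receive transfers.
x = rng.uniform(-1, 1, n)
treated = (x < 0).astype(float)
# Moderator (e.g. human capital), assumed to drive heterogeneity.
h = rng.uniform(0, 1, n)
# Synthetic outcome: the transfer effect is 2*h - 0.5, so only units
# with h > 0.25 benefit; coefficients are illustrative.
y = 1.0 + 0.8 * x + treated * (2.0 * h - 0.5) + rng.normal(0, 0.3, n)

# RD regression with a treatment-moderator interaction:
#   y = b0 + b1*x + b2*T + b3*T*h + e
X = np.column_stack([np.ones(n), x, treated, treated * h])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b2, b3 = beta[2], beta[3]
print(f"effect at h=0: {b2:.2f}")       # close to -0.5 by construction
print(f"effect at h=1: {b2 + b3:.2f}")  # close to +1.5 by construction
share_positive = np.mean(b2 + b3 * h > 0)
print(f"share of units with a positive effect: {share_positive:.2f}")
```

A fuller RD specification would also allow the slope on the forcing variable to differ on each side of the cutoff and restrict the sample to a bandwidth around it; the sketch keeps only the interaction term that delivers the heterogeneous effect.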