313 research outputs found
Case study in Six Sigma methodology: manufacturing quality improvement and guidance for managers
This article discusses the successful implementation of Six Sigma methodology in a high precision and critical process in the manufacture of automotive products. The Six Sigma define–measure–analyse–improve–control approach resulted in a reduction of tolerance-related problems and improved the first pass yield from 85% to 99.4%. Data were collected on all possible causes, and regression analysis, hypothesis testing, Taguchi methods, classification and regression trees, etc. were used to analyse the data and draw conclusions. Implementation of Six Sigma methodology had a significant financial impact on the profitability of the company: an approximate saving of US$70,000 per annum was reported, in addition to the customer-facing benefits of improved quality in terms of returns and sales. The project also allowed the company to learn useful lessons that will guide future Six Sigma activities.
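For readers who want to sanity-check the headline numbers, here is a minimal sketch (our own helper names, not from the article) that converts a first pass yield into defects per million opportunities and an approximate sigma level, showing roughly what the reported move from 85% to 99.4% means on the sigma scale.

```python
from statistics import NormalDist

def dpmo_from_yield(first_pass_yield: float) -> float:
    """Defects per million opportunities implied by a first pass yield in [0, 1]."""
    return (1.0 - first_pass_yield) * 1_000_000

def sigma_level(first_pass_yield: float, shift: float = 1.5) -> float:
    """Approximate sigma level, using the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(first_pass_yield) + shift

for fpy in (0.85, 0.994):
    print(f"yield {fpy:.1%}: DPMO = {dpmo_from_yield(fpy):,.0f}, "
          f"sigma level ≈ {sigma_level(fpy):.2f}")
```

Run as-is, this prints roughly 150,000 DPMO (about 2.5 sigma) for the 85% yield and about 6,000 DPMO (about 4 sigma) for the 99.4% yield, consistent with the yield figures quoted above.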
Estimating Mutual Information
We present two classes of improved estimators for mutual information M(X,Y), from samples of random points distributed according to some joint probability density μ(x,y). In contrast to conventional estimators based on binnings, they are based on entropy estimates from k-nearest neighbour distances. This means that they are data efficient (with k = 1 we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias. Indeed, the bias of the underlying entropy estimates is mainly due to non-uniformity of the density at the smallest resolved scale, giving typically systematic errors which scale as functions of k/N for N points. Numerically, we find that both families become exact for independent distributions, i.e. the estimator M̂(X,Y) vanishes (up to statistical fluctuations) if μ(x,y) = μ(x)μ(y). This holds for all tested marginal distributions and for all dimensions of x and y. In addition, we give estimators for redundancies between more than two random variables. We compare our algorithms in detail with existing algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual independence of components obtained from independent component analysis (ICA), for improving ICA, and for estimating the reliability of blind source separation. Comment: 16 pages, including 18 figures.
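The estimator family described above admits a compact implementation. Below is a minimal Python sketch in the spirit of the first (max-norm, k-nearest-neighbour) variant; the function name ksg_mutual_information and the default k = 3 are our choices, and duplicate points or ties would need extra care in a production version.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_information(x, y, k=3):
    """k-nearest-neighbour estimate of I(X;Y) in nats (first KSG variant).

    x, y : arrays of shape (n_samples, d_x) and (n_samples, d_y).
    k    : number of nearest neighbours used in the joint space.
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    n = len(x)
    xy = np.hstack([x, y])

    # Max-norm distance to the k-th nearest neighbour in the joint space
    # (k + 1 because the query point itself is returned as the nearest hit).
    eps = cKDTree(xy).query(xy, k=k + 1, p=np.inf)[0][:, -1]

    # Count marginal neighbours strictly closer than eps (shrink the radius
    # by one ulp to turn query_ball_point's <= into a strict <).
    radius = np.nextafter(eps, 0)
    n_x = cKDTree(x).query_ball_point(x, radius, p=np.inf, return_length=True) - 1
    n_y = cKDTree(y).query_ball_point(y, radius, p=np.inf, return_length=True) - 1

    return digamma(k) + digamma(n) - np.mean(digamma(n_x + 1) + digamma(n_y + 1))

# Quick check against the Gaussian closed form -0.5 * log(1 - rho**2).
rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 1))
y = 0.8 * x + 0.6 * rng.normal(size=(2000, 1))   # correlation rho = 0.8
print(ksg_mutual_information(x, y), -0.5 * np.log(1 - 0.8**2))
```

The digamma terms come directly from the k-nearest-neighbour entropy estimates mentioned in the abstract; the rest is bookkeeping of neighbour counts within the joint-space distance eps.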
A Comparative Study of Some Pseudorandom Number Generators
We present results of an extensive test program of a group of pseudorandom number generators which are commonly used in the applications of physics, in particular in Monte Carlo simulations. The generators include public domain programs, manufacturer installed routines and a random number sequence produced from physical noise. We start with traditional statistical tests, followed by detailed bit level and visual tests. The computational speed of the various algorithms is also scrutinized. Our results allow direct comparisons between the properties of different generators, as well as an assessment of the efficiency of the various test methods. This information provides the best available criterion for choosing the best possible generator for a given problem. However, in light of recent problems reported with some of these generators, we also discuss the importance of developing more refined physical tests to find possible correlations not revealed by the present test methods. Comment: University of Helsinki preprint HU-TFT-93-22 (minor changes in Tables 2 and 7, and in the text, correspondingly).
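To give a concrete flavour of the kinds of checks discussed above (this is not the paper's test suite), the sketch below applies a simple chi-square frequency test and a per-bit balance check to a modern Python generator; the function names and sample sizes are illustrative.

```python
import numpy as np
from scipy.stats import chisquare

def chi_square_uniformity(samples: np.ndarray, n_bins: int = 64) -> float:
    """p-value of a chi-square test that samples in [0, 1) are uniform."""
    counts, _ = np.histogram(samples, bins=n_bins, range=(0.0, 1.0))
    return chisquare(counts).pvalue

def bit_balance(integers: np.ndarray, n_bits: int = 32) -> np.ndarray:
    """Fraction of ones in each bit position of unsigned 32-bit integers."""
    bits = (integers[:, None] >> np.arange(n_bits, dtype=np.uint32)) & np.uint32(1)
    return bits.mean(axis=0)

rng = np.random.default_rng(42)
u = rng.random(100_000)
ints = rng.integers(0, 2**32, size=100_000, dtype=np.uint32)

print("chi-square p-value:", chi_square_uniformity(u))
print("worst bit bias:", np.abs(bit_balance(ints) - 0.5).max())
```

Passing such simple tests is necessary but not sufficient, which is essentially the authors' point about needing more refined, application-specific (physical) tests.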
Modified bathroom scale and balance assessment: a comparison with clinical tests
Frailty and detection of fall risk are major issues in preventive gerontology. A simple tool frequently used in daily life, a bathroom scale, was modified into a balance quality tester (BQT) to obtain information on the balance of 84 outpatients consulting at a geriatric clinic. The results computed from the BQT were compared with the values of three geriatric tests that are widely used to detect either fall risk or frailty (timed get up and go: TUG; 10 m walk test, giving walking speed: WS and walking time: WT; one-leg stand: OS). The BQT calculates four parameters that are then scored and weighted, thus creating an overall indicator of balance quality. Raw data, partial scores and the global score were compared with the results of the three geriatric tests. The WT values had the highest correlation with the BQT raw data (r = 0.55), while TUG (r = 0.53) and WS (r = 0.56) had the highest correlations with the BQT partial scores. ROC curves for OS cut-off values (4 and 5 s) were produced, with the best results obtained for a 5 s cut-off, both with the partial scores combined using Fisher's combination (specificity 85 %: 0.48) and with the empirical score (specificity 85 %: 8). A BQT empirical score of less than seven can detect fall risk in a community-dwelling population.
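The score-combination and cut-off selection steps mentioned above can be illustrated generically. The snippet below is not the BQT code or data: it combines per-parameter p-values with Fisher's method and then picks, on a synthetic ROC curve, the threshold giving the highest sensitivity at a target specificity of 85%; all names and numbers are ours.

```python
import numpy as np
from scipy.stats import combine_pvalues
from sklearn.metrics import roc_curve

# Fisher's combination of independent p-values (one per balance parameter).
p_values = [0.03, 0.20, 0.11, 0.45]
stat, combined_p = combine_pvalues(p_values, method="fisher")
print(f"Fisher statistic: {stat:.2f}, combined p-value: {combined_p:.3f}")

# Choosing a score cut-off at a target specificity on an ROC curve
# (synthetic labels/scores standing in for fall-risk outcomes and BQT scores).
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=300)                 # 1 = faller
scores = rng.normal(loc=labels * 1.2, scale=1.0)      # higher score = worse balance
fpr, tpr, thresholds = roc_curve(labels, scores)

target_specificity = 0.85
ok = fpr <= (1.0 - target_specificity)                # keep points with spec >= 85%
best = np.argmax(tpr[ok])                             # highest sensitivity among them
print("cut-off:", thresholds[ok][best], "sensitivity:", tpr[ok][best])
```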
What Price Recreation in Finland?—A Contingent Valuation Study of Non-Market Benefits of Public Outdoor Recreation Areas
Basic services in Finnish national parks and state-owned recreation areas have traditionally been publicly financed and thus free of charge for users. Since the benefits of public recreation are not captured by market demand, government spending on recreation services must be motivated in some other way. Here, we elicit people’s willingness to pay (WTP) for services in the country’s state-owned parks to obtain an estimate of the value of outdoor recreation in monetary terms. A variant of the Tobit model is used in the econometric analysis to examine the WTP responses elicited by a payment card format. We also study who the current users of recreation services are, in order to enable policymakers to anticipate the redistributive effects of potentially introducing user fees. Finally, we discuss the motives for WTP, which reveal concerns such as equity and ability to pay that are relevant for planning public recreation in general and for the introduction of fees in particular.
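For readers unfamiliar with the econometric setup, the sketch below fits a generic type-I Tobit model (WTP left-censored at zero) by maximum likelihood; it is a simplified stand-in for the payment-card variant used in the study, and the covariates and simulated data are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y):
    """Negative log-likelihood of a type-I Tobit model left-censored at 0."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    censored = y <= 0
    ll = np.where(
        censored,
        norm.logcdf(-xb / sigma),                   # P(latent WTP <= 0)
        norm.logpdf((y - xb) / sigma) - log_sigma,  # density of observed WTP
    )
    return -ll.sum()

# Simulated respondents: intercept, income, a park-use dummy (all invented).
rng = np.random.default_rng(7)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.integers(0, 2, n)])
latent = X @ np.array([5.0, 3.0, 4.0]) + rng.normal(scale=6.0, size=n)
y = np.maximum(latent, 0.0)                         # zero-WTP answers are censored

start = np.zeros(X.shape[1] + 1)
fit = minimize(tobit_negloglik, start, args=(X, y), method="BFGS")
print("beta:", fit.x[:-1], "sigma:", np.exp(fit.x[-1]))
```

The two branches of the log-likelihood handle zero (censored) and positive (observed) WTP answers separately, which is what distinguishes the Tobit model from ordinary least squares in this setting.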
Testing models of thorium and particle cycling in the ocean using data from station GT11-22 of the U.S. GEOTRACES North Atlantic section
© The Author(s), 2016. This is the author's version of the work and is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Deep Sea Research Part I: Oceanographic Research Papers 113 (2016): 57-79, doi:10.1016/j.dsr.2016.03.008.

Thorium is a highly particle-reactive element that possesses different measurable radio-isotopes in seawater, with well-constrained production rates and very distinct half-lives. As a result, Th has emerged as a key tracer for the cycling of marine particles and of their chemical constituents, including particulate organic carbon. Here two different versions of a model of Th and particle cycling in the ocean are tested using an unprecedented data set from station GT11-22 of the U.S. GEOTRACES North Atlantic Section: (i) 228,230,234Th activities of dissolved and particulate fractions, (ii) 228Ra activities, (iii) 234,238U activities estimated from salinity data and an assumed 234U/238U ratio, and (iv) particle concentrations, below a depth of 125 m. The two model versions assume a single class of particles but rely on different assumptions about the rate parameters for sorption reactions and particle processes: a first version (V1) assumes vertically uniform parameters (a popular description), whereas the second (V2) does not. Both versions are tested by fitting to the GT11-22 data using generalized nonlinear least squares and by analyzing residuals normalized to the data errors.

We find that model V2 displays a significantly better fit to the data than model V1. Thus, the mere allowance of vertical variations in the rate parameters can lead to a significantly better fit to the data, without the need to modify the structure or add any new processes to the model. To understand how the better fit is achieved we consider two parameters, K = k1/(k-1 + β-1) and K/P, where k1 is the adsorption rate constant, k-1 the desorption rate constant, β-1 the remineralization rate constant, and P the particle concentration. We find that the rate constant ratio K is large (≥ 0.2) in the upper 1000 m and decreases to a nearly uniform value of ca. 0.12 below 2000 m, implying that the specific rate at which Th attaches to particles relative to that at which it is released from particles is higher in the upper ocean than in the deep ocean. In contrast, K/P increases with depth below 500 m. The parameters K and K/P display significant positive and negative monotonic relationships with P, respectively, which is collectively consistent with a particle concentration effect.

We acknowledge the U.S. National Science Foundation for providing funding for this study (grant OCE-1232578) and for U.S. GEOTRACES North Atlantic section ship time, sampling, and data analysis.
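To make the two diagnostic quantities concrete, here is a small worked example of K = k1/(k-1 + β-1) and K/P; the numbers are illustrative placeholders, not values from station GT11-22.

```python
def rate_constant_ratio(k_ad, k_de, beta_remin):
    """K = k_ad / (k_de + beta_remin): the specific rate at which Th attaches to
    particles relative to the total rate at which it is returned to solution."""
    return k_ad / (k_de + beta_remin)

# Illustrative (made-up) rate constants in 1/yr and a particle concentration in mg/L.
k_ad, k_de, beta_remin = 0.5, 2.0, 0.5   # adsorption, desorption, remineralization
P = 10e-3                                 # particle concentration

K = rate_constant_ratio(k_ad, k_de, beta_remin)
print(f"K   = {K:.2f}")                   # dimensionless ratio, here 0.20
print(f"K/P = {K / P:.1f} L/mg")          # ratio normalized by particle concentration
```

With all three rate constants in the same units (e.g. 1/yr), K is dimensionless, consistent with the values of roughly 0.12 to 0.2 quoted above, while K/P carries units of inverse particle concentration.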
Real-Time Definition of Non-Randomness in the Distribution of Genomic Events
Features such as mutations or structural characteristics can be non-randomly or non-uniformly distributed within a genome. So far, computer simulations have been required for statistical inferences on the distribution of sequence motifs. Here, we show that these analyses are possible using an analytical, mathematical approach. For the assessment of non-randomness, our calculations require only the genome size, the number of (sampled) sequence motifs and the distance parameters. We have developed computer programs that evaluate our analytical formulas for the real-time determination of expected values and p-values. This approach permits a flexible cluster definition that can be applied to identify non-random or non-uniform sequence motif distributions most effectively. As an example, we show the effectiveness and reliability of our mathematical approach on clinical retroviral vector integration site distributions.
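The abstract does not reproduce the closed-form expressions, so the sketch below shows only a generic calculation of the same flavour: under a uniform placement model, the expected number of motif pairs lying within a chosen distance and an approximate p-value for the observed count. It is a simplified illustration with invented numbers, not the authors' formulas.

```python
import numpy as np
from scipy.stats import poisson

def close_pair_stats(positions, genome_size, max_distance):
    """Observed vs. expected number of motif pairs within max_distance of each
    other, assuming independent uniform placement on a linear genome, plus an
    approximate (Poisson) p-value. Pairs are not strictly independent, so this
    is an approximation rather than an exact scan statistic."""
    pos = np.sort(np.asarray(positions, dtype=float))
    n = len(pos)
    diffs = np.abs(pos[:, None] - pos[None, :])
    observed = int(np.sum(diffs[np.triu_indices(n, k=1)] <= max_distance))

    d_frac = max_distance / genome_size
    p_close = 1.0 - (1.0 - d_frac) ** 2          # P(two uniform points within max_distance)
    expected = 0.5 * n * (n - 1) * p_close
    p_value = poisson.sf(observed - 1, expected)  # P(count >= observed)
    return observed, expected, p_value

# Toy example: 50 integration sites on a 3 Gb genome, clusters within 50 kb.
rng = np.random.default_rng(3)
sites = rng.integers(0, 3_000_000_000, size=50)
print(close_pair_stats(sites, genome_size=3_000_000_000, max_distance=50_000))
```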