Multiple Quantum Well AlGaAs Nanowires
This letter reports on the growth, structure and luminescent properties of individual multiple quantum well (MQW) AlGaAs nanowires (NWs). The composition modulations (MQWs) are obtained by alternating the elemental fluxes of Al and Ga during molecular beam epitaxy growth of the AlGaAs wire on GaAs (111)B substrates. Transmission electron microscopy and energy dispersive X-ray spectroscopy performed on individual NWs are consistent with a configuration composed of conical segments stacked along the NW axis. Micro-photoluminescence measurements and confocal microscopy show enhanced light emission from the MQW NWs compared to non-segmented NWs, due to carrier confinement and sidewall passivation.
Clustering techniques for the detection of Business Cycles
In this paper, business cycles are considered as a multivariate phenomenon rather than a univariate one determined, e.g., by the GNP. The aim is to determine the number of phases of a business cycle, motivated by the number of clusters in a given dataset of macroeconomic variables. Different distance measures are tried within a fuzzy cluster analysis to pursue this goal.
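A minimal sketch of the kind of fuzzy cluster analysis described above, not the paper's exact procedure: plain fuzzy c-means (Euclidean distance) run on a standardized matrix of macroeconomic indicators for several candidate cluster counts, with the partition coefficient as a rough guide to the number of business-cycle phases. The data, indicator names and all tuning constants are assumptions.

# Sketch only: fuzzy c-means over hypothetical standardized macro indicators,
# comparing cluster counts via the partition coefficient.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means; X has one row per quarter, one column per indicator."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        w = u ** m
        centers = w.T @ X / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))             # standard membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Hypothetical quarterly data: columns such as GNP growth, unemployment, inflation,
# already standardized to zero mean and unit variance.
X = np.random.default_rng(1).standard_normal((120, 4))

for c in (2, 3, 4, 5):
    _, u = fuzzy_cmeans(X, c)
    partition_coefficient = np.mean(np.sum(u ** 2, axis=1))  # closer to 1 = crisper partition
    print(c, round(partition_coefficient, 3))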
Desirability to characterize process capability
Over the past few years, new process capability indices have continuously been developed, most of them aiming to add some feature missing from earlier indices. As a result, a special index now exists for nearly any conceivable situation, which makes choosing a particular index as difficult as interpreting and comparing index values correctly. In this paper we propose using the expected value of a certain type of function, the so-called desirability function, to assess the capability of a process. The resulting index may be used analogously to the classical indices of the C_p family, but can be adapted to nearly any process and any specification. It even allows a comparison between different processes regardless of their distributions and may be extended straightforwardly to multivariate scenarios. Furthermore, its properties compare favorably to those of the "classical" indices.
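The abstract does not give the exact index definition, so the following is only an illustrative sketch of the core idea: rate a process by the expected value of a desirability function d(x) in [0, 1] defined over the specification region. The two-sided Derringer/Suich form, the specification limits and the process parameters below are all assumptions.

# Illustrative sketch: expected desirability of a process, estimated from a sample.
import numpy as np

def desirability(x, lsl, target, usl, s=1.0, t=1.0):
    """Two-sided desirability: 1 at the target, 0 outside [LSL, USL]."""
    d = np.zeros_like(x, dtype=float)
    left = (x >= lsl) & (x <= target)
    right = (x > target) & (x <= usl)
    d[left] = ((x[left] - lsl) / (target - lsl)) ** s
    d[right] = ((usl - x[right]) / (usl - target)) ** t
    return d

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.2, scale=0.4, size=100_000)   # observed or simulated process values
index = desirability(sample, lsl=9.0, target=10.0, usl=11.0).mean()
print(f"expected desirability = {index:.3f}")            # higher = more capable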
Exploring multivariate data structures with local principal curves
A new approach for finding the underlying structure of a multidimensional data cloud is proposed, based on a localized version of principal component analysis. More specifically, we calculate a series of local centers of mass and move through the data in directions given by the first local principal axis. One obtains a smooth "local principal curve" passing through the "middle" of a multivariate data cloud. The concept adapts to branched curves by considering the second local principal axis. Since the algorithm is based on a simple eigendecomposition, computation is fast and easy.
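A minimal sketch of the forward pass of such a local principal curve, under simplified assumptions (fixed Gaussian bandwidth h, fixed step length, fixed number of steps, no branching via the second local axis): at each step, compute a kernel-weighted local center of mass, take the first eigenvector of the local covariance as the direction, and move forward.

# Sketch of a local-principal-curve forward pass on a toy 2-D data cloud.
import numpy as np

def local_principal_curve(X, x0, h=0.5, step=0.2, n_steps=50):
    curve, x, prev_dir = [], np.asarray(x0, float), None
    for _ in range(n_steps):
        w = np.exp(-0.5 * np.sum((X - x) ** 2, axis=1) / h ** 2)   # kernel weights
        mu = (w[:, None] * X).sum(axis=0) / w.sum()                # local center of mass
        cov = (w[:, None] * (X - mu)).T @ (X - mu) / w.sum()       # local covariance
        eigvals, eigvecs = np.linalg.eigh(cov)
        direction = eigvecs[:, -1]                                 # first local principal axis
        if prev_dir is not None and direction @ prev_dir < 0:
            direction = -direction                                 # keep moving forward
        curve.append(mu)
        x, prev_dir = mu + step * direction, direction
    return np.array(curve)

# Toy data: a noisy half circle in the plane.
t = np.linspace(0, np.pi, 300)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * np.random.default_rng(0).standard_normal((300, 2))
curve = local_principal_curve(X, x0=X[0])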
A note on a multivariate analogue of the process capability index C_p
A simple method is given to calculate the multivariate process capability index C_p* as defined by Taam et al. (1993) and discussed by Kotz & Johnson (1993). It is shown that using this index is equivalent to using the smallest univariate C_p value to determine the capability of a process.
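For illustration, a small sketch of the univariate side of this equivalence: compute C_p = (USL - LSL) / (6 sigma) for each quality characteristic and report the smallest value. The specification limits and data below are invented, and the sketch does not reproduce the C_p* definition of Taam et al. itself.

# Rate a multivariate process by its smallest univariate C_p (illustrative data).
import numpy as np

def min_univariate_cp(X, lsl, usl):
    sigma = X.std(axis=0, ddof=1)                          # per-characteristic std. deviation
    cp = (np.asarray(usl) - np.asarray(lsl)) / (6 * sigma)
    return cp.min(), cp

X = np.random.default_rng(0).normal([10.0, 5.0], [0.2, 0.1], size=(200, 2))
overall, per_dim = min_univariate_cp(X, lsl=[9.0, 4.5], usl=[11.0, 5.5])
print(per_dim, overall)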
Improving feature extraction by replacing the Fisher criterion by an upper error bound
Many alternatives and constraints have been proposed to improve the Fisher criterion, but most of them are not linked to the error rate, which is the primary quantity of interest in many classification applications. By introducing an upper bound on the error rate, a criterion is developed that can improve classification performance.
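The abstract does not specify which upper bound is used, so as a stand-in the sketch below scores a candidate projection direction w by the Bhattacharyya bound on the two-class Bayes error (assuming Gaussian class-conditional densities of the projected data), next to the usual Fisher criterion. The bound, the data and the direction are all illustrative assumptions, not the paper's criterion.

# Fisher criterion vs. a Bhattacharyya-type error bound for a projection direction w.
import numpy as np

def fisher_criterion(w, X1, X2):
    a, b = X1 @ w, X2 @ w
    return (a.mean() - b.mean()) ** 2 / (a.var(ddof=1) + b.var(ddof=1))

def bhattacharyya_error_bound(w, X1, X2, p1=0.5):
    a, b = X1 @ w, X2 @ w
    v1, v2 = a.var(ddof=1), b.var(ddof=1)
    B = 0.25 * (a.mean() - b.mean()) ** 2 / (v1 + v2) \
        + 0.5 * np.log(0.5 * (v1 + v2) / np.sqrt(v1 * v2))   # Bhattacharyya distance
    return np.sqrt(p1 * (1 - p1)) * np.exp(-B)               # upper bound on the error rate

rng = np.random.default_rng(0)
X1 = rng.normal([0, 0], [1.0, 0.3], size=(300, 2))
X2 = rng.normal([2, 0], [0.3, 1.0], size=(300, 2))
w = np.array([1.0, 0.2]); w /= np.linalg.norm(w)
print(fisher_criterion(w, X1, X2), bhattacharyya_error_bound(w, X1, X2))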
Optimal vs. Classical Linear Dimension Reduction
We describe a computer-intensive method for linear dimension reduction which minimizes the classification error directly. Simulated annealing (Bohachevsky et al. (1986)) is used to solve this problem. The classification error is determined by exact integration; we avoid distance or scatter measures, which are only surrogates used to circumvent the classification error. Simulations (in two dimensions) and analytical approximations demonstrate the superiority of optimal classification over the classical procedures. We compare our procedure to the well-known canonical discriminant analysis (homoscedastic case) as described in McLachlan (1992) and to a method by Young et al. (1987) for the heteroscedastic case. Special emphasis is put on cases where the distance-based methods collapse. The computer-intensive algorithm always achieves the minimal classification error.
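A rough sketch of the idea, under simplifying assumptions: the paper evaluates the classification error by exact integration, while here it is approximated by the empirical error of a simple midpoint rule on the projected training data, and a one-dimensional projection direction is tuned by simulated annealing. The cooling schedule, proposal scale and data are arbitrary choices.

# Simulated annealing over a projection direction, minimizing an (approximate) classification error.
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal([0, 0], 1.0, size=(300, 2))
X2 = rng.normal([1.5, 1.0], 1.0, size=(300, 2))

def empirical_error(w):
    w = w / np.linalg.norm(w)
    a, b = X1 @ w, X2 @ w
    threshold = 0.5 * (a.mean() + b.mean())          # midpoint rule (equal variances, equal priors)
    sign = 1 if a.mean() < b.mean() else -1
    err = np.mean(sign * a > sign * threshold) + np.mean(sign * b < sign * threshold)
    return err / 2

w = rng.standard_normal(2)
cur_err = empirical_error(w)
best_w, best_err, T = w, cur_err, 1.0
for _ in range(2000):
    cand = w + 0.5 * T * rng.standard_normal(2)      # random-walk proposal
    err = empirical_error(cand)
    if err < cur_err or rng.random() < np.exp(-(err - cur_err) / max(T, 1e-6)):
        w, cur_err = cand, err
        if err < best_err:
            best_w, best_err = cand, err
    T *= 0.995                                        # geometric cooling
print(best_err, best_w / np.linalg.norm(best_w))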
Comparing the states of many quantum systems
We investigate how to determine whether the states of a set of quantum systems are identical or not. This paper treats both error-free comparison and comparison where errors in the result are allowed. Error-free comparison means that we aim to obtain definite answers, known to be correct, as often as possible; in general, we will also have to accept inconclusive results, which give no information. A definite answer that the states of the systems are not identical is always possible to obtain, whereas, in the situation considered here, a definite answer that they are identical is not. The optimal universal error-free comparison strategy is a projection onto the totally symmetric and the different non-symmetric subspaces, which is invariant under permutations and unitary transformations. We also show how to construct optimal comparison strategies when allowing for some errors in the result, minimising either the error probability or the average cost of making an error. We point out that it is possible to realise universal error-free comparison strategies using only linear elements and particle detectors, albeit with less than ideal efficiency. Minimum-error and minimum-cost strategies may also sometimes be realised in this way. This is of great significance for practical applications of quantum comparison.
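A numerical illustration of the simplest special case (two systems, pure qubit states), not the general multi-system strategy treated in the paper: projecting the joint product state onto the antisymmetric subspace gives a definite "the states differ" answer, and the probability of that conclusive outcome is (1 - |<psi|phi>|^2) / 2. The specific test states below are arbitrary.

# Two-qubit comparison: click probability of the antisymmetric-subspace projector.
import numpy as np

def ket(theta, phi=0.0):
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

psi, phi_state = ket(0.3), ket(1.1, 0.4)
state = np.kron(psi, phi_state)                       # joint product state of the two systems

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
P_antisym = 0.5 * (np.eye(4) - SWAP)                  # projector onto the antisymmetric subspace

p_click = np.real(state.conj() @ P_antisym @ state)   # probability of a conclusive "different" answer
overlap = abs(np.vdot(psi, phi_state)) ** 2
print(p_click, (1 - overlap) / 2)                     # the two numbers agree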
Response Surface Methodology for Optimizing Hyper Parameters
The performance of an algorithm often depends heavily on hyperparameters that should be optimized before its usage. Since most conventional optimization methods suffer from drawbacks, we developed an alternative way to find the best hyperparameter values. Contrary to the well-known procedures, the new optimization algorithm is based on statistical methods, as it uses a combination of Linear Mixed Effect Models and Response Surface Methodology techniques. In particular, the Method of Steepest Ascent, which is well known for the case of an ordinary least squares setting and a linear response surface, has been generalized to be applicable to repeated-measurements situations and to response surfaces of order o ≤ 2.
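A rough sketch of the classical steepest-ascent step that the paper generalizes: fit a first-order response surface to (hyperparameter setting, performance) pairs by ordinary least squares and move along the fitted gradient. The mixed-model, repeated-measurements and second-order extensions described above are not shown, and the design points, responses and step length are made up.

# Classical first-order steepest ascent over coded hyperparameter settings.
import numpy as np

# Hypothetical design: rows are (coded) hyperparameter settings, y is e.g. cross-validated accuracy.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]], dtype=float)
y = np.array([0.71, 0.74, 0.78, 0.83, 0.77])

A = np.c_[np.ones(len(X)), X]                       # intercept + linear terms
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
gradient = coef[1:]
direction = gradient / np.linalg.norm(gradient)     # steepest-ascent direction

step = 0.5                                          # arbitrary step length in coded units
next_setting = X[-1] + step * direction             # next hyperparameter values to evaluate
print(direction, next_setting)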