
    A reusable iterative optimization software library to solve combinatorial problems with approximate reasoning

    Real-world combinatorial optimization problems such as scheduling are typically too complex to solve with exact methods. In addition, the problems often have to observe vaguely specified constraints of differing importance, the available data may be uncertain, and compromises between antagonistic criteria may be necessary. We present a combination of approximate-reasoning-based constraints and iterative-optimization-based heuristics that help to model and solve such problems in a framework of C++ software libraries called StarFLIP++. While the library was initially developed to schedule continuous caster units in steel plants, in this paper we present results from reusing its components in a shift scheduling system for the workforce of an industrial production plant.
    Comment: 33 pages, 9 figures; for a project overview see http://www.dbai.tuwien.ac.at/proj/StarFLIP
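
    As a rough illustration of the idea of weighting vaguely specified constraints by importance (this is not the StarFLIP++ API; the constraint names and the weighted-average aggregation are assumptions made for the example), a candidate schedule could be scored like this:

        # Illustrative sketch only, NOT the StarFLIP++ API: score a candidate schedule
        # against soft (fuzzy) constraints of different importance by an
        # importance-weighted average of satisfaction degrees in [0, 1].
        def weighted_satisfaction(constraints, schedule):
            """constraints: list of (importance, membership function schedule -> [0, 1])."""
            total = sum(w for w, _ in constraints)
            return sum(w * mu(schedule) for w, mu in constraints) / total

        # Hypothetical soft constraints for a workforce shift schedule.
        constraints = [
            (1.0, lambda s: max(0.0, min(1.0, 1.5 - 0.5 * s["night_shifts"]))),  # "few night shifts"
            (0.4, lambda s: min(1.0, s["rest_hours"] / 12.0)),                   # "enough rest time"
        ]
        print(weighted_satisfaction(constraints, {"night_shifts": 2, "rest_hours": 10}))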

    Weighted lattice polynomials of independent random variables

    We give the cumulative distribution functions, the expected values, and the moments of weighted lattice polynomials when regarded as real functions of independent random variables. Since weighted lattice polynomial functions include ordinary lattice polynomial functions and, in particular, order statistics, our results encompass the corresponding formulas for these particular functions. We also provide an application to the reliability analysis of coherent systems.
    Comment: 14 pages
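
    As a small sanity check of the special case mentioned above, the distribution of an order statistic of i.i.d. variables (a particular lattice polynomial) can be compared with a Monte Carlo estimate; the closed form below is the classical i.i.d. formula, not the paper's general result for arbitrary independent arguments:

        # Minimal check of the classical special case: for n i.i.d. variables with CDF F,
        # P(X_(k) <= x) = sum_{i=k}^{n} C(n, i) F(x)^i (1 - F(x))^(n - i).
        import random
        from math import comb

        n, k, x = 5, 3, 0.4           # k-th smallest of n i.i.d. Uniform(0, 1) variables
        F = x                         # uniform CDF at x
        analytic = sum(comb(n, i) * F**i * (1 - F)**(n - i) for i in range(k, n + 1))
        samples = 100_000
        hits = sum(sorted(random.random() for _ in range(n))[k - 1] <= x for _ in range(samples))
        print(analytic, hits / samples)   # the two values should agree closely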

    Different distance measures for fuzzy linear regression with Monte Carlo methods

    The aim of this study was to determine the best distance measure for estimating fuzzy linear regression model parameters with Monte Carlo (MC) methods. It is pointed out that only one distance measure has been used for fuzzy linear regression with MC methods in the literature. Therefore, three different definitions of the distance between two fuzzy numbers are introduced. The estimation accuracies of the existing and proposed distance measures are explored in a simulation study. The distance measures are compared to each other in terms of estimation accuracy; the study demonstrates that the best distance measures for estimating fuzzy linear regression model parameters with MC methods are those defined by Kaufmann and Gupta (Introduction to Fuzzy Arithmetic: Theory and Applications. Van Nostrand Reinhold, New York, 1991), Heilpern-2 (Fuzzy Sets Syst 91(2):259–268, 1997), and Chen and Hsieh (Aust J Intell Inf Process Syst 6(4):217–229, 2000). On the other hand, the worst distance measure is the one used by Abdalla and Buckley (Soft Comput 11:991–996, 2007; Soft Comput 12:463–468, 2008). These results should be useful for enriching studies that have already focused on fuzzy linear regression models.
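
    For illustration only, two simple distances between triangular fuzzy numbers (a vertex-style distance and a centroid distance) are sketched below; they are stand-ins for the kind of measures compared in the study and are not claimed to reproduce the exact definitions of the cited references:

        # Illustrative distances between triangular fuzzy numbers (left, mode, right).
        import math

        def vertex_distance(a, b):
            # Euclidean distance over the three defining points.
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

        def centroid_distance(a, b):
            # Distance between the centroids (means of the three defining points).
            return abs(sum(a) / 3 - sum(b) / 3)

        A, B = (1.0, 2.0, 3.0), (1.5, 2.5, 4.0)
        print(vertex_distance(A, B), centroid_distance(A, B))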

    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for turning noisy information into keys usable for any cryptographic application and, in particular, for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference.
    Comment: 47 pp., 3 figures. Preliminary version in Eurocrypt 2004, Springer LNCS 3027, pp. 523–540. Differences from version 3: minor edits for grammar, clarity, and typos.
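
    A toy sketch of the code-offset style of secure sketch for Hamming distance follows; the 3-fold repetition code is an assumption made purely to keep the example small, whereas real instantiations use stronger error-correcting codes:

        # Toy code-offset secure sketch for Hamming distance:
        # SS(w) = w XOR c for a random codeword c; recovery decodes (w' XOR s) back to c
        # and returns s XOR c.  A 3-fold repetition code is used purely for illustration.
        import secrets

        R = 3  # repetition factor; corrects 1 bit error per block of 3

        def encode(bits):            # repetition-code encoder
            return [b for b in bits for _ in range(R)]

        def decode(bits):            # majority vote per block
            return [int(sum(bits[i:i + R]) * 2 > R) for i in range(0, len(bits), R)]

        def xor(a, b):
            return [x ^ y for x, y in zip(a, b)]

        def sketch(w):
            c = encode([secrets.randbelow(2) for _ in range(len(w) // R)])
            return xor(w, c)                   # public helper data, hides w behind a random codeword

        def recover(w_noisy, s):
            c = encode(decode(xor(w_noisy, s)))  # decode away the noise to recover the codeword
            return xor(s, c)

        w = [1, 0, 1, 1, 0, 0]                 # "biometric" reading, length a multiple of R
        s = sketch(w)
        w_noisy = w.copy(); w_noisy[4] ^= 1    # one bit flips on re-reading
        print(recover(w_noisy, s) == w)        # True: exact recovery despite the error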

    Latent class analysis for segmenting preferences of investment bonds

    Market segmentation is a key component of conjoint analysis, which addresses consumer preference heterogeneity. Members of a segment are assumed to be homogeneous in their views and preferences when assessing the worth of an item, but distinctly heterogeneous from members of other segments. Latent class methodology is one of several conjoint segmentation procedures that overcome the limitations of aggregate analysis and a priori segmentation. The main benefit of latent class models is that market segment membership and the regression parameters of each derived segment are estimated simultaneously. The latent class model presented in this paper uses mixtures of multivariate conditional normal distributions to analyze rating data, where the likelihood is maximized using the EM algorithm. The application focuses on customer preferences for investment bonds described by four attributes: currency, coupon rate, redemption term and price. A number of demographic variables are used to generate segments that are accessible and actionable.
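
    A minimal one-dimensional sketch of the underlying EM iteration is given below; the paper's model is richer (mixtures of multivariate conditional normals with segment-level regression parameters), so this only illustrates how segment memberships and parameters are updated in alternation:

        # Minimal EM sketch: one-dimensional two-segment Gaussian mixture fitted to ratings.
        import math, random

        random.seed(0)
        ratings = [random.gauss(3.0, 0.5) for _ in range(200)] + [random.gauss(7.0, 0.8) for _ in range(100)]

        pi, mu, sigma = [0.5, 0.5], [2.0, 8.0], [1.0, 1.0]   # initial guesses
        def pdf(x, m, s):
            return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

        for _ in range(50):
            # E-step: posterior segment memberships (responsibilities) for every rating.
            resp = []
            for x in ratings:
                w = [pi[k] * pdf(x, mu[k], sigma[k]) for k in range(2)]
                z = sum(w)
                resp.append([wk / z for wk in w])
            # M-step: update mixing weights, means, and standard deviations.
            for k in range(2):
                nk = sum(r[k] for r in resp)
                pi[k] = nk / len(ratings)
                mu[k] = sum(r[k] * x for r, x in zip(resp, ratings)) / nk
                sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, ratings)) / nk)

        print(pi, mu, sigma)   # segment sizes and parameters, estimated simultaneously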

    A pure probabilistic interpretation of possibilistic expected value, variance, covariance and correlation

    In this work we give a pure probabilistic interpretation of possibilistic expected value, variance, covariance and correlation.
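
    One way such an interpretation can be read, sketched here under the assumption of the usual level-set sampling scheme: draw a level with density 2*gamma and then a point uniformly from that level cut of a triangular fuzzy number; the sample mean then approaches the Carlsson-Fuller possibilistic mean a + (beta - alpha)/6:

        # Monte Carlo sketch of a probabilistic reading of the possibilistic mean
        # of a triangular fuzzy number (core a, left spread alpha, right spread beta).
        import random

        def possibilistic_mean_mc(a, alpha, beta, n=200_000):
            total = 0.0
            for _ in range(n):
                gamma = random.random() ** 0.5              # level drawn with density 2*gamma
                lo = a - (1 - gamma) * alpha                # gamma-cut endpoints
                hi = a + (1 - gamma) * beta
                total += random.uniform(lo, hi)             # point uniform on the gamma-cut
            return total / n

        a, alpha, beta = 5.0, 2.0, 4.0
        print(possibilistic_mean_mc(a, alpha, beta), a + (beta - alpha) / 6)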

    Rating and ranking of multiple-aspect alternatives using fuzzy sets

    A method is proposed to deal with multiple-alternative decision problems under uncertainty. It is assumed that all the alternatives in the choice set can be characterized by a number of aspects, and that information is available to assign weights to these aspects and to construct a rating scheme for the various aspects of each alternative. The method basically consists of computing a weighted final rating for each alternative and comparing these weighted final ratings. The uncertainty that is assumed to be inherent in the assessments of the ratings and weights is accounted for by treating each of these variables as a fuzzy quantity, characterized by an appropriate membership function. Accordingly, the final evaluation of each alternative consists of a degree of membership in the fuzzy set of alternatives ranking first. A practical method is given to compute membership functions of fuzzy sets induced by mappings, and is applied to the problem at hand. A number of examples are worked out. The method is compared to one proposed by Kahne, who approaches the problem probabilistically.
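
    A generic sketch (not necessarily the paper's computational procedure) of a weighted final rating with triangular fuzzy weights and ratings, approximated by interval arithmetic on alpha-cuts, could look like this:

        # Approximate the membership function of a weighted final rating
        # R = sum_i w_i * r_i for positive triangular fuzzy weights and ratings
        # by interval arithmetic on alpha-cuts (illustrative only).
        def cut(tri, alpha):
            """alpha-cut [lo, hi] of a triangular fuzzy number (left, mode, right)."""
            l, m, r = tri
            return l + alpha * (m - l), r - alpha * (r - m)

        def weighted_rating_cuts(weights, ratings, levels=5):
            cuts = []
            for i in range(levels + 1):
                alpha = i / levels
                lo = sum(cut(w, alpha)[0] * cut(r, alpha)[0] for w, r in zip(weights, ratings))
                hi = sum(cut(w, alpha)[1] * cut(r, alpha)[1] for w, r in zip(weights, ratings))
                cuts.append((alpha, lo, hi))
            return cuts

        weights = [(0.2, 0.3, 0.4), (0.6, 0.7, 0.8)]       # fuzzy importance weights
        ratings = [(6.0, 7.0, 8.0), (3.0, 4.0, 5.0)]       # fuzzy aspect ratings of one alternative
        for alpha, lo, hi in weighted_rating_cuts(weights, ratings):
            print(f"alpha={alpha:.1f}: [{lo:.2f}, {hi:.2f}]")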