
    Variational Analysis of Constrained M-Estimators

    We propose a unified framework for establishing the existence of nonparametric M-estimators, computing the corresponding estimates, and proving their strong consistency when the class of functions is exceptionally rich. In particular, the framework addresses situations where the class of functions is complex, involving information and assumptions about shape, pointwise bounds, location of modes, height at modes, location of level-sets, values of moments, size of subgradients, continuity, distance to a "prior" function, multivariate total positivity, and any combination of the above. The class might be engineered to perform well in a specific setting even in the presence of little data. The framework views the class of functions as a subset of a particular metric space of upper semicontinuous functions under the Attouch-Wets distance. In addition to allowing a systematic treatment of numerous M-estimators, the framework yields consistency of plug-in estimators of modes of densities, maximizers of regression functions, level-sets of classifiers, and related quantities, and also enables computation by means of approximating parametric classes. We establish consistency through a one-sided law of large numbers, here extended to sieves, that relaxes assumptions of uniform laws while ensuring global approximations even under model misspecification.
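    As a concrete illustration of the kind of shape-constrained M-estimation such a framework covers (the algorithm below is a standard textbook method, not taken from the paper): least-squares regression under a monotonicity constraint can be solved by the pool-adjacent-violators algorithm.

```python
# A minimal sketch of a shape-constrained M-estimator: least-squares
# fitting under a nondecreasing constraint, solved by the classic
# pool-adjacent-violators (PAV) algorithm. Names are illustrative.

def isotonic_fit(y, weights=None):
    """Return the nondecreasing sequence minimizing the weighted
    sum of squared deviations from y."""
    if weights is None:
        weights = [1.0] * len(y)
    # Each block stores [mean, total weight, count of pooled points].
    blocks = []
    for yi, wi in zip(y, weights):
        blocks.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            w = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / w, w, n1 + n2])
    fitted = []
    for mean, _, count in blocks:
        fitted.extend([mean] * count)
    return fitted

print(isotonic_fit([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

    The monotone class here is one simple instance of the "information and assumptions about shape" the abstract refers to.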

    Representing fuzzy decision tables in a fuzzy relational database environment.

    In this paper the representation of decision tables in a relational database environment is discussed. First, crisp decision tables are defined. Afterwards, a technique to represent decision tables in a relational system is presented. Next, fuzzy extensions are made to crisp decision tables in order to deal with imprecision and uncertainty. As a result, fuzzy decision tables, with crisp decision tables as special cases, are defined, which include fuzziness in the conditions as well as in the actions. Analogous to the crisp case, it is demonstrated how fuzzy decision tables can be stored in a fuzzy relational database environment. Furthermore, consultation of these tables is discussed using fuzzy queries.
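    A hypothetical sketch of the crisp starting point (table and column names are invented, not from the paper): a decision table stored as a relation with one row per decision rule and one column per condition or action entry. A fuzzy extension could replace the 0/1 entries with membership degrees in [0, 1].

```python
# Illustrative only: a crisp decision table as a relation, consulted
# with an ordinary relational query. All identifiers are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE decision_table (
    rule_id INTEGER PRIMARY KEY,
    cond_income_high INTEGER,   -- condition entry: 1 = yes, 0 = no
    cond_debt_low    INTEGER,   -- condition entry
    act_grant_loan   INTEGER    -- action entry
)""")
rules = [(1, 1, 1, 1), (2, 1, 0, 0), (3, 0, 1, 0), (4, 0, 0, 0)]
conn.executemany("INSERT INTO decision_table VALUES (?,?,?,?)", rules)

# Consulting the table: which action fires for a given condition vector?
row = conn.execute(
    "SELECT act_grant_loan FROM decision_table "
    "WHERE cond_income_high = ? AND cond_debt_low = ?", (1, 1)).fetchone()
print(row[0])  # 1
```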

    Modelling decision tables from data.

    On most datasets, induction algorithms can generate very accurate classifiers. Sometimes, however, these classifiers are very hard for humans to understand. Therefore, in this paper it is investigated how we can present the extracted knowledge to the user by means of decision tables. Decision tables are very easy to understand. Furthermore, decision tables provide interesting facilities to check the extracted knowledge for consistency and completeness. In this paper, it is demonstrated how a consistent and complete DT can be modelled starting from raw data. The proposed method is empirically validated on several benchmark datasets. It is shown that the modelled decision tables are sufficiently small to allow easy consultation of the represented knowledge.
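    The grouping-and-checking idea can be sketched as follows (a minimal illustration, not the paper's induction method): group records by their condition vector, require each group to map to a single action (consistency), and require every combination of condition values to be covered (completeness).

```python
# Illustrative sketch: build a decision table from raw records and
# verify consistency and completeness. Data and labels are invented.
from itertools import product

data = [  # (condition tuple, class label)
    ((0, 0), "reject"), ((0, 1), "reject"),
    ((1, 0), "reject"), ((1, 1), "accept"), ((1, 1), "accept"),
]

table = {}
for conditions, action in data:
    # Consistency: a condition vector may not map to two actions.
    if table.setdefault(conditions, action) != action:
        raise ValueError(f"inconsistent data at {conditions}")

# Completeness: every combination of condition values is covered.
missing = [c for c in product([0, 1], repeat=2) if c not in table]
print(sorted(table.items()))
print("complete" if not missing else f"missing: {missing}")
```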

    Verification and validation of knowledge-based systems with an example from site selection.

    In this paper, the verification and validation of Knowledge-Based Systems (KBS) using decision tables (DTs) is one of the central issues. It is illustrated using real-market data taken from industrial site selection problems. One of the main problems with KBS is that many anomalies often remain after the knowledge has been elicited. As a consequence, the quality of the KBS degrades. Evaluation of a KBS consists mainly of two parts: verification and validation (V&V). To distinguish between the two, the following phrase is regularly used: verification deals with 'building the system right', while validation involves 'building the right system'. In the context of DTs, it has been claimed from the early years of DT research onwards that DTs are very well suited for V&V purposes. Therefore, it is explained how V&V of the modelled knowledge can be performed. In this respect, use is made of stated-response modelling design techniques to select decision rules from a DT. Our approach is illustrated using a case study dealing with the location problem of a (petro)chemical company in a port environment. The KBS developed has been named Matisse, an acronym for Matching Algorithm, a Technique for Industrial Site Selection and Evaluation.
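    A hedged sketch of the kind of DT-based verification meant here (illustrative only, not the Matisse system): detecting two classic anomalies in a rule set, incompleteness (a condition combination no rule covers) and ambivalence (two rules covering the same combination with different actions). A '-' entry is a don't-care.

```python
# Illustrative only: verify a small decision table for incompleteness
# and ambivalence. Rules and condition names are invented.
from itertools import product

rules = [  # (condition entries, action); '-' matches any value
    (("1", "-"), "build"),
    (("0", "1"), "build"),
    (("0", "0"), "reject"),
]

def covers(entries, case):
    """True if the rule's condition entries match the condition case."""
    return all(e == "-" or e == c for e, c in zip(entries, case))

uncovered, ambivalent = [], []
for case in product("01", repeat=2):
    actions = {a for entries, a in rules if covers(entries, case)}
    if not actions:
        uncovered.append(case)       # incompleteness anomaly
    elif len(actions) > 1:
        ambivalent.append(case)      # ambivalence anomaly
print(uncovered, ambivalent)  # [] []
```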

    A synthesis of fuzzy rule-based system verification.

    The verification of fuzzy rule bases for anomalies has received increasing attention in the last few years. Many different approaches have been suggested and many are still under investigation. In this paper, we give a synthesis of methods proposed in the literature that try to extend the verification of classical rule bases to the case of fuzzy knowledge modelling, without needing a set of representative inputs. Within this area of fuzzy V&V we identify two dual lines of thought, respectively leading to what are identified as static and dynamic anomaly detection methods. Static anomaly detection essentially uses similarity, affinity or matching measures to identify anomalies within a fuzzy rule base. It is assumed that the detection methods can be the same as those used in a non-fuzzy environment, except that the aforementioned measures indicate the degree of matching of two fuzzy expressions. Dynamic anomaly detection starts from the basic idea that any anomaly within a knowledge representation formalism, in this case fuzzy if-then rules, can be identified by performing a dynamic analysis of the knowledge system, even without providing special input to the system. By imposing a constraint on the results of inference for an anomaly not to occur, one creates definitions of the anomalies that can only be verified if the inference process, and thereby the fuzzy inference operator, is involved in the analysis. The major outcome of the confrontation between both approaches is that their results, stated in terms of necessary and/or sufficient conditions for anomaly detection within a particular situation, are difficult to reconcile. The duality between approaches seems to have translated into a duality in results. This article addresses precisely this issue by presenting a theoretical framework which enables us to effectively evaluate the results of both static and dynamic verification theories.
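    A minimal sketch of the static idea (an illustration under assumptions of mine, not a method from the survey): flag a pair of fuzzy rules as potentially redundant when both their antecedent and their consequent fuzzy sets match to a high degree, using the standard min/max overlap as the matching measure.

```python
# Illustrative static anomaly detection: redundancy via a fuzzy-set
# matching measure. Rule names, membership vectors, and the 0.8
# threshold are all invented for the example.

def similarity(a, b):
    """Degree of matching of two fuzzy sets given as membership vectors."""
    return (sum(min(x, y) for x, y in zip(a, b)) /
            sum(max(x, y) for x, y in zip(a, b)))

rules = {  # name: (antecedent memberships, consequent memberships)
    "r1": ([0.0, 0.5, 1.0, 0.5], [1.0, 0.5, 0.0]),
    "r2": ([0.0, 0.4, 1.0, 0.6], [1.0, 0.6, 0.0]),
    "r3": ([1.0, 0.5, 0.0, 0.0], [0.0, 0.5, 1.0]),
}

threshold = 0.8
names = list(rules)
redundant = []
for i, p in enumerate(names):
    for q in names[i + 1:]:
        if (similarity(rules[p][0], rules[q][0]) >= threshold and
                similarity(rules[p][1], rules[q][1]) >= threshold):
            redundant.append((p, q))
print(redundant)  # [('r1', 'r2')]
```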

    Log-Concave Duality in Estimation and Control

    In this paper we generalize the estimation-control duality that exists in the linear-quadratic-Gaussian setting. We extend this duality to maximum a posteriori estimation of the system's state, where the measurement and dynamical-system noise are independent log-concave random variables. More generally, we show that a problem which induces a convex penalty on the noise terms has a dual control problem. We provide conditions for strong duality to hold, and then prove relaxed conditions for the piecewise linear-quadratic case. The results have applications in estimation problems with nonsmooth densities, such as log-concave maximum likelihood densities. We conclude with an example reconstructing optimal estimates from solutions to the dual control problem, which has implications for sharing solution methods between the two types of problems.
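    A sketch of the general shape of such a problem (generic symbols, assumptions mine, not the paper's notation): with linear dynamics $x_{k+1} = A x_k + w_k$, measurements $y_k = C x_k + v_k$, and log-concave noise densities $p_w \propto e^{-\rho_w}$, $p_v \propto e^{-\rho_v}$ with $\rho_w, \rho_v$ convex, MAP estimation of the state trajectory is the convex program

```latex
\min_{x_{0:N},\, w,\, v} \;\; \rho_0(x_0) \;+\; \sum_{k=0}^{N-1} \rho_w(w_k) \;+\; \sum_{k=0}^{N} \rho_v(v_k)
\qquad \text{s.t.} \quad x_{k+1} = A x_k + w_k, \qquad y_k = C x_k + v_k .
```

    Its Lagrangian (Fenchel) dual is an optimal-control-type problem whose stage costs involve the convex conjugates $\rho_w^*$ and $\rho_v^*$; in the Gaussian case, $\rho(u) = \tfrac{1}{2} u^{\top} \Sigma^{-1} u$, this recovers the classical LQG estimation-control duality.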

    Restructuring and simplifying rule bases.

    Rule bases are commonly acquired, by an expert and/or a knowledge engineer, in a form which is well suited for acquisition purposes. When the knowledge base is executed, however, a different structure may be required. Moreover, since human experts normally do not provide the knowledge in compact chunks, rule bases often suffer from redundancy. This may considerably harm efficiency. In this paper a procedure is examined to transform rules that are specified in the knowledge acquisition process into an efficient rule base by way of decision tables. This transformation algorithm allows the generation of a minimal rule representation of the knowledge, and verification and optimization of rule bases and other specifications (e.g. legal texts, procedural descriptions, ...). The proposed procedures are fully supported by the PROLOGA tool.
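    A small sketch of the kind of simplification a tabular representation enables (illustrative only, not the PROLOGA algorithm): two rules with the same action whose condition entries differ in exactly one position merge into one rule with a don't-care ('-') at that position, and repeating this yields a smaller rule set.

```python
# Illustrative rule-base simplification via pairwise merging.
# Rules and condition values are invented for the example.

def merge_once(rules):
    """Merge one pair of same-action rules differing in one entry."""
    rules = list(rules)
    for i, (c1, a1) in enumerate(rules):
        for j, (c2, a2) in enumerate(rules[i + 1:], i + 1):
            if a1 != a2:
                continue
            diff = [k for k in range(len(c1)) if c1[k] != c2[k]]
            if len(diff) == 1:
                merged = list(c1)
                merged[diff[0]] = "-"  # don't-care entry
                rest = [r for k, r in enumerate(rules) if k not in (i, j)]
                return rest + [(tuple(merged), a1)], True
    return rules, False

rules = [(("1", "1"), "go"), (("1", "0"), "go"), (("0", "0"), "stop")]
changed = True
while changed:
    rules, changed = merge_once(rules)
print(sorted(rules))  # [(('0', '0'), 'stop'), (('1', '-'), 'go')]
```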

    On Parallel Processors Design for Solving Stochastic Programs

    A design based on parallel processing is laid out for solving (multistage) stochastic programs. Because of the very special nature of the decomposition used here, one could rely on hard-wired micro-processors that would be extremely simple in design and fabrication, and would reduce the time required to solve stochastic programs to that needed for solving deterministic linear programs of the same size (ignoring the time required to design the parallel decomposition).

    Modeling and Solution Strategies for Unconstrained Stochastic Optimization Problems

    We review some modeling alternatives for handling risk in decision-making processes for unconstrained stochastic optimization problems. Solution strategies are discussed and compared.