    Should Optimal Designers Worry About Consideration?

    Consideration set formation using non-compensatory screening rules is a vital component of real purchasing decisions, with decades of experimental validation. Marketers have recently developed statistical methods that can estimate quantitative choice models that include consideration set formation via non-compensatory screening rules. But is capturing consideration within models of choice important for design? This paper reports a simulation study of vehicle portfolio design, in which households screen over vehicle body style, built to explore the importance of capturing consideration rules for optimal designers. We generate synthetic market share data, fit a variety of discrete choice models to the data, and then optimize design decisions using the estimated models. Model predictive power, design "error", and profitability relative to ideal profits are compared as the amount of available market data increases. We find that even when estimated compensatory models provide relatively good predictive accuracy, they can lead to sub-optimal design decisions when the population uses consideration behavior; that convergence of compensatory models to non-compensatory behavior is likely to require unrealistic amounts of data; and that modeling heterogeneity in non-compensatory screening is more valuable than modeling heterogeneity in compensatory trade-offs. This supports the claim that designers should carefully identify consideration behaviors before optimizing product portfolios. We also find that higher model predictive power does not necessarily imply better design decisions; that is, different model forms can provide "descriptive" rather than "predictive" information that is useful for design. Comment: 5 figures, 26 pages. In press at ASME Journal of Mechanical Design (as of 3/17/15).
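    A minimal sketch of the two-stage choice process described above, with hypothetical attributes and utility weights (this is not the paper's code): a non-compensatory screen over body style forms the consideration set, and a compensatory logit-style rule then chooses among the considered vehicles.

```python
# Illustrative two-stage household choice: screen (non-compensatory), then choose (compensatory).
# Attribute names, weights, and the vehicle list are assumptions for the example.
import math
import random

vehicles = [
    {"name": "sedan_a", "body": "sedan", "price": 24.0, "mpg": 33},
    {"name": "suv_b",   "body": "suv",   "price": 31.0, "mpg": 26},
    {"name": "truck_c", "body": "truck", "price": 35.0, "mpg": 20},
]

def consideration_set(household, vehicles):
    """Non-compensatory screen: reject any body style outside the accepted set,
    no matter how good the other attributes are."""
    return [v for v in vehicles if v["body"] in household["accepted_bodies"]]

def choose(household, considered):
    """Compensatory stage: multinomial-logit draw over the considered set."""
    if not considered:
        return None  # household buys nothing
    utilities = [household["b_price"] * v["price"] + household["b_mpg"] * v["mpg"]
                 for v in considered]
    weights = [math.exp(u) for u in utilities]
    r, acc = random.random() * sum(weights), 0.0
    for v, w in zip(considered, weights):
        acc += w
        if r <= acc:
            return v["name"]
    return considered[-1]["name"]

household = {"accepted_bodies": {"sedan", "suv"}, "b_price": -0.15, "b_mpg": 0.05}
print(choose(household, consideration_set(household, vehicles)))
```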

    Evaluating Knowledge Representation and Reasoning Capabilities of Ontology Specification Languages

    The interchange of ontologies across the World Wide Web (WWW) and the cooperation among heterogeneous agents placed on it are the main reasons for the development of a new set of ontology specification languages, based on new web standards such as XML or RDF. These languages (SHOE, XOL, RDF, OIL, etc.) aim to represent the knowledge contained in an ontology in a simple and human-readable way, as well as to allow for the interchange of ontologies across the web. In this paper, we establish a common framework to compare the expressiveness of "traditional" ontology languages (Ontolingua, OKBC, OCML, FLogic, LOOM) and "web-based" ontology languages. As a result of this study, we conclude that different needs in KR and reasoning may exist in the building of an ontology-based application, and these needs must be evaluated in order to choose the most suitable ontology language(s).

    Constrained Dynamic Rule Induction Learning

    One of the known classification approaches in data mining is rule induction (RI). RI algorithms such as PRISM usually produce If-Then classifiers, which have predictive performance comparable to other traditional classification approaches such as decision trees and associative classification. These classifiers are therefore favourable for users making decisions and can be utilised as decision-making tools. Nevertheless, RI methods, including PRISM and its successors, suffer from a number of drawbacks, primarily the large number of rules derived. This can be a burden, especially when the input data is highly dimensional. Therefore, pruning unnecessary rules becomes essential for the success of this type of classifier. This article proposes a new RI algorithm that reduces the search space for candidate rules by pruning irrelevant items early, during the process of building the classifier. Whenever a rule is generated, our algorithm updates the candidate item frequencies to reflect the data examples discarded with the rules derived. This makes item frequencies dynamic rather than static and ensures that irrelevant rules are deleted at preliminary stages, when they no longer have enough data representation. The major benefit is a concise set of decision-making rules that are easy to understand and controlled by the decision maker. The proposed algorithm has been implemented in the WEKA (Waikato Environment for Knowledge Analysis) environment and can therefore be utilised by different types of users such as managers, researchers and students. Experimental results using real data from the security domain, as well as sixteen classification datasets from the University of California Irvine (UCI) repository, reveal that the proposed algorithm is competitive in terms of classification accuracy when compared to known RI algorithms. Moreover, the classifiers produced by our algorithm are smaller in size, which increases their potential use in practical applications.
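    As a rough illustration of the pruning idea described above (a simplification using single-condition rules, not the authors' implementation), the sketch below removes covered examples after each rule is generated and recomputes item frequencies, so items that fall below a support threshold are discarded before they can seed further candidate rules.

```python
# Simplified PRISM-like loop with dynamic item-frequency pruning.
# Dataset layout, attribute names, and the support threshold are assumptions.
from collections import Counter

def item_frequencies(data, attrs):
    """Count how often each (attribute, value) item still appears in the remaining data."""
    freq = Counter()
    for row in data:
        for a in attrs:
            freq[(a, row[a])] += 1
    return freq

def best_item(data, attrs, target, cls, min_support):
    """Pick the item with the highest precision for `cls`, skipping items whose
    remaining frequency has dropped below min_support (early pruning)."""
    freq = item_frequencies(data, attrs)
    best, best_prec = None, -1.0
    for (a, v), count in freq.items():
        if count < min_support:
            continue
        covered = [r for r in data if r[a] == v]
        prec = sum(r[target] == cls for r in covered) / len(covered)
        if prec > best_prec:
            best, best_prec = (a, v), prec
    return best

def induce_rules(data, attrs, target, cls, min_support=2):
    rules, remaining = [], list(data)
    while remaining:
        item = best_item(remaining, attrs, target, cls, min_support)
        if item is None:      # nothing frequent enough is left
            break
        rules.append({"if": item, "then": cls})
        a, v = item
        # Discard covered examples; frequencies are recomputed on the next pass,
        # which is what makes the pruning dynamic rather than static.
        remaining = [r for r in remaining if r[a] != v]
    return rules
```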

    Ontology-based data access with databases: a short course

    Ontology-based data access (OBDA) is regarded as a key ingredient of the new generation of information systems. In the OBDA paradigm, an ontology defines a high-level global schema of (already existing) data sources and provides a vocabulary for user queries. An OBDA system rewrites such queries and ontologies into the vocabulary of the data sources and then delegates the actual query evaluation to a suitable query answering system such as a relational database management system or a datalog engine. In this chapter, we mainly focus on OBDA with the ontology language OWL 2 QL, one of the three profiles of the W3C standard Web Ontology Language OWL 2, and relational databases, although other possible languages will also be discussed. We consider different types of conjunctive query rewriting and their succinctness, different architectures of OBDA systems, and give an overview of the OBDA system Ontop.
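    A toy illustration of the rewrite-and-unfold step described above, under the assumption of a single subclass axiom and hand-written mappings (this is my own simplification, not Ontop's API): the query atom Teacher(x) is expanded over the axiom Professor ⊑ Teacher, and each disjunct is then translated to SQL via a mapping.

```python
# Illustrative OBDA rewriting: ontology axiom -> union of predicates -> SQL via mappings.
# Table names, mappings, and the axiom are hypothetical.
subclass_axioms = {"Teacher": ["Professor"]}          # Professor is a subclass of Teacher
mappings = {                                          # predicate -> SQL view over the sources
    "Teacher":   "SELECT id AS x FROM teachers",
    "Professor": "SELECT id AS x FROM professors",
}

def rewrite(atom):
    """Expand a query atom into itself plus all predicates subsumed by it."""
    return [atom] + subclass_axioms.get(atom, [])

def unfold(atoms):
    """Translate the rewritten union into SQL using the mapping assertions."""
    return "\nUNION\n".join(mappings[a] for a in atoms)

print(unfold(rewrite("Teacher")))
# SELECT id AS x FROM teachers
# UNION
# SELECT id AS x FROM professors
```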

    Design theories as languages for the unknown: insights from the German roots of systematic design (1840-1960).

    In this paper, relying on the formal framework provided by one of the most recent design theories, C-K theory, we analyse the historical development of design theories in the particular case of German systematic design. We study three moments in the development of design theories (1850, 1900 and 1950). The analysis leads to three main research conclusions regarding design theorizing. (1) The development of design theories and methods corresponds to specific rationalizations of the design activity in historical contexts, characterized by types of products, science and knowledge production capacities. (2) While engineering sciences model known objects, design theories support reasoning on unknown objects. (3) Design methods do not target single innovations but aim to improve collective design capacities. Their performance can be assessed by the types of new objects they help design (generative capacity) and in terms of the capacities required of their users (conjunctive capacity). Historically, systematic design emerged as a formal framework with particularly strong generative and conjunctive capacities.

    Modeling water resources management at the basin level: review and future directions

    Water quality / Water resources development / Agricultural production / River basin development / Mathematical models / Simulation models / Water allocation / Policy / Economic aspects / Hydrology / Reservoir operation / Groundwater management / Drainage / Conjunctive use / Surface water / GIS / Decision support systems / Optimization methods / Water supply

    Automated Knowledge Acquisition: Overcoming the Expert System Bottleneck

    The artificial intelligence (AI) discipline of machine learning offers the best opportunity for alleviating the critical problem of acquiring the knowledge base necessary for expert systems. This paper examines the characteristics of such tasks and identifies a number of weaknesses in several dominant AI approaches. Genetic algorithms (GAs) are a probabilistic search technique based on the adaptive efficiency of natural organisms, and they offer an alternative that addresses the weaknesses of conventional methods. This paper describes the implementation of ADAM, a GA-driven classifier, and compares the quality of the rules it generates to those of alternative induction techniques on a simulated decision problem.
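    A minimal sketch of the GA-as-rule-learner idea described above (not the ADAM system itself): each individual encodes one conjunctive rule over binary features as a mask plus expected values, fitness is its accuracy on labelled examples, and selection, crossover and mutation evolve better rules over generations.

```python
# Illustrative genetic-algorithm rule learner; encoding, operators and parameters are assumptions.
import random

def fitness(rule, data):
    """A rule fires when every feature selected by the mask matches the example.
    Fitness = accuracy of predicting the positive class with that rule."""
    mask, values = rule
    correct = 0
    for features, label in data:
        fires = all(f == v for f, v, m in zip(features, values, mask) if m)
        correct += (fires == label)
    return correct / len(data)

def evolve(data, n_features, pop_size=20, generations=50, p_mut=0.05):
    pop = [([random.randint(0, 1) for _ in range(n_features)],
            [random.randint(0, 1) for _ in range(n_features)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: fitness(r, data), reverse=True)
        parents = pop[: pop_size // 2]                        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (m1, v1), (m2, v2) = random.sample(parents, 2)
            cut = random.randrange(1, n_features)             # one-point crossover
            mask = [b ^ (random.random() < p_mut)             # bit-flip mutation on the mask
                    for b in (m1[:cut] + m2[cut:])]
            children.append((mask, v1[:cut] + v2[cut:]))
        pop = parents + children
    return max(pop, key=lambda r: fitness(r, data))

# Toy usage on a few labelled examples with three binary features.
data = [([1, 0, 1], 1), ([0, 0, 1], 0), ([1, 1, 1], 1), ([0, 1, 0], 0)]
best_mask, best_values = evolve(data, n_features=3)
print(best_mask, best_values)
```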