
    Baseline Methods for Automated Fictional Ideation

    The invention of fictional ideas (ideation) is often a central process in the creative production of artefacts such as poems, music and paintings, but it has barely been studied in the Computational Creativity community. We present here three baseline approaches for automated fictional ideation, using methods which invert and alter facts from the ConceptNet and ReVerb databases, and perform bisociative discovery. For each method, we present a curation analysis by calculating the proportion of ideas which pass a typicality evaluation. We further evaluate one ideation approach through a crowd-sourcing experiment in which participants were asked to rank ideas. The results from this study, and the baseline methods and methodologies presented here, constitute a firm basis on which to build more sophisticated models for automated ideation with evaluative capacity.
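    To make the fact-inversion idea concrete, the following is a minimal sketch of how ConceptNet-style relation triples could be altered to seed fictional ideas. The triples, the negation templates and the rendering are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch of fact-inversion ideation over ConceptNet-style triples.
# The triples and negation templates below are assumptions made for illustration.

TRIPLES = [
    ("umbrella", "UsedFor", "keeping dry"),
    ("piano", "CapableOf", "make music"),
    ("fish", "AtLocation", "water"),
]

# One simple alteration: negate the relation to provoke a fictional idea.
NEGATED = {
    "UsedFor": "is no longer used for",
    "CapableOf": "can no longer",
    "AtLocation": "is never found in",
}

def invert(subject: str, relation: str, obj: str) -> str:
    """Render an 'inverted' fact as a seed for a fictional idea."""
    return f"What if the {subject} {NEGATED[relation]} {obj}?"

if __name__ == "__main__":
    for s, r, o in TRIPLES:
        print(invert(s, r, o))
```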

    2015 Abstracts Student Research Conference


    2013 Oklahoma Research Day Full Program

    This document contains all abstracts from the 2013 Oklahoma Research Day held at the University of Central Oklahoma.

    Translating the landscape


    Product complexity assessment for a Proactive-DFA implementation (Simplicity + Simplicity = Complexity)

    This thesis presents product complexity as a criterion for the optimisation of product design in the light of an Assembly-Oriented Design and Design for Assembly implementation. It takes a holistic approach to the evaluation of the product architecture by presenting a set of indicators that help examine the product structure at two different levels: Assembly and Component complexity. Assembly complexity assessment is further sub-divided into Structural and Sequence complexity. The latter is a well-known and thoroughly studied area in assembly sequence evaluation, whereas the former gives a novel and original approach to drawing attention to those areas in the product configuration that will consume more resources (i.e. time and tooling required). Component complexity, on the other hand, is sub-divided into manufacturing and process handling/manipulation complexity. The first area has been addressed by the manufacturing analysis section of most Design for Assembly and Manufacturing methodologies, but it has traditionally been treated as a manual, chart-based evaluation. This is a rigid approach that leaves little room for expansion and has no connection with the product structure. The metrics presented in this work embody a new approach that takes component-to-component interactions into account and allows the analysis of component shape by extracting its geometric characteristics and comparing them with particular traits of the manufacturing processes available to the designer. Additionally, the metrics presented in this work can be used to make an assessment of the product complexity at a particular point in the development cycle (static complexity). They can also be registered over a period of time to provide an estimate of the possible consequences of the decisions made during a part of the development cycle (dynamic complexity). By using the methods developed, designers could reduce production costs and increase the reliability of their products.
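    The static/dynamic distinction can be illustrated with a small sketch. The connectivity-based indicator used here is an assumption made purely for illustration and is not the thesis's actual set of complexity metrics.

```python
# Hypothetical illustration of static vs. dynamic complexity assessment.
# The average-liaisons-per-component indicator is an assumed stand-in metric.

from typing import Dict, List

def static_complexity(liaisons: Dict[str, List[str]]) -> float:
    """Toy structural indicator: average number of component-to-component
    interfaces (liaisons) per component in the assembly."""
    n_components = len(liaisons)
    n_interfaces = sum(len(v) for v in liaisons.values()) / 2  # each liaison listed twice
    return n_interfaces / n_components if n_components else 0.0

# 'Dynamic' complexity: register the static indicator at each design revision
# to estimate how decisions made during development change it over time.
revisions = {
    "concept":  {"base": ["cover"], "cover": ["base"]},
    "detailed": {"base": ["cover", "bracket"], "cover": ["base", "bracket"],
                 "bracket": ["base", "cover"]},
}

if __name__ == "__main__":
    for name, assembly in revisions.items():
        print(f"{name}: {static_complexity(assembly):.2f} interfaces/component")
```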

    Rethinking the risk matrix

    So far, risk has mostly been defined as the expected value of a loss, mathematically R = P·L, where P is the probability of an adverse event and L is the loss incurred as a consequence of that event. The so-called risk matrix follows from this definition. This definition of risk is justified in a long-term “managerial” perspective, in which it is conceivable to distribute the effects of an adverse event over a large number of subjects or a large number of recurrences. In other words, this definition is mostly justified on frequentist terms. Moreover, according to this definition, two extreme situations (high-probability/low-consequence and low-probability/high-consequence) both yield a low estimated risk. This logic runs counter to the principles of sustainability and continuous improvement, which should instead impose both a continuous search for lower probabilities of adverse events (higher and higher reliability) and a continuous search for lower impact of adverse events (in accordance with the fail-safe principle). In this work a different definition of risk is proposed, which stems from the idea of safeguard: (1 − Risk) = (1 − P)(1 − L). According to this definition, the risk level can be considered low only when both the probability of the adverse event and the loss are small. Such a perspective, in which the calculation of safeguard is privileged over the calculation of risk, would help avoid exposing society to catastrophic consequences, sometimes due to wrong or oversimplified use of probabilistic models. It can therefore be seen as the citizen’s perspective on the definition of risk.
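    A minimal numerical sketch of the two definitions, assuming P and L are both normalised to the [0, 1] interval, shows how the two extreme situations mentioned above are scored differently.

```python
# Compare the expected-value risk definition with the safeguard-based one.
# P and L are assumed to be normalised to [0, 1] for illustration.

def classical_risk(p: float, loss: float) -> float:
    """Expected-value definition: Risk = P * L (the risk-matrix logic)."""
    return p * loss

def safeguard_risk(p: float, loss: float) -> float:
    """Safeguard-based definition: 1 - Risk = (1 - P) * (1 - L)."""
    return 1.0 - (1.0 - p) * (1.0 - loss)

# The two extreme situations mentioned in the abstract:
cases = {
    "high-probability / low-consequence": (0.9, 0.05),
    "low-probability / high-consequence": (0.05, 0.9),
}

if __name__ == "__main__":
    for label, (p, loss) in cases.items():
        print(f"{label}: classical={classical_risk(p, loss):.3f}, "
              f"safeguard={safeguard_risk(p, loss):.3f}")
```

    In both extreme cases the classical definition reports a low risk (0.045), whereas the safeguard-based definition reports a high risk (0.905), which is the point the abstract makes.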

    Evaluation of the new Design Summer Year weather data using parametrical buildings

    The Chartered Institution of Building Services Engineers (CIBSE) updated the near-extreme weather (Design Summer Year – DSY) for all 14 locations in the UK in 2016. This new release attempts to address the underlying shortcomings of the previous definition, in which the averaged dry-bulb temperature was the sole metric used to choose the DSY among the source weather years. The aim of this research is to evaluate whether the new definition of the probabilistic DSYs can consistently represent near-extreme conditions. London historical weather data and the corresponding DSYs were used in this research. Dynamic thermal modelling with EnergyPlus was carried out on a large number of single-zone offices (a parametric study) which represent a large portion of cellular offices in the UK. The predicted indoor warmth from the sample building models shows that these new definitions are not always able to represent near-extreme conditions. Using multiple years as DSYs can capture different types of summer warmth, but how to use one or all of these DSYs to make an informed judgement on overheating is rather challenging. The recommended practice from this research is to use more warm years for the evaluation of overheating and to choose the near-extreme weather from the predicted indoor warmth.
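    The recommended practice could look like the following sketch: simulate a set of historical weather years and pick the near-extreme year from the predicted indoor warmth rather than from outdoor dry-bulb temperature alone. The overheating metric, the example values and the percentile choice are illustrative assumptions, not results from the paper.

```python
# Hedged sketch: select a near-extreme weather year from predicted indoor warmth.
# The values below are hypothetical overheating metrics (e.g. annual hours above
# a comfort threshold) as they might come from EnergyPlus runs of the offices.

indoor_warmth = {
    1976: 410.0, 1989: 270.0, 1990: 330.0, 1995: 360.0,
    2003: 390.0, 2006: 350.0, 2013: 300.0,
}

def near_extreme_year(results: dict, percentile: float = 0.85) -> int:
    """Pick the year whose predicted indoor warmth is closest to a chosen
    high percentile of the simulated distribution (nearest-rank)."""
    values = sorted(results.values())
    target = values[min(int(percentile * len(values)), len(values) - 1)]
    return min(results, key=lambda year: abs(results[year] - target))

if __name__ == "__main__":
    print("Selected near-extreme year:", near_extreme_year(indoor_warmth))
```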