
    Evaluating Innovation

    In their pursuit of the public good, foundations face two competing forces -- the pressure to do something new and the pressure to do something proven. The epigraph to this paper, "Give me something new and prove that it works," is my own summary of what foundations often seek. These pressures come from within the foundations themselves -- their staff and boards demand them, not the public. The aspiration to fund things that work can be traced to the desire to be careful, effective stewards of resources. Foundations' recognition of the growing complexity of our shared challenges drives the increased emphasis on innovation. Issues such as climate change, political corruption, and digital learning and work environments have enticed new players into the social problem-solving sphere and have convinced more funders of the need to find new solutions. The seemingly mutually exclusive desires for doing something new and doing something proven are not new, but as foundations have grown in number and size, the visibility of the paradox has risen accordingly.

    Even as foundations seek to fund innovation, they are also seeking measurements of those investments' success. Many people's first response to the challenge of measuring innovation is to declare the intention oxymoronic. Innovation is by definition amorphous, full of unintended consequences, and a creative, unpredictable process -- much like art. Measurements, assessments, and evaluations are -- also by most definitions -- about quantifying activities and products. There is always the danger of counting what you can count, even if what you can count doesn't matter. For all our awareness of the inherent irony of trying to measure something that we intend to be unpredictable, many foundations (and others) continue to try to evaluate their innovation efforts. They are, as Frances Westley, Brenda Zimmerman, and Michael Quinn Patton put it in "Getting to Maybe", grappling with "....intentionality and complexity -- (which) meet in tension."
    It is important to see the struggles to measure for what they are -- attempts to evaluate the success of the process of innovation, not necessarily the success of the individual innovations themselves. This is not a semantic difference. What foundations are trying to understand is how to go about funding innovation so that more of it can happen. Examples in this report were chosen because they offer a look at innovation within the broader scope of a foundation's work. This paper is the fifth in a series focused on field building. In this context I am interested in where evaluation fits within an innovation strategy and where these strategies fit within a foundation's broader funding goals. I will present a typology of innovation drawn from the OECD that can be useful in other areas. I lay the decisions about evaluation made by Knight, MacArthur, and the Jewish New Media Innovation Funders against their programmatic goals. Finally, I consider how evaluating innovation may improve our overall use of evaluation methods in philanthropy.

    Computational Design. Design in the Age of a Knowledge Society


    Overview on agent-based social modelling and the use of formal languages

    Transdisciplinary Models and Applications investigates a variety of programming languages used in validating and verifying models in order to assist in their eventual implementation. This book will explore different methods of evaluating and formalizing simulation models, enabling computer and industrial engineers, mathematicians, and students working with computer simulations to thoroughly understand the progression from simulation to product, improving the overall effectiveness of modeling systems. Postprint (author's final draft).

    Co-creativity through play and game design thinking


    Argotario: Computational Argumentation Meets Serious Games

    An important skill in critical thinking and argumentation is the ability to spot and recognize fallacies. Fallacious arguments, omnipresent in argumentative discourse, can be deceptive, manipulative, or simply lead to 'wrong moves' in a discussion. Despite their importance, argumentation scholars and NLP researchers with a focus on argumentation quality have not yet investigated fallacies empirically. The nonexistence of resources dealing with fallacious argumentation calls for scalable approaches to data acquisition and annotation, for which the serious games methodology offers an appealing, yet unexplored, alternative. We present Argotario, a serious game that deals with fallacies in everyday argumentation. Argotario is a multilingual, open-source, platform-independent application with strong educational aspects, accessible at www.argotario.net. Comment: EMNLP 2017 demo paper. Source code: https://github.com/UKPLab/argotari

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models. Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar