
    Logic training through algorithmic problem solving

    Although much of mathematics is algorithmic in nature, the skills needed to formulate and solve algorithmic problems do not form an integral part of mathematics education. In particular, logic, which is central to algorithm development, is rarely taught explicitly at pre-university level, on the grounds that it is implicit in mathematics and therefore does not need to be taught as an independent topic. This paper argues in the opposite direction, describing a one-week workshop held at the University of Minho, in Portugal, whose goal was to introduce high-school students to calculational principles and techniques of algorithmic problem solving supported by calculational logic. The workshop used recreational problems to convey the principles, and software tools, the Alloy Analyzer and NetLogo, to animate models. Ongoing collaboration with Roland Backhouse is deeply acknowledged. This research was supported by the MathIS project under contract PTDC/EIA/73252/2006. The first two authors were further supported by FCT grants SFRH/BD/24269/2005 and SFRH/BD/29553/2006, respectively.
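    The abstract does not reproduce any of the workshop's Alloy or NetLogo models, but the style it describes, stating a recreational puzzle declaratively and letting a tool enumerate its models, can be sketched generically. The following Python fragment is a rough illustration only: the knights-and-knaves puzzle and all names in it are assumptions for illustration, not material from the paper.

```python
# Hypothetical illustration: exhaustively search the models of a small logic puzzle,
# in the spirit of using a model finder (such as the Alloy Analyzer) to "animate" it.
# Assumed puzzle (not from the paper): islander A says "We are both knaves";
# knights always tell the truth, knaves always lie.
from itertools import product

def a_statement(a_is_knight: bool, b_is_knight: bool) -> bool:
    # "We are both knaves" holds exactly when neither A nor B is a knight.
    return (not a_is_knight) and (not b_is_knight)

# A model is an assignment of knight/knave to A and B under which A's utterance
# is true if and only if A is a knight.
models = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if a_statement(a, b) == a
]

for a, b in models:
    print(f"A is a {'knight' if a else 'knave'}, B is a {'knight' if b else 'knave'}")
# Expected single model: A is a knave, B is a knight.
```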

    Curriculum Guidelines for Undergraduate Programs in Data Science

    The Park City Math Institute (PCMI) 2016 Summer Undergraduate Faculty Program met for the purpose of composing guidelines for undergraduate programs in Data Science. The group consisted of 25 undergraduate faculty from a variety of institutions in the U.S., primarily from the disciplines of mathematics, statistics, and computer science. These guidelines are meant to provide some structure for institutions planning for or revising a major in Data Science.

    Teaching rule‐based algorithmic composition: the PWGL library cluster rules

    This paper presents software suitable for undergraduate students to implement computer programs that compose music. The software offers a low floor (students easily get started) but also a high ceiling (complex compositional theories can be modelled). Our students are particularly interested in tonal music: such aesthetic preferences are supported, without stylistically restricting users of the software. We use a rule‐based approach (constraint programming) to allow for great flexibility. Our software Cluster Rules implements a collection of compositional rules on rhythm, harmony, melody, and counterpoint for the new music constraint system Cluster Engine by Örjan Sandred. The software offers a low floor by observing several guidelines. The programming environment uses visual programming (Cluster Rules and Cluster Engine extend the algorithmic composition system PWGL). Further, music theory definitions follow a template, so students can learn from examples how to create their own definitions. Finally, students are offered a collection of predefined rules, which they can freely combine in their own definitions. Music Technology students, including students without any prior computer programming experience, have successfully used the software. Students used the musical results of their computer programs to create original compositions. The software is also interesting for postgraduate students, composers and researchers. Complex polyphonic constraint problems are supported (high ceiling). Users can freely define their own rules and combine them with predefined rules. Also, Cluster Engine’s efficient search algorithm makes advanced problems solvable in practice.
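    Cluster Rules and Cluster Engine are PWGL libraries (Lisp-based, visual programming), and the abstract includes no code; the sketch below is only a generic Python illustration of the rule-based idea it describes. The rule names and the plain backtracking search are assumptions standing in for Cluster Engine's solver, not its API.

```python
# Hypothetical sketch of rule-based melody generation: pitches are variables,
# "rules" are predicates over the partial melody, and a small backtracking
# engine searches for a melody that satisfies every rule.
from typing import Callable, List, Optional

Rule = Callable[[List[int]], bool]  # each rule inspects the melody built so far

MELODY_LENGTH = 8
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches, one octave

def no_large_leaps(melody: List[int]) -> bool:
    # Consecutive notes at most a perfect fifth (7 semitones) apart.
    return len(melody) < 2 or abs(melody[-1] - melody[-2]) <= 7

def no_immediate_repeats(melody: List[int]) -> bool:
    return len(melody) < 2 or melody[-1] != melody[-2]

def ends_on_tonic(melody: List[int]) -> bool:
    # Only constrains the final note: its pitch class must be C.
    return len(melody) < MELODY_LENGTH or melody[-1] % 12 == 0

def solve(length: int, rules: List[Rule],
          melody: Optional[List[int]] = None) -> Optional[List[int]]:
    """Depth-first backtracking: extend note by note, pruning on any rule failure."""
    melody = melody or []
    if len(melody) == length:
        return melody
    for pitch in C_MAJOR:
        candidate = melody + [pitch]
        if all(rule(candidate) for rule in rules):
            result = solve(length, rules, candidate)
            if result is not None:
                return result
    return None

print(solve(MELODY_LENGTH, [no_large_leaps, no_immediate_repeats, ends_on_tonic]))
```

    Because rules are ordinary predicates over the partial solution, new rules can be added or combined without touching the search engine, which loosely mirrors the workflow of freely combining predefined rules that the paper describes.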

    Approximate Computation and Implicit Regularization for Very Large-scale Data Analysis

    Database theory and database practice are typically the domain of computer scientists who adopt what may be termed an algorithmic perspective on their data. This perspective is very different from the more statistical perspective adopted by statisticians, scientific computing researchers, machine learners, and others who work on what may be broadly termed statistical data analysis. In this article, I will address fundamental aspects of this algorithmic-statistical disconnect, with an eye to bridging the gap between these two very different approaches. A concept that lies at the heart of this disconnect is that of statistical regularization, a notion that has to do with how robust the output of an algorithm is to the noise properties of the input data. Although it is nearly completely absent from computer science, which historically has taken the input data as given and modeled algorithms discretely, regularization in one form or another is central to nearly every application domain that applies algorithms to noisy data. By using several case studies, I will illustrate, both theoretically and empirically, the nonobvious fact that approximate computation, in and of itself, can implicitly lead to statistical regularization. This and other recent work suggest that, by exploiting in a more principled way the statistical properties implicit in worst-case algorithms, one can in many cases satisfy the bicriteria of having algorithms that are scalable to very large-scale databases and that also have good inferential or predictive properties.
    Comment: To appear in the Proceedings of the 2012 ACM Symposium on Principles of Database Systems (PODS 2012).
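    A standard toy example, outside the paper's own case studies, makes the "approximation as implicit regularization" claim concrete: stopping an iterative least-squares solver early yields estimates that behave like explicitly ridge-regularized ones. The NumPy sketch below is a generic illustration under that assumption; the problem sizes, noise level, step count, and ridge parameter are arbitrary choices, not values from the paper.

```python
# Toy illustration (not from the paper): early-stopped gradient descent on a noisy
# least-squares problem behaves like an implicitly regularized (ridge-style) estimator.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 40
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=2.0, size=n)   # noisy observations

# Exact least-squares solution: fits the noise, typically has a large norm.
w_exact = np.linalg.lstsq(X, y, rcond=None)[0]

# Approximate solution: a few gradient-descent steps from zero (early stopping).
w_gd = np.zeros(d)
step = 1.0 / np.linalg.norm(X, 2) ** 2           # 1 / (largest singular value)^2
for _ in range(20):                               # stop long before convergence
    w_gd -= step * X.T @ (X @ w_gd - y)

# Explicitly regularized (ridge) solution for comparison.
lam = 10.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

for name, w in [("exact", w_exact), ("early-stopped GD", w_gd), ("ridge", w_ridge)]:
    print(f"{name:>18}: ||w|| = {np.linalg.norm(w):5.2f}, "
          f"error vs w_true = {np.linalg.norm(w - w_true):5.2f}")
```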