Data types
A mathematical interpretation is given to the notion of a data type.
The main novelty is in the generality of the mathematical treatment
which allows procedural data types and circularly defined data types.
What is meant by data type is close to what any computer
scientist would understand by this term, or by data structure, type,
mode, cluster, or class. The mathematical treatment is the conjunction
of the ideas of D. Scott on the solution of domain equations (Scott
(71), (72) and (76)) and the initiality property noticed by the
ADJ group (ADJ (75), ADJ (77)). The present work adds operations
to the data types proposed by Scott and generalizes the data types
of ADJ to procedural types and arbitrary circular type definitions.
The advantages of a mathematical interpretation of data types are
those of mathematical semantics in general: throwing light on some
ill-understood constructs in high-level programming languages, easing
the task of writing correct programs, and making possible proofs of
correctness for programs or implementations.
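The two generalizations named above can be made concrete in code. The sketch below, in Python, is an illustration of the ideas rather than the paper's formalism: `IntList` is a circularly (self-referentially) defined type, and `Stream` is a "procedural" type in the sense that one of its components is a function. All names here are my own, chosen for the example.

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Callable, Optional

# A circularly defined data type: the definition refers to itself.
@dataclass
class IntList:
    head: int
    tail: Optional[IntList]  # self-reference makes the type "circular"

# A "procedural" data type: a value containing a procedure.
# An infinite stream is represented by its head plus a thunk
# that produces the rest of the stream on demand.
@dataclass
class Stream:
    head: int
    tail: Callable[[], Stream]

def nats(n: int = 0) -> Stream:
    """The stream n, n+1, n+2, ... defined in terms of itself."""
    return Stream(n, lambda: nats(n + 1))

def take(s: Stream, k: int) -> list:
    """Force the first k elements of a stream into a finite list."""
    out = []
    for _ in range(k):
        out.append(s.head)
        s = s.tail()
    return out
```

For example, `take(nats(), 5)` forces the first five naturals, `[0, 1, 2, 3, 4]`, even though the stream itself is infinite.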
The role of concurrency in an evolutionary view of programming abstractions
In this paper we examine how concurrency has been embodied in mainstream
programming languages. In particular, we borrow an evolutionary metaphor
from biology to discuss major historical landmarks and crucial
concepts that shaped the development of programming languages. We examine the
general development process, occasionally delving into particular languages, trying
to uncover evolutionary lineages related to specific programming traits. We
mainly focus on concurrency, discussing the different abstraction levels
involved in present-day concurrent programming and emphasizing the fact that
they correspond to different levels of explanation. We then comment on the role
of theoretical research on the quest for suitable programming abstractions,
recalling the importance of changing the working framework and the way of
looking at things every so often. This paper is not meant to be a survey of modern
mainstream programming languages: it would be very incomplete in that sense. It
aims instead at making a number of observations and connecting them under an
evolutionary perspective, in order to grasp a unifying, but not simplistic,
view of the development process of programming languages.
Julia: A Fresh Approach to Numerical Computing
Bridging cultures that have often been distant, Julia combines expertise from
the diverse fields of computer science and computational science to create a
new approach to numerical computing. Julia is designed to be easy and fast.
Julia questions notions generally held as "laws of nature" by practitioners of
numerical computing:
1. High-level dynamic programs have to be slow.
2. One must prototype in one language and then rewrite in another language
for speed or deployment, and
3. There are parts of a system for the programmer, and other parts best left
untouched as they are built by the experts.
We introduce the Julia programming language and its design --- a dance
between specialization and abstraction. Specialization allows for custom
treatment. Multiple dispatch, a technique from computer science, picks the
right algorithm for the right circumstance. Abstraction, what good computation
is really about, recognizes what remains the same after differences are
stripped away. Abstractions in mathematics are captured as code through another
technique from computer science, generic programming.
Julia shows that one can have machine performance without sacrificing human
convenience.
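Multiple dispatch, as the abstract describes it, selects an implementation based on the runtime types of all arguments rather than just the first. A minimal sketch of the idea in Python is below; the registry, the `dispatch` decorator, and the `combine` functions are all hypothetical names invented for this illustration, not Julia's actual mechanism.

```python
# Minimal multiple-dispatch sketch: a registry maps (name, argument-type
# tuple) pairs to implementations, so the method chosen depends on the
# runtime types of *all* arguments.
_registry = {}

def dispatch(*types):
    """Register a function under its name and the given argument types."""
    def register(fn):
        _registry[(fn.__name__, types)] = fn
        return fn
    return register

def call(name, *args):
    """Look up the implementation matching the runtime types of args."""
    key = (name, tuple(type(a) for a in args))
    fn = _registry.get(key)
    if fn is None:
        raise TypeError(f"no method {name} for {key[1]}")
    return fn(*args)

@dispatch(int, int)
def combine(a, b):
    return a + b          # integer addition

@dispatch(str, str)
def combine(a, b):
    return a + b          # string concatenation
```

Here `call("combine", 1, 2)` picks the integer method while `call("combine", "ab", "cd")` picks the string one; real systems like Julia's also handle subtyping and ambiguity resolution, which this sketch omits.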
Token-based typology and word order entropy: A study based on universal dependencies
The present paper discusses the benefits and challenges of token-based typology, which takes into account the frequencies of words and constructions in language use. This approach makes it possible to introduce new criteria for language classification, which would be difficult or impossible to achieve with the traditional, type-based approach. This point is illustrated by several quantitative studies of word order variation, which can be measured as entropy at different levels of granularity. I argue that this variation can be explained by general functional mechanisms and pressures, which manifest themselves in language use, such as optimization of processing (including avoidance of ambiguity) and grammaticalization of predictable units occurring in chunks. The case studies are based on multilingual corpora, which have been parsed using the Universal Dependencies annotation scheme.
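Measuring word order variation as entropy can be sketched with Shannon entropy over order counts from a treebank. The counts below are invented for illustration, not taken from the paper's data.

```python
import math
from collections import Counter

def order_entropy(orders):
    """Shannon entropy (in bits) of a sample of word-order outcomes,
    e.g. 'OV' vs 'VO' for object-verb pairs in a parsed corpus."""
    counts = Counter(orders)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# Hypothetical counts for two corpora:
rigid = ["OV"] * 98 + ["VO"] * 2    # near-categorical order: low entropy
free  = ["OV"] * 50 + ["VO"] * 50   # free variation: maximal entropy, 1 bit
```

A language with rigid order scores near 0 bits, while free variation between two orders scores the maximum of 1 bit; the same computation can be repeated at finer levels of granularity (e.g. per dependency relation) to get the kind of profiles the abstract describes.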
Findings from the Workshop on User-Centered Design of Language Archives
This white paper describes findings from the workshop on User-Centered Design of Language Archives organized in February 2016 by Christina Wasson (University of North Texas) and Gary Holton (University of Hawaiʻi at Mānoa). It reviews relevant aspects of language archiving and user-centered design to construct the rationale for the workshop, relates key insights produced during the workshop, and outlines next steps in the larger research trajectory initiated by this workshop. The purpose of this white paper is to make all of the findings from the workshop publicly available in a short time frame, and without the constraints of a journal article concerning length, audience, format, and so forth. Selections from this white paper will be used in subsequent journal articles. So much was learned during the workshop; we wanted to provide thorough documentation to ensure that none of the key insights would be lost.
We consider this document a white paper because it provides the foundational insights and initial conceptual frameworks that will guide us in our further research on the user-centered design of language archives. We hope this report will be useful to members of all stakeholder groups seeking to develop user-centered designs for language archives. This work was supported by U.S. National Science Foundation Documenting Endangered Languages Program grants BCS-1543763 and BCS-1543828.