9 research outputs found
The symbiosis of concurrency and verification: teaching and case studies
Concurrency is beginning to be accepted as a core knowledge area in the undergraduate CS
curriculum, no longer isolated, for example, as a support mechanism in a module on operating systems or
reserved as an advanced discipline for later study. Formal verification of system properties is often considered a
difficult subject area, requiring significant mathematical knowledge and generally restricted to smaller systems
employing sequential logic only. This paper presents materials, methods and experiences of teaching concurrency
and verification as a unified subject, as early as possible in the curriculum, so that they become fundamental elements
of our software engineering tool kit, to be used together every day as a matter of course. Concurrency and
verification should live in symbiosis. Verification is essential for concurrent systems as testing becomes especially
inadequate in the face of complex non-deterministic (and, therefore, hard to repeat) behaviours. Concurrency
should simplify the expression of most scales and forms of computer system by reflecting the concurrency of the
worlds in which they operate (and, therefore, have to model); simplified expression leads to simplified reasoning
and, hence, verification. Our approach lets these skills be developed without requiring students to be trained in
the underlying formal mathematics. Instead, we build on the work of those who have engineered that necessary
mathematics into the concurrency models we use (CSP, π-calculus), the model checker (FDR) that lets us explore
and verify those systems, and the programming languages/libraries (occam-π, Go, JCSP, ProcessJ) that let us
design and build efficient executable systems within these models. This paper introduces a workflow methodology
for the development and verification of concurrent systems; it also presents and reflects on two open-ended case
studies, using this workflow, developed at the authors' two universities. Concerns analysed include safety (don't do
bad things), liveness (do good things) and low-probability deadlock (that testing fails to discover). The necessary
technical background is given to make this paper self-contained and its work simple to reproduce and extend.
Portland Daily Press: August 01,1866
https://digitalmaine.com/pdp_1866/1176/thumbnail.jp
Report of the Working Group on the Assessment of Mackerel, Horse Mackerel, Sardine and Anchovy [ICES Headquarters, 4- 13 September, 2001]
Contributors: Svein A. Iversen, Dankert W. Skage
Annual Report of the American Historical Association for the year 1895.
Annual Report of the American Historical Association, 1895. 13 Feb. HD 291,54-1, v62. 1257p. [3429] Research related to the American Indian
Compiling Concurrent Programs for Manycores
The arrival of manycore systems demands new approaches to developing applications in order to exploit the available hardware resources. Developing applications for manycores requires programmers to partition the application into subtasks, consider the dependences between the subtasks, understand the underlying hardware and select an appropriate programming model. This is complex, time-consuming and prone to error. In this thesis, we identify and implement abstraction layers in compilation tools to reduce the burden on the programmer, increase programming productivity and program portability for manycores, and analyze their impact on performance and efficiency. We present compilation frameworks for two concurrent programming languages, occam-pi and CAL Actor Language, and demonstrate the applicability of the approach with application case studies targeting three different manycore architectures: STHorm, Epiphany and Ambric. For occam-pi, we have extended the Tock compiler and added a backend for STHorm. We evaluate the approach using a fault-tolerance model for a four-stage 1D-DCT algorithm implemented using occam-pi's constructs for dynamic reconfiguration, and the FAST corner detection algorithm, which demonstrates the suitability of occam-pi and the compilation framework for data-intensive applications. We also present a new CAL compilation framework which has a front end, two intermediate representations and three backends: for a uniprocessor, Epiphany, and Ambric. We show the feasibility of our approach by compiling a CAL implementation of the 2D-IDCT for the three backends. We also present an evaluation and optimization of code generation for Epiphany by comparing the code generated from CAL with a hand-written C implementation of the 2D-IDCT.