Polymorphic Typestate for Session Types
Session types provide a principled approach to typed communication protocols
that guarantee type safety and protocol fidelity. Formalizations of
session-typed communication are typically based on process calculi, concurrent
lambda calculi, or linear logic. An alternative model based on
context-sensitive typing and typestate has not received much attention due to
its apparent restrictions. However, this model is attractive because it does
not force programmers into particular patterns like continuation-passing style
or channel-passing style, but rather enables them to treat communication
channels like mutable variables. Polymorphic typestate is the key that enables
a full treatment of session-typed communication. Previous work in this
direction was hampered by its setting in a simply-typed lambda calculus. We
show that higher-order polymorphism and existential types enable us to lift the
restrictions imposed by the previous work, thus bringing the expressivity of
the typestate-based approach on par with the competition. On this basis, we
define PolyVGR, the system of polymorphic typestate for session types,
establish its basic metatheory, type preservation and progress, and present a
prototype implementation.
Comment: 29 pages. Short version appears in PPDP 202
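The typestate idea can be illustrated outside the paper's calculus with Rust's ownership system, where each operation consumes the channel at one protocol state and returns it at the next. This is only a loose sketch of the concept, not PolyVGR syntax; the protocol modeled (send an integer, receive an integer, end) and all names below are illustrative, and the "channel" is a local buffer rather than real communication.

```rust
use std::collections::VecDeque;
use std::marker::PhantomData;

// Illustrative sketch only: protocol states as zero-sized marker types.
// The session type modeled is "!i32 . ?i32 . end".
struct SendInt;
struct RecvInt;
struct End;

struct Chan<State> {
    buf: VecDeque<i32>,
    _state: PhantomData<State>,
}

fn open() -> Chan<SendInt> {
    Chan { buf: VecDeque::new(), _state: PhantomData }
}

impl Chan<SendInt> {
    // Consuming `self` makes the old typestate unusable, so each channel
    // value is used exactly once: linearity comes from ownership.
    fn send(mut self, v: i32) -> Chan<RecvInt> {
        self.buf.push_back(v);
        Chan { buf: self.buf, _state: PhantomData }
    }
}

impl Chan<RecvInt> {
    fn recv(mut self) -> (i32, Chan<End>) {
        let v = self.buf.pop_front().expect("protocol guarantees a value");
        (v, Chan { buf: self.buf, _state: PhantomData })
    }
}

fn main() {
    let c = open();
    let c = c.send(42);      // old `c` is moved away; re-sending is a type error
    let (v, _done) = c.recv();
    println!("{}", v);       // prints 42
}
```

Calling `send` twice on the same channel value, or `recv` before `send`, is rejected at compile time, which is the guarantee that typestate-based session typing provides.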
Relating Functional and Imperative Session Types
Imperative session types provide an imperative interface to session-typed
communication. In such an interface, channel references are first-class objects
with operations that change the typestate of the channel. Compared to
functional session type APIs, the program structure is simpler at the surface,
but typestate is required to model the current state of communication
throughout.
Following an early work that explored the imperative approach, a significant
body of work on session types has neglected it, opting instead for a
functional approach that uses linear types to manage channel references
soundly. We demonstrate that the functional approach subsumes the early work on
imperative session types by exhibiting a typing and semantics preserving
translation into a system of linear functional session types.
We further show that the untyped backwards translation from the functional to
the imperative calculus is semantics preserving. We restrict the type system of
the functional calculus such that the backwards translation becomes type
preserving. Thus, we precisely capture the difference in expressiveness of the
two calculi and conclude that the lack of expressiveness in the imperative
calculus is largely due to restrictions imposed by its type system.
Comment: 39 pages, in submission
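The contrast between the two interfaces can be sketched in Rust (a loose illustration, not the paper's calculi): the imperative API below treats the channel as a first-class mutable reference whose typestate is tracked at runtime, so a protocol violation is only caught dynamically; a linear functional API would instead consume the channel value and return it at a new type, turning the same misuse into a compile-time error. All names are hypothetical.

```rust
use std::collections::VecDeque;

// Illustrative imperative session interface: operations mutate the channel
// in place and advance a runtime typestate tag.
#[derive(PartialEq, Debug)]
enum State { SendInt, RecvInt, End }

struct ImpChan {
    state: State,
    buf: VecDeque<i32>,
}

impl ImpChan {
    fn open() -> ImpChan {
        ImpChan { state: State::SendInt, buf: VecDeque::new() }
    }
    // Misuse (e.g., two sends in a row) is only detected here at runtime,
    // which is exactly what a static typestate discipline rules out.
    fn send(&mut self, v: i32) {
        assert_eq!(self.state, State::SendInt, "protocol violation");
        self.buf.push_back(v);
        self.state = State::RecvInt;
    }
    fn recv(&mut self) -> i32 {
        assert_eq!(self.state, State::RecvInt, "protocol violation");
        self.state = State::End;
        self.buf.pop_front().expect("value available")
    }
}

fn main() {
    let mut c = ImpChan::open();   // the channel is used like a mutable variable
    c.send(7);
    println!("{}", c.recv());      // prints 7
}
```

A translation into the functional style would thread the channel through each operation as a freshly typed value, as the paper's typing- and semantics-preserving translation does for its calculi.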
Kindly Bent to Free Us
Systems programming often requires the manipulation of resources like file
handles, network connections, or dynamically allocated memory. Programmers need
to follow certain protocols to handle these resources correctly. Violating
these protocols causes bugs ranging from type mismatches over data races to
use-after-free errors and memory leaks. These bugs often lead to security
vulnerabilities.
While statically typed programming languages guarantee type soundness and
memory safety by design, most of them do not address issues arising from
improper handling of resources. An important step towards handling resources is
the adoption of linear and affine types that enforce single-threaded resource
usage. However, the few languages supporting such types require heavy type
annotations.
We present Affe, an extension of ML that manages linearity and affinity
properties using kinds and constrained types. In addition Affe supports the
exclusive and shared borrowing of affine resources, inspired by features of
Rust. Moreover, Affe retains the defining features of the ML family: it is an
impure, strict, functional expression language with complete principal type
inference and type abstraction. Affe does not require any linearity annotations
in expressions and supports common functional programming idioms.
Comment: ICFP 202
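Since Affe's borrowing is inspired by Rust, the distinction between exclusive and shared borrows can be sketched directly in Rust (an analogy, not Affe syntax; the `FileHandle` type and all names are illustrative). Shared borrows permit multiple simultaneous readers of an affine resource; an exclusive borrow temporarily grants write access without consuming it; consuming the resource ends its lifetime.

```rust
// Rust analogue of shared (&) vs exclusive (&mut) borrows of an affine
// resource; Affe infers the corresponding kinds without annotations.
struct FileHandle { path: String, bytes_written: usize }

fn inspect(h: &FileHandle) -> &str {          // shared borrow: read-only
    &h.path
}

fn append(h: &mut FileHandle, n: usize) {     // exclusive borrow: mutation
    h.bytes_written += n;
}

fn close(h: FileHandle) -> usize {            // consumes the affine resource
    h.bytes_written
}

fn main() {
    let mut h = FileHandle { path: "/tmp/log".into(), bytes_written: 0 };
    let p1 = inspect(&h);
    let p2 = inspect(&h);          // two shared borrows may coexist
    assert_eq!(p1, p2);
    append(&mut h, 10);            // exclusive borrow, released at end of call
    append(&mut h, 5);
    println!("{}", close(h));      // prints 15; `h` is unusable afterwards
}
```

Using `h` after `close(h)` is a compile-time error, mirroring how an affine type system prevents use of a consumed resource.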
AtMoDat: Improving the reusability of ATmospheric MOdel DATa with DataCite DOIs paving the path towards FAIR data
The generation of high quality research data is expensive. The FAIR principles were established to foster the reuse of such data for the benefit of the scientific community and beyond. Publishing research data with metadata and DataCite DOIs in public repositories makes them findable and accessible (FA of FAIR). However, DOIs and basic metadata do not guarantee that the data are actually reusable without discipline-specific knowledge: not if data are saved in proprietary or undocumented file formats, if detailed discipline-specific metadata are missing, or if quality information on the data and metadata is not provided. In this contribution, we present ongoing work in the AtMoDat project, a consortium of atmospheric scientists and infrastructure providers that aims to improve the reusability of atmospheric model data.
Consistent standards are necessary to simplify the reuse of research data. Although standardization of file structure and metadata is well established for some subdomains of the earth system modeling community (e.g., CMIP), several other subdomains lack such standardization. Hence, scientists from the Universities of Hamburg and Leipzig and infrastructure operators cooperate in the AtMoDat project to advance standardization for model output files in specific subdomains of the atmospheric modeling community. Starting from the demanding CMIP6 standard, the aim is to establish an easy-to-use standard that is at least compliant with the Climate and Forecast (CF) conventions. In parallel, an existing netCDF file convention checker is being extended to check for the new standards. This enhanced checker is designed to support the creation of compliant files and thus lower the hurdle for data producers to comply with the new standard. The transfer of this approach to further sub-disciplines of the earth system modeling community will be supported by a best-practice guide and other documentation. A showcase of a standard for the urban atmospheric modeling community will be presented in this session. The standard is based on the CF Conventions and adapts several global attributes and controlled vocabularies from the well-established CMIP6 standard.
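The convention-checker idea can be sketched as a simple validation pass over a file's global attributes. This is a hypothetical sketch, not the actual AtMoDat checker: the required attribute set below is merely indicative of CF-style metadata, and real checkers inspect netCDF files rather than in-memory maps.

```rust
use std::collections::HashMap;

// Minimal sketch of a metadata convention check: verify that required
// global attributes are present. The required set here is hypothetical.
fn check_global_attrs(attrs: &HashMap<String, String>) -> Vec<String> {
    let required = ["title", "institution", "source", "Conventions"];
    let mut errors = Vec::new();
    for key in required {
        if !attrs.contains_key(key) {
            errors.push(format!("missing required global attribute: {key}"));
        }
    }
    errors
}

fn main() {
    let mut attrs = HashMap::new();
    attrs.insert("title".to_string(), "Urban run".to_string());
    attrs.insert("Conventions".to_string(), "CF-1.8".to_string());
    for err in check_global_attrs(&attrs) {
        println!("{err}");   // reports the two absent attributes
    }
}
```

Reporting precise, actionable errors like this is what lowers the hurdle for data producers: the checker tells them exactly which metadata the standard still requires.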
Additionally, the AtMoDat project aims to introduce a generic quality indicator into the DataCite metadata schema to foster further reuse of data. This quality indicator should require a discipline-specific implementation of a quality standard linked to the indicator. We will present the concept of the generic quality indicator in general and in the context of urban atmospheric modeling data.
Towards a European network of FAIR-enabling Trustworthy Digital Repositories (TDRs) - A Working Paper
This working paper is a bottom-up initiative of a group of stakeholders from the European repository community. Its purpose is to outline an aspirational vision of a European Network of FAIR-enabling Trustworthy Digital Repositories (TDRs). This initiative originates from the workshop entitled “Towards exploring the idea of establishing the Network”. The paper was created in close connection with the wider community: its core was built on community feedback, and the first draft was shared for community-wide consultation. This paper will serve as input for the EOSC Task Force on Long Term Digital Preservation; one of the core activities mentioned in the charter of this Task Force is to produce recommendations on the creation of such a network. The working paper puts together a vision of how a European network of FAIR-enabling TDRs could be based on the community’s needs and its most important functions: networking and knowledge exchange, stakeholder advocacy and engagement, and coordination and development. The specific activities hosted under these umbrella functions could address the wide range of topics that are important to TDRs. Beyond these functions and the challenges they address, the paper presents a framework to highlight aspects of the Network to be explored further in the next steps of its development.
ATMODAT Standard v3.0
Within the AtMoDat project (Atmospheric Model Data), a standard has been developed that is meant to improve the FAIRness of atmospheric model data published in repositories. The ATMODAT standard includes concrete recommendations related to the maturity, publication and enhanced FAIRness of atmospheric model data. The suggestions include requirements for rich metadata with controlled vocabularies, structured landing pages, file formats (netCDF) and the structure within files. Human- and machine-readable landing pages are a core element of this standard and should hold and present discipline-specific metadata at the simulation and variable level.
This standard is an updated and translated version of "Bericht über initialen Kernstandard und Kurationskriterien des AtMoDat Projektes (v2.4)".
Facing the Challenges in simulation-based Earth System Sciences and the Role of FAIR Digital Objects
Motivation
Results of simulations with climate models form the most important basis for research and statements about possible changes in the future global, regional and local climate. These output volumes are increasing at an exponential rate (Balaji et al. 2018, Stevens et al. 2019). Efficiently handling these amounts of data is a challenge for researchers, mainly because the development of novel data and workflow handling approaches has not proceeded at the same rate as data volume has been increasing. This problem will only become more pronounced with the ever increasing performance of the High Performance Computing (HPC) systems used to perform weather and climate simulations (Lawrence et al. 2018). For example, in the framework of the European Commission's Destination Earth program, the Digital Twins (Bauer et al. 2021) are expected to produce hundreds of terabytes of model output data every day at the EuroHPC computing sites.
The described data challenge can be dissected into several aspects, two of which we will focus on in this contribution. Available data in the Earth System Sciences (ESS) are increasingly made openly accessible by various institutions, such as universities, research centres and government agencies, in addition to subject-specific repositories. Further, the exploitability of weather and climate simulation output beyond the expert community by humans and automated agents (as described by the FAIR data principles (F-Findable, A-Accessible, I-Interoperable, R-Reusable), Wilkinson et al. 2016) is currently very limited if not impossible due to disorganized metadata or incomplete provenance information.
Additionally, developments regarding globally available and FAIR workflows in the spirit of the FAIR Digital Object (FDO) framework (Schultes and Wittenburg 2019, Schwardmann 2020) are just at the beginning.

Cultural Change
In order to address the data challenges mentioned above, current efforts at DKRZ (German Climate Computing Center) are aimed at a complete restructuring of the way research is performed in simulation-based climate research (Anders et al. 2022, Mozaffari et al. 2022, Weigel et al. 2020). DKRZ is well suited for this endeavor, because researchers have the resources and services available to conduct the entire suite of their data-intensive workflows, ranging from planning and setting up model simulations, analyzing the model output and reusing existing large-volume datasets to data publication and long-term archival. At the moment, DKRZ users cannot orchestrate their workflows via a central service, but rather use a plethora of different tools to piece them together.

Framework Environment Freva
The central element of the new workflow environment at DKRZ shall be the Freva (Free Evaluation System Framework) software infrastructure, which offers standardized data and tool solutions in ESS and is optimized for use on high-performance computer systems (Kadow et al. 2021). Freva is designed to be very well suited to the use of the FDO framework.
The crucial aspects here are:
- the standardisation of data objects as input for analysis and processing,
- the already implemented remote access to data via a Persistent Identifier (PID),
- the currently still system-internal capture of analysis provenance, and
- the possibility of sharing results and also workflows, from research groups up to large communities.

It is planned to extend the functionality of Freva so that the system automatically determines the data required for a specific analysis from a researcher's research question (provided to the system via some interface), queries available databases (local disk or tape, cloud or federated resources) for those data and retrieves them if possible. If data are not available yet, Freva shall be able to automatically configure, set up and submit model simulations to the HPC system, so that the required data are created and become available (cf. Fig. 1). These data will in turn be ingested into Freva's data catalog for reuse. Next, Freva shall orchestrate and document the analysis performed. Results will be provided as numerical fields, images or animations, depending on the researcher's needs. As a final step, the applied workflow and/or underlying data are published in accordance with the FAIR data guiding principles.

FDOs - towards a global integrated Data Space
To make the process sketched out above a reality, application of the FDO concept is essential (Schwardmann 2020, Schultes and Wittenburg 2019). There is a long tradition in the ESS community of global dissemination and reuse of large-volume climate data sets. Community standards like those developed and applied in the framework of internationally coordinated model intercomparison studies (CMIP) allow for low-barrier reuse of data (Balaji et al. 2018). Globally resolvable PIDs are provided on a regular basis.
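The planned retrieve-or-simulate logic can be sketched abstractly. Everything below is hypothetical: the real system resolves PIDs against data catalogs and submits jobs to HPC schedulers, whereas this sketch stands in for both with an in-memory map and a placeholder simulation.

```rust
use std::collections::HashMap;

// Hypothetical sketch of the planned Freva logic: resolve a dataset by
// identifier, fall back to running a simulation when it is absent, and
// register the new result in the catalog for reuse.
struct Catalog {
    datasets: HashMap<String, Vec<f64>>,
}

impl Catalog {
    fn retrieve_or_simulate(&mut self, pid: &str) -> &Vec<f64> {
        if !self.datasets.contains_key(pid) {
            // Stand-in for configuring, submitting and awaiting an HPC
            // simulation that produces the missing data.
            let output = vec![0.0; 3];
            self.datasets.insert(pid.to_string(), output);
        }
        // Either way, the data are now in the catalog and reusable.
        &self.datasets[pid]
    }
}

fn main() {
    let mut catalog = Catalog { datasets: HashMap::new() };
    let data = catalog.retrieve_or_simulate("hdl:21.1/example"); // PID is made up
    println!("{} values", data.len()); // prints "3 values"
}
```

A second request for the same identifier would be served from the catalog without re-running the simulation, which is the reuse behaviour the FAIR principles call for.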
Current community ESS standards and workflows are already close to being compatible with implementing FDOs; however, we now also have to work on open points in the FDO concept, which are:
- the clear definition of community-specific FDO requirements, including PID Kernel Type specifications,
- the operation of data type registries, and
- the technical implementation requirements for global access to FDOs.

With these in place and implemented in Freva following standardized implementation recommendations, automated data queries across spatially distributed or different types of local databases become possible. We introduce the concept of implementations in Freva and also use it to highlight the challenges we face. Using an example, we show the vision of the work of a scientist in earth system science.