Computable concurrent processes
Abstract: We study relative computability for processes and process transformations, in general, and in particular the non-deterministic and concurrent processes which can be specified in terms of various fair merge constructs. The main result is a normal form theorem for these (relatively) computable process functions which implies that although they can be very complex when viewed as classical set-functions, they are all “loosely implementable” in the sense of Park (1980). The precise results are about the player model of concurrency introduced in Moschovakis (1991), which supports both fairness constructs and full recursion.
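To make the "fair merge" construct concrete: a merge of two streams is fair if every element of each input eventually appears in the output. The true construct is non-deterministic over possibly infinite streams; the sketch below (illustrative only, not the paper's formalism) shows one fair schedule, a deterministic round-robin merge of two finite lists.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a round-robin "fair merge" of two finite streams.
// Fairness here means no input is starved: every element of xs and ys
// eventually appears in the output.
public class FairMerge {
    static <T> List<T> fairMerge(List<T> xs, List<T> ys) {
        List<T> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < xs.size() || j < ys.size()) {
            if (i < xs.size()) out.add(xs.get(i++)); // take from xs if any left
            if (j < ys.size()) out.add(ys.get(j++)); // then from ys if any left
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(fairMerge(List.of(1, 3, 5), List.of(2, 4)));
        // prints [1, 2, 3, 4, 5]
    }
}
```

A non-deterministic fair merge would allow any interleaving in which neither input is postponed forever; the round-robin schedule above is just the simplest witness of that property.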
Flaky Test Sanitisation via On-the-Fly Assumption Inference for Tests with Network Dependencies
Flaky tests cause significant problems as they can interrupt automated build
processes that rely on all tests succeeding and undermine the trustworthiness
of tests. Numerous causes of test flakiness have been identified, and program
analyses exist to detect such tests. Typically, these methods produce advice to
developers on how to refactor tests in order to make test outcomes
deterministic. We argue that one source of flakiness is the lack of assumptions
that precisely describe under which circumstances a test is meaningful. We
devise a sanitisation technique that can isolate flaky tests quickly by
inferring such assumptions on-the-fly, allowing automated builds to proceed as
flaky tests are ignored. We demonstrate this approach for Java and Groovy
programs by implementing it as extensions for three popular testing frameworks
(JUnit4, JUnit5 and Spock) that can transparently inject the inferred
assumptions. If JUnit5 is used, those extensions can be deployed without
refactoring project source code. We demonstrate and evaluate the utility of our
approach using a set of six popular real-world programs, addressing known test
flakiness issues in these programs caused by dependencies of tests on network
availability. We find that our method effectively sanitises failures induced by
network connectivity problems with high precision and recall.Comment: to appear at IEEE International Working Conference on Source Code
Analysis and Manipulation (SCAM
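The core idea, inferring a network-availability assumption and skipping rather than failing a test when it does not hold, can be sketched in plain Java. This is a hypothetical illustration, not the authors' implementation or the JUnit extension API; names like `runWithAssumption` are invented for the example.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical sketch of assumption-based flaky-test sanitisation:
// probe the network dependency first, and report SKIPPED (assumption
// violated) instead of FAILED when the endpoint is unreachable.
public class NetworkAssumption {
    enum Outcome { PASSED, FAILED, SKIPPED }

    // True if host:port accepts a TCP connection within timeoutMillis.
    static boolean reachable(String host, int port, int timeoutMillis) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // Run a test body only when its inferred network assumption holds.
    static Outcome runWithAssumption(String host, int port, Runnable testBody) {
        if (!reachable(host, port, 500)) {
            return Outcome.SKIPPED; // flaky cause present: ignore, don't fail
        }
        try {
            testBody.run();
            return Outcome.PASSED;
        } catch (AssertionError e) {
            return Outcome.FAILED;  // genuine test failure
        }
    }

    public static void main(String[] args) {
        // A localhost port that is almost certainly closed, so the inferred
        // assumption fails and the flaky test is skipped, not failed.
        Outcome o = runWithAssumption("127.0.0.1", 59999,
                () -> { throw new AssertionError("would be a flaky failure"); });
        System.out.println(o);
    }
}
```

In the frameworks named in the abstract, the same effect is achieved by injecting an assumption (e.g. JUnit's `Assumptions.assumeTrue`), which aborts the test as skipped rather than throwing an assertion failure.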
Improving the Network Scalability of Erlang
As the number of cores grows in commodity architectures so does the likelihood of failures. A distributed actor model potentially facilitates the development of reliable and scalable software on these architectures. Key components include lightweight processes which “share nothing” and hence can fail independently. Erlang is not only increasingly widely used, but the underlying actor model has been a beacon for programming language design, influencing for example Scala, Clojure and Cloud Haskell.
While the Erlang distributed actor model is inherently scalable, we demonstrate that it is limited by some pragmatic factors. We address two network scalability issues here: globally registered process names must be updated on every node (virtual machine) in the system, and any Erlang nodes that communicate maintain an active connection. That is, there is a fully connected O(n²) network of n nodes.
We present the design, implementation, and initial evaluation of a conservative extension of Erlang: Scalable Distributed (SD) Erlang. SD Erlang partitions the global namespace and connection network using s_groups. An s_group is a set of nodes with its own process namespace and with a fully connected network within the s_group, but only individual connections outside it. As a node may belong to more than one s_group it is possible to construct arbitrary connection topologies like trees or rings.
We present an operational semantics for the s_group functions, and outline the validation of conformance between the implementation and the semantics using the QuickCheck automatic testing tool. Our preliminary evaluation in comparison with distributed Erlang shows that SD Erlang dramatically improves network scalability even if the number of global operations is tiny (0.01%). Moreover, even in the absence of global operations the reduced connection maintenance overheads mean that SD Erlang scales better beyond 80 nodes (1920 cores).
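The s_group connection rule can be modeled abstractly: nodes sharing an s_group are fully connected, while nodes in different groups are not implicitly connected, so a node in two groups acts as a gateway. The Java sketch below models this (it is not SD Erlang's API; `newSGroup` and the other names are invented for illustration).

```java
import java.util.*;

// Hypothetical model of the s_group connection rule: a full mesh is
// maintained inside each s_group only, so total connections grow with
// group sizes rather than O(n^2) in the whole system.
public class SGroupModel {
    static final Map<String, Set<String>> groups = new HashMap<>();

    static void newSGroup(String group, List<String> nodes) {
        groups.put(group, new HashSet<>(nodes));
    }

    // Implicitly connected iff some s_group contains both nodes.
    static boolean implicitlyConnected(String a, String b) {
        for (Set<String> members : groups.values())
            if (members.contains(a) && members.contains(b)) return true;
        return false;
    }

    // Number of maintained connections: full mesh within each s_group.
    static long meshEdges() {
        Set<List<String>> edges = new HashSet<>();
        for (Set<String> members : groups.values()) {
            List<String> ns = new ArrayList<>(members);
            Collections.sort(ns);
            for (int i = 0; i < ns.size(); i++)
                for (int j = i + 1; j < ns.size(); j++)
                    edges.add(List.of(ns.get(i), ns.get(j)));
        }
        return edges.size();
    }

    public static void main(String[] args) {
        // Two s_groups sharing gateway node n3 form a tree-like topology.
        newSGroup("g1", List.of("n1", "n2", "n3"));
        newSGroup("g2", List.of("n3", "n4", "n5"));
        System.out.println(implicitlyConnected("n1", "n2")); // true: same group
        System.out.println(implicitlyConnected("n1", "n4")); // false: only via n3
        System.out.println(meshEdges()); // 6, versus 10 for a full 5-node mesh
    }
}
```

With five nodes split into two overlapping groups, only 6 connections are maintained instead of the 10 of a full mesh; the saving grows quadratically as the node count rises, which is the scalability effect the abstract reports.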
- …