One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era
OpenAI has recently released GPT-4 (a.k.a. ChatGPT Plus), which is demonstrated to be one small step for generative AI (GAI), but one giant leap for artificial general intelligence (AGI). Since its official release in November 2022, ChatGPT has quickly attracted numerous users, with extensive media coverage. Such unprecedented attention has also motivated numerous researchers to investigate ChatGPT from various aspects. According to Google Scholar, there are more than 500 articles with ChatGPT in their titles or mentioning it in their abstracts. Considering this, a review is urgently needed, and our work fills this gap. Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges. Moreover, we present an outlook on how ChatGPT might evolve to realize general-purpose AIGC (a.k.a. AI-generated content), which will be a significant milestone for the development of AGI.
Comment: A Survey on ChatGPT and GPT-4, 29 pages. Feedback is appreciated ([email protected]).
Technical Dimensions of Programming Systems
Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet, this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art.
In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too.
We present a framework of technical dimensions which capture the underlying characteristics of programming systems and provide a means for conceptualizing and comparing them.
We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated rather than relying solely on personal impressions.
Our framework is derived using a qualitative analysis of past programming systems. We outline two concrete ways of using our framework. First, we show how it can analyze a recently developed novel programming system. Then, we use it to identify an interesting unexplored point in the design space of programming systems.
Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable and/or powerful, but such efforts are disconnected. They are informal, guided by the personal vision of their authors, and can thus be evaluated and compared only on the basis of individual experience using them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.
Satisfiability of Non-Linear Transcendental Arithmetic as a Certificate Search Problem
For typical first-order logical theories, satisfying assignments have a straightforward finite representation that can directly serve as a certificate that a given assignment satisfies the given formula. For non-linear real arithmetic with transcendental functions, however, no general finite representation of satisfying assignments is available. Hence, in this paper, we introduce a different form of satisfiability certificate for this theory, formulate the satisfiability verification problem as the problem of searching for such a certificate, and show how to perform this search in a systematic fashion. This not only eases the independent verification of results, but also allows the systematic design of new, efficient search techniques. Computational experiments document that the resulting method is able to prove satisfiability of a substantially higher number of benchmark problems than existing methods.
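To make the certificate idea concrete, here is a minimal Python sketch, not the paper's actual certificate format (which also covers equalities, via stronger interval tests), of how a box can serve as a finite, independently checkable satisfiability certificate for a strict inequality: if interval arithmetic bounds the expression strictly below zero over the box, every point of the box is a satisfying assignment. The formula, the `certifies` helper, and the naive box scan are all illustrative assumptions.

```python
# A sketch of box certificates for strict inequalities, using mpmath's
# interval arithmetic context `iv`. Formula and search loop are invented.
from mpmath import iv

iv.dps = 30  # working precision for interval endpoints

def certifies(f, box):
    """Return True if interval evaluation proves f(x) < 0 for ALL x in `box`.

    Soundness follows from the fundamental theorem of interval arithmetic:
    the interval result encloses every point value, so an upper endpoint
    below zero certifies the strict inequality over the whole box.
    """
    lo, hi = box
    y = f(iv.mpf([lo, hi]))
    return y.b < 0  # .b is the upper endpoint of the enclosure

# Certificate search in its crudest form: scan candidate boxes until one checks.
f = lambda x: iv.sin(x) * iv.exp(x) - 2  # satisfiable: f(x) < 0 near x = 0
for k in range(20):
    box = (k * 0.1, (k + 1) * 0.1)
    if certifies(f, box):
        print(f"certificate box {box}: all points satisfy sin(x)*exp(x) - 2 < 0")
        break
```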
Perfect is the enemy of test oracle
Automation of test oracles is one of the most challenging facets of software testing, yet it remains less addressed than automated test input generation. Test oracles rely on a ground truth that can distinguish between correct and buggy behavior to determine whether a test fails (detects a bug) or passes. What makes the oracle problem challenging and undecidable is the assumption that the ground truth must know the exact expected, correct, or buggy behavior. However, we argue that one can still build an accurate oracle without knowing the exact correct or buggy behavior, only how these two might differ. This paper presents SEER, a learning-based approach that, in the absence of test assertions or other types of oracle, can determine whether a unit test passes or fails on a given method under test (MUT). To build the ground truth, SEER jointly embeds unit tests and the implementations of MUTs into a unified vector space, such that the neural representation of a test is similar to those of the MUTs it passes on, but dissimilar to those of the MUTs it fails on. The classifier built on top of this vector representation serves as the oracle, generating a "fail" label when test inputs detect a bug in the MUT and a "pass" label otherwise. Our extensive experiments applying SEER to more than 5K unit tests from a diverse set of open-source Java projects show that the produced oracle is (1) effective in predicting the fail or pass labels, achieving an overall accuracy, precision, recall, and F1 measure of 93%, 86%, 94%, and 90%; (2) generalizable, predicting the labels for unit tests of projects that were not in the training or validation set with a negligible performance drop; and (3) efficient, detecting the existence of bugs in only 6.5 milliseconds on average.
Comment: Published in ESEC/FSE 202
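As a rough illustration of the inference step this describes, the oracle reduces to a similarity test between the embedding of a test and the embedding of a MUT. The encoder below is a hypothetical hashed bag-of-tokens stand-in, and the fixed cosine threshold stands in for SEER's learned classifier; neither is the paper's actual model.

```python
# A sketch of embedding-similarity-based pass/fail labeling. The real SEER
# learns neural encoders so tests embed close to the MUTs they pass on.
import hashlib
import numpy as np

def toy_encode(tokens: list[str], dim: int = 64) -> np.ndarray:
    """Hypothetical stand-in encoder: a stably hashed bag of tokens."""
    v = np.zeros(dim)
    for t in tokens:
        v[int(hashlib.md5(t.encode()).hexdigest(), 16) % dim] += 1.0
    return v

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def oracle(test_tokens: list[str], mut_tokens: list[str],
           threshold: float = 0.5) -> str:
    """Label a (test, MUT) pair from embedding similarity alone:
    similar vectors predict "pass", dissimilar vectors predict "fail"."""
    sim = cosine(toy_encode(test_tokens), toy_encode(mut_tokens))
    return "pass" if sim >= threshold else "fail"

# Token lists would come from tokenizing the unit test and the MUT body.
print(oracle(["assert", "add", "2", "3", "5"],
             ["int", "add", "a", "b", "return"]))
```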
An Ethereum-compatible blockchain that explicates and ensures design-level safety properties for smart contracts
Smart contracts are crucial elements of decentralized technologies, but they face significant obstacles to trustworthiness due to security bugs and trapdoors. To address the core issue, we propose a technology that enables programmers to focus on design-level properties rather than specific low-level attack patterns. Our proposed technology, called Theorem-Carrying-Transaction (TCT), combines the benefits of runtime checking and symbolic proof. Under the TCT protocol, every transaction must carry a theorem that proves its adherence to the safety properties in the invoked contracts, and the blockchain checks the proof before executing the transaction. The unique design of TCT ensures that the theorems are provable and checkable in an efficient manner. We believe that TCT holds great promise for enabling provably secure smart contracts in the future. As such, we call for collaboration toward this vision.
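A minimal sketch of the control flow the abstract describes, with invented names (`Transaction`, `check_proof`, `SAFETY_PROPERTIES`) since the paper's concrete interfaces are not given here: a node verifies the carried theorem against the invoked contract's declared safety properties before letting the transaction execute.

```python
# A sketch of the Theorem-Carrying-Transaction gate; all names are assumed.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transaction:
    contract: str
    calldata: bytes
    theorem: str   # the safety property this transaction claims to respect
    proof: bytes   # machine-checkable proof artifact carried with the tx

# Safety properties each contract declares at deployment (illustrative).
SAFETY_PROPERTIES: dict[str, set[str]] = {
    "Token": {"no-overflow", "conservation-of-balances"},
}

def check_proof(theorem: str, proof: bytes) -> bool:
    # Stub: a real node would run an efficient proof checker here. TCT's
    # key design point is that checking a carried theorem is cheap even
    # when finding the proof is expensive.
    return len(proof) > 0

def execute(tx: Transaction, run: Callable[[Transaction], None]) -> None:
    """Consensus-side gate: reject any transaction whose theorem does not
    match a declared property or whose proof fails to check."""
    required = SAFETY_PROPERTIES.get(tx.contract, set())
    if tx.theorem not in required or not check_proof(tx.theorem, tx.proof):
        raise ValueError("rejected: missing or invalid safety theorem")
    run(tx)  # only provably safe transactions reach contract execution

execute(Transaction("Token", b"transfer(...)", "no-overflow", b"\x01"),
        run=lambda tx: print("executed", tx.contract))
```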
MUFFLE: Multi-Modal Fake News Influence Estimator on Twitter
To alleviate the impact of fake news on our society, predicting the popularity of fake news posts on social media is a crucial problem worthy of study. However, most related studies on fake news emphasize detection only. In this paper, we focus on the issue of fake news influence prediction, i.e., inferring how popular a fake news post might become on social platforms. To achieve our goal, we propose a comprehensive framework, MUFFLE, which captures multi-modal dynamics by encoding the representations of news-related social networks, user characteristics, and content in text. The attention mechanism developed in the model can provide explainability for social or psychological analysis. To examine the effectiveness of MUFFLE, we conducted extensive experiments on real-world datasets. The experimental results show that our proposed method outperforms both state-of-the-art popularity-prediction methods and machine-based baselines in top-k NDCG and hit rate. Through the experiments, we also analyze the feature importance for predicting fake news influence via the explainability provided by MUFFLE.
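For readers unfamiliar with the two ranking metrics reported above, here is one common formulation of NDCG@k and hit rate in Python. The data is a toy; gain conventions for NDCG vary, and MUFFLE's exact evaluation setup may differ.

```python
# A sketch of NDCG@k and hit rate for popularity prediction; toy data.
import numpy as np

def ndcg_at_k(true_pop: np.ndarray, scores: np.ndarray, k: int) -> float:
    """Discounted cumulative gain of the top-k predicted posts, normalised
    by the DCG of a perfect ranking (linear-gain convention)."""
    order = np.argsort(scores)[::-1][:k]       # predicted top-k indices
    ideal = np.sort(true_pop)[::-1][:k]        # best achievable gains
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = float(true_pop[order] @ discounts)
    idcg = float(ideal @ discounts)
    return dcg / idcg if idcg > 0 else 0.0

def hit_rate_at_k(true_pop: np.ndarray, scores: np.ndarray, k: int) -> float:
    """Fraction of the truly top-k popular posts recovered in the predicted top-k."""
    top_true = set(np.argsort(true_pop)[::-1][:k])
    top_pred = set(np.argsort(scores)[::-1][:k])
    return len(top_true & top_pred) / k

pop = np.array([120, 5, 300, 40, 7])         # observed popularity (e.g. retweets)
pred = np.array([0.2, 0.1, 0.9, 0.3, 0.8])   # model scores
print(ndcg_at_k(pop, pred, k=3), hit_rate_at_k(pop, pred, k=3))
```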
Reduction of Petri net maintenance modeling complexity via Approximate Bayesian Computation
This paper is part of the ENHAnCE ITN project (https://www.h2020-enhanceitn.eu/) funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 859957. The authors would like to thank the Lloyd's Register Foundation (LRF), a charitable foundation in the U.K. helping to protect life and property by supporting engineering-related education, public engagement, and the application of research. The authors gratefully acknowledge the support of these organizations, which have enabled the research reported in this paper.
The accurate modeling of engineering systems and processes using Petri nets often results in complex graph representations that are computationally intensive, limiting the potential of this modeling tool in real-life applications. This paper presents a methodology to properly define the optimal structure and properties of a reduced Petri net that mimics the output of a reference Petri net model. The methodology is based on Approximate Bayesian Computation to infer the plausible values of the model parameters of the reduced model in a rigorous probabilistic way. The method also provides a numerical measure of the level of approximation of the reduced model structure, thus allowing the selection of the optimal reduced structure among a set of potential candidates. The suitability of the proposed methodology is illustrated using a simple illustrative example and a system reliability engineering case study, showing satisfactory results. The results also show that the method allows flexible reduction of the structure of the complex Petri net model taken as reference, and provides numerical justification for the choice of the reduced model structure.
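The inference engine named here, Approximate Bayesian Computation, can be sketched in a few lines of Python. The sketch below uses stand-in functions for the reference and reduced Petri nets (the real models are stochastic net simulations): rejection sampling keeps parameter values whose simulated summary statistic lands within a tolerance of the reference output, and the acceptance rate can serve as a rough numerical measure of how well the reduced structure approximates the reference.

```python
# A sketch of ABC rejection sampling with toy stand-ins for both models.
import numpy as np

rng = np.random.default_rng(0)

def reference_model() -> float:
    # Stand-in for one run of the full (reference) Petri net simulation,
    # e.g. a sampled time-to-failure. Toy: Normal(10, 1).
    return rng.normal(10.0, 1.0)

def reduced_model(theta: float) -> float:
    # Stand-in for the reduced net with one free parameter theta.
    return rng.normal(theta, 1.0)

# Summary statistic of the reference model's output.
obs = np.mean([reference_model() for _ in range(200)])

accepted = []
trials = 20_000
for _ in range(trials):
    theta = rng.uniform(5.0, 15.0)   # prior over the reduced model's parameter
    sim = np.mean([reduced_model(theta) for _ in range(50)])
    if abs(sim - obs) < 0.2:         # ABC tolerance epsilon
        accepted.append(theta)

print(f"posterior mean of theta ~ {np.mean(accepted):.2f}")
print(f"acceptance rate {len(accepted) / trials:.1%} "
      "(a rough score of how well this reduced structure can match)")
```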
Foundations for programming and implementing effect handlers
First-class control operators provide programmers with an expressive and efficient means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and control idioms as shareable libraries. Effect handlers provide a particularly structured approach to programming with first-class control by naming control-reifying operations and separating them from their handling.
This thesis is composed of three strands of work in which I develop operational foundations for programming and implementing effect handlers, and explore the expressive power of effect handlers.
The first strand develops a fine-grain call-by-value core calculus of a statically typed programming language with a structural notion of effect types, as opposed to the nominal notion of effect types that dominates the literature. With the structural approach, effects need not be declared before use. The usual safety properties of statically typed programming are retained by making crucial use of row polymorphism to build and track effect signatures. The calculus features three forms of handlers: deep, shallow, and parameterised. Each offers a different approach to manipulating the control state of programs. Traditional deep handlers are defined by folds over computation trees, and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. To demonstrate the usefulness of effects and handlers as a practical programming abstraction, I implement the essence of a small UNIX-style operating system complete with multi-user environment, time-sharing, and file I/O.
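A minimal sketch of the three handler notions, in Python, modelling computations as plain data trees: a leaf `Return(v)` or a node `Op(label, arg, cont)` whose continuation maps an operation's result to the rest of the tree. The thesis works in a typed calculus; this dynamically typed encoding is only meant to exhibit the fold versus case-split distinction.

```python
# Computation trees and the deep / shallow / parameterised handler forms.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    value: Any

@dataclass
class Op:
    label: str
    arg: Any
    cont: Callable[[Any], Any]   # resumption: result -> computation tree

def deep_handle(comp, clauses, ret):
    """Deep handler: a fold -- the resumption given to each clause
    re-applies the same handler to the rest of the computation."""
    if isinstance(comp, Return):
        return ret(comp.value)
    resume = lambda x: deep_handle(comp.cont(x), clauses, ret)
    return clauses[comp.label](comp.arg, resume)

def shallow_handle(comp, clauses, ret):
    """Shallow handler: a case split -- the clause receives the raw
    residual computation and must decide how it is handled."""
    if isinstance(comp, Return):
        return ret(comp.value)
    return clauses[comp.label](comp.arg, comp.cont)  # cont NOT re-handled

def param_handle(comp, clauses, ret, state):
    """Parameterised handler: a deep handler threading a state value
    through the fold; clauses may update it when they resume."""
    if isinstance(comp, Return):
        return ret(comp.value, state)
    resume = lambda x, s: param_handle(comp.cont(x), clauses, ret, s)
    return clauses[comp.label](comp.arg, state, resume)

# A program performing two `get` operations and summing the results.
prog = Op("get", None, lambda n: Op("get", None, lambda m: Return(n + m)))

# Deep: one clause interprets every `get` uniformly.
print(deep_handle(prog, {"get": lambda _a, k: k(21)}, lambda v: v))   # 42

# Shallow: the clause delegates the residual computation explicitly.
print(shallow_handle(prog,
      {"get": lambda _a, k: deep_handle(k(1), {"get": lambda _a2, k2: k2(2)},
                                        lambda v: v)},
      lambda v: v))                                                   # 3

# Parameterised: each `get` reads and increments the threaded counter.
print(param_handle(prog, {"get": lambda _a, s, k: k(s, s + 1)},
                   lambda v, _s: v, 5))                               # 11
```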
The second strand studies continuation-passing style (CPS) and abstract machine semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The CPS translation is obtained through a series of refinements of a basic first-order CPS translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually arriving at the notion of generalised continuation, which admits simultaneous support for deep, shallow, and parameterised handlers. The initial refinement adds support for deep handlers by representing stacks of continuations and handlers as a curried sequence of arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the CPS translation is refined once more to obtain an uncurried representation of stacks of continuations and handlers. Finally, the translation is made higher-order in order to contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for deep, shallow, and parameterised effect handlers.
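To illustrate the starting point of this strand, here is a toy first-order CPS translation for a tiny expression language in Python (an invented language and emitter, far simpler than the thesis's translation): every call in the output sits in tail position, and the nested `(lambda kN: ...)(v)` applications it produces are exactly the administrative redexes that the final, higher-order refinement contracts at translation time.

```python
# A toy first-order CPS translation emitting Python source strings.
from dataclasses import dataclass
from itertools import count

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

_fresh = (f"k{i}" for i in count())  # fresh continuation-variable names

def cps(e, k: str) -> str:
    """Translate `e` so its value is passed to continuation `k` (a string
    denoting a Python function); every call in the output is a tail call."""
    if isinstance(e, Num):
        return f"({k})({e.value})"
    # Evaluate left, then right, then pass the sum to k -- all tail calls.
    a, b = next(_fresh), next(_fresh)
    rest = f"lambda {b}: ({k})({a} + {b})"
    return cps(e.left, f"lambda {a}: {cps(e.right, rest)}")

term = cps(Add(Num(1), Add(Num(2), Num(3))), "print")
print(term)  # note the administrative redexes of shape (lambda kN: ...)(v)
eval(term)   # prints 6
```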
The third strand explores the expressiveness of effect handlers. First, I show that the deep, shallow, and parameterised notions of handlers are interdefinable by way of typed macro-expressiveness, which provides a syntactic notion of expressiveness that affirms the existence of encodings between handlers, but provides no information about the computational content of the encodings. Second, using a semantic notion of expressiveness, I show that for a class of programs a programming language with first-class control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.
Digital asset management via distributed ledgers
Distributed ledgers rose to prominence with the advent of Bitcoin, the first provably secure protocol to solve consensus in an open-participation setting. Subsequently, active research and engineering efforts have proposed a multitude of applications and alternative designs, the most prominent being Proof-of-Stake (PoS). This thesis expands the scope of secure and efficient asset management over a distributed ledger around three axes: i) cryptography; ii) distributed systems; iii) game theory and economics.
First, we analyze the security of various wallets. We start with a formal model of hardware wallets, followed by an analytical framework of PoS wallets, each outlining the unique properties of Proof-of-Work (PoW) and PoS respectively. The latter also provides a rigorous design to form collaborative participating entities, called stake pools. We then propose Conclave, a stake pool design which enables a group of parties to participate in a PoS system in a collaborative manner, without a central operator.
Second, we focus on efficiency. Decentralized systems are aimed at thousands of users across the globe, so a rigorous design for minimizing memory and storage consumption is a prerequisite for scalability. To that end, we frame ledger maintenance as an optimization problem and design a multi-tier framework for designing wallets which ensure that updates increase the ledger's global state only to a minimal extent, while preserving the security guarantees outlined in the security analysis.
Third, we explore incentive-compatibility and analyze blockchain systems from both a micro- and a macroeconomic perspective. We enrich our cryptographic and systems results by analyzing the incentives of collective pools and designing a state-efficient Bitcoin fee function. We then analyze the Nash dynamics of distributed ledgers, introducing a formal model that evaluates whether rational, utility-maximizing participants are disincentivized from exhibiting undesirable infractions, and highlighting the differences between PoW- and PoS-based ledgers, both in a standalone setting and under external parameters, like market price fluctuations. We conclude by introducing a macroeconomic principle, cryptocurrency egalitarianism, and describing two mechanisms for enabling taxation in blockchain-based currency systems.