ARPA Whitepaper
We propose a secure computation solution for blockchain networks. The
correctness of computation is verifiable even under a malicious-majority
condition using an information-theoretic Message Authentication Code (MAC),
and privacy is preserved using secret sharing. With a state-of-the-art
multiparty computation (MPC) protocol and a layer-2 solution, our
privacy-preserving computation cryptographically guarantees data security on
the blockchain while offloading the heavy computation to a few nodes. This
breakthrough has several implications for the future of decentralized
networks. First, secure computation can be used to support private smart
contracts, where consensus is reached without exposing the information in the
public contract. Second, it enables data to be shared and used in a trustless
network without disclosing the raw data while it is in use, so that data
ownership and data usage are safely separated. Last but not least, the
computation and verification processes are separated, which can be perceived
as computational sharding; this effectively makes transaction processing
speed linear in the number of participating nodes. Our objective is to deploy
our secure computation network as a layer-2 solution to any blockchain
system. Smart contracts\cite{smartcontract} will be used as a bridge to link
the blockchain and computation networks. Additionally, they will be used as
verifiers to ensure that outsourced computation is completed correctly. To
achieve this, we first develop a general MPC network with advanced features,
such as: 1) secure computation, 2) off-chain computation, 3) verifiable
computation, and 4) support for dApps' needs such as privacy-preserving data
exchange.
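The combination described above, additive secret sharing for privacy plus an
information-theoretic MAC for correctness under a dishonest majority, can be
sketched in a few lines. This is an illustrative SPDZ-style toy, not the
whitepaper's actual protocol; the field modulus, function names, and party
count are all assumptions.

```python
# Toy sketch: additive secret sharing with an information-theoretic MAC,
# the mechanism by which MPC outputs stay verifiable even if a majority
# of parties misbehave. Illustrative only; not the ARPA protocol itself.
import secrets

P = 2**61 - 1  # field modulus (a Mersenne prime, chosen for this sketch)

def share(value, n):
    """Split `value` into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def mac_shares(value_shares, alpha):
    """Shares of alpha * x: the MAC tag is itself secret-shared."""
    x = sum(value_shares) % P
    return share(alpha * x % P, len(value_shares))

def reconstruct(shares):
    return sum(shares) % P

def verify(value_shares, tag_shares, alpha):
    """On opening, check the value against its MAC tag."""
    return reconstruct(tag_shares) == alpha * reconstruct(value_shares) % P

# Five parties share x = 42; tampering with any share is detected,
# because a cheater cannot forge alpha * x without knowing alpha.
alpha = secrets.randbelow(P)
xs = share(42, 5)
ms = mac_shares(xs, alpha)
assert reconstruct(xs) == 42 and verify(xs, ms, alpha)
xs[0] = (xs[0] + 1) % P           # a malicious party alters its share
assert not verify(xs, ms, alpha)  # the MAC check catches it
```

The point of the MAC being information-theoretic is that the check fails for a
tampered value except with probability about 1/P, regardless of the cheater's
computational power.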
Teaching programming with computational and informational thinking
Computers are the dominant technology of the early 21st century: pretty well all aspects of economic, social and personal life are now unthinkable without them. In turn, computer hardware is controlled by software, that is, codes written in programming languages. Programming, the construction of software, is thus a fundamental activity, in which millions of people are engaged worldwide, and the teaching of programming is long established in international secondary and higher education. Yet, going on 70 years after the first computers were built, there is no well-established pedagogy for teaching programming.
There has certainly been no shortage of approaches. However, these have often been driven by fashion, an enthusiastic amateurism or a wish to follow best industrial practice, which, while appropriate for mature professionals, is poorly suited to novice programmers. Much of the difficulty lies in the very close relationship between problem solving and programming. Once a problem is well characterised it is relatively straightforward to realise a solution in software. However, teaching problem solving is, if anything, less well understood than teaching programming.
Problem solving seems to be a creative, holistic, dialectical, multi-dimensional, iterative process. While there are well established techniques for analysing problems, arbitrary problems cannot be solved by rote, by mechanically applying techniques in some prescribed linear order. Furthermore, historically, approaches to teaching programming have failed to account for this complexity in problem solving, focusing strongly on programming itself and, if at all, only partially and superficially exploring problem solving.
Recently, an integrated approach to problem solving and programming called Computational Thinking (CT) (Wing, 2006) has gained considerable currency. CT has the enormous advantage over prior approaches of strongly emphasising problem solving and of making explicit core techniques. Nonetheless, there is still a tendency to view CT as prescriptive rather than creative, engendering scholastic arguments about the nature and status of CT techniques. Programming at heart is concerned with processing information, but many accounts of CT emphasise processing over information rather than seeing them as intimately related.
In this paper, while acknowledging and building on the strengths of CT, I argue that understanding the form and structure of information should be primary in any pedagogy of programming.
Energy Complexity of Distance Computation in Multi-hop Networks
Energy efficiency is a critical issue for wireless devices operating under a
stringent power constraint (e.g., battery power). Following prior works, we measure
the energy cost of a device by its transceiver usage, and define the energy
complexity of an algorithm as the maximum number of time slots a device
transmits or listens, over all devices. In a recent paper of Chang et al. (PODC
2018), it was shown that broadcasting in a multi-hop network of unknown
topology can be done in energy. In this paper, we continue
this line of research, and investigate the energy complexity of other
fundamental graph problems in multi-hop networks. Our results are summarized as
follows.
1. To avoid spending energy, the broadcasting protocols of Chang
et al. (PODC 2018) do not send the message along a BFS tree, and it is open
whether BFS could be computed in energy, for sufficiently large . In
this paper we devise an algorithm that attains energy
cost.
2. We show that the framework of the round lower bound proof
for computing diameter in CONGEST of Abboud et al. (DISC 2017) can be adapted
to give an energy lower bound in the wireless network model
(with no message size constraint), and this lower bound applies to -arboricity graphs. From the upper bound side, we show that the energy
complexity of can be attained for bounded-genus graphs
(which includes planar graphs).
3. Our upper bounds for computing diameter can be extended to other graph
problems. We show that exact global minimum cut or approximate -- minimum
cut can be computed in energy for bounded-genus graphs.
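The energy measure defined in this abstract, the maximum over all devices of
the number of time slots in which a device transmits or listens, can be made
concrete with a small sketch. The schedule format and device names below are
made up for illustration.

```python
# Sketch of the energy-complexity measure: for a fixed schedule, a
# device's energy cost is the number of slots it spends transmitting
# or listening; the energy complexity is the maximum over all devices.

def energy_complexity(schedule, devices):
    """schedule: list of (transmitters, listeners) sets, one per slot."""
    active = {d: 0 for d in devices}
    for tx, rx in schedule:
        for d in tx | rx:  # a device pays 1 unit whether it sends or hears
            active[d] += 1
    return max(active.values())

# Three slots over devices a, b, c: device 'a' is active in all three.
sched = [({"a"}, {"b"}), ({"a"}, {"c"}), ({"b"}, {"a"})]
print(energy_complexity(sched, {"a", "b", "c"}))  # → 3
```

Note that idle slots are free: a device that sleeps through most of the
protocol contributes nothing to the bound, which is why low-energy protocols
keep most devices asleep most of the time.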
On embedded implicatures
The Gricean approach explains implicatures by assumptions about the pragmatics of entire utterances. The phenomenon of embedded implicatures remains a challenge for this approach, since in such cases implicatures apparently contribute to the truth-conditional content of constituents smaller than utterances. In this paper, I investigate three areas where embedded implicatures seem to differ from implicatures at the utterance level: optionality, epistemic status, and implicated presuppositions. I conclude that the differences between the two kinds of implicatures justify an approach that maintains Gricean assumptions at the utterance level and assumes a special operator for embedded implicatures.
Review of 'The Outer Limits of Reason' by Noson Yanofsky 403p (2013) (review revised 2019)
I give a detailed review of 'The Outer Limits of Reason' by Noson Yanofsky from a unified perspective of Wittgenstein and evolutionary psychology. I indicate that the difficulty with such issues as paradox in language and math, incompleteness, undecidability, computability, the brain and the universe as computers etc., all arise from the failure to look carefully at our use of language in the appropriate context and hence the failure to separate issues of scientific fact from issues of how language works. I discuss Wittgenstein's views on incompleteness, paraconsistency and undecidability and the work of Wolpert on the limits to computation. To sum it up: The Universe According to Brooklyn---Good Science, Not So Good Philosophy.
Those wishing a comprehensive, up-to-date framework for human behavior from the modern two-systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle', 2nd ed. (2019). Those interested in more of my writings may see 'Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019', 3rd ed. (2019), and 'Suicidal Utopian Delusions in the 21st Century', 4th ed. (2019).