Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models
The approaches taken to describe and develop spatial discretisations of the
domains required for geophysical simulation models are commonly ad hoc, model
or application specific and under-documented. This is particularly acute for
simulation models that are flexible in their use of multi-scale, anisotropic,
fully unstructured meshes where a relatively large number of heterogeneous
parameters are required to constrain their full description. As a consequence,
it can be difficult to reproduce simulations, to ensure provenance in model
data handling and initialisation, and to conduct model intercomparisons
rigorously. This paper takes a novel approach to spatial discretisation,
considering it much like a numerical simulation model problem of its own. It
introduces a generalised, extensible, self-documenting approach to describing,
carefully and in full, the constraints over the heterogeneous
parameter space that determine how a domain is spatially discretised. This
additionally provides a method to accurately record these constraints, using
high-level, natural-language-based abstractions, which enables full accounts of
provenance, sharing and distribution. Together with this description, a
generalised, consistent approach to unstructured mesh generation for geophysical
models is developed that is automated, robust, repeatable, quick to draft,
rigorously verified, and consistent with the source data throughout. This
interprets the description above to execute a self-consistent spatial
discretisation process, which is automatically validated against expected
discrete characteristics and metrics.
Comment: 18 pages, 10 figures, 1 table. Submitted for publication and under
review.
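To illustrate the kind of high-level, self-documenting description of a domain discretisation that the abstract argues for, the following is a minimal sketch in Python. It is not Shingle's actual interface; every class, field and value below is a hypothetical assumption, used only to show how heterogeneous meshing constraints could be captured declaratively and serialised for provenance and sharing.

    # Hypothetical, minimal sketch of a declarative domain-discretisation spec.
    # None of these names come from Shingle itself; they only illustrate the idea
    # of recording meshing constraints in a self-documenting, shareable form.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class RefinementRegion:
        name: str              # human-readable label, e.g. "coastal zone"
        bounds: tuple          # (lon_min, lon_max, lat_min, lat_max)
        resolution_m: float    # target edge length inside the region, in metres

    @dataclass
    class DomainSpec:
        source_dataset: str                 # provenance of the boundary data
        boundary_contour_depth_m: float     # depth contour used to close the domain
        base_resolution_m: float            # background mesh edge length
        refinements: list = field(default_factory=list)

        def to_record(self) -> str:
            """Serialise the full constraint set so the discretisation can be
            reproduced, shared and compared between model set-ups."""
            return json.dumps(asdict(self), indent=2)

    spec = DomainSpec(
        source_dataset="GEBCO bathymetry (illustrative)",
        boundary_contour_depth_m=10.0,
        base_resolution_m=50_000.0,
        refinements=[RefinementRegion("coastal zone", (-10.0, 2.0, 48.0, 61.0), 5_000.0)],
    )
    print(spec.to_record())  # the record itself documents the discretisation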
Parameterized complexity of the MINCCA problem on graphs of bounded decomposability
In an edge-colored graph, the cost incurred at a vertex on a path when two
incident edges with different colors are traversed is called reload or
changeover cost. The "Minimum Changeover Cost Arborescence" (MINCCA) problem
consists in finding an arborescence with a given root vertex such that the
total changeover cost of the internal vertices is minimized. It has been
recently proved by G\"oz\"upek et al. [TCS 2016] that the problem is FPT when
parameterized by the treewidth and the maximum degree of the input graph. In
this article we present the following results for the MINCCA problem:
- the problem is W[1]-hard parameterized by the treedepth of the input graph,
even on graphs of average degree at most 8. In particular, it is W[1]-hard
parameterized by the treewidth of the input graph, which answers the main open
problem of G\"oz\"upek et al. [TCS 2016];
- it is W[1]-hard on multigraphs parameterized by the tree-cutwidth of the
input multigraph;
- it is FPT parameterized by the star tree-cutwidth of the input graph, which
is a slightly restricted version of tree-cutwidth. This result strictly
generalizes the FPT result given in G\"oz\"upek et al. [TCS 2016];
- it remains NP-hard on planar graphs even when restricted to instances with
at most 6 colors and 0/1 symmetric costs, or when restricted to instances with
at most 8 colors, maximum degree bounded by 4, and 0/1 symmetric costs.
Comment: 25 pages, 11 figures.
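To make the changeover-cost definition above concrete, here is a small Python sketch that evaluates the total changeover cost of a given arborescence in an edge-coloured graph. The data layout, the colour names and the convention that a non-root vertex is charged for every (incoming, outgoing) colour pair are illustrative assumptions, not definitions taken from the paper.

    # Illustrative formalisation (assumed): each non-root vertex v with incoming
    # edge colour c_in pays cost(c_in, c_out) for every outgoing tree edge of
    # colour c_out; the arborescence's changeover cost is the sum of these charges.
    def changeover_cost(parent, edge_colour, cost, root):
        """parent: child -> parent map defining the arborescence;
        edge_colour: (parent, child) -> colour of that arc;
        cost: (colour_in, colour_out) -> non-negative reload cost."""
        children = {}
        for child, par in parent.items():
            children.setdefault(par, []).append(child)

        total = 0
        for v, kids in children.items():
            if v == root:
                continue  # the root has no incoming edge, hence no changeover
            c_in = edge_colour[(parent[v], v)]
            for u in kids:
                total += cost.get((c_in, edge_colour[(v, u)]), 0)
        return total

    # Tiny example: root r -> a (red), a -> b (blue), a -> c (red).
    parent = {"a": "r", "b": "a", "c": "a"}
    edge_colour = {("r", "a"): "red", ("a", "b"): "blue", ("a", "c"): "red"}
    cost = {("red", "blue"): 1, ("blue", "red"): 1}  # 0/1 symmetric costs
    print(changeover_cost(parent, edge_colour, cost, root="r"))  # -> 1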
Simulating Auxiliary Inputs, Revisited
For any pair $(X,Z)$ of correlated random variables we can think of $Z$ as a
randomized function of $X$. Provided that $Z$ is short, one can make this
function computationally efficient by allowing it to be only approximately
correct. In folklore this problem is known as \emph{simulating auxiliary
inputs}. This idea of simulating auxiliary information turns out to be a
powerful tool in computer science, finding applications in complexity theory,
cryptography, pseudorandomness and zero-knowledge. In this paper we revisit
this problem, achieving the following results:
- We discuss and compare the efficiency of known results, finding the flaw in
the best known bound claimed in the TCC'14 paper "How to Fake Auxiliary
Inputs".
- We present a novel boosting algorithm for constructing the simulator; our
technique essentially fixes the flaw. This boosting proof is of independent
interest, as it shows how to handle "negative mass" issues when constructing
probability measures in descent algorithms.
- Our bounds are much better than the bounds known so far. To make the
simulator $\epsilon$-indistinguishable we need complexity in time/circuit size
that is significantly smaller than in previous bounds. In particular, with our
technique we (finally) get meaningful provable security for the EUROCRYPT'09
leakage-resilient stream cipher instantiated with a standard 256-bit block
cipher, like AES-256.
Comment: Some typos present in the previous version have been corrected.
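For readers less familiar with the notion, the standard requirement on such a simulator, stated informally and with the quantitative parameters left generic, is that

\[
\bigl|\Pr[D(X,Z)=1] - \Pr[D(X,h(X))=1]\bigr| \le \epsilon
\quad\text{for every distinguisher } D \text{ of size at most } s,
\]

where $h$ is the (randomized) simulator whose time/circuit-size complexity is the quantity bounded in the paper; $h(X)$ thus "fakes" the auxiliary input $Z$ given only $X$.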
The Complexity of Repairing, Adjusting, and Aggregating of Extensions in Abstract Argumentation
We study the computational complexity of problems that arise in abstract
argumentation in the context of dynamic argumentation, minimal change, and
aggregation. In particular, we consider the following problems, in each of
which an argumentation framework F and a small positive integer k are given.
- The Repair problem asks whether a given set of arguments can be modified
into an extension by at most k elementary changes (i.e., the extension is at
distance at most k from the given set).
- The Adjust problem asks whether a given extension can be modified by at
most k elementary changes into an extension that contains a specified argument.
- The Center problem asks, given two extensions at distance k, whether there
is a "center" extension at distance at most (k-1) from both given extensions.
We study these problems in the framework of parameterized complexity, and
take the distance k as the parameter. Our results cover several different
semantics, including admissible, complete, preferred, semi-stable and stable
semantics.
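As a concrete reading of "elementary changes" and "distance", the Python sketch below checks admissibility in a Dung-style framework and measures the distance between two argument sets as the size of their symmetric difference. These are standard modelling choices, offered here as assumptions rather than as the paper's exact definitions.

    # Sketch: an abstract argumentation framework is a pair (arguments, attacks);
    # a set S is admissible if it is conflict-free and defends each of its members.
    # Distance = number of elementary additions/removals of single arguments.
    def is_admissible(arguments, attacks, S):
        S = set(S)
        # conflict-free: no attack between two members of S
        if any(a in S and b in S for (a, b) in attacks):
            return False
        # every attacker of a member of S is counter-attacked from within S
        for a in S:
            for (x, y) in attacks:
                if y == a and not any((z, x) in attacks for z in S):
                    return False
        return True

    def distance(S, T):
        """Number of single-argument additions/removals turning S into T."""
        return len(set(S) ^ set(T))

    arguments = {"a", "b", "c"}
    attacks = {("b", "a"), ("c", "b")}                    # c attacks b, b attacks a
    print(is_admissible(arguments, attacks, {"a", "c"}))  # True: c defends a against b
    print(distance({"a", "c"}, {"c"}))                    # 1 elementary change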
Predictable arguments of knowledge
We initiate a formal investigation of the power of predictability for argument-of-knowledge systems for NP. Specifically, we consider private-coin argument systems where the answer of the prover can be predicted, given the private randomness of the verifier; we call such protocols Predictable Arguments of Knowledge (PAoK).
Our study encompasses a full characterization of PAoK, showing that such arguments can be made extremely laconic, with the prover sending a single bit, and can be assumed, without loss of generality, to have only one round (i.e., two messages) of communication.
We also explore PAoK satisfying additional properties (including zero-knowledge and the possibility of re-using the same challenge across multiple executions with the prover), present several constructions of PAoK relying on different cryptographic tools, and discuss applications to cryptography.
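The message flow of a predictable argument, as described above, can be sketched schematically as follows. The helper functions are hypothetical placeholders standing in for whatever cryptographic machinery a concrete PAoK construction would use; the point is only the shape of the interaction, with the verifier predicting the unique accepting answer from its private coins.

    # Schematic one-round predictable argument (not a secure construction).
    import secrets

    def verifier_challenge(statement, coins):
        """Hypothetical: derive (challenge, predicted_answer) from private coins."""
        challenge = ("puzzle", statement, coins)
        predicted_answer = ("solution", statement, coins)  # placeholder
        return challenge, predicted_answer

    def prover_answer(statement, witness, challenge):
        """Hypothetical: a prover holding a valid witness recomputes the answer."""
        _, stmt, coins = challenge
        return ("solution", stmt, coins) if witness is not None else None

    def run_paok(statement, witness):
        coins = secrets.token_bytes(16)              # verifier's private randomness
        challenge, predicted = verifier_challenge(statement, coins)
        answer = prover_answer(statement, witness, challenge)
        return answer == predicted                   # accept iff the prediction matches

    print(run_paok("x in L", witness="w"))   # True  (honest prover)
    print(run_paok("x in L", witness=None))  # False (no witness)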
The chaining lemma and its application
We present a new information-theoretic result which we call the Chaining Lemma. It considers a so-called “chain” of random variables, defined by a source distribution $X_0$ with high min-entropy and a number (say, t in total) of arbitrary functions $(T_1,\ldots,T_t)$ which are applied in succession to that source to generate the chain $X_0 \rightarrow X_1 \rightarrow \cdots \rightarrow X_t$, where $X_i = T_i(X_{i-1})$. Intuitively, the Chaining Lemma guarantees that, if the chain is not too long, then either (i) the entire chain is “highly random”, in that every variable has high min-entropy; or (ii) it is possible to find a point j (1 ≤ j ≤ t) in the chain such that, conditioned on the end of the chain, i.e. $X_j,\ldots,X_t$, the preceding part $X_0,\ldots,X_{j-1}$ remains highly random. We think this is an interesting information-theoretic result which is intuitive but nevertheless requires rigorous case analysis to prove. We believe that the above lemma will find applications in cryptography. We give an example of this, namely we show an application of the lemma to protect essentially any cryptographic scheme against memory tampering attacks. We allow several tampering requests; the tampering functions can be arbitrary, but they must be chosen from a bounded-size set of functions that is fixed a priori.
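Informally, and with all quantitative parameters suppressed, the dichotomy can be written as follows; this is a paraphrase for orientation, not the lemma's precise formulation.

\[
X_i = T_i(X_{i-1}),\ i=1,\ldots,t:\qquad
\text{either every } X_i \text{ has high min-entropy,}\quad\text{or}\quad
\exists\, j:\ H_\infty\bigl(X_0,\ldots,X_{j-1} \,\big|\, X_j,\ldots,X_t\bigr) \text{ is high.}
\]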
Tracing a phase transition with fluctuations of the largest fragment size: Statistical multifragmentation models and the ALADIN S254 data
A phase transition signature associated with cumulants of the largest
fragment size distribution has been identified in statistical
multifragmentation models and examined in the analysis of the ALADIN S254 data on
fragmentation of neutron-poor and neutron-rich projectiles. Characteristics of
the transition point indicated by this signature are weakly dependent on the
A/Z ratio of the fragmenting spectator source. In particular, chemical
freeze-out temperatures are estimated within the range 5.9 to 6.5 MeV. The
experimental results are well reproduced by the SMM model.
Comment: 7 pages, 3 figures, Proceedings of the International Workshop on
Multifragmentation and Related Topics (IWM2009), Catania, Italy, November
2009
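Since the signature in question is built from cumulants of the event-wise largest fragment size, the sketch below shows how such cumulants can be estimated from a list of per-event Z_max values. It is generic numerics, not the specific analysis applied to the S254 data.

    # Generic sketch: sample estimates of the first three cumulants of the
    # largest-fragment-size (Z_max) distribution, one value per event.
    import numpy as np

    def cumulants(z_max_per_event):
        z = np.asarray(z_max_per_event, dtype=float)
        k1 = z.mean()                # kappa_1: mean
        k2 = z.var()                 # kappa_2: variance (2nd central moment)
        k3 = np.mean((z - k1) ** 3)  # kappa_3: 3rd central moment
        return k1, k2, k3

    # Toy data standing in for event-wise largest fragment charges.
    rng = np.random.default_rng(0)
    z_max = rng.integers(5, 40, size=1000)
    k1, k2, k3 = cumulants(z_max)
    print(f"kappa1={k1:.2f}  kappa2={k2:.2f}  skewness={k3 / k2**1.5:.3f}")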
Public-Key Encryption Schemes with Auxiliary Inputs
7th Theory of Cryptography Conference, TCC 2010, Zurich, Switzerland, February 9-11, 2010, Proceedings.
We construct public-key cryptosystems that remain secure even when the adversary is given any computationally uninvertible function of the secret key as auxiliary input (even one that may reveal the secret key information-theoretically). Our schemes are based on the decisional Diffie-Hellman (DDH) and the Learning with Errors (LWE) problems.
As an independent technical contribution, we extend the Goldreich-Levin theorem to provide a hard-core (pseudorandom) value over large fields.
Funding: National Science Foundation (U.S.) Grants CCF-0514167, CCF-0635297, and NSF-0729011; Israel Science Foundation (700/08); Chais Family Fellows Program.
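For context, the classical Goldreich-Levin theorem that this last contribution generalises states, informally, that for a hard-to-invert function $f$ the inner-product bit is a hard-core (pseudorandom) value:

\[
\bigl(f(x),\, r,\, \langle x, r\rangle \bmod 2\bigr) \ \approx_c\ \bigl(f(x),\, r,\, U_1\bigr),
\qquad x, r \leftarrow \{0,1\}^n \text{ uniform;}
\]

the paper's extension replaces the bit-valued inner product with an inner product taken over a large field.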
