1,674 research outputs found
LIPIcs, Volume 251, ITCS 2023, Complete Volume
An Expressivist Strategy to Understand Logical Forms
This paper discusses a generalization of logical expressivism. It is shown that, in the wide sense defined here, the expressivist approach is neutral with respect to different theories of inference and offers a natural framework for understanding logical forms and their function. An expressivist strategy for explaining the development of logical forms is then applied to the analysis of Frege’s Begriffsschrift, Gentzen’s sequent calculus and Belnap’s display logic
Nanomaterial fate and bioavailability in freshwater environments
Given the widespread use of silver nanomaterials (AgNM), their accidental or intentional release into the environment is inevitable. AgNM release into riverine systems is a daily occurrence, and following release they will undoubtedly interact with naturally occurring organic and inorganic particulates and with sediment interfaces. At this point, the long-term threat AgNM pose to freshwater ecosystems is unclear. We must develop our understanding of AgNM fate, toxicity, and bioavailability using testing approaches that systematically investigate AgNM environmental interactions within single-factor and multifactor systems. This body of research aimed to comprehensively examine selected AgNM particles tracked within parallel fate scenarios and toxicity and bioavailability studies. Results showed contrasting behavior between the two tested AgNM. Findings also demonstrated that low shear flow is a significant factor influencing the flocculation and settling rates of AgNM, differentially regulating the persistence and residence time of aqueous-phase AgNM within simulated riverine systems. Experiments with low shear flow showed a significant increase in AgNM removal from the water column and modulated the physicochemistry differently than quiescent systems did. Findings on bed-sediment interactions with waterborne AgNM demonstrated that these interactions are a vital process that increases the transfer and exchange of AgNM from the water column to the bed. Toxicity studies showed how abiotic factors can modulate toxicity differently between aquatic species, and how inorganic and organic matter can increase or decrease AgNM toxicity. Exposure studies contrasting single-factor and multifactor exposures, with and without low shear flow, demonstrated that these conditions modulate AgNM exposure in significantly different ways. In conclusion, the proof-of-concept flume designs for testing the environmental fate and exposure of AgNM showed promise and, with further refinement, could be incorporated into the life-cycle testing framework of engineered nanomaterials (ENMs) to produce accurate semi-empirical coefficients for environmental hazard-assessment models
Elements, Government, and Licensing: Developments in phonology
Elements, Government, and Licensing brings together new theoretical and empirical developments in phonology. It covers three principal domains of phonological representation: melody and segmental structure; tone, prosody and prosodic structure; and phonological relations, empty categories, and vowel-zero alternations. Theoretical topics covered include the formalisation of Element Theory, the hotly debated topic of structural recursion in phonology, and the empirical status of government.
In addition, a wealth of new analyses and empirical evidence sheds light on empty categories in phonology, the analysis of certain consonantal sequences, phonological and non-phonological alternations, the elemental composition of segments, and more. Taking up long-standing empirical and theoretical issues informed by Government Phonology and Element Theory, this book provides theoretical advances while also bringing to light new empirical evidence and analyses that challenge previous generalisations.
The insights offered here will be equally exciting for phonologists working on related issues inside and outside the Principles & Parameters programme, such as researchers working in Optimality Theory or classical rule-based phonology
Prism: Private Set Intersection and Union with Aggregation over Multi-Owner Outsourced Data
This paper proposes Prism, Private Verifiable Set Computation over Multi-Owner Outsourced Databases, a secret-sharing-based approach to computing private set operations (i.e., intersection and union), as well as aggregates, over outsourced databases belonging to multiple owners. Prism enables data owners to pre-load their data onto non-colluding servers and exploits the additive and multiplicative properties of secret shares to compute the above operations in (at most) two rounds of communication between the servers (which store the secret shares) and the querier, resulting in a very efficient implementation. Moreover, Prism requires no communication among the servers and supports result-verification techniques for each operation to detect malicious adversaries. Experimental results show that Prism scales both in the number of data owners and in database size, whereas prior approaches do not
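As a rough illustration of the additive half of this idea, the following Python sketch (a simplification, not Prism's actual protocol, which additionally masks intermediate values and supports result verification) has each owner secret-share the characteristic vector of its set across two non-colluding servers; the servers aggregate shares locally, and the querier reconstructs per-element counts, from which intersection and union follow. Note that revealing raw counts leaks more than the set results themselves; Prism prevents this with additional randomization.

```python
import random

P = 2**61 - 1          # a large prime modulus (illustrative choice)
DOMAIN = range(10)     # small element domain for the example

def share(x):
    """Split x into two additive shares modulo P."""
    r = random.randrange(P)
    return r, (x - r) % P

def share_set(s):
    """Secret-share the characteristic vector of set s across two servers."""
    shares0, shares1 = [], []
    for e in DOMAIN:
        a, b = share(1 if e in s else 0)
        shares0.append(a)
        shares1.append(b)
    return shares0, shares1

# Three data owners pre-load their shares onto two non-colluding servers.
owner_sets = [{1, 2, 3, 7}, {2, 3, 5, 7}, {0, 2, 7, 9}]
server0 = [0] * len(DOMAIN)
server1 = [0] * len(DOMAIN)
for s in owner_sets:
    s0, s1 = share_set(s)
    server0 = [(x + y) % P for x, y in zip(server0, s0)]
    server1 = [(x + y) % P for x, y in zip(server1, s1)]

# Each server sends its aggregated share vector to the querier (one round).
counts = [(x + y) % P for x, y in zip(server0, server1)]

n = len(owner_sets)
intersection = {e for e in DOMAIN if counts[e] == n}
union = {e for e in DOMAIN if counts[e] > 0}
print(intersection)  # {2, 7}
print(union)         # {0, 1, 2, 3, 5, 7, 9}
```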
Pattern Devoid Cryptography
Pattern-loaded ciphers are at risk of being compromised through deeper patterns discovered first by an attacker, a reality that gives a built-in advantage to premier cryptanalysis institutions. On the flip side, the risk of hidden mathematics and faster computing undermines confidence in the prevailing cipher products. To avoid this risk, one can instead build security on lavish quantities of randomness, as Gilbert S. Vernam did in 1917. With modern technology, the same idea of randomness-based security can be implemented without the inconveniences of the old Vernam cipher. These Trans Vernam Ciphers (TVC) project security through a pattern-devoid cipher: having no pattern to lean on, the attacker has no pattern to crack. Instead, the attacker faces (i) a properly randomized shared cryptographic key combined with (ii) unilateral randomness, originated ad hoc by the transmitter without pre-coordination with the recipient. The unlimited unilateral randomness, together with the shared-key randomness, can project as much security as desired, up to and including Vernam levels. Assorted Trans Vernam ciphers are categorized and reviewed, presenting a cogent case for a cryptographic pathway in which transmitted secrets are credibly secured against attackers with faster computers and better mathematicians
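For reference, here is a minimal Python sketch of the classical Vernam construction (the one-time pad) that the abstract builds on; the TVC extensions involving unilateral, transmitter-originated randomness are not shown.

```python
import secrets

def vernam_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each plaintext byte with a fresh random key byte."""
    assert len(key) >= len(plaintext), "key must be at least as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

def vernam_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return vernam_encrypt(ciphertext, key)

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # truly one-time: never reuse this key
ct = vernam_encrypt(message, key)
assert vernam_decrypt(ct, key) == message
```

With a uniformly random, never-reused key, the ciphertext is statistically independent of the message, which is the pattern-free security level the abstract refers to as "Vernam levels".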
Concurrent Realizability on Conjunctive Structures
This work explores the algebraic structure of concurrent processes and their behavior, independently of the particular formalism used to define them. We propose a new algebraic structure called conjunctive involutive monoidal algebra (CIMA) as a basis for an algebraic presentation of concurrent realizability, following ideas of the algebraization program already developed in the realm of classical and intuitionistic realizability. In particular, we show how any CIMA provides a sound interpretation of multiplicative linear logic. This new structure involves, in addition to the tensor and the orthogonal map, a parallel composition. We define a reference model of this structure as induced by a standard process calculus, and we use this model to prove that parallel composition cannot be defined from the conjunctive structure alone
Algorithms for sparse convolution and sublinear edit distance
In this PhD thesis on fine-grained algorithm design and complexity, we investigate output-sensitive and sublinear-time algorithms for two important problems. (1) Sparse Convolution: Computing the convolution of two vectors is a basic algorithmic primitive with applications across all of Computer Science and Engineering. In the sparse convolution problem we assume that the input and output vectors have at most t nonzero entries, and the goal is to design algorithms whose running times depend on t. For the special case where all entries are nonnegative, which is particularly important for algorithm design, it has been known for twenty years that sparse convolutions can be computed in near-linear randomized time O(t log^2 n). In this thesis we develop a randomized algorithm with running time O(t log t), which is optimal under some mild assumptions, and the first near-linear deterministic algorithm for sparse nonnegative convolution. We also present an application of these results, leading to seemingly unrelated fine-grained lower bounds against distance oracles in graphs. (2) Sublinear Edit Distance: The edit distance of two strings is a well-studied similarity measure with numerous applications in computational biology. While computing the edit distance exactly provably requires quadratic time, a long line of research has led to a constant-factor approximation algorithm in almost-linear time. Perhaps surprisingly, it is also possible to approximate the edit distance k within a large factor O(k) in sublinear time O~(n/k + poly(k)). We drastically improve the approximation factor of the known sublinear algorithms from O(k) to k^{o(1)} while preserving the O~(n/k + poly(k)) running time
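To pin down the sparse convolution problem, here is a naive dictionary-based Python sketch; it runs in O(t_a · t_b) time, far from the near-linear O(t log t) bound the thesis achieves, and is meant only to make the problem definition concrete.

```python
from collections import defaultdict

def sparse_convolution(a: dict, b: dict) -> dict:
    """Convolve two sparse vectors given as {index: value} maps.

    Computes (a * b)[k] = sum_i a[i] * b[k - i], iterating only over
    nonzero entries. With t_a and t_b nonzeros this takes O(t_a * t_b)
    time; the thesis's algorithms run in near-linear time in the output
    sparsity t (not shown here).
    """
    c = defaultdict(int)
    for i, x in a.items():
        for j, y in b.items():
            c[i + j] += x * y
    return dict(c)

f = {0: 1, 5: 2}
g = {3: 4, 5: 1}
print(sparse_convolution(f, g))  # {3: 4, 5: 1, 8: 8, 10: 2}
```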