54 research outputs found
Cardinal invariants of the continuum – A survey
Abstract These are expanded notes of a series of two lectures given at the meeting on axiomatic set theory at Kyōto University in November 2000. The lectures were intended to survey the state of the art of the theory of cardinal invariants of the continuum, and focused on the interplay between iterated forcing theory and cardinal invariants, as well as on important open problems. To round off the present written account of this survey, we also include sections on ZFC-inequalities between cardinal invariants, and on applications outside of set theory. However, due to the sheer size of the area, proofs had mostly to be left out. While more comprehensive than the original talks, the notes preserve the personal flavor of the latter. Some of the material included was presented in talks at other conferences.
Mathematical Logic: Proof Theory, Constructive Mathematics (hybrid meeting)
The Workshop "Mathematical Logic: Proof Theory, Constructive Mathematics" focused both on proofs as formal derivations in deductive systems and on the extraction of explicit computational content from given proofs in core areas of ordinary mathematics using proof-theoretic methods. The workshop contributed to the following research strands: interactions between foundations and applications; proof mining; constructivity in classical logic; modal logic and provability logic; proof theory and theoretical computer science; structural proof theory.
Number Theoretic Transform and Its Applications in Lattice-based Cryptosystems: A Survey
The number theoretic transform (NTT) is the most efficient method for multiplying two polynomials of high degree with integer coefficients, owing to its algorithmic and implementation advantages, and it is consequently widely used and particularly fundamental in practical implementations of lattice-based cryptographic schemes. In particular, recent works have shown that the NTT can be utilized in such schemes even without NTT-friendly rings, and can outperform other multiplication algorithms. In this paper, we first review the basic concepts of polynomial multiplication, convolution and the NTT. Subsequently, we systematically introduce basic radix-2 fast NTT algorithms in an algebraic way via the Chinese Remainder Theorem. We then elaborate on recent advances in methods for weakening the restrictions on the parameter conditions of the NTT. Furthermore, we systematically discuss how to choose an appropriate NTT algorithm strategy for various given rings. Next, we describe the applications of the NTT in the lattice-based cryptographic schemes of the NIST post-quantum cryptography standardization competition. Finally, we suggest some possible future research directions.
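The radix-2 NTT that the survey introduces can be illustrated in a short sketch. The following is a minimal, illustrative Python implementation (not taken from the survey) of NTT-based polynomial multiplication, assuming the common NTT-friendly prime p = 998244353 = 119·2²³ + 1 with primitive root g = 3; pointwise multiplication of the transforms realizes the cyclic convolution.

```python
P = 998244353  # NTT-friendly prime: 119 * 2^23 + 1
G = 3          # a primitive root modulo P

def ntt(a, invert=False):
    """Iterative radix-2 Cooley-Tukey NTT over Z/P; len(a) must be a power of two."""
    n = len(a)
    a = a[:]
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(G, (P - 1) // length, P)  # primitive length-th root of unity
        if invert:
            w = pow(w, P - 2, P)
        for i in range(0, n, length):
            wn = 1
            for k in range(i, i + length // 2):
                u, v = a[k], a[k + length // 2] * wn % P
                a[k] = (u + v) % P
                a[k + length // 2] = (u - v) % P
                wn = wn * w % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)  # scale by n^{-1} mod P on the inverse transform
        a = [x * n_inv % P for x in a]
    return a

def poly_mul(f, g):
    """Multiply integer polynomials via NTT (coefficients reduced mod P)."""
    n = 1
    while n < len(f) + len(g) - 1:
        n <<= 1
    fa = ntt(f + [0] * (n - len(f)))
    ga = ntt(g + [0] * (n - len(g)))
    prod = [x * y % P for x, y in zip(fa, ga)]
    return ntt(prod, invert=True)[:len(f) + len(g) - 1]
```

For example, poly_mul([1, 2, 3], [4, 5]) computes (1 + 2x + 3x²)(4 + 5x); lattice-based schemes use the same transform structure over their specific quotient rings.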
19th Brazilian Logic Conference: Book of Abstracts
This is the book of abstracts of the 19th Brazilian Logic Conference. The Brazilian Logic Conferences (EBL) is one of the most traditional logic conferences in South America. Organized by the Brazilian Logic Society (SBL), its main goal is to promote the dissemination of research in logic in a broad sense. It has been occurring since 1979, congregating logicians of different fields – mostly philosophy, mathematics and computer science – and with different backgrounds – from undergraduate students to senior researchers. The meeting is an important moment for the Brazilian and South American logical community to join together and discuss recent developments of the field. The areas of logic covered in the conference spread over foundations and philosophy of science, analytic philosophy, philosophy and history of logic, mathematics, computer science, informatics, linguistics and artificial intelligence. Previous editions of the EBL have been a great success, attracting researchers from all over Latin America and elsewhere.
The 19th edition of EBL takes place from May 6-10, 2019, in the beautiful city of João Pessoa, on the northeast coast of Brazil. It is conjointly organized by the Federal University of Paraíba (UFPB), whose main campus is located in João Pessoa, the Federal University of Campina Grande (UFCG), whose main campus is located in the nearby city of Campina Grande (the second-largest city in Paraíba state), and SBL. It is sponsored by UFPB, UFCG, the Brazilian Council for Scientific and Technological Development (CNPq) and the State Ministry of Education, Science and Technology of Paraíba. It takes place at Hotel Luxxor Nord Tambaú, in a privileged location right in front of Tambaú beach, one of João Pessoa's most famous beaches.
A general insertion theorem for uniform locales
A general insertion theorem due to Preiss and Vilimovský is extended to the category of locales. More precisely, given a preuniform structure on a locale we provide necessary and sufficient conditions for a pair f ≥ g of localic real functions to admit a uniformly continuous real function in-between. As corollaries, separation and extension results for uniform locales are proved. The proof of the main theorem relies heavily on (pre-)diameters in locales as a substitute for classical pseudometrics. On the way, several general properties concerning these (pre-)diameters are also shown.
Formal concept matching and reinforcement learning in adaptive information retrieval
The superiority of the human brain in information retrieval (IR) tasks seems to come firstly
from its ability to read and understand the concepts, ideas or meanings central to documents, in
order to reason out the usefulness of documents to information needs, and secondly from its
ability to learn from experience and be adaptive to the environment. In this work we attempt to
incorporate these properties into the development of an IR model to improve document
retrieval. We investigate the applicability of concept lattices, which are based on the theory of
Formal Concept Analysis (FCA), to the representation of documents. This allows the use of
more elegant representation units, as opposed to keywords, in order to better capture
concepts/ideas expressed in natural language text. We also investigate the use of a
reinforcement learning strategy to learn and improve document representations, based on the
information present in query statements and user relevance feedback. Features or concepts of
each document/query, formulated using FCA, are weighted separately with respect to the
documents they are in, and organised into separate concept lattices according to a subsumption
relation. Furthermore, each concept lattice is encoded in a two-layer neural network structure
known as a Bidirectional Associative Memory (BAM), for efficient manipulation of the
concepts in the lattice representation. This avoids implementation drawbacks faced by other
FCA-based approaches. Retrieval of a document for an information need is based on concept
matching between concept lattice representations of a document and a query. The learning
strategy works by making the similarity of relevant documents stronger and non-relevant
documents weaker for each query, depending on the relevance judgements of the users on
retrieved documents. Our approach is radically different from existing FCA-based approaches in
the following respects: concept formulation; weight assignment to object-attribute pairs; the
representation of each document in a separate concept lattice; and encoding concept lattices in
BAM structures. Furthermore, in contrast to the traditional relevance feedback mechanism, our
learning strategy makes use of relevance feedback information to enhance document
representations, thus making the document representations dynamic and adaptive to the user
interactions. The results obtained on the CISI, CACM and ASLIB Cranfield collections are
presented and compared with published results. In particular, the performance of the system is
shown to improve significantly as the system learns from experience.
The School of Computing, University of Plymouth, UK
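The core FCA operation the thesis builds on, deriving formal concepts (extent, intent pairs) from a binary object-attribute context, can be sketched briefly. The following toy Python example is illustrative only and is not from the thesis: the document/term context, names, and brute-force enumeration are all assumptions, chosen to keep the closure structure visible.

```python
from itertools import combinations

# Hypothetical binary context: documents and the index terms they contain.
context = {
    "d1": {"retrieval", "lattice"},
    "d2": {"retrieval", "learning"},
    "d3": {"lattice", "learning"},
}
objects = set(context)
attributes = set().union(*context.values())

def intent(objs):
    """Attributes shared by every object in objs (all attributes for the empty set)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o in objects if attrs <= context[o]}

def concepts():
    """All formal concepts (extent, intent), found by closing every object subset."""
    found = set()
    for r in range(len(objects) + 1):
        for objs in combinations(sorted(objects), r):
            b = intent(set(objs))   # derive the shared attributes
            a = extent(b)           # close back to the full extent
            found.add((frozenset(a), frozenset(b)))
    return found
```

Ordering the resulting concepts by extent inclusion yields the concept lattice; the thesis then weights such concepts per document and encodes each lattice in a BAM for efficient matching.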
Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms
This thesis presents design and implementation approaches for parallel algorithms of computer algebra. We use algorithmic skeletons as well as further approaches, such as data parallel arithmetic and actors. We have implemented skeletons for divide and conquer algorithms and for some special parallel loops that we call "repeated computation with a possibility of premature termination". We introduce in this thesis a rational data parallel arithmetic. We focus on parallel symbolic computation algorithms; for these algorithms, our arithmetic provides a generic parallelisation approach.
The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language. Moreover, it allows us to refrain from using two different languages, one for the implementation and one for the interface, in our implementation of computer algebra algorithms.
Further, this thesis presents methods for the evaluation and estimation of parallel execution times. We partition the parallel execution time into two components. One of them accounts for the quality of the parallelisation; we call it the "parallel penalty". The other is the sequential execution time. For the estimation, we predict both components separately, using statistical methods. This enables very confident estimations, while using drastically fewer measurement points than other methods. We have applied both our evaluation and estimation approaches to the parallel programs presented in this thesis. We have also used existing estimation methods.
We developed divide and conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen's matrix multiplication algorithm and the fast Fourier transform. The latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm. Specifically for our implementation of Strassen's algorithm, we have designed and implemented a divide and conquer skeleton based on actors. We have implemented the parallel fast Fourier transform, and not only did we use new divide and conquer skeletons, but we also developed a map-and-transpose skeleton. It enables a good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We have analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide and conquer programs.
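The split/recurse/combine structure that a divide and conquer skeleton abstracts can be seen in the Karatsuba algorithm the thesis parallelises. The following sequential Python sketch is illustrative only (the thesis works in Eden/Haskell); the threshold and splitting scheme are assumptions, and a skeleton would evaluate the three recursive products in parallel.

```python
def karatsuba(x, y):
    """Multiply non-negative integers using three recursive half-size products."""
    if x < 16 or y < 16:
        return x * y  # base case: fall back to machine multiplication
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)  # split x into high and low halves
    yh, yl = y >> n, y & ((1 << n) - 1)  # split y likewise
    a = karatsuba(xh, yh)                # product of high parts
    b = karatsuba(xl, yl)                # product of low parts
    c = karatsuba(xh + xl, yh + yl)      # cross terms via a single extra product
    # combine: x*y = a*2^(2n) + (c - a - b)*2^n + b
    return (a << (2 * n)) + ((c - a - b) << n) + b
```

Only the split and combine steps are problem-specific; the recursion pattern itself is what the skeleton captures and distributes across processing elements.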
This thesis presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, like parMap, farm and workpool, with a premature termination property. We use this to implement the so-called "parallel repeated computation", a special form of a speculative parallel loop. We have implemented two probabilistic primality tests: the Rabin–Miller test and the Jacobi sum test. We parallelised both with our approach. We analysed the task distribution and identified fitting configurations for the Jacobi sum test. We have shown formally that the Jacobi sum test can be implemented in parallel. Subsequently, we parallelised it, analysed the load balancing issues, and produced an optimisation. The latter enabled a good implementation, as verified using the parallel penalty. We have also estimated the performance of the tests for further input sizes and numbers of processing elements. The parallelisation of the Jacobi sum test and our generic parallelisation scheme for repeated computation are our original contributions.
The data parallel arithmetic was defined not only for integers, for which it is already known, but also for rationals. We handle the common factors of the numerator or denominator of the fraction with the modulus in a novel manner. This is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we have parallelised the determinant computation using Gauß elimination. As always, we have performed a task distribution analysis and an estimation of the parallel execution time of our implementation. A similar computation in Maple emphasised the potential of our approach. Data parallel arithmetic enables the parallelisation of entire classes of computer algebra algorithms.
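The integer case of multiple-residue arithmetic, the starting point the thesis extends to rationals, can be sketched compactly: each value is held as residues modulo pairwise coprime moduli, arithmetic proceeds independently (and hence in parallel) per modulus, and the result is reconstructed via the Chinese Remainder Theorem. The Python example below is illustrative only; the moduli are assumptions, and the thesis's novel handling of rational numerators and denominators is not shown here.

```python
from math import prod

MODULI = [10007, 10009, 10037]  # pairwise coprime; illustrative choice

def to_residues(x):
    """Represent an integer by its residues modulo each modulus."""
    return [x % m for m in MODULI]

def add(a, b):
    """Componentwise addition: each residue channel is independent."""
    return [(x + y) % m for x, y, m in zip(a, b, MODULI)]

def mul(a, b):
    """Componentwise multiplication, likewise independent per channel."""
    return [(x * y) % m for x, y, m in zip(a, b, MODULI)]

def reconstruct(res):
    """Chinese Remainder reconstruction of the unique value mod prod(MODULI)."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m) is the modular inverse
    return x % M
```

Because add and mul touch each residue channel in isolation, the channels can be distributed over processing elements with no intermediate communication, which is what makes the representation attractive for data parallel computer algebra.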
Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisations of computer algebra algorithms.