
    Predicting the approximate functional behaviour of physical systems

    This dissertation addresses the problem of the computer prediction of the approximate behaviour of physical systems describable by ordinary differential equations.
    Previous approaches to behavioural prediction have focused either on an exact mathematical description or on a qualitative account. We advocate a middle ground: a representation coarser than an exact mathematical solution yet more specific than a qualitative one. What is required is a mathematical expression, simpler than the exact solution, whose qualitative features mirror those of the actual solution and whose functional form captures the principal parameter relationships underlying the behaviour of the real system. We term such a representation an approximate functional solution.
    Approximate functional solutions are superior to qualitative descriptions because they reveal specific functional relationships, restore a quantitative time scale to a process and support more sophisticated comparative analysis queries. Moreover, they can be superior to exact mathematical solutions by emphasizing comprehensibility, adequacy and practical utility over precision.
    Two strategies for constructing approximate functional solutions are proposed. The first abstracts the original equation, predicts behaviour in the abstraction space and maps this back to the approximate functional level. Specifically, analytic abduction exploits qualitative simulation to predict the qualitative properties of the solution and uses this knowledge to guide the selection of a parameterized trial function, which is then tuned with respect to the differential equation. In order to limit the complexity of a proposed approximate functional solution, and hence maintain its comprehensibility, back-of-the-envelope reasoning is used to simplify overly complex expressions in a magnitude extreme. If no function is recognised which matches the predicted behaviour, segment calculus is called upon to find a composite function built from known primitives and a set of operators. At the very least, segment calculus identifies a plausible structure for the form of the solution (e.g. that it is a composition of two unknown functions). Equation parsing capitalizes on this partial information to look for a set of termwise interactions which, when interpreted, expose a particular solution of the equation.
    The second, and more direct, strategy for constructing an approximate functional solution is embodied in the closed form approximation technique, which extends approximation methods to equations that lack a closed form solution. It involves solving the differential equation exactly, as an infinite series, and obtaining an approximate functional solution by constructing a closed form function whose Taylor series is close to that of the exact solution.
    These techniques dovetail to achieve a style of reasoning closer to that of an engineer or physicist than to that of a mathematician. The key difference is that the goal of finding the correct solution of the differential equation is sacrificed in favour of finding an approximation which is adequate for the purpose to which the knowledge will be put. Applications to Intelligent Tutoring and Design Support Systems are suggested.
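    To make the first strategy concrete, the sketch below (Python; an illustration of the idea, not code from the dissertation) mimics analytic abduction on the undamped pendulum equation theta'' + sin(theta) = 0: qualitative reasoning predicts a steady oscillation, which suggests the parameterized trial function theta(t) ≈ A·cos(omega·t), and the frequency omega is then tuned by minimising the residual of the differential equation.

```python
import numpy as np
from scipy.optimize import least_squares

# Pendulum equation: theta'' + sin(theta) = 0 with theta(0) = A, theta'(0) = 0.
# Qualitative simulation predicts a steady oscillation, suggesting the trial
# function theta(t) ~ A * cos(omega * t); we tune omega against the equation.

A = 1.0                              # initial amplitude in radians
t = np.linspace(0.0, 10.0, 200)      # sample points on which the residual is evaluated

def residual(params):
    omega = params[0]
    theta = A * np.cos(omega * t)
    theta_dd = -A * omega**2 * np.cos(omega * t)   # exact second derivative of the trial function
    return theta_dd + np.sin(theta)                # how badly the trial function violates the ODE

fit = least_squares(residual, x0=[1.0])
print("tuned frequency:", fit.x[0])   # comes out below 1: larger swings oscillate more slowly
```

    The tuned expression A·cos(omega·t) is exactly the kind of approximate functional solution described above: simpler than the true solution, qualitatively faithful, and explicit about how the behaviour depends on the parameters.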

    Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms

    This thesis presents design and implementation approaches for parallel algorithms of computer algebra. We use algorithmic skeletons as well as further approaches, such as data parallel arithmetic and actors. We have implemented skeletons for divide and conquer algorithms and for some special parallel loops that we call ‘repeated computation with a possibility of premature termination’. We introduce in this thesis a rational data parallel arithmetic. We focus on parallel symbolic computation algorithms; for these algorithms our arithmetic provides a generic parallelisation approach. The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language. Moreover, it allows us to avoid using two different languages, one for the implementation and one for the interface, in our implementation of computer algebra algorithms. Further, this thesis presents methods for evaluation and estimation of parallel execution times. We partition the parallel execution time into two components. One of them accounts for the quality of the parallelisation; we call it the ‘parallel penalty’. The other is the sequential execution time. For the estimation, we predict both components separately, using statistical methods. This enables very confident estimations, while using drastically fewer measurement points than other methods. We have applied both our evaluation and estimation approaches to the parallel programs presented in this thesis. We have also used existing estimation methods. We developed divide and conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen’s matrix multiplication algorithm and the fast Fourier transform. The latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm. Specifically for our implementation of Strassen’s algorithm, we designed and implemented a divide and conquer skeleton based on actors. We have implemented the parallel fast Fourier transform, and not only did we use new divide and conquer skeletons, but we also developed a map-and-transpose skeleton, which enables good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We have analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide and conquer programs. This thesis further presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, like parMap, farm and workpool, with a premature termination property. We use this to implement the so-called ‘parallel repeated computation’, a special form of a speculative parallel loop. We have implemented two probabilistic primality tests: the Rabin–Miller test and the Jacobi sum test. We parallelised both with our approach. We analysed the task distribution and determined suitable configurations of the Jacobi sum test. We have shown formally that the Jacobi sum test can be implemented in parallel. Subsequently, we parallelised it, analysed the load balancing issues, and produced an optimisation. The latter enabled a good implementation, as verified using the parallel penalty. We have also estimated the performance of the tests for further input sizes and numbers of processing elements.
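    As a flavour of the skeleton idea, here is a minimal Python sketch (the thesis itself uses Eden/Haskell, and the recursive calls below are sequential rather than parallel Eden processes) of a generic divide and conquer skeleton instantiated with Karatsuba multiplication:

```python
def divide_and_conquer(trivial, solve, split, combine, problem):
    """Generic divide and conquer skeleton: the control structure is fixed here,
    while the problem-specific parts are passed in as functions."""
    if trivial(problem):
        return solve(problem)
    subproblems = split(problem)
    # A parallel skeleton would evaluate these recursive calls concurrently.
    subresults = [divide_and_conquer(trivial, solve, split, combine, p) for p in subproblems]
    return combine(problem, subresults)

def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers as an instance of the skeleton."""
    def trivial(p):
        return p[0] < 16 or p[1] < 16
    def solve(p):
        return p[0] * p[1]
    def split(p):
        a, b = p
        m = max(a.bit_length(), b.bit_length()) // 2
        hi_a, lo_a = a >> m, a & ((1 << m) - 1)
        hi_b, lo_b = b >> m, b & ((1 << m) - 1)
        # Three recursive products instead of four.
        return [(hi_a, hi_b), (lo_a, lo_b), (hi_a + lo_a, hi_b + lo_b)]
    def combine(p, results):
        a, b = p
        m = max(a.bit_length(), b.bit_length()) // 2
        hh, ll, mixed = results
        return (hh << (2 * m)) + ((mixed - hh - ll) << m) + ll
    return divide_and_conquer(trivial, solve, split, combine, (x, y))

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```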
    Parallelisation of the Jacobi sum test and our generic parallelisation scheme for the repeated computation are original contributions. The data parallel arithmetic was defined not only for integers, which is already known, but also for rationals. We handled the common factors of the numerator or denominator of the fraction with the modulus in a novel manner; this is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we have parallelised the determinant computation using Gauß elimination. As before, we have performed task distribution analysis and estimation of the parallel execution time of our implementation. A similar computation in Maple emphasised the potential of our approach. Data parallel arithmetic enables parallelisation of entire classes of computer algebra algorithms. Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisations of computer algebra algorithms.
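    The idea behind the multiple-residue rational arithmetic can be illustrated as follows (a simplified Python sketch, not the thesis implementation; it assumes denominators coprime to every modulus and omits the novel treatment of common factors with the modulus mentioned above): a rational a/b is encoded as a·b⁻¹ in each residue field, arithmetic is then purely component-wise, and hence parallelisable, and the result is recovered by Chinese remaindering followed by rational reconstruction.

```python
from math import prod

MODULI = [101, 103, 107, 109]            # pairwise coprime primes; M is their product

def to_residues(num, den):
    """Encode the rational num/den as num * den^(-1) in each residue field."""
    return [num * pow(den, -1, m) % m for m in MODULI]

def mul(xs, ys):
    """Component-wise multiplication; addition and subtraction work the same way."""
    return [(x * y) % m for x, y, m in zip(xs, ys, MODULI)]

def crt(residues):
    """Combine the residues into a single value modulo M (Chinese remainder theorem)."""
    M = prod(MODULI)
    u = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        u = (u + r * Mi * pow(Mi, -1, m)) % M
    return u, M

def rational_reconstruct(u, M):
    """Recover n/d with n ≡ u·d (mod M) and |n|, d below sqrt(M/2), via extended Euclid."""
    bound = int((M // 2) ** 0.5)
    r0, t0, r1, t1 = M, 0, u % M, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1

x = to_residues(3, 7)
y = to_residues(5, 11)
u, M = crt(mul(x, y))
print(rational_reconstruct(u, M))        # expected (15, 77), i.e. 3/7 * 5/11
```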

    Single-Input Signature Register-Based Time Delay Reservoir

    Machine learning continues to play a critical role in our society. The ability to automatically identify intricate relationships in large volumes of data has proven incredibly useful for problems such as automatic speech recognition and image processing. In particular, neural networks have become increasingly popular in a wide set of application domains, given their ability to solve complex problems and process high-dimensional data. However, the impressive performance of state-of-the-art neural networks comes at the cost of large area and power consumption for the computation resources used in training and inference. As a result, a growing area of research concerns hardware implementations of neural networks. This work proposes a hardware-friendly design for a time-delay reservoir (TDR), a type of recurrent neural network. TDRs represent one class of reservoir computing neural network topologies, which employ random spatio-temporal feature extraction from time series data in order to produce a linearly separable set of features. Reservoir computing topologies differ from traditional recurrent neural networks in that their recurrent weights are fixed, and only the feedforward output weights need to be trained, usually with linear regression. Previous work on TDRs includes photonic, software, and both digital and analog electronic implementations. This work adds to that body of research by exploring the design space of a novel TDR based on single-input signature registers (SISRs), which are common digital circuits used for built-in self-test. The work is motivated by the structural similarity (a delayed feedback loop) between TDRs and SISRs, and by the possibility of using SISRs for both conventional testing and machine learning within a single chip. The proposed designs can perform classification on multivariate datasets and perform better than a traditional TDR with quantized reservoir states on parity check, MNIST classification, and temperature prediction tasks. Classification accuracies of up to 100% were observed for some configurations of the SISR on the parity check task, and accuracies of up to 85% were observed for MNIST classification. We also observe overfitting on a temperature prediction task with longer data sequences and provide analyses of the results based on the reservoir dynamics, as measured by the rate of divergence between SISR states and the SISR period.
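    The basic pipeline can be sketched in a few lines of Python (illustrative only: the register width, tap positions, task and readout below are assumptions, not the design evaluated in this work, and the sketch omits the input masking and virtual-node timing of a full TDR): drive an SISR with an input bit stream, collect its successive states as reservoir features, and train a linear readout with least squares.

```python
import numpy as np

N_BITS = 8
TAPS = [7, 5, 4, 3]      # hypothetical feedback taps; a real design would pick a primitive polynomial

def sisr_states(bits):
    """Shift each input bit into the register, XOR it with the feedback taps,
    and record the full register state after every step."""
    state = [0] * N_BITS
    trace = []
    for b in bits:
        feedback = b
        for tap in TAPS:
            feedback ^= state[tap]
        state = [feedback] + state[:-1]
        trace.append(list(state))
    return np.array(trace, dtype=float)

rng = np.random.default_rng(0)
u = rng.integers(0, 2, size=2000)                                         # random input bit stream
X = sisr_states(u)                                                        # reservoir states as features
y = np.array([u[i] ^ u[i - 1] if i > 0 else 0 for i in range(len(u))])    # toy parity target

split = 1500
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)                 # train the linear readout
pred = (X[split:] @ w > 0.5).astype(int)
print("test accuracy:", (pred == y[split:]).mean())
```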

    Homomorphic signcryption with public plaintext-result checkability

    Signcryption, originally proposed by Zheng (CRYPTO '97), is a useful cryptographic primitive that provides strong confidentiality and integrity guarantees. This article addresses the question of whether it is possible to homomorphically compute arbitrary functions on signcrypted data. The answer is affirmative, and a new cryptographic primitive, homomorphic signcryption (HSC) with public plaintext-result checkability, is proposed that both allows arbitrary functions to be evaluated over signcrypted data and makes it possible for anyone to publicly test whether a given ciphertext is the signcryption of the message under the key. Two notions of message privacy are also investigated: weak message privacy and message privacy, depending on whether the original signcryptions used in the evaluation are disclosed or not. More precisely, the contributions are two-fold: (i) two different definitions of HSC with public plaintext-result checkability are provided for arbitrary functions in terms of syntax, unforgeability and message privacy, depending on whether the homomorphic computation is performed in a private or in a public evaluation setting, and (ii) two HSC constructions are proposed, one for a public evaluation setting and another for a private evaluation setting, and security is formally proved.
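    As a rough guide to the syntax only (the names and types below are illustrative placeholders, not the paper's formal definitions), such a scheme can be thought of as the following Python interface:

```python
from typing import Any, Callable, List, Protocol

class HSC(Protocol):
    """Sketch of the interface of homomorphic signcryption with public
    plaintext-result checkability; method names are assumptions, not the paper's."""

    def keygen(self) -> tuple:
        """Generate a key pair (public key, secret key) for a party."""
        ...

    def signcrypt(self, sk_sender: Any, pk_receiver: Any, message: Any) -> Any:
        """Produce a signcryption of the message, providing confidentiality and integrity at once."""
        ...

    def evaluate(self, f: Callable, ciphertexts: List[Any]) -> Any:
        """Homomorphically apply the function f to a list of signcryptions."""
        ...

    def check(self, pk_sender: Any, message: Any, ciphertext: Any) -> bool:
        """Public test: is the ciphertext a (possibly evaluated) signcryption of the message?"""
        ...

    def unsigncrypt(self, sk_receiver: Any, pk_sender: Any, ciphertext: Any) -> Any:
        """Recover the message (or the evaluated result), or reject an invalid ciphertext."""
        ...
```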

    Long-Term Confidential Secret Sharing-Based Distributed Storage Systems

    Secret sharing-based distributed storage systems can provide long-term protection of the confidentiality and integrity of stored data. This is achieved by periodically refreshing the stored shares and by checking the validity of the generated shares through additional audit data. However, in most real-life environments (e.g. companies), this type of solution is not optimal, for three main reasons. Firstly, the access rules of state-of-the-art secret sharing-based distributed storage systems do not match the hierarchical organization in place in these environments. Secondly, data owners are not supported in selecting the most suitable storage servers when first setting up the system, nor in keeping it secure in the long term. Thirdly, state-of-the-art approaches require computationally demanding, impractical and expensive building blocks that do not scale well. In this thesis, we mitigate the above-mentioned issues and contribute to the transition from theory to more practical secret sharing-based long-term secure distributed storage systems. Firstly, we show that distributed storage systems can be based on hierarchical secret sharing schemes by providing efficient and secure algorithms whose access rules can be adapted to the hierarchical organization of a company and to its future modifications. Secondly, we introduce a decision support system that helps data owners set up and maintain a distributed storage system. More precisely, on the one hand, we support data owners in selecting the storage servers making up the distributed storage system. We do this by providing them with scores that reflect the servers' actual performance, here used in a broad sense and not tied to a specific metric. These scores are the output of a novel performance scoring mechanism based on the behavioral model of rational agents, as opposed to the classical good/bad model. On the other hand, we support data owners in choosing the right secret sharing scheme parameters given the performance figures of the storage servers, and we guide them in updating these parameters as the performance figures change, so as to keep the system secure in the long term. Thirdly, we introduce efficient and affordable distributed storage systems based on a trusted execution environment that correctly outsources the data and periodically computes valid shares. This way, fewer information-theoretically secure channels have to be established for confidentiality guarantees, and more efficient primitives are used to safeguard the integrity of the data. We also present a third-party privacy-preserving mechanism that protects the integrity of the data by checking the validity of the shares.
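    For intuition about two core ingredients, share refreshing and reconstruction, here is a minimal Python sketch of plain (non-hierarchical) Shamir secret sharing with a proactive refresh; the audit data, hierarchical access structures and decision support described in the thesis are not modelled.

```python
import random

PRIME = 2**61 - 1                       # field modulus (a Mersenne prime)

def share(secret, n, t):
    """Split the secret into n shares so that any t of them reconstruct it (Shamir)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

def refresh(shares, t):
    """Proactive refresh: add a fresh sharing of zero, so old shares become useless
    while the stored secret stays the same."""
    zero_shares = share(0, len(shares), t)
    return [(x, (y + zy) % PRIME) for (x, y), (_, zy) in zip(shares, zero_shares)]

shares = share(42, n=5, t=3)
shares = refresh(shares, t=3)
print(reconstruct(shares[:3]))          # any 3 refreshed shares still yield 42
```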

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades, with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
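    A toy Python sketch of the justification-based view (propositional sentences encoded as Boolean functions; purely illustrative) shows what both approaches act on: the minimal subsets of the knowledge base that entail the unwanted consequence. A classical repair deletes one sentence from each such subset, whereas a gentle repair or pseudo-contraction replaces it with a weaker sentence that no longer completes the entailment.

```python
from itertools import product, combinations

ATOMS = ["a", "b", "c"]

def entails(sentences, phi):
    """Brute-force entailment check: phi must hold in every model of the sentences."""
    for values in product([False, True], repeat=len(ATOMS)):
        model = dict(zip(ATOMS, values))
        if all(f(model) for _, f in sentences) and not phi(model):
            return False
    return True

def justifications(kb, phi):
    """All minimal subsets of kb that entail phi."""
    found = []
    for size in range(1, len(kb) + 1):
        for subset in combinations(kb, size):
            if entails(subset, phi) and not any(set(j) <= set(subset) for j in found):
                found.append(subset)
    return found

kb = [("a",      lambda m: m["a"]),
      ("a -> b", lambda m: (not m["a"]) or m["b"]),
      ("b -> c", lambda m: (not m["b"]) or m["c"])]

def unwanted(m):
    return m["c"]

for j in justifications(kb, unwanted):
    print("justification for c:", [name for name, _ in j])
```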

    Contributions to Confidentiality and Integrity Algorithms for 5G

    The confidentiality and integrity algorithms in cellular networks protect the transmission of user and signaling data over the air between users and the network, e.g., the base stations. There are three standardised cryptographic suites for confidentiality and integrity protection in 4G, which are based on the AES, SNOW 3G, and ZUC primitives, respectively. These primitives are used to provide a 128-bit security level and are usually implemented in hardware, e.g., using IP (intellectual property) cores, and thus can be quite efficient. When we come to 5G, the innovative network architecture and high-performance demands pose new challenges to security. For confidentiality and integrity protection, there are new requirements on the underlying cryptographic algorithms. Specifically, these algorithms should: 1) provide 256 bits of security to protect against attackers equipped with quantum computing capabilities; and 2) provide at least 20 Gbps (gigabits per second) speed in pure software environments, which is the downlink peak data rate in 5G. The reason for considering software environments is that the encryption in 5G will likely be moved to the cloud and implemented in software. Therefore, it is crucial to investigate existing algorithms in 4G, checking whether they can satisfy the 5G requirements in terms of security and speed, and possibly to propose new dedicated algorithms targeting these goals. This is the motivation of this thesis, which focuses on the confidentiality and integrity algorithms for 5G. The results can be summarised as follows.
    1. We investigate the security of SNOW 3G under 256-bit keys and propose two linear attacks against it with complexities 2^172 and 2^177, respectively. These cryptanalysis results indicate that SNOW 3G cannot provide the full 256-bit security level.
    2. We design spectral tools for linear cryptanalysis and apply these tools to investigate the security of ZUC-256, the 256-bit version of ZUC. We propose a distinguishing attack against ZUC-256 with complexity 2^236, which is 2^20 times faster than exhaustive key search.
    3. We design a new stream cipher called SNOW-V in response to the new requirements for 5G confidentiality and integrity protection, in terms of security and speed. SNOW-V can provide a 256-bit security level and achieve a speed as high as 58 Gbps in software, based on our extensive evaluation. The cipher is currently under evaluation in ETSI SAGE (Security Algorithms Group of Experts) as a promising candidate for 5G confidentiality and integrity algorithms.
    4. We perform deeper cryptanalysis of SNOW-V to ensure that two common cryptanalysis techniques, guess-and-determine attacks and linear cryptanalysis, do not apply to SNOW-V faster than exhaustive key search.
    5. We introduce two minor modifications to SNOW-V and propose an extreme-performance variant, called SNOW-Vi, in response to feedback that SNOW-V does not fully cover some use cases. SNOW-Vi covers more use cases, especially platforms with fewer capabilities. Its software speeds are 50% higher on average than SNOW-V's and can reach 92 Gbps.
    Besides this work on 5G confidentiality and integrity algorithms, the thesis is also devoted to local pseudorandom generators (PRGs).
    6. We investigate the security of local PRGs and propose two attacks against some constructions instantiated on the P5 predicate. The attacks improve existing results by a large gap and narrow down the secure parameter regime. We also extend the attacks to other local PRGs instantiated on general XOR-AND and XOR-MAJ predicates and provide some insight into the choice of safe parameters.
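    To fix ideas on the last item, a local PRG instantiated on an XOR-MAJ predicate looks roughly as follows (Python sketch; the locality, predicate arity and stretch are illustrative choices, not the parameter regimes analysed in the thesis). Every output bit applies the predicate to a small, fixed, public subset of the seed bits, so the security of the construction rests on the choice of predicate and of those subsets.

```python
import random

def xor_maj(bits, k):
    """XOR-MAJ predicate: XOR of the first k inputs, XORed with the majority of the remaining ones."""
    xor_part = 0
    for b in bits[:k]:
        xor_part ^= b
    maj_part = 1 if 2 * sum(bits[k:]) > len(bits[k:]) else 0
    return xor_part ^ maj_part

def local_prg(seed, m, locality=5, k=2):
    """Stretch a seed of n bits to m output bits; each output bit reads only `locality` seed positions."""
    n = len(seed)
    rng = random.Random(0)                 # fixes the public 'hypergraph' of seed positions
    subsets = [rng.sample(range(n), locality) for _ in range(m)]
    return [xor_maj([seed[i] for i in subset], k) for subset in subsets]

seed = [random.getrandbits(1) for _ in range(64)]
output = local_prg(seed, m=512)            # polynomial stretch: 64 seed bits -> 512 output bits
print(output[:16])
```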