
    Parametrizing Complex Hadamard Matrices

    The purpose of this paper is to introduce new parametric families of complex Hadamard matrices in two different ways. First, we prove that every real Hadamard matrix of order N >= 4 admits an affine orbit. This settles a recent open problem of Tadej and Zyczkowski, who asked whether a real Hadamard matrix can be isolated among complex ones. In particular, we apply our construction to the only (up to equivalence) real Hadamard matrix of order 12 and show that the arising affine family is different from all previously known examples. Second, we recall a well-known construction related to real conference matrices, and show how to introduce an affine parameter in the arising complex Hadamard matrices. This leads to new parametric families of orders 10 and 14. An interesting feature of both of our constructions is that the arising families cannot be obtained via Dita's general method. Our results extend the recent catalogue of complex Hadamard matrices, and may lead to direct applications in quantum-information theory. Comment: 16 pages; final version. Submitted to: European Journal of Combinatorics
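    The two defining conditions above (unimodular entries and pairwise-orthogonal rows, i.e. H H* = N I) can be checked directly. A minimal sketch follows, with the Fourier matrix as the canonical complex Hadamard example; the function names are illustrative and not taken from the paper:

```python
import numpy as np

def is_complex_hadamard(H, tol=1e-10):
    """Check the two defining properties of a complex Hadamard matrix:
    all entries of modulus 1, and H @ H^* = N * I (orthogonal rows)."""
    N = H.shape[0]
    unimodular = np.allclose(np.abs(H), 1.0, atol=tol)
    orthogonal = np.allclose(H @ H.conj().T, N * np.eye(N), atol=tol)
    return unimodular and orthogonal

def fourier_matrix(N):
    """The Fourier matrix F_N, the canonical complex Hadamard matrix of order N."""
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return np.exp(2j * np.pi * j * k / N)

ok = is_complex_hadamard(fourier_matrix(4))  # holds for every N
```

Real Hadamard matrices (entries +1/-1) pass the same check, which is why they sit inside the complex families the paper parametrizes.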

    Construction and analysis of experimental designs

    This thesis seeks to put into focus the analysis of experimental designs and their construction. It concentrates on the construction of fractional factorial designs (FFDs) using various aspects and applications. These different experimental designs and their applications, including how they are constructed with respect to the situation under consideration, are of interest in this study. While there is a wide range of experimental designs and numerous different constructions, this thesis focuses on FFDs and their applications. An experimental design is a test or a series of tests in which purposeful changes are made to the input variables of a process or system so that we may observe and identify the reasons for changes that may be noted in the output response (Montgomery (2014)). Experimental designs are important because their design and analysis can influence the outcome and response of the intended action. In this research, analysing experimental designs and their construction intends to reveal how important they are in research experiments. Chapter 1 introduces the concept of experimental designs and their principles, and offers a general explanation of factorial experiment designs and FFDs. Attention is then given to the general construction and analysis of FFDs, including one-half and one-quarter fractions, Hadamard matrices (H), balanced incomplete block designs (BIBDs), Plackett-Burman (PB) designs and regression modelling. Chapter 2 presents an overview of screening experiments and the literature review for the project. Chapter 3 introduces the first part of the project, which is the construction and analysis of edge designs from skew-symmetric supplementary difference sets (SDSs). Edge designs were introduced by Elster and Neumaier (1995) using conference matrices and were proved to be robust. One disadvantage is that the known edge designs in the literature can be constructed only when a conference matrix exists.
In this chapter, we introduce a new class of edge designs: these are constructed from skew-symmetric SDSs. These designs are particularly useful, since they can be applied in experiments with an even number of factors, and they may exist for orders where conference matrices do not exist. The same model robustness is achieved as with traditional edge designs. We give details of the methodology used and provide some illustrative examples of this new approach. We also show that the new designs have good D-efficiencies when applied to first-order models, then complete the experiment with interactions in the second stage. We also show the application of models for the new constructions. Chapter 4 presents the second part of the project, which is the construction and analysis of two-level supersaturated designs (SSDs) from Toeplitz matrices. The aim of screening experiments is to identify the active factors from a large quantity of factors that may influence the response y. SSDs represent an important class of screening experiments, whereby many factors are investigated using only a few experimental runs; this process costs less than classical factorial designs. In this chapter, we introduce new SSDs that are constructed from Toeplitz matrices. This construction uses Toeplitz and permutation matrices of order n to obtain E(s^2)-optimal two-level SSDs. We also study the properties of the constructed designs and use certain established criteria to evaluate these designs. We then give some detailed examples regarding this approach, and consider the performance of these designs with respect to different data analysis methods. Chapter 5 introduces the third part of the project, which is examples and comparison of the constructed designs using real data in mathematics. Mathematics has strong applications in different fields of human life.
The Trends in International Mathematics and Science Study (TIMSS) is one of the world's most effective global assessments of student achievement in both mathematics and science. The research in this thesis sought to determine the most effective factors that affect student achievement in mathematics. Four identified factors affect this problem. The first is student factors: age, health, number of students in a class, family circumstances, time of study, desire, behaviour, achievements, media (audio and visual), rewards, friends, parents' goals and gender. The second is classroom environment factors: suitable, attractive and equipped with educational tools. The third is curriculum factors: easy or difficult. The fourth is the teacher: well-qualified or not, and punishment. In this chapter, we detail the methodology and present some examples and comparisons of the constructed designs using real data in mathematics. The data come from surveys conducted in schools in Saudi Arabia. The data are collected by the middle-stage schools in the country and are available to Saudi Arabian citizens. Two main methods to collect real data were used: (1) the mathematics scores for students' final exams were collected from the schools; (2) student questionnaires were conducted by disseminating 16-question questionnaires to students. The target population was 2,585 students in 22 schools. Data were subjected to regression analyses and the edge design method, with the finding that the main causes of low achievement were rewards, behaviour, class environment, educational tools and health. Chapter 6 surveys the work of this thesis and recommends further avenues of research.
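    As a small illustration of the Hadamard-matrix machinery mentioned in Chapter 1, the sketch below builds a Hadamard matrix by Sylvester's doubling construction and drops its all-ones column to obtain a saturated two-level design (the classical Plackett-Burman-style construction). This is a generic textbook construction for context, not the thesis's SDS or Toeplitz method:

```python
import numpy as np

def sylvester_hadamard(k):
    """Sylvester's doubling construction H_{2n} = [[H, H], [H, -H]],
    giving a +1/-1 Hadamard matrix of order 2**k."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

# Dropping the all-ones first column of a normalized Hadamard matrix of
# order N yields a saturated two-level design: N runs for N-1 factors,
# with all factor columns mutually orthogonal.
H8 = sylvester_hadamard(3)   # order 8
design = H8[:, 1:]           # 8 runs, 7 two-level factors
```

Orthogonality of the factor columns (design.T @ design = 8 I) is what makes the main-effect estimates uncorrelated in the first-order model.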

    Applications of Hadamard matrices, Journal of Telecommunications and Information Technology, 2003, nr 2

    We present a number of applications of Hadamard matrices to signal processing, optical multiplexing, error-correction coding, and the design and analysis of statistics
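    Several of these applications (spreading codes, Reed-Muller decoding, Hadamard-transform multiplexing) rest on the fast Walsh-Hadamard transform, which applies an order-n Hadamard matrix in O(n log n) operations. A minimal sketch of the generic algorithm, not code from the paper:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a length-2**k sequence,
    via the same butterfly recursion as the FFT but with +/-1 twiddles."""
    x = np.array(x, dtype=float)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly: sum and difference
        h *= 2
    return x
```

Since the Sylvester Hadamard matrix satisfies H^2 = n I, applying `fwht` twice returns the input scaled by n, which is what makes Hadamard-transform multiplexing invertible.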

    Novel Architectures and Optimization Algorithms for Training Neural Networks and Applications

    The two main areas of Deep Learning are Unsupervised and Supervised Learning. Unsupervised Learning studies a class of data processing problems in which only descriptions of objects are known, without label information. Generative Adversarial Networks (GANs) have become among the most widely used unsupervised neural net models. A GAN combines two neural nets, generative and discriminative, that work simultaneously. We introduce a new family of discriminator loss functions that adopts a weighted sum of real and fake parts, which we call adaptive weighted loss functions. Using the gradient information, we can adaptively choose weights to train a discriminator in the direction that benefits the GAN's stability. Also, we propose several improvements to the GAN training schemes. One is self-correcting optimization for training a GAN discriminator on Speech Enhancement tasks, which helps avoid "harmful" training directions for parts of the discriminator loss. The other improvement is a consistency loss, which targets the inconsistency in time and time-frequency domains caused by Fourier Transforms. Contrary to Unsupervised Learning, Supervised Learning uses labels for each object, and it is required to find the relationship between objects and labels. Building computing methods to interpret and represent human language automatically is known as Natural Language Processing, which includes tasks such as word prediction, machine translation, etc. In this area, we propose a novel Neumann-Cayley Gated Recurrent Unit (NC-GRU) architecture based on a Neumann series-based Scaled Cayley transformation. The NC-GRU uses orthogonal matrices to prevent exploding gradient problems and enhance long-term memory on various prediction tasks. In addition, we propose using our newly introduced NC-GRU unit inside Neural Net models to create neural molecular fingerprints.
Integrating the novel NC-GRU fingerprints and Multi-Task Deep Neural Network schematics helps to improve the performance of several molecular-related tasks. We also introduce a new normalization method, Assorted-Time Normalization, that helps to preserve information from multiple consecutive time steps and normalize using them in Recurrent Net-like architectures. Finally, we propose a Symmetry Structured Convolutional Neural Network (SCNN), an architecture with 2D structured symmetric features over spatial dimensions, that generates and preserves the symmetry structure in the network's convolutional layers.
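    The weighted-sum structure of the discriminator loss can be sketched as follows. Here the weight `alpha` is fixed by hand, whereas the thesis chooses it adaptively from gradient information; the function and its name are illustrative, not the authors' implementation:

```python
import numpy as np

def weighted_discriminator_loss(d_real, d_fake, alpha=0.5):
    """Weighted sum of the real and fake parts of the standard GAN
    discriminator loss. d_real, d_fake: discriminator outputs in (0, 1)
    on real and generated batches. alpha balances the two parts
    (alpha = 0.5 recovers half the usual unweighted loss)."""
    eps = 1e-12  # numerical guard for log(0)
    real_part = -np.mean(np.log(d_real + eps))        # penalize missed reals
    fake_part = -np.mean(np.log(1.0 - d_fake + eps))  # penalize accepted fakes
    return alpha * real_part + (1.0 - alpha) * fake_part

# Imperfect discriminator: both parts contribute a positive loss.
loss = weighted_discriminator_loss(np.array([0.9, 0.8]), np.array([0.1, 0.2]))
```

Splitting the loss this way is what allows a training scheme to down-weight whichever part is currently pushing the discriminator in a destabilizing direction.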

    PATTERN RECOGNITION INTEGRATED SENSING METHODOLOGIES (PRISMS) IN PHARMACEUTICAL PROCESS VALIDATION, REMOTE SENSING AND ASTROBIOLOGY

    Modern analytical instrumentation is capable of creating enormous and complex volumes of data. Analysis of large data volumes is complicated by lengthy analysis times and high computational demand. Incorporating real-time analysis methods that are computationally efficient is desirable if modern analytical methods are to be fully utilized. The use of modern instrumentation in on-line pharmaceutical process validation, remote sensing, and astrobiology applications requires real-time analysis methods that are computationally efficient. Integrated sensing and processing (ISP) is a method for minimizing the data burden and sensing time of a system. ISP is accomplished through implementation of chemometric calculations in the physics of the spectroscopic sensor itself. In ISP, the measurements collected at the detector are weighted to correlate directly to the sample properties of interest. This method is especially useful for large and complex data sets. In this research, ISP is applied to acoustic resonance spectroscopy, near-infrared hyperspectral imaging and a novel solid-state spectral imager. In each application ISP produced a clear advantage over the traditional sensing method. The limitations of ISP must be addressed before it can become widely used. ISP is essentially a pattern recognition algorithm. Problems arise in pattern recognition when the pattern-recognition algorithm encounters a sample unlike any in the original calibration set. This is termed the false sample problem. To address the false sample problem, the Bootstrap Error-Adjusted Single-Sample Technique (BEST, a nonparametric classification technique) was investigated. The BEST-ISP method utilizes a hashtable of normalized BEST points along an asymmetric probability density contour to estimate the BEST multidimensional standard deviation of a sample.
The on-line application of the BEST method requires significantly less computation than the full algorithm, allowing it to be utilized in real time as sample data are obtained. This research tests the hypothesis that a BEST-ISP metric can be used to detect false samples with sensitivity > 90% and specificity > 90% on categorical data.
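    The core ISP idea, weighting the detector channels so that the measurement itself is the chemometric prediction, can be sketched with synthetic data. The calibration vector below is random and purely illustrative; in practice it would come from a chemometric calibration such as a regression model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 100
b = rng.normal(size=n_channels)         # hypothetical calibration weights
spectrum = rng.normal(size=n_channels)  # hypothetical sample spectrum

# Traditional pipeline: record the full spectrum, then compute the
# prediction in software after the fact.
post_hoc = b @ spectrum

# ISP: the weighting is applied in the sensor physics, so the single
# number read off the detector is already the property estimate.
isp_reading = np.sum(b * spectrum)
```

The two numbers are identical by construction; the gain of ISP is that only one weighted measurement is sensed and stored instead of all 100 channels.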

    Phase-field modeling of multi-domain evolution in ferromagnetic shape memory alloys and of polycrystalline thin film growth

    The phase-field method is a powerful tool in computer-aided materials science, as it allows for the analysis of the spatio-temporal evolution of microstructures on the mesoscale. A multi-phase-field model is adopted to run numerical simulations in two different areas of scientific interest: polycrystalline thin film growth and the ferromagnetic shape memory effect. FFT techniques, norm-conservative integration and RVE methods are necessary to make the coupled problems numerically feasible.

    Euclidean distance geometry and applications

    Euclidean distance geometry is the study of Euclidean geometry based on the concept of distance. This is useful in several applications where the input data consist of an incomplete set of distances, and the output is a set of points in Euclidean space that realizes the given distances. We survey some of the theory of Euclidean distance geometry and some of the most important applications: molecular conformation, localization of sensor networks and statics. Comment: 64 pages, 21 figures
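    For the special case where all pairwise distances are known, classical multidimensional scaling is one standard way to realize points from a distance matrix: double-centre the squared distances to get a Gram matrix, then read the configuration off its top eigenvectors. A minimal sketch, assuming a complete Euclidean distance matrix `D`:

```python
import numpy as np

def points_from_distances(D, dim):
    """Classical MDS: recover a point configuration (up to rigid motion)
    realizing the complete distance matrix D, in `dim` dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centring matrix
    G = -0.5 * J @ (D ** 2) @ J           # Gram matrix of centred points
    w, V = np.linalg.eigh(G)              # eigh returns ascending eigenvalues
    idx = np.argsort(w)[::-1][:dim]       # keep the `dim` largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Recover the unit square from its distance matrix.
X = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = points_from_distances(D, 2)
D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)  # matches D
```

The incomplete-distance problems surveyed in the paper are much harder precisely because this eigendecomposition shortcut needs the full matrix D.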

    Journal of Telecommunications and Information Technology, 2003, nr 2

    Quarterly (kwartalnik)

    Towards trustworthy machine learning with kernels

    Machine Learning has become an indispensable aspect of various safety-critical industries like healthcare, law, and automotive. Hence, it is crucial to ensure that our machine learning models function appropriately and instil trust among their users. This thesis focuses on improving the safety and transparency of Machine Learning by advocating for more principled uncertainty quantification and more effective explainability tools. Specifically, the use of Kernel Mean Embeddings (KME) and Gaussian Processes (GP) is prevalent in this work, since they can represent probability distributions with minimal distributional assumptions and capture uncertainty well, respectively. I dedicate Chapter 2 to introducing these two methodologies. Chapter 3 demonstrates an effective use of these methods in conjunction with each other to tackle a statistical downscaling problem, in which a Deconditional Gaussian Process is proposed. Chapter 4 considers a causal data fusion problem, where multiple causal graphs are combined for inference. I introduce BayesIMP, an algorithm built using KMEs and GPs, to draw causal conclusions while accounting for the uncertainty in the data and model. In Chapter 5, I present RKHS-SHAP, which models explainability for kernel methods using Shapley values. Specifically, I propose to estimate the value function in the cooperative game using KMEs, circumventing the need for any parametric density estimation. A Shapley regulariser is also proposed to regulate the contribution certain features can make to the model. Chapter 6 presents generalised preferential Gaussian processes for modelling preferences with non-rankable structure, which sets the scene for Chapter 7, where I build upon this research and propose Pref-SHAP to explain preference models.
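    The kernel mean embedding idea can be sketched with a Gaussian kernel: a sample is represented by the average of kernel functions centred at its points, with no parametric density assumptions, and distances between embeddings give the maximum mean discrepancy (MMD). This is a generic illustration, not the thesis's algorithms:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel k(a, b) on scalars or broadcastable arrays."""
    return np.exp(-(a - b) ** 2 / (2 * sigma ** 2))

def mean_embedding(sample, t, sigma=1.0):
    """Evaluate the empirical kernel mean embedding of `sample` at t:
    mu(t) = (1/n) * sum_i k(x_i, t)."""
    return np.mean(gaussian_kernel(np.asarray(sample, float), t, sigma))

def mmd2(x, y, sigma=1.0):
    """Squared MMD between two samples: the RKHS distance between
    their empirical mean embeddings."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    kxx = gaussian_kernel(x[:, None], x[None, :], sigma).mean()
    kyy = gaussian_kernel(y[:, None], y[None, :], sigma).mean()
    kxy = gaussian_kernel(x[:, None], y[None, :], sigma).mean()
    return kxx + kyy - 2 * kxy
```

Identical samples have zero MMD, while well-separated samples have MMD close to its maximum; this distribution-free behaviour is what makes embeddings attractive for the uncertainty-aware inference described above.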