
    UMSL Bulletin 2023-2024

    Get PDF
    The 2023-2024 Bulletin and Course Catalog for the University of Missouri-St. Louis.

    On information captured by neural networks: connections with memorization and generalization

    Full text link
    Despite the popularity and success of deep learning, there is limited understanding of when, how, and why neural networks generalize to unseen examples. Since learning can be seen as extracting information from data, we formally study the information captured by neural networks during training. Specifically, we start by viewing learning in the presence of noisy labels from an information-theoretic perspective and derive a learning algorithm that limits label noise information in weights. We then define a notion of unique information that an individual sample provides to the training of a deep network, shedding some light on the behavior of neural networks on examples that are atypical, ambiguous, or belong to underrepresented subpopulations. We relate example informativeness to generalization by deriving nonvacuous generalization gap bounds. Finally, by studying knowledge distillation, we highlight the important role of data and label complexity in generalization. Overall, our findings contribute to a deeper understanding of the mechanisms underlying neural network generalization. Comment: PhD thesis
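    For context, a representative result from the information-theoretic line of work this thesis builds on is the generalization bound of Xu and Raginsky (2017); the statement below is theirs, not necessarily the exact nonvacuous bound derived in the thesis. For a loss that is $\sigma$-sub-Gaussian, the expected generalization gap of weights $W$ learned from an $n$-sample training set $S$ is controlled by the mutual information the weights capture about the data:

        \left| \mathbb{E}\big[\mathrm{gen}(W, S)\big] \right| \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)}

    Limiting how much label information is stored in the weights, as the algorithm above does for label noise, therefore directly tightens this kind of bound.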

    Quantum Solutions to the Privacy vs. Utility Tradeoff

    Full text link
    In this work, we propose a novel architecture (and several variants thereof) based on quantum cryptographic primitives with provable privacy and security guarantees regarding membership inference attacks on generative models. Our architecture can be used on top of any existing classical or quantum generative model. We argue that the use of quantum gates associated with unitary operators provides inherent advantages over standard Differential Privacy based techniques for establishing guaranteed security against all polynomial-time adversaries.
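    As a point of reference for what such defenses target, below is a minimal sketch of the classic loss-threshold membership inference attack (Yeom et al., 2018), phrased for a generative model through a per-sample score; score_fn and threshold are illustrative placeholders, not part of the proposed architecture.

        import numpy as np

        def membership_inference(score_fn, candidates, threshold):
            # score_fn: per-sample loss or negative log-likelihood under
            # the target (generative) model; low scores hint that the
            # model fits the sample unusually well.
            scores = np.array([score_fn(x) for x in candidates])
            # Predict "training-set member" for suspiciously well-fit samples.
            return scores < threshold

    A defense with guarantees against polynomial-time adversaries must keep such score gaps between members and non-members statistically unusable.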

    Implicit Loss of Surjectivity and Facial Reduction: Theory and Applications

    Get PDF
    Facial reduction, pioneered by Borwein and Wolkowicz, is a preprocessing method that is commonly used to obtain strict feasibility in the reformulated, reduced constraint system. The importance of strict feasibility is often addressed in the context of convergence results for interior point methods. Beyond the theoretical properties that facial reduction conveys, we show that facial reduction, not limited to interior point methods, leads to strong numerical performance in different classes of algorithms. In this thesis we study various consequences and the broad applicability of facial reduction. The thesis is organized in two parts. In the first part, we show the instabilities that accompany the absence of strict feasibility through the lens of facially reduced systems. In particular, we exploit the implicit redundancies, revealed by each nontrivial facial reduction step, that result in the implicit loss of surjectivity. This leads to two-step facial reduction and two novel related notions of singularity. For the area of semidefinite programming, we use these singularities to strengthen a known bound on the solution rank, the Barvinok-Pataki bound. For the area of linear programming, we reveal degeneracies caused by the implicit redundancies. Furthermore, we propose a preprocessing tool that uses the simplex method. In the second part of this thesis, we turn to semidefinite programs that do not have strictly feasible points. We focus on the doubly nonnegative relaxation of the binary quadratic program and a semidefinite program with a nonlinear objective function. We work closely with two classes of algorithms, the splitting method and the Gauss-Newton interior point method. We elaborate on the advantages of building models via facial reduction. Moreover, we develop algorithms for real-world problems including the quadratic assignment problem, the protein side-chain positioning problem, and the key rate computation for quantum key distribution. Facial reduction continues to play an important role in providing robust reformulated models, in both theory and practice, resulting in successful numerical performance.
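    A one-step sketch of facial reduction for semidefinite programming may help fix ideas (standard notation, mine rather than the thesis's): if the feasible set $\{X \succeq 0 : \mathcal{A}(X) = b\}$ has no strictly feasible point, then there exists an exposing vector

        0 \neq Z = \mathcal{A}^{*}(y) \succeq 0, \qquad \langle b, y \rangle = 0,

    so that every feasible $X$ satisfies $\langle Z, X \rangle = \langle y, \mathcal{A}(X) \rangle = 0$ and hence lies on a proper face: $X = V R V^{\mathsf{T}}$ with $\operatorname{range}(V) = \operatorname{null}(Z)$ and $R \succeq 0$. Substituting this parametrization yields the smaller, reduced system; the implicit redundancies studied in the first part of the thesis are exactly what each such nontrivial step reveals.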

    Classical and quantum phases of the pyrochlore $S=1/2$ magnet with Heisenberg and Dzyaloshinskii-Moriya interactions

    Full text link
    We investigate the ground state and critical temperature phase diagrams of the classical and quantum $S=1/2$ pyrochlore lattice with nearest-neighbor Heisenberg and Dzyaloshinskii-Moriya interactions (DMI). We consider ferromagnetic and antiferromagnetic Heisenberg exchange as well as direct and indirect DMI. Classically, three ground states are found: all-in/all-out, ferromagnetic, and a locally ordered $XY$ phase, known as $\Gamma_5$, which displays an accidental classical U(1) degeneracy. Quantum zero-point energy fluctuations are found to lift the classical ground-state degeneracy and select the $\psi_3$ state in most parts of the $\Gamma_5$ regime. Likewise, thermal fluctuations, treated classically, select the $\psi_3$ state at $T=0^+$. In contrast, classical Monte Carlo finds that the system orders at $T_c$ in the $\psi_2$ state of $\Gamma_5$ for antiferromagnetic Heisenberg exchange and indirect DMI, with a transition from $\psi_2$ to $\psi_3$ at a temperature $T_{\Gamma_5} < T_c$. The same method finds that the system orders via a single transition at $T_c$ directly into the $\psi_3$ state for most of the region with ferromagnetic Heisenberg exchange and indirect DMI. Such ordering behavior at $T_c$ for the $S=1/2$ quantum model is corroborated by high-temperature series expansion. To investigate the $T=0$ quantum ground states, we apply the pseudo-fermion functional renormalization group (PFFRG). The quantum paramagnetic phase of the pure antiferromagnetic $S=1/2$ Heisenberg model is found to persist over a finite region in the phase diagram for both direct and indirect DMI. We find that near the boundary of ferromagnetism and $\Gamma_5$ antiferromagnetism the system may potentially realize a quantum ground state lacking conventional magnetic order. Otherwise, for the largest portion of the phase diagram, PFFRG finds the same ordered phases as in the classical model. Comment: 26 pages, 14 figures
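    For reference, the model studied is, up to sign and labeling conventions that vary across the literature, the nearest-neighbor Hamiltonian

        H = J \sum_{\langle ij \rangle} \mathbf{S}_i \cdot \mathbf{S}_j \;+\; \sum_{\langle ij \rangle} \mathbf{D}_{ij} \cdot \big( \mathbf{S}_i \times \mathbf{S}_j \big),

    where $J$ is the Heisenberg exchange (ferromagnetic or antiferromagnetic) and the directions of the DM vectors $\mathbf{D}_{ij}$ are fixed by the pyrochlore symmetry via the Moriya rules, with their overall sign distinguishing the direct and indirect DMI cases.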

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    Get PDF
    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
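    The fixed-frequency scanning mechanism rests on the standard leaky-wave pointing relation (a simplified statement; the modified-SIW design above addresses the biasing and decoupling questions this relation leaves open):

        \sin\theta \;\approx\; \frac{\beta(\varepsilon_r)}{k_0},

    where $\theta$ is the main-beam angle from broadside, $k_0$ the free-space wavenumber, and $\beta$ the phase constant of the guided mode. Biasing the LC shifts its effective permittivity $\varepsilon_r$, hence $\beta$, steering the beam without changing the operating frequency.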

    Learning in Repeated Multi-Unit Pay-As-Bid Auctions

    Full text link
    Motivated by Carbon Emissions Trading Schemes, Treasury Auctions, and Procurement Auctions, which all involve the auctioning of homogeneous multiple units, we consider the problem of learning how to bid in repeated multi-unit pay-as-bid auctions. In each of these auctions, a large number of (identical) items are allocated to the largest submitted bids, where the price of each winning bid is the bid itself. The problem of learning how to bid in pay-as-bid auctions is challenging due to the combinatorial nature of the action space. We overcome this challenge by focusing on the offline setting, where the bidder optimizes their vector of bids while only having access to the past bids submitted by other bidders. We show that the optimal solution to the offline problem can be obtained using a polynomial-time dynamic programming (DP) scheme. We leverage the structure of the DP scheme to design online learning algorithms with polynomial time and space complexity under full-information and bandit feedback settings. We achieve upper bounds on regret of $O(M\sqrt{T\log|\mathcal{B}|})$ and $O(M\sqrt{|\mathcal{B}|T\log|\mathcal{B}|})$, respectively, where $M$ is the number of units demanded by the bidder, $T$ is the total number of auctions, and $|\mathcal{B}|$ is the size of the discretized bid space. We accompany these results with a regret lower bound that matches the linear dependence on $M$. Our numerical results suggest that when all agents behave according to our proposed no-regret learning algorithms, the resulting market dynamics mainly converge to a welfare-maximizing equilibrium where bidders submit uniform bids. Lastly, our experiments demonstrate that the pay-as-bid auction consistently generates significantly higher revenue compared to its popular alternative, the uniform price auction. Comment: 51 pages, 12 figures
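    To make the offline structure concrete, here is a minimal sketch of the offline best response under simplifying assumptions (competitor bids known, ties lost, $K$ items, free slots at zero reserve); the function and setup are illustrative, and the paper's DP handles the general discretized problem rather than this shortcut. Winning an $m$-th unit requires outbidding the $(K - m + 1)$-th highest competing bid; since that threshold grows with $m$ while one's own bids must be non-increasing, the cheapest winning vector is uniform, consistent with the uniform-bid equilibria reported above.

        import numpy as np

        def offline_best_response(competitor_bids, K, values, grid):
            # values: decreasing marginal values for up to M units;
            # grid:   discretized bid space (the set B in the abstract).
            c = np.sort(np.asarray(competitor_bids))[::-1]   # descending
            grid = np.asarray(grid)
            best_bids, best_u = [], 0.0        # staying out earns zero
            cum_value = 0.0
            for m in range(1, min(len(values), K) + 1):
                cum_value += values[m - 1]
                idx = K - m                    # (K - m + 1)-th highest, 0-based
                thresh = c[idx] if idx < len(c) else 0.0
                above = grid[grid > thresh]    # cheapest admissible bid level
                if above.size == 0:            # thresholds only grow with m
                    break
                p = float(above.min())
                u = cum_value - m * p          # pay-as-bid: pay each own bid
                if u > best_u:
                    best_bids, best_u = [p] * m, u
            return best_bids, best_u

    For example, with $K = 3$, marginal values [10, 9, 8], competitor bids [4, 3, 2], and an integer grid 0..10, the sketch returns three uniform bids of 5 with utility $27 - 15 = 12$.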

    Using machine learning to predict pathogenicity of genomic variants throughout the human genome

    Get PDF
    More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. All of these processes must be investigated to evaluate which variant may be causal for the observed phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity. Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation, and ultimately deployment of a selected model via genome-wide scoring of genomic variants. The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep-neural-network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency. In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
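    A minimal sketch of the workflow's core loop, under illustrative assumptions: the annotations, labels, and hyperparameter grid below are stand-ins, and the logistic-regression model mirrors the linear classifiers of published CADD releases rather than the full thesis pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Stand-in feature matrix: rows are variants, columns are annotations
        # (conservation scores, regulatory marks, splice predictions, ...).
        rng = np.random.default_rng(0)
        X = rng.random((10_000, 30))
        y = rng.integers(0, 2, 10_000)  # 1 = proxy-deleterious, 0 = proxy-benign

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)

        # Feature scaling + model training with hyperparameter optimization.
        model = make_pipeline(StandardScaler(),
                              LogisticRegression(max_iter=1000))
        search = GridSearchCV(model,
                              {"logisticregression__C": [0.01, 0.1, 1.0]},
                              cv=5)
        search.fit(X_tr, y_tr)

        # "Deployment": score variants genome-wide (here, the held-out split).
        scores = search.predict_proba(X_te)[:, 1]
        print(f"held-out accuracy: {search.score(X_te, y_te):.3f}")

    Swapping the annotation columns (e.g., adding deep-learning splice scores) or the training labels (e.g., the allele-frequency-derived sets) re-runs the same loop, which is the configurability the workflow provides.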

    Safe Reinforcement Learning as Wasserstein Variational Inference: Formal Methods for Interpretability

    Full text link
    Reinforcement Learning or optimal control can provide effective reasoning for sequential decision-making problems with variable dynamics. Such reasoning in practical implementation, however, poses a persistent challenge in interpreting the reward function and the corresponding optimal policy. Consequently, formalizing sequential decision-making problems as inference has considerable value, as probabilistic inference in principle offers diverse and powerful mathematical tools to infer the stochastic dynamics whilst suggesting a probabilistic interpretation of reward design and policy convergence. In this study, we propose a novel Adaptive Wasserstein Variational Optimization (AWaVO) to tackle these challenges in sequential decision-making. Our approach utilizes formal methods to provide interpretations of reward design, transparency of training convergence, and probabilistic interpretation of sequential decisions. To demonstrate practicality, we show convergent training with guaranteed global convergence rates not only in simulation but also in real robot tasks, and empirically verify a reasonable tradeoff between high performance and conservative interpretability. Comment: 24 pages, 8 figures, containing Appendix
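    The inference view invoked here is usually set up as follows (the standard control-as-inference construction, e.g. Levine 2018, rather than AWaVO's specific Wasserstein machinery): introduce per-step optimality variables $\mathcal{O}_t$ with $p(\mathcal{O}_t = 1 \mid s_t, a_t) \propto \exp r(s_t, a_t)$, so that policy search becomes maximizing an evidence lower bound over trajectory distributions:

        \log p(\mathcal{O}_{1:T}) \;\ge\; \mathbb{E}_{q(\tau)}\!\left[ \sum_{t=1}^{T} r(s_t, a_t) \right] \;-\; \mathrm{KL}\big( q(\tau) \,\|\, p(\tau) \big),

    where $q(\tau)$ is the variational trajectory distribution induced by the policy and $p(\tau)$ the prior over trajectories. The reward thus acquires a probabilistic reading as a log-likelihood of optimality, which is the interpretability lever the abstract refers to; AWaVO, per its name, swaps the divergence-based machinery for Wasserstein-distance tools.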

    Comparative Multiple Case Study into the Teaching of Problem-Solving Competence in Lebanese Middle Schools

    Get PDF
    This multiple case study investigates how problem-solving competence is integrated into teaching practices in private schools in Lebanon. Its purpose is to compare instructional approaches to problem-solving across three different programs: the American (Common Core State Standards and Next Generation Science Standards), French (Socle Commun de Connaissances, de Compétences et de Culture), and Lebanese, with a focus on middle school (grades 7, 8, and 9). The project was conducted in nine schools equally distributed among three categories based on the programs they offered: category 1 schools offered the Lebanese program, category 2 the French and Lebanese programs, and category 3 the American and Lebanese programs. Each school was treated as a separate case. Structured observation data were collected using observation logs that focused on lesson objectives and specific cognitive problem-solving processes. The two logs were created based on a document review of the requirements of the three programs. The structured observations were followed by semi-structured interviews conducted to explore teachers' beliefs and understandings of problem-solving competence. The comparative analysis of within-category structured observations revealed instruction ranging from teacher-led practices, particularly in category 1 schools, to more student-centered approaches in categories 2 and 3. The cross-category analysis showed a reliance on cognitive processes primarily promoting exploration, understanding, and demonstrating understanding, with less emphasis on planning and executing and on monitoring and reflecting, thus uncovering a weakness in addressing these processes. The findings of the post-observation semi-structured interviews disclosed a range of definitions of problem-solving competence prevalent amongst teachers, with clear divergences across the three school categories. This research is unique in that it compares problem-solving teaching approaches across three different programs and explores teachers' underlying beliefs and understandings of problem-solving competence in the Lebanese context. It is hoped that this project will inform curriculum developers about future directions and much-anticipated reforms of the Lebanese program, and practitioners about areas that need to be addressed to further improve the teaching of problem-solving competence.