
    Implicit Loss of Surjectivity and Facial Reduction: Theory and Applications

    Facial reduction, pioneered by Borwein and Wolkowicz, is a preprocessing method commonly used to obtain strict feasibility in the reformulated, reduced constraint system. The importance of strict feasibility is often addressed in the context of convergence results for interior point methods. Beyond these theoretical properties, we show that facial reduction, not limited to interior point methods, leads to strong numerical performance in different classes of algorithms. In this thesis we study various consequences and the broad applicability of facial reduction. The thesis is organized in two parts. In the first part, we examine the instabilities that accompany the absence of strict feasibility through the lens of facially reduced systems. In particular, we exploit the implicit redundancies, revealed by each nontrivial facial reduction step, that result in an implicit loss of surjectivity. This leads to a two-step facial reduction and two novel related notions of singularity. For semidefinite programming, we use these singularities to strengthen a known bound on the solution rank, the Barvinok-Pataki bound. For linear programming, we reveal degeneracies caused by the implicit redundancies. Furthermore, we propose a preprocessing tool that uses the simplex method. In the second part of this thesis, we turn to semidefinite programs that do not have strictly feasible points. We focus on the doubly-nonnegative relaxation of the binary quadratic program and a semidefinite program with a nonlinear objective function. We work closely with two classes of algorithms, the splitting method and the Gauss-Newton interior point method, and elaborate on the advantages of building models via facial reduction.
Moreover, we develop algorithms for real-world problems including the quadratic assignment problem, the protein side-chain positioning problem, and the key rate computation for quantum key distribution. Facial reduction continues to play an important role in providing robust reformulated models, in both theoretical and practical respects, resulting in strong numerical performance.
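The implicit redundancies revealed by a facial reduction step can be illustrated on a toy LP (a hand-built example for illustration, not taken from the thesis): an exposing vector certifies that some variables are identically zero on the feasible set, so no strictly feasible point exists and the system can be reduced.

```python
# Toy LP feasible set {x >= 0 : A x = b}: the constraints x1 + x2 = 1 and
# x1 + x2 + x3 = 1 together force x3 = 0, so no strictly feasible x > 0 exists.
A = [[1.0, 1.0, 0.0],
     [1.0, 1.0, 1.0]]
b = [1.0, 1.0]

def matvec_T(A, y):
    """Compute A^T y."""
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

# Exposing vector y: A^T y >= 0 and b^T y = 0 imply x_j = 0 wherever
# (A^T y)_j > 0 -- exactly the implicit redundancy facial reduction removes.
y = [-1.0, 1.0]
s = matvec_T(A, y)
assert all(v >= -1e-12 for v in s)
assert abs(sum(bi * yi for bi, yi in zip(b, y))) < 1e-12

forced_zero = [j for j, v in enumerate(s) if v > 1e-12]
print(forced_zero)  # [2] -> x3 is identically zero on the feasible set
```

Dropping the exposed variable leaves the strictly feasible reduced system x1 + x2 = 1, x >= 0 (e.g. x = (0.5, 0.5)).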

    The Impact of Lithium Ion on the Application of Resistive Switching Devices

    Demands on data storage devices continue to grow, and many new storage devices have emerged, such as magnetoresistive random-access memory (MRAM) and resistive random-access memory (ReRAM). The junction structure is the basic unit of both devices, and in this paper the MTJ and the resistive switching junction are each tuned with lithium fluoride (LiF) to optimize their performance. In the first experiment, a magnetic tunnelling junction resembling a battery is developed and shown to be electromagnetically tuneable. In this LiF-based device, reversible non-volatile resistive switching and tunnelling coexist, enabling four well-defined resistance states per device. Management of the interface enables actively controlled spin transfer, enhancing the devices' application potential. In the second experiment, 796 devices were measured. For resistive switching devices with TiO as the insulating layer, adding an additional LiF layer significantly increases the probability of the resistive switching phenomenon, and an appropriate LiF thickness also increases the separation between the high- and low-resistance states, which is beneficial for the regulation of resistive switching devices.

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
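The fixed-frequency scanning principle can be sketched numerically (illustrative waveguide dimensions and LC permittivities, not the paper's design data): a leaky-wave antenna radiates near theta = asin(beta/k0), and biasing the LC between its two effective permittivities shifts beta, steering the beam without changing the frequency.

```python
import math

C0 = 3.0e8  # speed of light, m/s

def scan_angle_deg(eps_r, f=28e9, a=3.5e-3):
    """Beam angle of a leaky-wave antenna fed by a dielectric-filled,
    SIW-like guide of width a: beta/k0 = sqrt(eps_r - (c/(2*a*f))**2).
    All numbers here are assumed, illustrative values."""
    ratio = math.sqrt(eps_r - (C0 / (2 * a * f)) ** 2)
    return math.degrees(math.asin(ratio))

# Assumed LC tuning range: eps_perp ~ 2.5 (unbiased) to eps_par ~ 3.3 (biased).
low, high = scan_angle_deg(2.5), scan_angle_deg(3.3)
print(round(low, 1), round(high, 1))  # beam steers from ~23 deg to ~78 deg
```

The point of the sketch is only the monotone link between the DC-tuned permittivity and the radiation angle; a real design must also keep the mode above cutoff across the tuning range.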

    A survey on reconfigurable intelligent surfaces: wireless communication perspective

    Using reconfigurable intelligent surfaces (RISs) to improve the coverage and the data rate of future wireless networks is a viable option. These surfaces consist of a large number of passive and nearly passive elements that interact with incident signals in a smart way, for example by reflecting them, to increase the wireless system's performance; as a result, the notion of a smart radio environment comes to fruition. This survey reviews RIS-assisted wireless communication, starting with the principles of RIS, including the hardware architecture, the control mechanisms, and discussions of previously held views about the channel model and path loss. Performance analyses considering different performance parameters, analytical approaches, and metrics are then presented to describe the performance improvements of RIS-assisted wireless networks. Despite its enormous promise, RIS confronts new hurdles in integrating efficiently into wireless networks due to its passive nature. Consequently, channel estimation for both fully and nearly passive RIS, as well as RIS deployments, are compared under various wireless communication models and for single and multiple users. Lastly, challenges and potential future research areas for RIS-aided wireless communication systems are proposed.
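As a minimal illustration of why smart reflection improves performance (a textbook toy model, not one of the survey's analytical approaches): with N unit-gain elements, co-phasing the reflected paths makes the received power grow like N^2, whereas an unconfigured surface leaves the paths adding incoherently.

```python
import cmath
import math
import random

random.seed(0)
N = 64  # number of nearly passive RIS elements (illustrative)

# Random phases of the cascaded channel through each element.
channel_phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def received_power(ris_phases):
    """|sum_i exp(j*(channel_i + ris_i))|^2 for unit-gain elements."""
    total = sum(cmath.exp(1j * (c + p)) for c, p in zip(channel_phases, ris_phases))
    return abs(total) ** 2

# A smart RIS co-phases every path: received power reaches N**2.
aligned = received_power([-c for c in channel_phases])
# An unconfigured RIS leaves the phases random: expected power only ~ N.
unconfigured = received_power([0.0] * N)

print(round(aligned))  # 4096 == N**2
assert aligned > unconfigured
```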

    Blending the Material and Digital World for Hybrid Interfaces

    The development of digital technologies in the 21st century is progressing continuously, and new device classes such as tablets, smartphones or smartwatches are finding their way into our everyday lives. However, this development also poses problems, as the prevailing touch and gestural interfaces often lack tangibility, take little account of haptic qualities and therefore require full attention from their users. Compared to traditional tools and analog interfaces, the human skills to experience and manipulate material in its natural environment and context remain unexploited. To combine the best of both, a key question is how the material world and the digital world can be blended to design and realize novel hybrid interfaces in a meaningful way. Research on Tangible User Interfaces (TUIs) investigates the coupling between physical objects and virtual data. In contrast, hybrid interfaces, which specifically aim to digitally enrich analog artifacts of everyday work, have not yet been sufficiently researched and systematically discussed. This doctoral thesis therefore rethinks how user interfaces can provide useful digital functionality while maintaining their physical properties and familiar patterns of use in the real world. The development of such hybrid interfaces raises overarching research questions about their design: Which kinds of physical interfaces are worth exploring? What type of digital enhancement will improve existing interfaces? How can hybrid interfaces retain their physical properties while enabling new digital functions? What are suitable methods to explore different designs? And how can technology-enthusiast users be supported in prototyping? For a systematic investigation, the thesis builds on a design-oriented, exploratory and iterative development process using digital fabrication methods and novel materials.
As a main contribution, four specific research projects are presented that apply and discuss different visual and interactive augmentation principles along real-world applications. The applications range from digitally enhanced paper and interactive cords, through visual watch-strap extensions, to novel prototyping tools for smart garments. While almost all of them integrate visual feedback and haptic input, none of them are built on rigid, rectangular pixel screens or use standard input modalities, as they all aim to reveal new design approaches. The dissertation shows how valuable it can be to rethink familiar, analog applications while thoughtfully extending them digitally. Finally, this thesis' extensive work of engineering versatile research platforms is accompanied by overarching conceptual work, user evaluations and technical experiments, as well as literature reviews.

    Computing and Compressing Electron Repulsion Integrals on FPGAs

    The computation of electron repulsion integrals (ERIs) over Gaussian-type orbitals (GTOs) is a challenging problem in quantum-mechanics-based atomistic simulations. In practical simulations, several trillions of ERIs may have to be computed for every time step. In this work, we investigate FPGAs as accelerators for the ERI computation. We use template parameters, here within the Intel oneAPI tool flow, to create customized designs for 256 different ERI quartet classes, based on their orbitals. To maximize data reuse, all intermediates are buffered in FPGA on-chip memory with a customized layout. The pre-calculation of intermediates also helps to overcome data dependencies caused by multi-dimensional recurrence relations. The involved loop structures are partially or even fully unrolled for high throughput of the FPGA kernels. Furthermore, a lossy compression algorithm utilizing arbitrary-bitwidth integers is integrated into the FPGA kernels. To the best of our knowledge, this is the first work on ERI computation on FPGAs that supports more than just the single most basic quartet class. Also, the integration of ERI computation and compression is a novelty that is not yet covered by CPU or GPU libraries. Our evaluation shows that, using 16-bit integers for the ERI compression, the fastest FPGA kernels exceed a performance of 10 GERIS ($10 \times 10^9$ ERIs per second) on one Intel Stratix 10 GX 2800 FPGA, with maximum absolute errors around $10^{-7}$ to $10^{-5}$ Hartree. The measured throughput can be accurately explained by a performance model. The FPGA kernels deployed on 2 FPGAs outperform similar computations using the widely used libint reference on a two-socket server with 40 Xeon Gold 6148 CPU cores of the same process technology by factors of up to 6.0x, and on a newer two-socket server with 128 EPYC 7713 CPU cores by up to 1.9x.
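The error scale of integer compression can be illustrated with a minimal fixed-point scheme (a sketch under an assumed per-block scaling; the paper's actual FPGA codec is not reproduced here): quantizing to 16-bit integers bounds the absolute error by half a quantization step.

```python
# Minimal fixed-point compression sketch: scale a block of values so the
# largest magnitude fills the signed integer range, then round.
def compress(values, bits=16):
    vmax = max(abs(v) for v in values) or 1.0
    scale = (2 ** (bits - 1) - 1) / vmax
    return [round(v * scale) for v in values], scale

def decompress(ints, scale):
    return [i / scale for i in ints]

# Made-up ERI values in Hartree, purely illustrative.
eris = [0.3127, -0.0041, 0.0876, 1.2045, -0.5123]
ints, scale = compress(eris)
recovered = decompress(ints, scale)

max_err = max(abs(a - b) for a, b in zip(eris, recovered))
print(max_err < 1e-4)  # True: error on the order of 1e-5 for 16-bit storage
```

The bound follows directly: the error is at most 0.5 / scale, which for a block maximum near 1 Hartree and 16-bit integers is about 2e-5, matching the order of magnitude the evaluation reports.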

    Determination of physical properties of high-energy hadronic interactions from the $X_{\mathrm{max}}$-$N_{\mu}$ anticorrelation

    Since the discovery of cosmic rays at the beginning of the 20th century, many experiments have been developed to study them directly or via the air showers they produce upon entering the Earth's atmosphere. The largest observatory for detecting air showers is the Pierre Auger Observatory in Malargüe, Argentina, where many questions of astro- and particle physics are addressed. Since cosmic rays cover energies far beyond those attainable in man-made accelerators, they are excellent and unique objects for studying physical properties at the highest energies. As an air shower traverses the atmosphere, the number of particles it contains grows while their energies decrease. Once the energies are low enough, the particles decay or are absorbed in the atmosphere, which reduces the particle number again. This yields a position of maximum development, $X_\mathrm{max}$. Hadronically interacting particles produce muons that can be measured at the ground. This number, $N_{\mu}$, together with $X_\mathrm{max}$, are observables measured at Auger that exhibit a pronounced anticorrelation. The present work studies this anticorrelation. An analytic model is developed that reproduces it as a function of parameters describing the hadronic multiplicity, the hadronic energy fraction and the inelasticity of the first interaction, together with corresponding effective parameters representative of the entire shower. This model is then further improved by means of artificial neural networks, using parameter and observable values from simulations performed with CONEX. The resulting model does not depend on the high-energy interaction model used in the simulations.
Finally, a model with a reduced parameter set is applied to an Auger data set. The distributions of the hadronic multiplicity and the hadronic energy fraction of the first interaction, and of the effective inelasticity, are extracted from this data set. They show that the multiplicity and the energy fraction of the first interaction are generally too low in the current models used for simulations.
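The anticorrelation can be made plausible with the textbook Heitler-Matthews picture (a standard sketch, not the thesis's full analytic model): a larger first-interaction multiplicity both moves the shower maximum higher in the atmosphere and increases the muon number.

```latex
% Heitler--Matthews toy relations (illustrative): a primary of energy E_0
% splitting into n_tot secondaries, of which n_ch are charged, with pion
% critical energy xi_c and radiation length lambda_r.
\begin{align}
  X_{\mathrm{max}} &\approx X_0 + \lambda_{\mathrm{r}}
      \ln\!\frac{E_0}{n_{\mathrm{tot}}\,\xi_{\mathrm{c}}}, \\
  N_{\mu} &= \left(\frac{E_0}{\xi_{\mathrm{c}}}\right)^{\beta},
  \qquad
  \beta = \frac{\ln n_{\mathrm{ch}}}{\ln n_{\mathrm{tot}}} \approx 0.9 .
\end{align}
```

Increasing $n_{\mathrm{tot}}$ decreases $X_{\mathrm{max}}$ logarithmically while raising $\beta$ and hence $N_{\mu}$, which is the qualitative origin of the measured anticorrelation.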

    Observation of Josephson Harmonics in Tunnel Junctions

    Superconducting quantum processors have a long road ahead to reach fault-tolerant quantum computing. One of the most daunting challenges is taming the numerous microscopic degrees of freedom ubiquitous in solid-state devices. State-of-the-art technologies, including the world's largest quantum processors, employ aluminum oxide (AlO$_x$) tunnel Josephson junctions (JJs) as sources of nonlinearity, assuming an idealized pure $\sin\varphi$ current-phase relation (C$\varphi$R). However, this celebrated $\sin\varphi$ C$\varphi$R is only expected to occur in the limit of vanishingly low-transparency channels in the AlO$_x$ barrier. Here we show that the standard C$\varphi$R fails to accurately describe the energy spectra of transmon artificial atoms across various samples and laboratories. Instead, a mesoscopic model of tunneling through an inhomogeneous AlO$_x$ barrier predicts %-level contributions from higher Josephson harmonics. By including these in the transmon Hamiltonian, we obtain orders of magnitude better agreement between the computed and measured energy spectra. The reality of Josephson harmonics transforms qubit design and prompts a reevaluation of models for quantum gates and readout, parametric amplification and mixing, Floquet qubits, protected Josephson qubits, etc. As an example, we show that engineered Josephson harmonics can reduce the charge dispersion and the associated errors in transmon qubits by an order of magnitude, while preserving anharmonicity.
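The correction described above can be written compactly (a sketch in standard transmon notation; the harmonic amplitudes $E_{Jm}$ are whatever the mesoscopic barrier model predicts):

```latex
% Transmon Hamiltonian with higher Josephson harmonics: the standard
% sin(phi) CphiR keeps only the m = 1 term; an inhomogeneous AlOx barrier
% contributes %-level amplitudes E_{Jm} for m >= 2.
H = 4 E_C \hat{n}^2 \;-\; \sum_{m \ge 1} E_{Jm} \cos\!\left(m \hat{\varphi}\right),
\qquad
I(\varphi) = \frac{2e}{\hbar} \sum_{m \ge 1} m\, E_{Jm} \sin\!\left(m \varphi\right).
```

Setting $E_{Jm} = 0$ for $m \ge 2$ recovers the idealized $\sin\varphi$ current-phase relation; retaining the higher harmonics is what brings the computed spectra into agreement with the measurements.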

    The Morse Code Room: Applicability of the Chinese Room Argument to Spiking Neural Networks

    The Chinese room argument (CRA) was first stated in 1980. Since then, computer technologies have improved, and today spiking neural networks (SNNs) are "arguably the only viable option if one wants to understand how the brain computes" (Tavanei et al. 2019: 47). SNNs differ in various important respects from the digital computers the CRA was directed against. The objective of the present work is to explore whether the CRA applies to SNNs. In the first chapter I discuss computationalism and the Chinese room argument and give a brief overview of spiking neural networks. The second chapter considers five important differences between SNNs and digital computers: (1) massive parallelism, (2) subsymbolic computation, (3) machine learning, (4) analogue representation and (5) temporal encoding. I conclude that, besides minor limitations, the Chinese room argument can be applied to spiking neural networks. Contents: 1 Introduction; 2 Theoretical background; 2.I Strong AI: Computationalism; 2.II The Chinese room argument; 2.III Spiking neural networks; 3 Applicability to spiking neural networks; 3.I Massive parallelism; 3.II Subsymbolic computation; 3.III Machine learning; 3.IV Analogue representation; 3.V Temporal encoding; 3.VI The Morse code room and its replies; 3.VII Some more general considerations regarding hardware and software; 4 Conclusion

    Compiling Quantum Circuits for Dynamically Field-Programmable Neutral Atoms Array Processors

    Dynamically field-programmable qubit arrays (DPQA) have recently emerged as a promising platform for quantum information processing. In DPQA, atomic qubits are selectively loaded into arrays of optical traps that can be reconfigured during the computation itself. Leveraging qubit transport and parallel entangling quantum operations, different pairs of qubits, even those initially far apart, can be entangled at different stages of the quantum program execution. Such reconfigurability and non-local connectivity present new challenges for compilation, especially in the layout synthesis step, which places and routes the qubits and schedules the gates. In this paper, we consider a DPQA architecture that contains multiple arrays and supports 2D array movements, representing cutting-edge experimental platforms. Within this architecture, we discretize the state space and formulate layout synthesis as a satisfiability modulo theories problem, which can be solved by existing solvers optimally in terms of circuit depth. For a set of benchmark circuits generated from random graphs with complex connectivities, our compiler OLSQ-DPQA reduces the number of two-qubit entangling gates on small problem instances by 1.7x compared to optimal compilation results on a fixed planar architecture. To further improve the scalability and practicality of the method, we introduce a greedy heuristic inspired by the iterative peeling approach in classical integrated circuit routing. Using a hybrid approach that combines the greedy and optimal methods, we demonstrate that our DPQA-based compiled circuits feature reduced scaling overhead compared to a fixed grid architecture, resulting in 5.1x fewer two-qubit gates for 90-qubit quantum circuits.
These methods enable programmable, complex quantum circuits with neutral atom quantum computers, as well as informing both future compilers and future hardware choices. Comment: An extended abstract of this work was presented at the 41st International Conference on Computer-Aided Design (ICCAD '22).
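The peeling idea behind the greedy heuristic can be sketched at a high level (a hypothetical simplification, not the OLSQ-DPQA implementation): repeatedly peel off a maximal set of qubit-disjoint two-qubit gates as one parallel stage, in the spirit of iterative peeling in classical IC routing.

```python
# Greedy peeling sketch: schedule two-qubit gates into parallel stages,
# where no qubit may participate in two gates of the same stage.
def peel_stages(gates):
    remaining = list(gates)
    stages = []
    while remaining:
        busy, stage, later = set(), [], []
        for q1, q2 in remaining:
            if q1 in busy or q2 in busy:
                later.append((q1, q2))  # conflicts with this stage; defer
            else:
                stage.append((q1, q2))
                busy.update((q1, q2))
        stages.append(stage)
        remaining = later
    return stages

# A 5-qubit circuit from a small random-graph benchmark (made-up edges).
circuit = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]
stages = peel_stages(circuit)
print(len(stages))  # 3 parallel stages for these six entangling gates
```

This captures only the scheduling aspect; the real compiler must additionally decide trap placement and 2D array movements between stages.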