392 research outputs found

    Accelerated Quantum Monte Carlo with Probabilistic Computers

    Quantum Monte Carlo (QMC) techniques are widely used in a variety of scientific problems, and much work has been dedicated to developing optimized algorithms that can accelerate QMC on standard processors (CPUs). With the advent of various special-purpose devices and domain-specific hardware, it has become increasingly important to establish clear benchmarks of the improvements these technologies offer over existing ones. In this paper, we demonstrate 2 to 3 orders of magnitude of acceleration of a standard QMC algorithm using a specially designed digital processor, and a further 2 to 3 orders of magnitude by mapping it to a clockless analog processor. Our demonstration provides a roadmap for 5 to 6 orders of magnitude of acceleration for the transverse field Ising model (TFIM) and could possibly be extended to other QMC models as well. The clockless analog hardware can be viewed as the classical counterpart of the quantum annealer and provides performance within a factor of <10 of the latter. The convergence time for the clockless analog hardware scales with the number of qubits as ~N, improving on the ~N^2 scaling of CPU implementations, but appears worse than that reported for quantum annealers by D-Wave.
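The abstract does not spell out the QMC algorithm being accelerated; as a rough illustration of the kind of computation involved, here is a minimal sketch of path-integral QMC for a 1D TFIM, using the standard Suzuki-Trotter mapping to a classical 2D Ising lattice. All parameter values and names are illustrative, not taken from the paper.

```python
import math
import random

def tfim_qmc(N=8, M=16, J=1.0, h=1.0, beta=1.0, sweeps=2000, seed=1):
    """Path-integral QMC sketch: TFIM with N spins, M imaginary-time slices."""
    rng = random.Random(seed)
    dtau = beta / M
    # Effective dimensionless couplings of the classical (N x M) lattice
    K_space = dtau * J                             # along the spatial direction
    K_tau = -0.5 * math.log(math.tanh(dtau * h))   # along the imaginary-time direction
    spins = [[rng.choice((-1, 1)) for _ in range(M)] for _ in range(N)]

    def local_field(i, t):
        # Sum of coupling * neighbor-spin with periodic boundaries
        return (K_space * (spins[(i - 1) % N][t] + spins[(i + 1) % N][t])
                + K_tau * (spins[i][(t - 1) % M] + spins[i][(t + 1) % M]))

    for _ in range(sweeps):
        for i in range(N):
            for t in range(M):
                # Metropolis single-spin flip: energy change if spins[i][t] flips
                dE = 2.0 * spins[i][t] * local_field(i, t)
                if dE <= 0 or rng.random() < math.exp(-dE):
                    spins[i][t] *= -1

    # Average magnetization of the classical configuration
    return sum(sum(col) for col in spins) / (N * M)
```

The inner loop is the part that special-purpose hardware parallelizes: every spin update needs only its four lattice neighbors, which is what makes massively parallel digital or analog implementations attractive.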

    Custom optimization algorithms for efficient hardware implementation

    The focus is on real-time optimal decision making with applications in advanced control systems. These computationally intensive schemes, which involve the repeated solution of (convex) optimization problems within a sampling interval, require more efficient computational methods than are currently available if they are to be extended to highly dynamical systems and to resource-constrained embedded computing platforms. A range of techniques is proposed to exploit synergies between digital hardware, numerical analysis and algorithm design. These techniques build on parameterisable hardware code generation tools that generate VHDL code describing custom computing architectures for interior-point methods and for a range of first-order constrained optimization methods. Since memory limitations are often important in embedded implementations, we develop a custom storage scheme for the KKT matrices arising in interior-point methods for control, which reduces memory requirements significantly and prevents I/O bandwidth limitations from affecting the performance of our implementations. To take advantage of the trend towards parallel computing architectures, and to exploit the special characteristics of our custom architectures, we propose several high-level parallel optimal control schemes that can reduce computation time. A novel optimization formulation is devised for reducing the computational effort of solving certain problems, independent of the computing platform used. To solve optimization problems in fixed-point arithmetic, which is significantly more resource-efficient than floating-point, tailored linear algebra algorithms are developed for solving the linear systems that form the computational bottleneck in many optimization methods. These methods come with guarantees for reliable operation.
We also provide a finite-precision error analysis for fixed-point implementations of first-order methods, which can be used to minimize resource usage while meeting accuracy specifications. The suggested techniques are demonstrated on several practical examples, including a hardware-in-the-loop setup for optimization-based control of a large airliner.
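To make the fixed-point trade-off concrete, here is a small sketch of Q15 fixed-point arithmetic of the kind embedded solvers use in place of floating-point. The format choice (1 sign bit, 15 fractional bits), rounding scheme, and function names are illustrative, not the thesis's actual design.

```python
FRAC_BITS = 15
SCALE = 1 << FRAC_BITS          # Q15: values in [-1, 1) with 2^-15 resolution

def to_q15(x):
    # Convert a float to Q15, saturating at the representable range
    return max(-SCALE, min(SCALE - 1, round(x * SCALE)))

def from_q15(q):
    return q / SCALE

def q15_mul(a, b):
    # Full-width product shifted back down, with rounding to nearest
    return (a * b + (1 << (FRAC_BITS - 1))) >> FRAC_BITS

def q15_dot(xs, ys):
    # Accumulate in a wide register (as FPGA DSP blocks do), shift once at the end;
    # deferring the shift keeps rounding error from growing with vector length
    acc = sum(x * y for x, y in zip(xs, ys))
    return (acc + (1 << (FRAC_BITS - 1))) >> FRAC_BITS
```

Each operation here maps to a single multiplier/adder on an FPGA, which is why fixed-point is so much cheaper than floating-point; the price is the bounded range and quantization error that the finite-precision analysis mentioned above must account for.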

    A Multi-mode Transverse Dynamic Force Microscope - Design, Identification and Control

    This is the author accepted manuscript; the final version is available from IEEE via the DOI in this record. The transverse dynamic force microscope (TDFM) and its shear-force sensing principle permit true non-contact force detection, in contrast to typical atomic force microscopes. The two TDFM measurement signals for the cantilever allow, in principle, two different scanning modes, of which the second presented here permits a full-scale non-contact scan. Previous research mainly focused on developing the sensing mechanism, whereas this work investigates the vertical-axis dynamics for advanced robust closed-loop control. This paper presents a new TDFM digital control solution, built on field-programmable gate array (FPGA) equipment running at high implementation frequencies. The integrated control system allows the implementation of online-customizable controllers, and raster-scans in two modes at very high detection bandwidth and nano-precision. Robust control algorithms are designed, implemented, and practically assessed. The two realized scanning modes are experimentally evaluated by imaging nano-spheres of known dimensions in wet conditions. Engineering and Physical Sciences Research Council (EPSRC).

    Design of an advanced atomic force microscope control electronics using FPGA

    The Atomic Force Microscope (AFM) is a powerful scientific and R&D tool. It can obtain 3D surface topography with atomic resolution and can image conductive, non-conductive, and biological samples. The invention of the AFM led to many advances in areas such as materials science, physics and biology. The AFM also has a wide range of industrial applications, such as quality control and production testing. In spite of the AFM's many advantages and specialties, the price and complexity of the available AFM options make it hard for researchers and companies to access this technology. To address this issue, the easy-to-use Atomic Force Microscope ezAFM was designed through the joint efforts of the Sabanci University "Scanning Probe Microscopy & Nanomagnetism Laboratory", the Istanbul Technical University "Nanomechanics Laboratory", and NanoMagnetics Instruments & NanoSis, with the support of the Ministry of Science, Industry and Technology through the SANTEZ program. The ezAFM is a user-friendly, compact, and high-performance novel AFM system; it is very affordable and comparable to a lab-grade optical microscope. In this thesis, the control electronics design and the PC software architecture of the ezAFM are discussed.

    Single-Molecule Detection of Unique Genome Signatures: Applications in Molecular Diagnostics and Homeland Security

    Single-molecule detection (SMD) offers an attractive approach for identifying the presence of certain markers that can be used for in vitro molecular diagnostics in a near real-time format. The ability to eliminate sample processing steps, afforded by the ultra-high sensitivity associated with SMD, yields an increased sampling throughput. When SMD and microfluidics are used in conjunction with nucleic acid-based assays, such as the ligase detection reaction coupled with single-pair fluorescence resonance energy transfer (LDR-spFRET), complete molecular profiling and screening of certain cancers, pathogenic bacteria, and other biomarkers becomes possible at remarkable speeds and sensitivities with high specificity. The merging of these technologies and techniques into two different novel instrument formats has been investigated. (1) The use of a charge-coupled device (CCD) in time-delayed integration (TDI) mode as a means of increasing the throughput of any single-molecule measurement by simultaneously tracking and detecting single molecules in multiple microfluidic channels was demonstrated. The CCD/TDI approach increased the sample throughput by a factor of 8 compared to a single-assay SMD experiment: a sampling throughput of 276 molecules s^-1 per channel, or 2,208 molecules s^-1 for an eight-channel microfluidic system, was achieved. A cyclic olefin copolymer (COC) waveguide was designed and fabricated in a pre-cast poly(dimethylsiloxane) stencil to increase the SNR by controlling the excitation geometry. The waveguide showed an attenuation of 0.67 dB/cm, and the launch angle was optimized to increase the depth of penetration of the evanescent wave. (2) A compact SMD (cSMD) instrument was designed and built for the reporting of molecular signatures associated with bacteria. The optical waveguides were poised within the fluidic chip at an orientation of 90° with respect to each other for the interrogation of single-molecule events.
Molecular beacons (MBs) were designed to probe bacteria for Gram-positive classification. MBs were mixed with bacterial cells and pumped through the cSMD, which allowed S. aureus to be classified with 2,000 cells in 1 min. Finally, the integration of the LDR-spFRET assay on the cSMD was explored, with the future direction of designing a molecular screening approach for stroke diagnostics.
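The numbers quoted above can be checked directly: eight TDI channels at 276 molecules per second give the stated aggregate throughput, and the dB/cm attenuation figure translates into a power-loss fraction for a given waveguide length. The 2 cm length below is an assumed value for illustration only.

```python
def total_throughput(per_channel=276, channels=8):
    """Aggregate sampling throughput of the multi-channel CCD/TDI readout."""
    return per_channel * channels  # molecules per second

def remaining_power(p0, alpha_db_per_cm=0.67, length_cm=2.0):
    """Fraction of launched power surviving propagation at the quoted 0.67 dB/cm."""
    return p0 * 10 ** (-alpha_db_per_cm * length_cm / 10)
```

For example, 276 × 8 = 2,208 molecules per second, and over an assumed 2 cm of waveguide roughly a quarter of the launched power is lost to attenuation.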

    Scanning micro interferometer with tunable diffraction grating for low noise parallel operation

    Large-area, high-throughput metrology plays an important role in several technologies such as MEMS. In current metrology systems, the parallel operation of multiple metrology probes in a tool has been hindered by their bulky sizes. This study approaches the problem by developing a metrology technique based on miniaturized scanning grating interferometers (μSGIs). Miniaturization of the interferometer is realized by novel micromachined tunable gratings fabricated on SOI substrates. These stress-free flat gratings show sufficient motion (~500 nm), bandwidth (~50 kHz) and a low damping ratio (~0.05). Optical setups have been developed for testing the performance of μSGIs, and preliminary results show 6.6 μm lateral resolution and sub-angstrom vertical resolution. To achieve high resolution and to reduce the effect of ambient vibrations, the study has developed a novel control algorithm, implemented on an FPGA. It has shown significant reduction of vibration noise over a 6.5 kHz bandwidth, achieving a noise resolution of 6×10^-5 nm rms/√Hz. Modifications of this control scheme enable long-range displacement measurements, parallel operation, and scanning of samples for their dynamic profile. To analyze and simulate similar optical metrology systems with active micro-components, separate tools are developed for the mechanical, control and optical sub-systems. The results of these programs enable better design optimization for different applications. Ph.D. Committee Chair: Degertekin, Levent; Committee Co-Chair: Kurfess, Thomas; Committee Member: Adibi, Ali; Committee Member: Danyluk, Steven; Committee Member: Hesketh, Pete
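A back-of-envelope check relates the quoted noise density to the claimed sub-angstrom vertical resolution: assuming the density is flat, integrating it over the 6.5 kHz control bandwidth gives the total rms displacement noise. The flat-spectrum assumption is ours, not stated in the abstract.

```python
import math

def integrated_rms_noise(density_nm_per_rthz=6e-5, bandwidth_hz=6.5e3):
    """Total rms noise from a flat density integrated over a bandwidth:
    n_rms = density * sqrt(bandwidth)."""
    return density_nm_per_rthz * math.sqrt(bandwidth_hz)
```

This gives roughly 5×10^-3 nm rms, i.e. about 0.05 Å, consistent with the sub-angstrom vertical resolution reported.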

    Pre-validation of SoC via hardware and software co-simulation

    Abstract. System-on-chips (SoCs) are complex entities consisting of multiple hardware and software components. This complexity presents challenges in their design, verification, and validation. Traditional verification processes often test hardware models in isolation until late in the development cycle. As a result, cooperation between hardware and software development is also limited, slowing down bug detection and fixing. This thesis aims to develop, implement, and evaluate a co-simulation-based pre-validation methodology to address these challenges. The approach allows for the early integration of hardware and software, serving as a natural intermediate step between traditional hardware model verification and full system validation. The co-simulation employs a QEMU CPU emulator linked to a register-transfer level (RTL) hardware model. This setup enables the execution of software components, such as device drivers, on the target instruction set architecture (ISA) alongside cycle-accurate RTL hardware models. The thesis focuses on two primary applications of co-simulation. Firstly, it allows software unit tests to be run in conjunction with hardware models, facilitating early communication between device drivers, low-level software, and hardware components. Secondly, it offers an environment for using software in functional hardware verification. A significant advantage of this approach is the early detection of integration errors. Software unit tests can be executed at the IP block level with actual hardware models, a task previously only possible with costly system-level prototypes. This enables earlier collaboration between software and hardware development teams and smooths the transition to traditional system-level validation techniques.
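The bridge idea described above can be sketched in miniature: driver code running on the emulated CPU issues memory-mapped register accesses, which a bridge forwards to the RTL simulation. Here QEMU and the RTL simulator are replaced by trivial stand-ins, and the register map, addresses, and names are invented for illustration only.

```python
class FakeRtlModel:
    """Stand-in for a cycle-accurate RTL simulation of one IP block."""
    def __init__(self):
        self.regs = {0x00: 0, 0x04: 0}   # CTRL at offset 0x00, STATUS at 0x04

    def write(self, offset, value):
        self.regs[offset] = value
        if offset == 0x00 and value & 0x1:
            # Start bit set -> the block reports ready in STATUS
            self.regs[0x04] = 0x1

    def read(self, offset):
        return self.regs[offset]

class Bus:
    """Plays the role of the QEMU<->RTL bridge: routes CPU accesses by address."""
    def __init__(self, model, base=0x4000_0000):
        self.model, self.base = model, base

    def write32(self, addr, value):
        self.model.write(addr - self.base, value)

    def read32(self, addr):
        return self.model.read(addr - self.base)

def driver_start_and_poll(bus, base=0x4000_0000):
    """A 'software unit test' in the co-simulation sense: exercise the driver
    sequence (set the start bit, poll STATUS) against the hardware model."""
    bus.write32(base + 0x00, 0x1)
    return bus.read32(base + 0x04) == 0x1
```

The value of the real setup is that the same driver sequence runs unmodified on the target ISA against the cycle-accurate model, so an address-map or handshake mismatch surfaces at the IP block level rather than on a system prototype.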