326 research outputs found

    FPGA-Based In-Vivo Calcium Image Decoding for Closed-Loop Feedback Applications

    Full text link
    Miniaturized calcium imaging is an emerging neural recording technique that can monitor neural activity at large scale in a specific brain region of a rat or mouse. It has been widely used in the study of brain functions in experimental neuroscientific research. Most calcium-image analysis pipelines operate offline, which incurs long processing latency and makes them hard to use for closed-loop feedback stimulation targeting specific neural circuits. In this paper, we propose an FPGA-based design that enables real-time calcium image processing and position decoding for closed-loop feedback applications. Our design performs real-time calcium image motion correction, enhancement, and fast trace extraction based on predefined cell contours and tiles. Building on that, we evaluate a variety of machine learning methods for decoding positions from the extracted traces. Our design and implementation achieve position decoding with less than 1 ms latency at 300 MHz on FPGA for a variety of mainstream 1-photon miniscope sensors. We benchmark position decoding accuracy on open-source datasets collected from six different rats, and show that by taking advantage of ordinal encoding in the decoding task, we can consistently improve decoding accuracy across subjects without any overhead in hardware implementation or runtime. Comment: 11 pages, 15 figures
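
    As a rough illustration of the ordinal-encoding idea mentioned above, the sketch below bins positions along a track, trains one binary classifier per "position > threshold" question, and sums the threshold probabilities to recover a bin. The bin count, synthetic data, and logistic-regression choice are placeholders for illustration only, not the paper's actual pipeline or hardware design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ordinal_targets(bins, n_bins):
    """Encode bin k as binary answers to "position > t" for t = 0..n_bins-2."""
    return (bins[:, None] > np.arange(n_bins - 1)[None, :]).astype(int)

def ordinal_decode(p_greater):
    """Summing P(position > t) over thresholds estimates the bin index."""
    return np.round(p_greater.sum(axis=1)).astype(int)

# Toy stand-ins for extracted traces (samples x cells) and binned positions
rng = np.random.default_rng(0)
n_bins, n_cells, n_samples = 16, 50, 2000
positions = rng.integers(0, n_bins, size=n_samples)
# Make the traces weakly informative about position so the toy decoder has signal
traces = rng.normal(size=(n_samples, n_cells)) + positions[:, None] * rng.normal(size=n_cells) * 0.1

Y = ordinal_targets(positions, n_bins)
# One binary classifier per ordinal threshold (illustrative model choice)
models = [LogisticRegression(max_iter=500).fit(traces, Y[:, t]) for t in range(n_bins - 1)]
p = np.column_stack([m.predict_proba(traces)[:, 1] for m in models])
decoded = ordinal_decode(p)
print("mean absolute bin error:", np.abs(decoded - positions).mean())
```

    Compared with plain one-hot classification over bins, this ordinal formulation keeps the same number of outputs and no extra arithmetic at inference time, which is consistent with the paper's claim of zero added hardware or runtime overhead.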

    Linear-time encoding and decoding of low-density parity-check codes

    Get PDF
    Low-density parity-check (LDPC) codes had a renaissance when they were rediscovered in the 1990s. Since then, LDPC codes have been an important part of the field of error-correcting codes and have been shown to approach the Shannon capacity, the limit at which we can reliably transmit information over noisy channels. Following this, many modern communications standards have adopted LDPC codes. Error correction is as important in protecting data from corruption on a hard drive as it is in deep-space communications; it is commonly used, for example, for reliable wireless transmission of data to mobile devices. For practical purposes, both encoding and decoding need to be of low complexity to achieve high throughput and low power consumption. This thesis provides a literature review of the current state of the art in encoding and decoding of LDPC codes. Message-passing decoders are still capable of achieving the best error-correcting performance, while more recently considered bit-flipping decoders provide a low-complexity alternative, albeit with some loss in error-correcting performance. An implementation of a low-complexity stochastic bit-flipping decoder is also presented. It is implemented for Graphics Processing Units (GPUs) in a parallel fashion, providing a peak throughput of 1.2 Gb/s, which is significantly higher than previous decoder implementations on GPUs. The error-correcting performance of a range of decoders has also been tested, showing that the stochastic bit-flipping decoder provides relatively good error-correcting performance with low complexity. Finally, a brief comparison of encoding complexities for two code ensembles is also presented.
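
    To make the bit-flipping family concrete, here is a minimal sketch of classical hard-decision bit flipping (flip the bits involved in the most unsatisfied parity checks). The stochastic GPU decoder described in the thesis is considerably more elaborate, and the tiny parity-check matrix below is purely illustrative, not a real LDPC code.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit flipping: repeatedly flip the bits that participate
    in the largest number of failing parity checks until all checks pass."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2                      # which parity checks fail
        if not syndrome.any():
            return x, True                          # valid codeword reached
        fail_counts = H.T @ syndrome                # failing checks per bit
        x[fail_counts == fail_counts.max()] ^= 1    # flip the worst offenders
    return x, False

# Toy parity-check matrix (illustrative only)
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)
received = np.zeros(6, dtype=int)   # all-zero word is always a codeword
received[2] ^= 1                    # introduce a single bit error
decoded, ok = bit_flip_decode(H, received)
print(decoded, ok)                  # expect all zeros, True
```

    Message-passing (belief-propagation) decoders instead exchange soft reliability information along the edges of the Tanner graph, which is why they outperform bit flipping at the cost of higher complexity per iteration.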

    Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions

    Full text link
    Multimodal machine learning is a vibrant multi-disciplinary research field that aims to design computer agents with intelligent capabilities such as understanding, reasoning, and learning by integrating multiple communicative modalities, including linguistic, acoustic, visual, tactile, and physiological messages. With the recent interest in video understanding, embodied autonomous agents, text-to-image generation, and multisensor fusion in application domains such as healthcare and robotics, multimodal machine learning has brought unique computational and theoretical challenges to the machine learning community, given the heterogeneity of data sources and the interconnections often found between modalities. However, the breadth of progress in multimodal research has made it difficult to identify the common themes and open questions in the field. By synthesizing a broad range of application domains and theoretical frameworks from both historical and recent perspectives, this paper provides an overview of the computational and theoretical foundations of multimodal machine learning. We start by defining two key principles of modality heterogeneity and interconnections that have driven subsequent innovations, and propose a taxonomy of six core technical challenges: representation, alignment, reasoning, generation, transference, and quantification, covering historical and recent trends. Recent technical achievements are presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches. We end by motivating several open problems for future research as identified by our taxonomy.
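
    As a toy illustration of the "alignment" challenge in this taxonomy, one common pattern is to embed each modality into a shared space and score cross-modal pairs by cosine similarity. Everything below (the placeholder embeddings, dimensions, and function name) is a hypothetical sketch, not anything taken from the survey.

```python
import numpy as np

def cosine_alignment(text_emb, image_emb):
    """Cosine-similarity matrix between text and image embeddings,
    a common way to score cross-modal alignment in a shared space."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    return t @ v.T

# Placeholder vectors standing in for outputs of modality-specific encoders
rng = np.random.default_rng(0)
texts = rng.normal(size=(4, 128))    # 4 captions in a 128-d shared space
images = rng.normal(size=(4, 128))   # 4 images in the same space
scores = cosine_alignment(texts, images)
matched = scores.argmax(axis=1)      # best-aligned image for each caption
print(matched)
```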

    Improving Bandwidth Utilization in a 1 Tbps Airborne MIMO Communications Downlink

    Get PDF
    FEC techniques are compared for different MIMO configurations of a high-altitude, extremely wide bandwidth radio frequency downlink. Monte Carlo simulations are completed in MATLAB® with the aim of isolating the impacts of turbo codes and LDPC codes on system throughput and error performance. The system is modeled as a transmit-only static array at an altitude of 60,000 feet, with no interferers in the channel. Transmissions are received by a static receiver array. Simulations attempt to determine which modulation types should be considered for practical implementation, and which FEC codes enable these modulation schemes. The antenna configurations used in this study are [44:352], [62:248], and [80:160] transmitters to receivers. Effects from waveform generation, mixing, down-conversion, and amplification are not considered. The criteria of interest were BER and throughput, with the maximum allowable value of the former set at 1 × 10⁻⁵ and the latter set at a 1 terabit per second (Tbps) transfer rate for a successful configuration. Results show that the best-performing system configuration was unable to meet both criteria, but was capable of improving over Brueggen's 2012 research, which used Reed-Solomon codes and a MIMO configuration of [80:160], by 18.6%. The best-case configuration produced a throughput of 0.83 Tbps at a BER of less than 1 × 10⁻⁸ by implementing a rate-2/3 LDPC code with a 16-symbol QAM constellation (16-QAM).
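
    For readers unfamiliar with the methodology, the sketch below shows the skeleton of a Monte Carlo BER estimate for Gray-coded 16-QAM over AWGN, written in Python rather than MATLAB. It is an uncoded single-stream baseline only; the study's simulations add LDPC/turbo coding and the MIMO channel model, so the numbers it produces are not comparable to the results quoted above.

```python
import numpy as np

def qam16_mc_ber(ebn0_db, n_bits=400_000, seed=1):
    """Monte Carlo BER for Gray-coded 16-QAM over AWGN (uncoded baseline)."""
    rng = np.random.default_rng(seed)
    k = 4                                          # bits per 16-QAM symbol
    bits = rng.integers(0, 2, size=(n_bits // k, k))
    gray = np.array([-3, -1, 3, 1])                # 00->-3, 01->-1, 11->+1, 10->+3
    sym = gray[2 * bits[:, 0] + bits[:, 1]] + 1j * gray[2 * bits[:, 2] + bits[:, 3]]
    es = 10.0                                      # average symbol energy of this constellation
    n0 = es / (k * 10 ** (ebn0_db / 10))
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(sym.shape) + 1j * rng.standard_normal(sym.shape))
    r = sym + noise

    def hard_bits(v):                              # slice one axis back to Gray-coded bit pairs
        lvl = np.clip(np.round((v + 3) / 2) * 2 - 3, -3, 3)
        inv = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])   # level -3,-1,+1,+3 -> bits
        return inv[((lvl + 3) // 2).astype(int)]

    est = np.hstack([hard_bits(r.real), hard_bits(r.imag)])
    return np.mean(est != bits)

# Example: sweep a few Eb/N0 points (in dB) and print the estimated BER
for ebn0 in (8, 10, 12):
    print(ebn0, "dB ->", qam16_mc_ber(ebn0))
```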

    Decryption Failure Attacks on Post-Quantum Cryptography

    Get PDF
    This dissertation mainly discusses new cryptanalytic results related to securely implementing the next generation of asymmetric cryptography, or Public-Key Cryptography (PKC). PKC, as it has been deployed until today, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved by Peter Shor's algorithm for quantum computers, which finds the answers in polynomial time. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards standardization of Post-Quantum Cryptography (PQC) was launched by the US-based standardization organization, NIST. PQC is the name given to algorithms designed to run on classical hardware/software while being resistant to attacks from quantum computers. PQC is well suited for replacing the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely heavily on well-known mathematical problems with theoretical proofs of security under established models, such as indistinguishability under chosen ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, concerning both theoretical security and the actual software/hardware instantiations. It is well known that security models, such as IND-CCA, are not designed to capture the intricacies of inner-state leakages. Such leakages are called side-channels, which are currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions, in general: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates, and 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to the practice of software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytic results.
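
    To give a flavor of where decryption failures come from in lattice-based schemes, the toy LWE-style sketch below decrypts correctly only while the accumulated noise stays under roughly q/4, and the `boost` knob crudely mimics an attacker selecting unusually noisy ciphertexts to amplify the failure rate ("failure boosting"). The parameters and structure are illustrative inventions, not those of any NIST candidate or of the attacks in the dissertation.

```python
import numpy as np

# Toy LWE-like parameters (illustrative only; real schemes use larger, structured values)
Q, N, SIGMA = 3329, 64, 3.0
rng = np.random.default_rng(0)

def keygen():
    A = rng.integers(0, Q, size=(N, N))
    s = rng.integers(-2, 3, size=N)                          # small secret
    e = np.rint(rng.normal(0, SIGMA, size=N)).astype(int)    # small error
    return (A, (A @ s + e) % Q), s

def encrypt(pk, bit, boost=1):
    """boost > 1 crudely models an attacker picking unusually noisy ciphertexts."""
    A, b = pk
    r = boost * rng.integers(-2, 3, size=N)
    e1 = boost * np.rint(rng.normal(0, SIGMA, size=N)).astype(int)
    e2 = boost * int(np.rint(rng.normal(0, SIGMA)))
    u = (A.T @ r + e1) % Q
    v = (int(b @ r) + e2 + bit * (Q // 2)) % Q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - s @ u) % Q
    # The bit is recovered only if the residual noise keeps d near 0 or Q/2
    return int(min(d, Q - d) > Q // 4)

pk, sk = keygen()
for boost in (1, 8):
    fails = sum(decrypt(sk, encrypt(pk, 1, boost)) != 1 for _ in range(2000))
    print(f"boost={boost}: {fails}/2000 decryption failures")
```

    Observing which crafted ciphertexts fail reveals information about the geometry of the secret and the error terms, which is the high-level intuition behind turning a tiny failure probability into a key-recovery side channel.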

    Efficient mapping of EEG algorithms

    Get PDF