
    Tom Thumb Algorithm and von Neumann Universal Constructor

    This article describes the addition of the Tom Thumb Algorithm, a mechanism developed for the self-replication of multi-processor systems, to the von Neumann cellular automaton. Except for the cell construction process, every functionality of the original CA has been preserved in our new system. Moreover, the Tom Thumb Algorithm now allows the replication of any structure within the von Neumann environment, regardless of its number of cells.
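The core idea can be illustrated with a small sketch. This is not the actual Tom Thumb Algorithm (which interleaves stored and mobile symbols inside each cell of the automaton); it only shows, under illustrative names, the principle that a circulating configuration string can both configure a chain of cells and survive intact for building further copies:

```python
def replicate(genome, n_cells):
    """Push a genome string through a chain of cells.

    Each empty cell latches the first symbol it sees (configuring
    itself), while every symbol is also forwarded, so the complete
    genome exits the chain intact and can configure another copy.
    """
    cells = [None] * n_cells
    forwarded = []
    for symbol in genome:
        for i, c in enumerate(cells):
            if c is None:
                cells[i] = symbol   # cell stores part of the description
                break
        forwarded.append(symbol)    # symbol keeps travelling onward
    return cells, "".join(forwarded)

cells, copy = replicate("ABCD", 4)
# cells hold the structure's description; copy is available for replication
```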

    The Comet Interceptor Mission

    Here we describe the novel, multi-point Comet Interceptor mission. It is dedicated to the exploration of a little-processed long-period comet, possibly entering the inner Solar System for the first time, or to an encounter with an interstellar object originating from another star. The objectives of the mission are to address the following questions: What are the surface composition, shape, morphology, and structure of the target object? What is the composition of the gas and dust in the coma, its connection to the nucleus, and the nature of its interaction with the solar wind? The mission was proposed to the European Space Agency in 2018 and formally adopted by the agency in June 2022, for launch in 2029 together with the Ariel mission. Comet Interceptor will take advantage of the opportunity presented by ESA's F-Class call for fast, flexible, low-cost missions, to which it was proposed. The call required a launch to a halo orbit around the Sun-Earth L2 point. The mission can take advantage of this placement to wait for the discovery of a suitable comet reachable with its minimum ΔV capability of 600 m s⁻¹. Comet Interceptor will be unique in encountering and studying, at a nominal closest-approach distance of 1000 km, a comet that represents a near-pristine sample of material from the formation of the Solar System. It will also add a capability that no previous cometary mission has had: deploying two sub-probes (B1, provided by the Japanese space agency JAXA, and B2) that will follow different trajectories through the coma. While the main probe passes at a nominal 1000 km distance, probes B1 and B2 will follow different chords through the coma at distances of 850 km and 400 km, respectively. The result will be unique, simultaneous, spatially resolved information on the 3-dimensional properties of the target comet and its interaction with the space environment. We present the mission's science background leading to these objectives, as well as an overview of the scientific instruments, mission design, and schedule.
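The closest-approach distances quoted above determine how long a chord each probe cuts through the coma, given an assumed coma size. A quick geometric sketch (the coma radius here is an illustrative assumption, not a mission figure):

```python
import math

def chord_length_km(coma_radius_km, closest_approach_km):
    """Length of a straight flyby chord through a spherical coma of the
    given radius, at the given closest-approach distance."""
    if closest_approach_km >= coma_radius_km:
        return 0.0  # trajectory misses the coma entirely
    return 2.0 * math.sqrt(coma_radius_km**2 - closest_approach_km**2)

# Illustrative coma radius of 10,000 km (an assumption for the sketch)
for name, d in [("A (main)", 1000), ("B1", 850), ("B2", 400)]:
    print(f"probe {name}: {chord_length_km(10_000, d):,.0f} km through the coma")
```

The closer the approach, the longer the sampled chord, which is why the three simultaneous trajectories yield spatially resolved coverage of the coma.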

    Pipelined multi-FPGA genomic data clustering

    High-throughput DNA sequencing made individual genome profiling possible and produces very large amounts of data. Today, data and associated metadata are stored in FASTQ text file assemblies carrying the information of genome fragments called reads. Current techniques rely on mapping these reads to a common reference genome for compression and analysis. However, about 10% of the reads do not map to any known reference, making them difficult to compress or process. These reads are of high importance because they hold information absent from any reference. Finding overlaps in these reads can help subsequent processing and compression tasks tremendously. In this context, clustering is used to find overlapping unmapped reads and sort them into groups. Because clustering is an extremely time-consuming task, a modular multi-FPGA pipeline was designed; it is the focus of this paper. A pipeline with 6 FPGAs was created and has shown a 5× speed-up over existing FPGA implementations. Resulting enriched files encoding reads and clustering results show file sizes within a 10% margin of the best DNA compressors while providing valuable extra information.
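The clustering step described above can be sketched in software. The toy below is a greedy single-pass stand-in for the hardware matcher (the k-mer-sharing criterion and all names are assumptions for illustration, not the paper's exact algorithm): a read joins the first cluster it shares a k-mer with, otherwise it starts a new one.

```python
from collections import defaultdict

def cluster_reads(reads, k=4):
    """Greedy single-pass clustering of reads by shared k-mers."""
    kmer_to_cluster = {}           # k-mer -> cluster id of first owner
    clusters = defaultdict(list)
    next_id = 0
    for read in reads:
        kmers = {read[i:i + k] for i in range(len(read) - k + 1)}
        # join the cluster of any already-seen k-mer, else open a new one
        cid = next((kmer_to_cluster[m] for m in kmers if m in kmer_to_cluster), None)
        if cid is None:
            cid = next_id
            next_id += 1
        clusters[cid].append(read)
        for m in kmers:
            kmer_to_cluster.setdefault(m, cid)
    return list(clusters.values())

groups = cluster_reads(["ACGTACGT", "TACGTTTT", "GGGGCCCC", "CCCCAAAA"], k=4)
```

Each read's k-mer set must be matched against a growing dictionary, which is what makes clustering so time-consuming on a CPU and such a natural fit for a pipelined multi-FPGA design.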

    FPGA-GPU communicating through PCIe

    In recent years, two main platforms have emerged as powerful key players in the domain of parallel computing: GPUs and FPGAs. Many studies investigate the interaction and benefits of coupling them with a general-purpose processor (CPU), but very few, and only very recently, integrate the two in the same computational system. Even fewer studies focus on the direct interaction of the two platforms. This paper presents an implementation of GPU-FPGA direct communication. The transfer is triggered by a central CPU but managed by the FPGA, in a DMA-like manner. An initial framework was developed on a Virtex-5 FPGA, with a PCIe Gen1.1 ×1 setup, and demonstrates a 200 MB/s data rate. A new implementation on Virtex-7 has been conducted, supporting Gen3.0 ×8, with a demonstrated throughput of up to 2.4 GB/s in a Gen2.1 ×8 setup. Performance results between different hardware setups are therefore presented and compared. The various measurements demonstrate achieved data rates that are close to the theoretical maximum, with some interesting outliers, and a very low interfacing latency.
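The quoted rates can be put in context with a back-of-the-envelope check against PCIe line-rate ceilings. The sketch below accounts only for line-coding overhead (8b/10b for Gen1/Gen2, 128b/130b for Gen3) and ignores TLP/DLLP protocol overhead, so real achievable payload rates sit somewhat below these numbers:

```python
def pcie_payload_gb_s(gt_per_s, lanes, coding_overhead):
    """Usable bandwidth in GB/s after line-coding overhead.

    gt_per_s: per-lane transfer rate in GT/s
    coding_overhead: 2/10 for 8b/10b (Gen1/2), 2/130 for 128b/130b (Gen3)
    """
    return gt_per_s * lanes * (1 - coding_overhead) / 8  # 8 bits per byte

gen2_x8 = pcie_payload_gb_s(5.0, 8, 2 / 10)    # Gen2 x8 ceiling: 4.0 GB/s
gen3_x8 = pcie_payload_gb_s(8.0, 8, 2 / 130)   # Gen3 x8 ceiling: ~7.9 GB/s
print(f"Gen2 x8: {gen2_x8:.2f} GB/s, Gen3 x8: {gen3_x8:.2f} GB/s")
```

Against the 4.0 GB/s Gen2 ×8 ceiling, the measured 2.4 GB/s corresponds to roughly 60% of the raw line rate before protocol overhead is deducted.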

    Developmental processes in silicon: an engineering perspective

    In this article, we analyze the requirements of developmental processes from the perspective of their implementation in digital hardware. After recalling the motivations for such an implementation, we concentrate separately on the two mechanisms (cellular division and cellular differentiation) that are exploited by biological systems to realize development. We then describe some of the current and projected solutions to implement such mechanisms in hardware, and conclude by analyzing the most interesting features of developmental approaches.
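The two mechanisms can be caricatured in a few lines of code. In the sketch below (genome contents and function names are illustrative, not from the article), every cell carries the same "genome", division propagates it to empty neighbouring positions, and differentiation selects each cell's function from its coordinates:

```python
# Same description in every cell; function chosen by position (a 2x2 motif)
GENOME = {(0, 0): "proc", (0, 1): "mem", (1, 0): "io", (1, 1): "spare"}

def develop(width, height):
    """Grow a grid of differentiated cells from a single seed cell."""
    grid = {}
    frontier = [(0, 0)]                       # the seed cell
    while frontier:
        x, y = frontier.pop()
        if (x, y) in grid:
            continue
        # differentiation: the cell's role depends only on where it sits
        grid[(x, y)] = GENOME[(x % 2, y % 2)]
        # division: copy the genome into the 4-neighbourhood
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height:
                frontier.append((nx, ny))
    return grid

grid = develop(4, 4)
```

The appeal for hardware is visible even here: growth is purely local (each cell talks only to its neighbours), which maps naturally onto arrays of identical configurable elements.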

    Genomic Data Clustering on FPGAs for Compression

    Current sequencing machine technology generates very large and redundant volumes of genomic data for each biological sample. Today, data and associated metadata are formatted in very large text file assemblies called FASTQ, carrying the information of billions of genome fragments referred to as “reads” and composed of strings of nucleotide bases with lengths in the range of a few tens to a few hundred bases. Compressing such data is required to manage the sheer amount of data soon to be generated. Doing so implies finding redundant information in the raw sequences. While most of it can be mapped onto the human reference genome and compresses well, about 10% of it usually does not map to any reference. For these orphan sequences, finding redundancy will help compression. Doing so requires clustering these reads, a very time-consuming process. In this context, this paper presents an FPGA implementation of a clustering algorithm for genomic reads, implemented on Pico Computing EX-700 AC-510 hardware, offering more than a 1000× speed-up over a CPU implementation while reducing power consumption by a factor of 700.
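Why clustering helps compression can be shown with a minimal illustration: once similar orphan reads are grouped, a downstream compressor can store each read as a small edit against a cluster representative instead of as a full string. The sketch below assumes reads are already aligned at offset 0 and considers substitutions only (both are simplifying assumptions, as is the helper name):

```python
def encode_against_representative(rep, read):
    """Encode a read as its substitutions relative to a cluster
    representative: a list of (position, base) differences."""
    return [(i, b) for i, (a, b) in enumerate(zip(rep, read)) if a != b]

rep  = "ACGTACGTACGT"
read = "ACGTACCTACGT"
diffs = encode_against_representative(rep, read)  # one substitution at pos 6
```

A read that differs from its representative in one base collapses to a single (position, base) pair, which is the redundancy the clustering stage exposes for the compressor.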