5 research outputs found

    Evaluation of single photon avalanche diode arrays for imaging fluorescence correlation spectroscopy : FPGA-based data readout and fast correlation analysis on CPUs, GPUs and FPGAs

    The metabolism of all living organisms, and specifically of their smallest constituent, the cell, is based on chemical reactions. A key factor determining the speed of these processes is the transport of reactants, energy, and information within and between the cells of an organism. It has been shown that the relevant transport processes also depend on the spatial organization of the cells. Such transport processes are typically investigated using fluorescence correlation spectroscopy (FCS) in combination with fluorescent labeling of the molecules of interest. In FCS, one observes the fluctuating fluorescence signal from a femtoliter-sized sub-volume within the sample (e.g. a cell). The variations in intensity arise from particles moving in and out of this sub-volume. By means of an autocorrelation analysis of the intensity signal, conclusions can be drawn regarding the concentration and the mobility parameters, such as the diffusion coefficient. Typically, one uses the laser focus of a confocal microscope for FCS measurements, but with this microscopy technique, FCS is limited to a single spot at a time. In order to conduct parallel multi-spot measurements, i.e. to create diffusion maps, FCS can be combined with light-sheet-based selective plane illumination microscopy (SPIM). This recent widefield microscopy technique allows observing a thin plane of a sample (1-3 µm thick), which can be positioned arbitrarily. Usually, FCS on a SPIM is done using fast electron-multiplying charge-coupled device (EMCCD) cameras, which offer only a limited temporal resolution (about 500 µs). Such a temporal resolution allows the motion of intermediately sized particles within a cell to be measured reliably, but renders the detection of even smaller molecules impossible. In this thesis, arrays of single photon avalanche diodes (SPADs) were used as detectors instead. 
Although SPAD-based image sensors still lack sensitivity, they provide a significantly better temporal resolution (1-10 µs for full frames) than is achievable with sensitive cameras, and thus appear to be ideal sensors for SPIM-FCS. In the course of this work, two recent SPAD arrays (developed in the groups of Prof. Edoardo Charbon, TU Delft, the Netherlands, and EPFL, Switzerland) were extensively characterized with regard to their suitability for SPIM-FCS. The evaluated SPAD arrays comprise 32x32 and 512x128 pixels and allow for frame rates of up to 300,000 and 150,000 frames per second, respectively. With these specifications, the latter array is one of the largest and fastest sensors currently available. During full-frame readout, it delivers a data rate of up to 1.2 GiB/s. For both arrays, suitable readout hardware based on field-programmable gate arrays (FPGAs) was designed. To cope with the high data rate and to allow real-time correlation analysis, correlation algorithms were implemented and characterized on the three major high-performance computing platforms, namely FPGAs, CPUs, and graphics processing units (GPUs). Of the three platforms, the GPU performed best in terms of correlation analysis, achieving a speedup of 2.6x over real time for the larger SPAD array. Besides the lack of sensitivity, which could be compensated for by microlenses, a major drawback of the evaluated SPAD arrays was their afterpulsing. The temporal structure of the afterpulsing was superimposed on the diffusion signal, so extracting diffusion properties from the autocorrelation analysis alone proved impossible. By additionally performing a spatial cross-correlation analysis, such influences could be significantly reduced. Furthermore, this approach allowed for the determination of absolute diffusion coefficients without prior calibration. With that, spatially resolved measurements of fluorescent proteins in living cells could be conducted successfully.
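    The correlation analysis at the heart of FCS, and the spatial cross-correlation used to suppress detector artefacts such as afterpulsing, can be sketched as follows. This is a minimal direct estimator in Python, not the optimised multi-tau GPU/FPGA implementation developed in the thesis; the function and trace names are illustrative.

```python
import numpy as np

def correlate(a, b, max_lag):
    """Normalised correlation G(tau) = <a(t) b(t+tau)> / (<a> <b>) - 1.

    Pass the same photon-count trace as a and b for an autocorrelation
    (one pixel), or traces from two neighbouring pixels for a spatial
    cross-correlation, which is insensitive to single-pixel afterpulsing.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(a)
    g = np.empty(max_lag)
    for tau in range(1, max_lag + 1):
        num = np.mean(a[: n - tau] * b[tau:])          # <a(t) b(t+tau)>
        den = a[: n - tau].mean() * b[tau:].mean()     # <a> <b>
        g[tau - 1] = num / den - 1.0
    return g
```

    For diffusing particles, G(tau) decays on the timescale a particle needs to cross the observation volume; fitting this decay yields the diffusion coefficient.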

    Variability-Aware Circuit Performance Optimisation Through Digital Reconfiguration

    This thesis proposes optimisation methods for improving the performance of circuits implemented on a custom reconfigurable hardware platform with knowledge of intrinsic variations, through the use of digital reconfiguration. With the continuing trend of transistor shrinking, stochastic variations become first-order effects, posing a significant challenge for device reliability. Traditional device models tend to be too conservative, as their margins are greatly increased to account for these variations. Variation-aware optimisation methods are therefore required to reduce the performance spread caused by these substrate variations. The Programmable Analogue and Digital Array (PAnDA) is a reconfigurable hardware platform which combines the traditional architecture of a Field Programmable Gate Array (FPGA) with the concept of configurable transistor widths, and is used in this thesis as a platform on which variability-aware circuits can be implemented. A model of the PAnDA architecture is designed to allow for rapid prototyping of devices, making the study of the effects of intrinsic variability on circuit performance – which requires expensive statistical simulations – feasible. This is achieved by importing statistically enhanced transistor performance data from RandomSPICE simulations into a model of the PAnDA architecture implemented in hardware. Digital reconfiguration is then used to explore the hardware resources available for performance optimisation. A bio-inspired optimisation algorithm is used to explore the large solution space more efficiently. Results from test circuits suggest that variation-aware optimisation can provide a significant reduction in the spread of the distribution of performance across various instances of circuits, as well as an increase in performance for each instance. 
Even if transistor geometry flexibility is not available, as is the case with traditional architectures, it is still possible to exploit the substrate variations to reduce spread and increase performance by means of function relocation.
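    The bio-inspired search over device configurations can be illustrated with a simple genetic algorithm. This is a generic sketch, not the specific algorithm used in the thesis; the bitstring encoding and the fitness callable (which on the real platform would be a measured circuit figure of merit, e.g. delay) are hypothetical.

```python
import random

def evolve(fitness, n_bits=32, pop_size=20, generations=50, mut_rate=0.05, seed=1):
    """Minimal genetic algorithm over binary configuration strings.

    fitness: callable mapping a tuple of bits to a score (higher is better),
    standing in for a hardware measurement of the configured circuit.
    """
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_bits)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        elite = ranked[: pop_size // 2]           # truncation selection + elitism
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = list(p1[:cut] + p2[cut:])
            for i in range(n_bits):               # bit-flip mutation
                if rng.random() < mut_rate:
                    child[i] ^= 1
            children.append(tuple(child))
        pop = elite + children
    return max(pop, key=fitness)
```

    Because each candidate evaluation maps to one reconfiguration-and-measure cycle on the hardware, population-based search amortises well over many device instances, each converging to a configuration suited to its own intrinsic variations.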

    Topical Workshop on Electronics for Particle Physics

    The purpose of the workshop was to present results and original concepts for electronics research and development relevant to particle physics experiments, as well as to accelerator and beam instrumentation at future facilities; to review the status of electronics for the LHC experiments; to identify and encourage common efforts for the development of electronics; and to promote information exchange and collaboration in the relevant engineering and physics communities.

    Binning optimization based on SSTA for transparently-latched circuits

    No full text

    Online learning on the programmable dataplane

    This thesis makes the case for managing computer networks with data-driven methods (automated statistical inference and control based on measurement data and runtime observations) and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, which are currently dominated by the use of hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, while their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network. To justify this argument, I advance the state of the art in data-driven defence of networks, in novel dataplane-friendly online reinforcement learning algorithms, and in in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network, and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. 
I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation to histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits in state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as of individual algorithms.
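    A dataplane-friendly, integer-only learning update might look like the following sketch: one tabular Q-learning step in Q8.8 fixed point with a power-of-two learning rate. This is an illustrative classical (non-neural) update written in Python for readability, not the thesis's actual algorithm or SmartNIC code; the action set, constants, and table layout are assumptions.

```python
ACTIONS = (0, 1)     # hypothetical action set (e.g. drop / forward)
SHIFT = 8            # Q8.8 fixed point: real value = raw / 2**SHIFT
ALPHA_SHIFT = 4      # learning rate alpha = 1/16, a power of two
GAMMA_NUM = 230      # discount gamma ~= 230/256 ~= 0.9
GAMMA_SHIFT = 8

def q_update(q, state, action, reward_fp, next_state):
    """One tabular Q-learning step using only integer adds, multiplies, shifts.

    q maps (state, action) -> Q-value in Q8.8 fixed point; reward_fp is the
    reward already encoded in Q8.8. Multiplying by alpha and gamma reduces to
    constant products and right shifts, operations a match-action pipeline or
    SmartNIC core can perform without floating point.
    """
    best_next = max(q.get((next_state, a), 0) for a in ACTIONS)
    target = reward_fp + ((GAMMA_NUM * best_next) >> GAMMA_SHIFT)
    old = q.get((state, action), 0)
    q[(state, action)] = old + ((target - old) >> ALPHA_SHIFT)
```

    Repeated self-transitions with a constant reward drive the table entry toward reward / (1 - gamma), up to the truncation error introduced by the shifts, which is the price of avoiding floating point on-path.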