
    A communications system perspective for dynamic mode atomic force microscopy, with applications to high-density storage and nanoimaging

    In recent times, the atomic force microscope (AFM) has been used in fields such as biology, chemistry, physics, and medicine to obtain atomic-level images. The AFM is a high-resolution microscope that can provide resolution on the order of fractions of a nanometer. It has applications in material characterization, probe-based data storage, nano-imaging, and related areas. The prevalent mode of using the AFM is the static mode, where the cantilever is in continuous contact with the sample; this is harsh on both the probe and the sample. The problem of probe and sample wear can be partly addressed by using the dynamic mode operation with high quality factor cantilevers. In the dynamic mode operation, the cantilever is forced sinusoidally using a dither piezo. The oscillating cantilever gently taps the sample, which reduces probe-sample wear. In this dissertation, we demonstrate that viewing the dynamic mode operation from a communication systems perspective can yield large gains in nano-interrogation speed and fidelity. In the first part of the dissertation, we consider a data storage system that operates by encoding information as topographic profiles on a polymer medium. A cantilever probe with a sharp tip (a few nm in radius) is used to create and sense the presence of topographic profiles, resulting in a density of a few Tb per square inch. The use of the static mode is harsh on the probe and the media. In this work, the high quality factor dynamic mode operation, which alleviates probe-media wear, is analyzed. The read operation is modeled as a communication channel that incorporates system memory due to inter-symbol interference and the cantilever state. We demonstrate an appropriate level of abstraction of this complex nanoscale system that obviates the need for an involved physical model. Next, a solution to the maximum likelihood sequence detection problem based on the Viterbi algorithm is devised. Experimental and simulation results demonstrate that the performance of this detector is several orders of magnitude better than that of other existing schemes. In the second part of the dissertation, we consider another application of the dynamic mode AFM in the field of nano-imaging. Nano-imaging has played a vital role in biology, chemistry, and physics, as it enables interrogation of material with sub-nanometer resolution. However, current nano-imaging techniques are too slow to be useful in high-speed applications of interest, such as studying the evolution of certain biological processes that unfold over very short time scales. In this work, we present a high-speed one-bit imaging technique using the dynamic mode AFM with a high quality factor cantilever. We propose a communication channel model for the cantilever-based nano-imaging system. Next, we devise an imaging algorithm that incorporates a prior learned from the previous scan line while detecting the features on the current scan line. Experimental results demonstrate that our proposed algorithm provides significantly better image resolution than current nano-imaging techniques at high scanning speed. While modeling the probe-based data storage system and the cantilever-based nano-imaging system, it has been observed that the channel models behave like an intersymbol-interference (ISI) channel with data-dependent, time-correlated noise. The Viterbi algorithm can be adapted to perform maximum likelihood sequence detection in such channels.
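As a rough illustration of how such a detector is organized, the Python sketch below runs the Viterbi recursion over a two-state trellis with a simple squared-error branch metric for a toy two-tap ISI channel under white Gaussian noise; the dissertation's data-dependent, time-correlated noise model would only change the branch metric, and all numbers here are placeholders.

```python
# Minimal Viterbi maximum likelihood sequence detector (illustrative sketch).
# State = previous bit; the branch metric assumes a toy 2-tap ISI channel with
# white Gaussian noise. The dissertation's detector uses a data-dependent,
# time-correlated noise model, which would change only branch_metric().
import numpy as np

def branch_metric(obs, prev_bit, cur_bit, h=(1.0, 0.5)):
    """Squared-error metric for y_k = h0*b_k + h1*b_{k-1} + n_k."""
    expected = h[0] * cur_bit + h[1] * prev_bit
    return (obs - expected) ** 2

def viterbi_detect(y):
    """Return the most likely bit sequence for observations y under the toy model."""
    states = (0, 1)                        # state = previous bit
    cost = {s: 0.0 for s in states}        # accumulated path metrics
    back = []                              # backpointers, one dict per time step
    for obs in y:
        new_cost, ptr = {}, {}
        for cur in states:                 # candidate bit at the current time step
            cands = [(cost[prev] + branch_metric(obs, prev, cur), prev) for prev in states]
            new_cost[cur], ptr[cur] = min(cands)
        cost = new_cost
        back.append(ptr)
    s = min(cost, key=cost.get)            # best terminal state
    bits = []
    for ptr in reversed(back):             # trace back through the stored pointers
        bits.append(s)
        s = ptr[s]
    return bits[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = rng.integers(0, 2, 20)
    y = 1.0 * b + 0.5 * np.r_[0, b[:-1]] + 0.1 * rng.standard_normal(20)
    print("sent    :", b.tolist())
    print("detected:", viterbi_detect(y))
```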
However, the problem of finding an analytical upper bound on the bit error rate of the Viterbi detector in this case has not been fully investigated. In the third part of the dissertation, we consider a subset of the class of ISI channels with data-dependent Gauss-Markov noise. We derive an upper bound on the pairwise error probability (PEP) between the transmitted bit sequence and the decoded bit sequence that can be expressed as a product of functions depending on the current and previous states in the (incorrect) decoded sequence and the (correct) transmitted sequence. In general, the PEP is asymmetric. The average BER over all possible bit sequences is then determined using a pairwise state diagram. Simulation results demonstrate that the analytic bound on the BER is tight in the high-SNR regime.
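Schematically, the stated structure of the bound can be rendered as follows; F, the states sigma_k, the error-event set E, and the weights w(e) are generic placeholders rather than the dissertation's exact quantities.

```latex
% Product-form bound on the pairwise error probability and the resulting
% union bound on the BER (generic notation, placeholders only).
P\bigl(\mathbf{b} \to \hat{\mathbf{b}}\bigr)
  \;\le\; \prod_{k} F\bigl(\sigma_{k-1}, \sigma_{k};\, \hat{\sigma}_{k-1}, \hat{\sigma}_{k}\bigr),
\qquad
\mathrm{BER}
  \;\le\; \frac{1}{N} \sum_{e \in \mathcal{E}} w(e)\, P\bigl(\mathbf{b} \to \hat{\mathbf{b}}(e)\bigr),
```

where the sigma terms are the states along the correct and decoded trellis paths, w(e) counts the bit errors in error event e, and the sum over E is evaluated via the pairwise state diagram mentioned above.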

    Quelques Aspects des Réseaux Multi-Cellules Multi-Utilisateurs MIMO : Délai, Conception d'Emetteur-Récepteur, Sélection d'Utilisateurs et Topologie

    In order to meet the ever-growing need for capacity in wireless networks, transmission techniques and the system models used to study their performance have rapidly evolved. From single-user, single-antenna point-to-point communications to modern multi-cell, multi-antenna cellular networks, there have been large advances in technology. Along the way, several assumptions have been made, either to obtain more realistic models or to allow simpler analysis. We analyze three aspects of actual networks and try to benefit from them when possible or, conversely, to mitigate their negative impact. This sometimes corrects overly optimistic results, for instance when delay in channel state information (CSI) acquisition is no longer neglected. However, it sometimes also corrects overly pessimistic results, for instance when, in a broadcast channel (BC), the number of users is no longer limited to the number of transmit antennas, or when partial connectivity is taken into account in cellular networks. We first focus on the delay in CSI acquisition because it greatly impairs the channel multiplexing gain if nothing is done to use the dead time during which the transmitters are not transmitting and do not yet have the CSI. We review and propose different schemes to use this dead time to improve the multiplexing gain in both the BC and the interference channel (IC). We evaluate the more relevant net multiplexing gain, taking into account the training and feedback overheads. The results are surprising because potential schemes to fight delay turn out to be burdened by impractical overheads in the BC. In the IC, an optimal scheme is proposed; it avoids any loss of multiplexing gain even for significant feedback delay. Concerning the number of users, we propose a new criterion for greedy user selection in a BC to benefit from multi-user diversity, and two interference alignment schemes for the IC to benefit from having multiple users in each cell. Finally, partially connected cellular networks are considered, and schemes that benefit from this partial connectivity to increase the multiplexing gain are proposed.
In order to respond to the ever-growing need for capacity in wireless networks, transmission techniques, and the models used to study them, have evolved rapidly. From simple single-antenna point-to-point communications we have moved to today's cellular networks: multiple cells and multiple antennas at both transmission and reception. Progressively, several assumptions have been made, either to obtain realistic models or, sometimes, to allow simpler analysis. We examine and analyze the impact of three aspects of real networks. This sometimes amounts to correcting overly optimistic results, for example when the delay in acquiring the channel coefficients is no longer neglected. It sometimes amounts to correcting overly pessimistic results, for example when, in a broadcast channel (BC), the number of users is no longer limited to the number of transmit antennas, or when partial connectivity is taken into account in cellular networks. More precisely, in this thesis we concentrate on the delay in the acquisition of the channel coefficients by the transmitter, since taking it into account greatly degrades the multiplexing gain of the channel if nothing is done to use efficiently the dead time during which the transmitters do not transmit and do not yet have knowledge of the channel. We examine and propose transmission schemes that use this dead time efficiently in order to improve the multiplexing gain. We evaluate the more relevant net multiplexing gain, taking into account the time spent sending training symbols and feeding them back to the transmitters. The results are surprising, since the schemes designed to combat the delay in channel knowledge turn out to be impractical because of the cost of sharing that knowledge. In multi-cell networks, an optimal transmission scheme is proposed; it incurs no loss of multiplexing gain even with a significant delay in channel knowledge. Concerning the number of users, we propose a new criterion for user selection in single-cell configurations in order to benefit from multi-user diversity, and we propose two interference alignment schemes for multi-cell systems in order to benefit from the fact that there are generally several users in each cell. Finally, partially connected cellular networks are studied, and schemes that exploit the partial connectivity to increase the multiplexing gain are proposed.
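As an illustration of the greedy user-selection step, the Python sketch below picks users for a multi-antenna broadcast channel one at a time using a standard semi-orthogonality criterion; this generic criterion and the random channels are assumptions, not the new criterion proposed in the thesis.

```python
# Generic greedy user selection for a multi-antenna broadcast channel (sketch).
# At each step we add the user whose channel has the largest component
# orthogonal to the span of the already-selected users' channels. This is a
# standard textbook criterion used for illustration only.
import numpy as np

def greedy_user_selection(H, n_select):
    """H: (n_users, n_tx) complex channel matrix; returns the selected user indices."""
    n_users, n_tx = H.shape
    selected = []
    basis = np.zeros((0, n_tx), dtype=complex)   # orthonormal rows spanning chosen channels
    for _ in range(min(n_select, n_tx)):
        best_u, best_gain, best_res = None, -1.0, None
        for u in range(n_users):
            if u in selected:
                continue
            coeffs = basis.conj() @ H[u]          # projection onto the selected subspace
            residual = H[u] - coeffs @ basis      # component orthogonal to it
            gain = np.linalg.norm(residual) ** 2
            if gain > best_gain:
                best_u, best_gain, best_res = u, gain, residual
        if best_u is None or best_gain < 1e-12:   # no useful user left
            break
        selected.append(best_u)
        basis = np.vstack([basis, best_res / np.linalg.norm(best_res)])
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    H = (rng.standard_normal((10, 4)) + 1j * rng.standard_normal((10, 4))) / np.sqrt(2)
    print(greedy_user_selection(H, n_select=4))
```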

    Orthogonal frequency division multiplexing multiple-input multiple-output automotive radar with novel signal processing algorithms

    Advanced driver assistance systems that actively assist the driver based on environment perception have advanced significantly in recent years. Along with this development, autonomous driving has become a major research topic that ultimately aims at the development of fully automated, driverless vehicles. Since such applications rely on environment perception, their ever-increasing sophistication imposes growing demands on environmental sensors. Specifically, the need for reliable environment sensing necessitates the development of more sophisticated, high-performance radar sensors. A further vital challenge, increased radar interference, arises with the growing market penetration of vehicular radar technology. To address these challenges, novel approaches and radar concepts are required in many respects. As modulation is one of the key factors determining radar performance, research into new modulation schemes for automotive radar becomes essential. A topic that has emerged in recent years is radar operating with digitally generated waveforms based on orthogonal frequency division multiplexing (OFDM). Initially, the use of OFDM for radar was motivated by the combination of radar with communication via modulation of the radar waveform with communication data. Subsequent works studied the use of OFDM as a modulation scheme in many different radar applications, from adaptive radar processing to synthetic aperture radar. This suggests that the flexibility provided by OFDM-based digital generation of radar waveforms can potentially enable novel radar concepts that are well suited for future automotive radar systems. This thesis aims to explore the perspectives of OFDM as a modulation scheme for high-performance, robust, and adaptive automotive radar. To this end, novel signal processing algorithms and OFDM-based radar concepts are introduced in this work. The main focus of the thesis is on high-end automotive radar applications, while applicability to real-time implementation is of primary concern. The first part of this thesis focuses on signal processing algorithms for distance-velocity estimation. As a foundation for the algorithms presented in this thesis, a novel and rigorous signal model for OFDM radar is introduced. Based on this signal model, the limitations of state-of-the-art OFDM radar signal processing are pointed out. To overcome these limitations, we propose two novel signal processing algorithms that build upon the conventional processing and extend it by more sophisticated modeling of the radar signal. The first method, named all-cell Doppler compensation (ACDC), overcomes the Doppler sensitivity problem of OFDM radar. The core idea of this algorithm is the scenario-independent correction of Doppler shifts for the entire measurement signal. Since the Doppler effect is a major concern for OFDM radar and influences the radar parametrization, its complete compensation opens new perspectives for OFDM radar. It not only achieves improved, Doppler-independent performance but also enables a more favorable system parametrization. The second distance-velocity estimation algorithm introduced in this thesis addresses the issue of range and Doppler frequency migration due to the target's motion during the measurement. For conventional radar signal processing, these migration effects set an upper limit on the simultaneously achievable distance and velocity resolution.
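For context, the conventional OFDM radar distance-velocity processing that these algorithms extend can be sketched in a few lines of Python; the QPSK payload, the matrix sizes, and the single synthetic target below are illustrative assumptions, and the ACDC/ACMC refinements themselves are not reproduced here.

```python
# Conventional OFDM radar range-Doppler processing (illustrative sketch).
# Dividing the received modulation symbols by the transmitted ones removes the
# payload; an IFFT across subcarriers yields the range (distance) profile and an
# FFT across OFDM symbols the Doppler (velocity) profile.
import numpy as np

def range_doppler_map(tx_symbols, rx_symbols):
    """tx_symbols, rx_symbols: (n_subcarriers, n_symbols) complex matrices."""
    quotient = rx_symbols / tx_symbols             # element-wise channel estimate
    range_profile = np.fft.ifft(quotient, axis=0)  # delay / distance dimension
    rd_map = np.fft.fft(range_profile, axis=1)     # Doppler / velocity dimension
    return np.abs(rd_map)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_sc, n_sym = 256, 64
    tx = np.exp(1j * np.pi / 2 * rng.integers(0, 4, (n_sc, n_sym)))   # QPSK payload
    # Single point target: linear phase over subcarriers (delay) and symbols (Doppler).
    k = np.arange(n_sc)[:, None]
    m = np.arange(n_sym)[None, :]
    rx = tx * np.exp(-2j * np.pi * 0.1 * k) * np.exp(2j * np.pi * 0.05 * m)
    rd = range_doppler_map(tx, rx)
    print(np.unravel_index(rd.argmax(), rd.shape))  # peak expected near (26, 3)
```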
The proposed method, named all-cell migration compensation (ACMC), extends the underlying OFDM radar signal model to account for the target motion. As a result, the effect of migration is compensated implicitly for the entire radar measurement, which leads to an improved distance and velocity resolution. Simulations show the effectiveness of the proposed algorithms in overcoming the two major limitations of conventional OFDM radar signal processing. As multiple-input multiple-output (MIMO) radar is a well-established technology for improving direction-of-arrival (DOA) estimation, the second part of this work studies multiplexing methods for OFDM radar that enable the simultaneous use of multiple transmit antennas for MIMO radar processing. After discussing the drawbacks of known multiplexing methods, we introduce two advanced multiplexing schemes for OFDM-MIMO radar based on non-equidistant interleaving of OFDM subcarriers. These multiplexing approaches exploit the multicarrier structure of OFDM to generate orthogonal waveforms that enable simultaneous operation of multiple MIMO channels occupying the same bandwidth. The primary advantage of these methods is that, despite multiplexing, they maintain all original radar parameters (resolution and unambiguous range in distance and velocity) for each individual MIMO channel. To obtain favorable interleaving patterns with low sidelobes, we propose an optimization approach based on genetic algorithms. Furthermore, to overcome the drawback of increased sidelobes due to subcarrier interleaving, we study the applicability of sparse processing methods for distance-velocity estimation from measurements of non-equidistantly interleaved OFDM-MIMO radar, and we introduce a novel sparsity-based frequency estimation algorithm designed for this purpose. The third topic addressed in this work is the robustness of OFDM radar to interference from other radar sensors. In this part of the work we study the interference robustness of OFDM radar and propose novel interference mitigation techniques. The first interference suppression algorithm we introduce exploits the robustness of OFDM to narrowband interference by dropping subcarriers strongly corrupted by interference from the evaluation. To avoid an increase in sidelobes due to the missing subcarriers, their values are reconstructed from the neighboring ones using linear prediction methods. As a further measure for increasing the interference robustness in a more universal manner, we propose the extension of OFDM radar with cognitive features. We introduce the general concept of a cognitive radar that is capable of adapting to the current spectral situation to avoid interference. Our work focuses mainly on waveform adaptation techniques; we propose adaptation methods that allow dynamic interference avoidance without adversely affecting the estimation performance. The final part of this work focuses on a prototypical implementation of OFDM-MIMO radar. With the constructed prototype, the feasibility of OFDM for high-performance radar applications is demonstrated. Furthermore, the algorithms presented in this thesis are validated experimentally with this radar prototype. The measurements confirm the applicability of the proposed algorithms and concepts for real-world automotive radar implementations.
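The following Python sketch illustrates the general idea behind non-equidistant subcarrier interleaving for OFDM-MIMO: each transmit antenna gets a disjoint, randomly interleaved subcarrier subset spanning the full bandwidth, and its range profile is evaluated only on those subcarriers. The random pattern is a stand-in for the genetic-algorithm-optimized patterns of the thesis, and all sizes are illustrative.

```python
# Non-equidistant subcarrier interleaving across MIMO transmit channels (sketch).
# Every Tx antenna receives a non-equidistant subset of subcarriers that still
# spans the whole bandwidth, so each channel keeps the full range resolution and
# unambiguous range at the cost of higher sidelobes.
import numpy as np

def interleave_subcarriers(n_subcarriers, n_tx, rng):
    """Randomly partition subcarrier indices into n_tx disjoint, interleaved sets."""
    perm = rng.permutation(n_subcarriers)
    return [np.sort(perm[tx::n_tx]) for tx in range(n_tx)]

def sparse_range_profile(channel_symbols, subcarrier_idx, n_subcarriers, n_bins):
    """Direct (non-uniform) inverse DFT evaluated only on the occupied subcarriers."""
    n = np.arange(n_bins)[:, None]                        # range bins
    k = subcarrier_idx[None, :]                           # occupied subcarriers
    steering = np.exp(2j * np.pi * k * n / n_subcarriers)
    return np.abs(steering @ channel_symbols) / len(subcarrier_idx)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_sc, n_tx = 1024, 4
    sets = interleave_subcarriers(n_sc, n_tx, rng)
    idx = sets[0]
    sym = np.exp(-2j * np.pi * 0.2 * idx)        # one point target seen by Tx antenna 0
    profile = sparse_range_profile(sym, idx, n_sc, n_bins=n_sc)
    print(profile.argmax())                       # peak expected near 0.2 * 1024 = 205
```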

    Modelling and Identification of Immune Cell Migration during the Inflammatory Response

    Neutrophils are the white blood cells that play a crucial role in the response of the innate immune system to tissue injuries or infectious threats. Their rapid arrival at the damaged area and timely removal from it define the success of the inflammatory process. Therefore, understanding neutrophil migratory behaviour is essential for the therapeutic regulation of multiple inflammation-mediated diseases. Recent years have seen rapid development of various in vivo models of inflammation that provide remarkable insight into neutrophil function. The main drawback of in vivo microscopy is that it usually focuses on the moving cells and obscures the external environment that drives their migration. To evaluate the effect of a particular treatment strategy on neutrophil behaviour, it is necessary to recover information about the cell responsiveness and the complex extracellular environment from limited experimental data. This thesis addresses this inference problem by developing a dynamical modelling and estimation framework that quantifies the relationship between an individual migrating cell and the global environment. The first part of the thesis is concerned with the estimation of the hidden chemical environment that modulates the observed cell migration during the inflammatory response in the injured tail fin of zebrafish larvae. First, a dynamical model of the neutrophil responding to the chemoattractant concentration is developed based on the potential field paradigm of object-environment interaction. This representation serves as a foundation for a hybrid model that is proposed to account for the heterogeneous behaviour of an individual cell throughout the migration process. An approximate maximum likelihood estimation framework is derived to estimate the hidden environment and the states of multiple hybrid systems simultaneously. The developed framework is then used to analyse neutrophil tracking data observed in vivo under the assumption that each neutrophil at each time can be in one of three migratory modes: responding to the environment, moving randomly, or stationary. The second part of the thesis examines the process of neutrophil migration at the subcellular scale, focusing on the subcellular mechanism that translates local environment sensing into cell shape change. A state space model is formulated based on a hypothesis that links local protrusions of the cell membrane to the concentration of an intracellular pro-inflammatory signalling protein. The developed model is tested against local concentration data extracted from in vivo time-lapse images via the classical expectation-maximisation algorithm.
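As a rough illustration of the kind of hybrid, potential-field-driven migration model described above, the Python sketch below simulates a single cell switching between three migratory modes via a simple Markov chain; the attractant field, gains, and transition probabilities are invented placeholders, not the quantities identified in the thesis.

```python
# Hybrid cell-migration simulation (illustrative sketch): a cell switches between
# three modes (responding to a chemoattractant field, random motion, stationary)
# via a simple Markov chain. Field shape, gains, and transition probabilities are
# placeholder assumptions, not fitted values from the thesis.
import numpy as np

MODES = ("responding", "random", "stationary")
TRANSITION = np.array([[0.90, 0.05, 0.05],    # row = current mode, column = next mode
                       [0.10, 0.80, 0.10],
                       [0.10, 0.10, 0.80]])

def attractant_gradient(pos, source=np.array([50.0, 0.0])):
    """Gradient of a smooth attractant concentration peaking at the wound site."""
    d = source - pos
    return d / (1.0 + np.linalg.norm(d))

def simulate(n_steps=200, dt=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pos, mode = np.zeros(2), 0
    track = [pos.copy()]
    for _ in range(n_steps):
        mode = rng.choice(3, p=TRANSITION[mode])
        if MODES[mode] == "responding":        # drift up the attractant gradient
            step = 1.5 * attractant_gradient(pos) * dt + 0.2 * rng.standard_normal(2)
        elif MODES[mode] == "random":          # undirected random motility
            step = 0.5 * rng.standard_normal(2)
        else:                                  # stationary: small jitter only
            step = 0.05 * rng.standard_normal(2)
        pos = pos + step
        track.append(pos.copy())
    return np.array(track)

if __name__ == "__main__":
    print(simulate()[-1])   # final position should have drifted towards the source
```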

    3D Medical Image Lossless Compressor Using Deep Learning Approaches

    Accelerated information processing, communication, and storage are major requirements of the big-data era. With the extensive rise in data availability, ease of information acquisition, and growing data rates, efficient handling becomes a critical challenge. Even with advanced hardware developments and the availability of multiple Graphics Processing Units (GPUs), there is still strong demand to utilise these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially considering modern scanners, which produce higher-resolution and more densely sampled medical images every year, with increasing requirements for massive storage capacity. The bottleneck in data transmission and storage would essentially be handled with an effective compression method. Since medical information is critical and plays an influential role in diagnostic accuracy, exact reconstruction with no loss in quality is strongly encouraged; this is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks, including data compression, while achieving state-of-the-art results, there are tremendous opportunities for contributions. While considerable efforts have been made to address lossy performance using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly. Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Using such 3D local sampling information efficiently exploits spatial similarities and redundancies in a volumetric medical context. The proposed NN-based data predictor is trained to minimise the differences with the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction. Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function from the spatial medical domain (16-bit depth). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood while utilising samples taken from various scanning settings. We evaluate our proposed MedZip models in losslessly compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, compared to other state-of-the-art lossless compression standards. This work also investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for losslessly compressing 3D medical images (16-bit depth). The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments is also proposed, allowing models to run in parallel without much loss in compression performance.
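To make the prediction-then-entropy-coding idea concrete, here is a dependency-light Python sketch: each voxel is predicted from a causal neighbourhood and only the integer residuals would be passed to the arithmetic coder. A least-squares linear predictor stands in for the LSTM used in MedZip, and the neighbourhood offsets and sizes are assumptions.

```python
# Lossless prediction-based compression of a 3D volume (illustrative sketch).
# Each voxel is predicted from previously seen (causal) neighbours; only the
# integer residual would be entropy-coded (arithmetic coding in the thesis).
# A linear predictor replaces the LSTM here to keep the sketch dependency-free.
import numpy as np

# Causal neighbour offsets (already-decoded voxels in raster order); assumed, not from the thesis.
OFFSETS = [(0, 0, -1), (0, -1, 0), (-1, 0, 0), (0, -1, -1), (-1, 0, -1), (-1, -1, 0)]

def build_contexts(volume):
    """Return (contexts, targets) for all voxels with a full causal neighbourhood."""
    z, y, x = volume.shape
    ctx, tgt = [], []
    for k in range(1, z):
        for j in range(1, y):
            for i in range(1, x):
                ctx.append([volume[k + dz, j + dy, i + dx] for dz, dy, dx in OFFSETS])
                tgt.append(volume[k, j, i])
    return np.array(ctx, dtype=np.float64), np.array(tgt, dtype=np.float64)

def prediction_residuals(volume):
    """Fit a linear predictor on the causal contexts and return rounded residuals."""
    ctx, tgt = build_contexts(volume)
    coeffs, *_ = np.linalg.lstsq(ctx, tgt, rcond=None)
    residuals = tgt - np.rint(ctx @ coeffs)
    return residuals.astype(np.int64)      # these would feed the arithmetic coder

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    vol = np.cumsum(rng.integers(0, 3, (16, 16, 16)), axis=2).astype(np.uint16)
    res = prediction_residuals(vol)
    print("mean |residual|:", np.abs(res).mean())
```

Small residuals concentrate probability mass and therefore compress well under arithmetic coding, which is the point of the prediction stage.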
Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI). To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance sampling scheme was evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
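A minimal Python sketch of gradient-score-based importance sampling of training voxels is given below; the specific weighting (raw gradient magnitude) and the toy volume are assumptions standing in for the weighted gradient scores described above.

```python
# Gradient-weighted importance sampling of training voxels (illustrative sketch).
# Voxels with large local gradient magnitude (edges, structure) are sampled more
# often than flat background when drawing training positions; the weighting rule
# is a simple assumption, not the thesis's exact scoring.
import numpy as np

def gradient_scores(volume):
    """Per-voxel gradient magnitude used as an (unnormalised) sampling weight."""
    gz, gy, gx = np.gradient(volume.astype(np.float64))
    return np.sqrt(gz**2 + gy**2 + gx**2) + 1e-6    # keep flat regions reachable

def sample_training_voxels(volume, n_samples, rng):
    """Draw voxel coordinates with probability proportional to their gradient score."""
    scores = gradient_scores(volume).ravel()
    probs = scores / scores.sum()
    flat_idx = rng.choice(volume.size, size=n_samples, replace=False, p=probs)
    return np.stack(np.unravel_index(flat_idx, volume.shape), axis=1)   # (n, 3) coords

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    vol = np.zeros((32, 32, 32))
    vol[8:24, 8:24, 8:24] = 100.0                   # a bright cube with sharp edges
    coords = sample_training_voxels(vol, n_samples=500, rng=rng)
    # Most samples should lie near the cube boundary where gradients are largest.
    print(coords[:5])
```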