
    Fast, Three-Dimensional Fluorescence Imaging of Living Cells

    This thesis focuses on multi-plane fluorescence microscopy for fast live-cell imaging. To improve the performance of multi-plane microscopy, I developed new image analysis methods and used them to measure and analyze the movements of cardiomyocytes and Dictyostelium discoideum cells. The multi-plane setup is based on a conventional wide-field microscope with a custom multiple beam-splitter in the detection path. This prism creates separate images of eight distinct focal planes in the sample. Since the 3D volume is imaged without scanning, three-dimensional imaging at very high speed becomes possible. However, as in conventional wide-field microscopy, the "missing cone" of spatial frequencies along the optical axis in the optical transfer function (OTF) prevents optical sectioning in such a microscope. This is in stark contrast to other truly three-dimensional imaging modalities such as confocal and light-sheet microscopy. To overcome the lack of optical sectioning, I developed a new deconvolution method. Deconvolution describes methods that restore or sharpen an image based on physical assumptions and knowledge of the imaging process; such methods have been widely used to sharpen images from microscopes and telescopes. The recently developed SUPPOSe algorithm is a deconvolution algorithm that uses a set of numerous virtual point sources. It reconstructs an image by distributing these point sources in space and optimizing their positions so that the resulting image reproduces the measured data as well as possible. SUPPOSe had never been used for 3D images. Compared to other algorithms, this method has superior performance when the number of pixels is increased by interpolation. In this work, I extended the method to work with 3D image data. The resulting 3D-SUPPOSe program is suitable for analyzing data from our multi-plane setup, which has only eight vertically aligned image planes.
    Furthermore, for accurate reconstruction of 3D images, I studied a method for correcting the relative brightness of the image planes that constitute an image, and I developed a method for measuring the movement of point emitters in 3D space. Using these methods, I measured and analyzed the beating motion of cardiomyocytes and the chemotaxis of Dictyostelium discoideum. Cardiomyocytes are the cells of the heart muscle and consist of repetitive sarcomeres. These cells are characterized by fast, periodic movements, and so far their dynamics have been studied only with two-dimensional imaging. In this thesis, the beating motion was analyzed by tracing the spatial distribution of the so-called z-discs, one of the constituent components of cardiomyocytes. I found that the vertical distribution of α-actinin-2 in a single z-disc changes very rapidly, which may serve as a starting point for a better understanding of the motion of cardiomyocytes. Dictyostelium discoideum is a well-established single-cell model organism that migrates along the gradient of a chemoattractant. Much research has been conducted to understand the mechanism of chemotaxis, and many efforts have been made to understand the role of actin in chemotactic motion. By suppressing the motor protein myosin, a cell line was created that prevents the formation of normal actin filaments. In these myosin-null cells, F-actin moves in a flow-like manner and induces cell movement. In this study, I imaged the actin dynamics and analyzed the flow using the newly developed deconvolution and flow estimation methods. Based on this analysis, the spatio-temporal correlation between pseudopod formation and dynamics, on the one hand, and actin flow, on the other, was investigated.
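    The core idea of a SUPPOSe-style fit can be illustrated in one dimension: model the object as a small set of equal-intensity virtual point sources and optimize their positions so that, after blurring with the known PSF, the model reproduces the measured data. This is a minimal sketch under assumed parameters (Gaussian PSF, Nelder-Mead search); the actual algorithm uses a genetic optimization and many more sources.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 10.0, 200)          # detector coordinate (illustrative)
sigma_psf = 0.4                          # assumed known Gaussian PSF width

def forward(positions):
    """Image of equal-intensity point sources convolved with the PSF."""
    return sum(np.exp(-(x - p) ** 2 / (2 * sigma_psf ** 2)) for p in positions)

# Simulated measurement: two true sources separated by less than 2*sigma_psf.
true_positions = np.array([4.8, 5.5])
rng = np.random.default_rng(0)
measured = forward(true_positions) + 0.01 * rng.standard_normal(x.size)

# Fit K = 2 virtual source positions by least squares on the residual image.
def loss(p):
    return np.sum((forward(p) - measured) ** 2)

result = minimize(loss, x0=np.array([4.0, 6.0]), method="Nelder-Mead")
print(np.sort(result.x))                 # recovered positions, near [4.8, 5.5]
```

Because the positions are continuous variables rather than pixel values, the reconstruction is not tied to the detector grid, which is why this family of methods behaves well under interpolation.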

    Unraveling the Thousand Word Picture: An Introduction to Super-Resolution Data Analysis

    Super-resolution microscopy provides direct insight into fundamental biological processes occurring at length scales smaller than light's diffraction limit. The analysis of data at such scales has brought statistical and machine learning methods into the mainstream. Here we provide a survey of data analysis methods, starting from an overview of basic statistical techniques underlying the analysis of super-resolution and, more broadly, imaging data. We subsequently break down the analysis of super-resolution data into four problems: the localization problem, the counting problem, the linking problem, and what we have termed the interpretation problem.
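    The first of these, the localization problem, is typically solved by fitting a model PSF to each diffraction-limited spot to recover the emitter position with sub-pixel precision. A minimal sketch, assuming a 2D Gaussian PSF approximation and illustrative parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

yy, xx = np.mgrid[0:15, 0:15].astype(float)   # a 15x15 pixel region of interest

def gaussian2d(coords, x0, y0, amp, sigma, offset):
    """Symmetric 2D Gaussian plus constant background, flattened for curve_fit."""
    xg, yg = coords
    return (amp * np.exp(-((xg - x0) ** 2 + (yg - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

# Simulated spot at a sub-pixel position with Poisson shot noise.
true_x, true_y = 7.3, 6.8
rng = np.random.default_rng(1)
spot = rng.poisson(gaussian2d((xx, yy), true_x, true_y, 200, 1.5, 10)
                   .reshape(15, 15)).astype(float)

# Localization: least-squares fit of the Gaussian model to the spot.
popt, _ = curve_fit(gaussian2d, (xx, yy), spot.ravel(),
                    p0=(7.0, 7.0, 150.0, 2.0, 5.0))
print(popt[:2])   # estimated (x0, y0), well below one pixel from the truth
```

The counting and linking problems then operate on many such localizations, deciding how many emitters produced them and which localizations belong to the same emitter across frames.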

    Statistical Nested Sensor Array Signal Processing

    Source number detection and direction-of-arrival (DOA) estimation are two major applications of sensor arrays. Both applications are often confined to uniform linear arrays (ULAs), which are expensive and make it difficult to achieve a wide aperture. Moreover, a ULA with N scalar sensors can resolve at most N − 1 sources. On the other hand, a systematic approach was recently proposed to achieve O(N^2) degrees of freedom (DOFs) using O(N) sensors based on a nested array, which is obtained by combining two or more ULAs with successively increased spacing. This dissertation focuses on a fundamental study of statistical signal processing with nested arrays. Five important topics are discussed, extending existing nested-array strategies to more practical scenarios, and novel signal models and algorithms are proposed. First, based on the linear nested array, we consider the problem of wideband Gaussian sources. To apply the nested array to the wideband case, we propose effective strategies to apply nested-array processing to each frequency component and to combine the spectral information across frequencies for detection and estimation. We then consider the practical scenario of distributed sources, which accounts for the angular spreading of sources. Next, we investigate the self-calibration problem for perturbed nested arrays, for which existing works require certain modeling assumptions, for example an exactly known array geometry, including the sensor gains and phases. We propose robust algorithms to estimate both the model errors and the DOAs: the partial Toeplitz structure of the covariance matrix is employed to estimate the gain errors, and sparse total least squares is used to handle the phase errors. We further propose a new class of nested vector-sensor arrays that is capable of significantly increasing the DOFs. This is not a simple extension of the nested scalar-sensor array; both the signal model and the signal processing strategies are developed in the multidimensional sense. Based on the analytical results, we consider two main applications: electromagnetic (EM) vector sensors and acoustic vector sensors. Last but not least, in order to make full use of the limited available data, we propose a novel strategy inspired by the jackknife resampling method. By exploiting numerous iterations over subsets of the whole data set, this strategy greatly improves the results of existing source number detection and DOA estimation methods.
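    The O(N^2) DOF claim comes from the difference coarray of the nested geometry: a two-level nested array of N1 + N2 physical sensors produces a filled virtual ULA of 2*N2*(N1+1) - 1 consecutive lags. A short sketch with illustrative sizes:

```python
import numpy as np

# Two-level nested array: a dense ULA at positions 1..N1 (in units of the
# base spacing) followed by a sparse ULA at (N1+1), 2(N1+1), ..., N2(N1+1).
N1, N2 = 3, 3   # 6 physical sensors in total

positions = np.concatenate([np.arange(1, N1 + 1),
                            (N1 + 1) * np.arange(1, N2 + 1)])

# Difference coarray: the set of all pairwise sensor-position differences.
# For the nested geometry this is a hole-free run of consecutive integers.
diffs = np.unique(positions[:, None] - positions[None, :])

print(positions)   # [ 1  2  3  4  8 12]
print(diffs)       # every integer lag from -11 to 11: 23 virtual sensors
```

Six physical sensors thus emulate a 23-element virtual ULA, which is what allows covariance-based methods on the coarray to resolve more sources than physical sensors.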

    Methods for analyzing the influence of molecular dynamics on neuronal activity

    Magdeburg, Univ., Faculty of Computer Science, dissertation, 2015, by Stefan Sokol

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications, which extend far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies that target both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    Applications of compressive sensing to direction of arrival estimation

    Direction of Arrival (DOA) estimation of plane waves impinging on an array of sensors is one of the most important tasks in array signal processing and has attracted tremendous research interest over the past several decades. The estimated DOAs are used in various applications such as localization of transmitting sources, massive MIMO and 5G networks, tracking and surveillance in radar, and many others. The major objective in DOA estimation is to develop approaches that reduce the hardware complexity, in terms of receiver cost and power consumption, while providing a desired level of estimation accuracy and robustness in the presence of multiple sources and/or multiple paths. Compressive sensing (CS) is a novel sampling methodology merging signal acquisition and compression: it allows sampling a signal at a rate below the conventional Nyquist bound. In essence, it has been shown that signals can be acquired at sub-Nyquist sampling rates without loss of information, provided they possess a sufficiently sparse representation in some domain and the measurement strategy is suitably chosen. CS has recently been applied to DOA estimation, leveraging the fact that a superposition of planar wavefronts corresponds to a sparse angular power spectrum. This dissertation investigates the application of compressive sensing to the DOA estimation problem with the goal of reducing the hardware complexity and/or achieving high resolution and a high level of robustness. Many CS-based DOA estimation algorithms have been proposed in recent years, showing tremendous advantages with respect to the complexity of the numerical solution while being insensitive to source correlation and allowing arbitrary array geometries.
    Moreover, CS has also been suggested for the spatial domain, with the main goal of reducing the complexity of the measurement process by using fewer RF chains and storing less measured data without losing any significant information. In the first part of the work we investigate the model mismatch problem for CS-based DOA estimation algorithms when sources lie off the grid. A very common approach for applying the CS framework is to construct a finite dictionary by sampling the angular domain with a predefined grid. The true source directions, however, are almost surely not located exactly on these grid points. This leads to a model mismatch which deteriorates the performance of the estimators. We take an analytical approach to investigate the effect of such grid offsets on the recovered spectra, showing that each off-grid source can be well approximated by the two neighboring points on the grid. We propose a simple and efficient scheme to estimate the grid offset for a single source or multiple well-separated sources, and we also discuss a numerical procedure for the joint estimation of the grid offsets of closely spaced sources. In the second part of the thesis we study the design of compressive antenna arrays for DOA estimation, which aim to provide a larger aperture with reduced hardware complexity, as well as reconfigurability, by linearly combining the antenna outputs into a smaller number of receiver channels. We present a basic receiver architecture for such a compressive array and introduce a generic system model that includes different options for the hardware implementation. We then discuss the design of the analog combining network that performs the receiver channel reduction. Our numerical simulations demonstrate the superiority of the proposed optimized compressive arrays compared to sparse arrays of the same complexity and to compressive arrays with randomly chosen combining kernels.
    Finally, we consider two further applications of the proposed sparse recovery and compressive array techniques: CS-based time delay estimation and compressive channel sounding. We show that in both applications the proposed approaches yield significant improvements over conventional methods.
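    The on-grid sparse formulation underlying these methods can be sketched directly: a superposition of plane waves is sparse over a dictionary of steering vectors sampled on an angular grid, so a greedy solver such as orthogonal matching pursuit recovers the DOAs. Array size, grid, and SNR below are illustrative choices, not taken from the dissertation.

```python
import numpy as np

M = 16                                    # ULA sensors, half-wavelength spacing
grid = np.deg2rad(np.arange(-90, 91, 1))  # 1-degree angular grid
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))  # steering dictionary

# Two on-grid sources at -20 and 35 degrees, single snapshot plus noise.
true_idx = [70, 125]                      # grid indices of -20 deg and 35 deg
rng = np.random.default_rng(2)
y = A[:, true_idx] @ np.array([1.0, 0.8]) \
    + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Orthogonal matching pursuit for K = 2 sources: greedily pick the atom most
# correlated with the residual, then re-fit on the selected support.
support, residual = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

doas = np.sort(np.rad2deg(grid[support]))
print(doas)   # approximately [-20. 35.]
```

When the true directions fall between grid points, exactly the model mismatch studied in the first part of the thesis appears: the energy of each off-grid source leaks onto its two neighboring grid atoms, which is what motivates the offset-estimation scheme.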

    Localization as a Key Enabler of 6G Wireless Systems: A Comprehensive Survey and an Outlook

    When fully implemented, sixth generation (6G) wireless systems will constitute intelligent wireless networks that enable not only ubiquitous communication but also high-accuracy localization services. Localization will be a driving force behind this transformation, introducing a new set of characteristics and service capabilities in which location coexists with communication while sharing the available resources. To that end, this survey investigates the envisioned applications and use cases of localization in future 6G wireless systems, while analyzing the impact of the major technology enablers. Afterwards, system models for millimeter-wave, terahertz and visible light positioning that take into account both line-of-sight (LOS) and non-LOS channels are presented, and localization key performance indicators are revisited alongside their mathematical definitions. Moreover, a detailed review of state-of-the-art conventional and learning-based localization techniques is conducted. Furthermore, the localization problem is formulated, the wireless system design is considered, and the optimization of both is investigated. Finally, insights arising from the presented analysis are summarized and used to highlight the most important future directions for localization in 6G wireless systems.
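    In its simplest LOS form, the localization problem formulated in such surveys reduces to estimating a position from range (time-of-arrival) measurements to anchors at known positions. A minimal sketch, with a hypothetical anchor layout and ranging-noise level:

```python
import numpy as np
from scipy.optimize import least_squares

# Four anchors at known positions (meters) and an unknown user position.
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
true_pos = np.array([18.0, 31.0])

# Range measurements under LOS: true distances plus 0.1 m ranging noise.
rng = np.random.default_rng(3)
ranges = np.linalg.norm(anchors - true_pos, axis=1) \
         + 0.1 * rng.standard_normal(len(anchors))

# Nonlinear least squares on the range residuals.
def residuals(p):
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([25.0, 25.0])).x
print(estimate)   # close to [18, 31]
```

The millimeter-wave, terahertz and visible-light system models in the survey extend this basic picture with angle measurements, NLOS bias terms and richer channel models.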