
    In pursuit of high resolution radar using pursuit algorithms

    Radar receivers typically employ matched filters designed to maximize the signal-to-noise ratio (SNR) in a single-target environment. In a multi-target environment, however, matched-filter estimates of the target environment often contain spurious targets caused by radar signal sidelobes. As a result, matched filters are not suitable for high-resolution radars operating in multi-target environments. Assuming a point-target model, we show that the radar problem can be formulated as an under-determined linear system with a sparse solution. This suggests that radar can be treated as a sparse signal recovery problem. However, we show that the sensing matrix obtained from common radar signals does not usually satisfy the mutual coherence condition, so recovery techniques from the compressed sensing literature may not yield the optimal solution. In this thesis, we focus on the greedy algorithm approach and show that it naturally yields a quantitative measure of radar resolution. In addition, we show that the limitations of greedy algorithms can be attributed to the close relation between greedy matching pursuit algorithms and the matched filter. This suggests that the resolution of greedy pursuit algorithms can be improved by using a mismatched signal dictionary. In some cases, unlike the mismatched filter, the proposed mismatched pursuit algorithm is shown to offer improved resolution and stability without any noticeable loss in detection performance. Further resolution improvements are proposed by using greedy algorithms in a radar system with multiple transmit waveforms. It is shown that while greedy algorithms together with linear channel combining can yield significant resolution improvement, a greedy approach using nonlinear channel combining also shows some promise. Finally, a forward-backward greedy algorithm is proposed for target environments comprising point targets as well as extended targets.
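To give a flavour of the greedy pursuit approach described in this abstract, the sketch below implements orthogonal matching pursuit (OMP) for a generic sparse recovery problem. The random Gaussian sensing matrix and the toy target scene are illustrative stand-ins, not the radar waveforms studied in the thesis.

```python
import numpy as np

def omp(A, y, k, tol=1e-8):
    """Orthogonal matching pursuit: greedily estimate a k-sparse x with y ~= A @ x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Correlate the residual with all dictionary atoms: the matched-filter step
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit all selected atoms by least squares, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

# Toy scene: three point targets observed through a random sensing matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((128, 512))
A /= np.linalg.norm(A, axis=0)             # unit-norm columns
x_true = np.zeros(512)
x_true[[40, 200, 451]] = [2.0, -1.5, 2.5]
y = A @ x_true
x_hat = omp(A, y, k=3)
```

The correlation step makes the link to the matched filter explicit: each greedy iteration selects the atom a matched filter would rate highest against the current residual.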

    Robustness of spike deconvolution for neuronal calcium imaging

    Calcium imaging is a powerful method to record the activity of neural populations in many species, but inferring spike times from calcium signals is a challenging problem. We compared multiple approaches using multiple datasets with ground-truth electrophysiology, and found that simple non-negative deconvolution (NND) outperformed all other algorithms on out-of-sample test data. We introduce a novel benchmark applicable to recordings without electrophysiological ground truth, based on the correlation of responses to two stimulus repeats, and used this to show that unconstrained NND also outperformed the other algorithms when run on “zoomed out” datasets of ∼10,000-cell recordings from the visual cortex of mice of either sex. Finally, we show that NND-based methods match the performance of a supervised method based on convolutional neural networks, while avoiding some of the biases of such methods, and at much faster running times. We therefore recommend that spikes be inferred from calcium traces using simple NND, due to its simplicity, efficiency, and accuracy.
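The core of non-negative deconvolution can be sketched in a few lines: model the calcium trace as spikes convolved with a decay kernel, then solve a non-negative least-squares problem. The single-exponential kernel, the decay constant, and the projected-gradient solver below are illustrative assumptions; production spike-inference tools use more elaborate kernels and much faster solvers.

```python
import numpy as np

def nnd(trace, kernel, lr=0.01, steps=20000):
    """Non-negative deconvolution by projected gradient descent:
    find spikes s >= 0 minimizing ||K @ s - trace||^2, where K is the
    Toeplitz matrix that convolves spikes with the calcium kernel."""
    T = len(trace)
    K = np.zeros((T, T))
    for j in range(T):
        K[j:, j] = kernel[: T - j]       # column j = kernel shifted to start at j
    s = np.zeros(T)
    for _ in range(steps):
        # Gradient step on the least-squares loss, then project onto s >= 0
        s = np.maximum(0.0, s - lr * (K.T @ (K @ s - trace)))
    return s

# Toy trace: two spikes convolved with a single-exponential calcium kernel
T, decay = 100, 10.0
kernel = np.exp(-np.arange(T) / decay)
s_true = np.zeros(T)
s_true[[20, 60]] = [1.0, 2.0]
trace = np.convolve(s_true, kernel)[:T]
s_hat = nnd(trace, kernel)
```

In the noiseless case the non-negativity constraint is inactive at the optimum and the deconvolution recovers the spike train exactly; with noisy data the constraint is what suppresses spurious negative "spikes".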

    Low-dimensional data embedding for scalable astronomical imaging in the SKA telescope era

    Astronomy is one of the oldest sciences known to humanity. We have been studying celestial objects for millennia, and continue to peer deeper into space in our thirst for knowledge about our origins and the universe that surrounds us. Radio astronomy -- observing celestial objects at radio frequencies -- has helped push the boundaries on the kind of objects we can study. Indeed, some of the most important discoveries about the structure of our universe, like the cosmic microwave background, and entire classes of objects like quasars and pulsars, were made using radio astronomy. Radio interferometers are telescopes made of multiple antennas spread over a distance. Signals detected at different antennas are combined to provide images with much higher resolution and sensitivity than with a traditional single-dish radio telescope. The Square Kilometre Array (SKA) is one such radio interferometer, with plans to have antennas separated by as much as 3000 km. In its quest for ever-higher resolution and ever-wider coverage of the sky, the SKA heralds a data explosion, with an expected acquisition rate of 5 terabits per second. The high data rate fed into the pipeline can be handled with a two-pronged approach -- (i) scalable, parallel imaging algorithms that fully utilize the latest computing technologies like accelerators and distributed clusters, and (ii) dimensionality reduction methods that embed the high-dimensional telescope data to much smaller sizes without losing information, while guaranteeing accurate recovery of the images, thereby enabling imaging methods to scale to big data sizes and alleviating heavy loads on pipeline buffers without compromising on the science goals of the SKA. In this thesis we propose fast and robust dimensionality reduction methods that embed data to very low sizes while preserving information present in the original data. These methods are presented in the context of compressed sensing theory and related signal recovery techniques.
The effectiveness of the reduction methods is illustrated by coupling them with advanced convex optimization algorithms to solve a sparse recovery problem. Images thus reconstructed from extremely low-sized embedded data are shown to have quality comparable to those obtained from full data without any reduction. Comparisons with other standard "data compression" techniques in radio interferometry (like averaging) show a clear advantage in using our methods, which provide higher quality images from much lower data sizes. We confirm these claims on both synthetic data simulating SKA data patterns as well as actual telescope data from a state-of-the-art radio interferometer. Additionally, imaging with reduced data is shown to have a lighter computational load -- a smaller memory footprint owing to the reduced data size, and faster iterative image recovery owing to the fast embedding. Extensions to the work presented in this thesis are already underway. We propose an "on-line" version of our reduction methods that works on blocks of data and thus can be applied on-the-fly on data as they are being acquired by telescopes in real-time. This is of immediate interest to the SKA, where large buffers in the data acquisition pipeline are very expensive and thus undesirable. Some directions to be probed in the immediate future are transient imaging, and imaging hyperspectral data to test computational load in a high-resolution, multi-frequency setting.
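A minimal illustration of the embedding idea: project high-dimensional data through a random Gaussian matrix and check that pairwise distances, and hence the geometric information a recovery algorithm relies on, are approximately preserved. The dimensions and the plain Gaussian sketch are illustrative choices; the thesis develops faster, structured embeddings tailored to radio-interferometric data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 10000, 400                              # ambient and embedded dimensions (illustrative)
R = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian embedding

# Pairwise distances between a handful of vectors survive the 25x size reduction
X = rng.standard_normal((5, n))
ratios = []
for i in range(5):
    for j in range(i + 1, 5):
        d_full = np.linalg.norm(X[i] - X[j])   # distance in the original space
        d_emb = np.linalg.norm(R @ (X[i] - X[j]))  # distance after embedding
        ratios.append(d_emb / d_full)          # concentrates around 1
```

In compressed-sensing terms, such embeddings approximately preserve the restricted isometry needed for accurate sparse recovery from the reduced data.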

    Convex Relaxations for Particle-Gradient Flow with Applications in Super-Resolution Single-Molecule Localization Microscopy

    Single-molecule localization microscopy (SMLM) techniques have become advanced bioanalytical tools by quantifying the positions and orientations of molecules in space and time at the nanoscale. With the noisy and heterogeneous nature of SMLM datasets in mind, we discuss leveraging particle-gradient flow 1) for quantifying the accuracy of localization algorithms with and without ground truth and 2) as a basis for novel, model-driven localization algorithms with empirically robust performance. Using experimental data, we demonstrate that overlapping images of molecules, a typical consequence of densely packed biological structures, cause biases in position estimates and reconstruction artifacts. To minimize such biases, we develop a novel sparse deconvolution algorithm by relaxing a particle-gradient flow algorithm (called relaxed-gradient flow or RGF). In contrast to previous methods based on sequential source matching or grid-based strategies, RGF detects source molecules based on the estimated “gradient flux.” RGF reconstructs experimental images of microtubules with much greater accuracy in terms of separation and diameter. We further extend RGF to the problem of joint estimation of molecular position and orientation. By lifting the optimization from first-order to second-order orientational moments, we derive an efficient version of RGF, which exhibits robustness to instrumental mismatches. Finally, we discuss the fundamental problem of quantifying the accuracy of a localization estimate without ground truth. We show that by computing measurement stability under a well-chosen perturbation with accurate knowledge of the imaging system, we can robustly quantify the confidence of individual localizations without ground-truth knowledge of the sample. To demonstrate the broad applicability of our method, termed Wasserstein-induced flux, we measure the accuracy of various reconstruction algorithms directly on experimental data.
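To give a flavour of particle-gradient flow, the sketch below fits a 1D super-resolution model, a sum of point sources seen through a Gaussian PSF, by gradient descent on the source positions and weights jointly. Everything here (the 1D setting, the Gaussian PSF, the step size, the over-parameterized initialization) is an illustrative simplification; it is not the RGF algorithm itself, which relaxes such a flow using the estimated gradient flux.

```python
import numpy as np

def render(pos, w, grid, sigma):
    """Image of point sources through an isotropic Gaussian PSF (assumed model)."""
    return (w[:, None] * np.exp(-(grid[None, :] - pos[:, None]) ** 2
                                / (2 * sigma ** 2))).sum(axis=0)

def particle_gradient_flow(y, grid, sigma, pos, w, lr=0.02, steps=2000):
    """Descend the least-squares loss jointly in source positions and weights."""
    losses = []
    for _ in range(steps):
        psf = np.exp(-(grid[None, :] - pos[:, None]) ** 2 / (2 * sigma ** 2))
        r = (w[:, None] * psf).sum(axis=0) - y            # residual image
        losses.append(0.5 * np.sum(r ** 2))
        grad_w = psf @ r                                   # d(loss)/d(weights)
        grad_p = (w[:, None] * psf * (grid[None, :] - pos[:, None])
                  / sigma ** 2) @ r                        # d(loss)/d(positions)
        w = w - lr * grad_w
        pos = pos - lr * grad_p
    return pos, w, losses

# Two true emitters; start from an over-parameterized spread of particles
grid = np.linspace(0.0, 10.0, 200)
sigma = 0.5
y = render(np.array([3.0, 7.0]), np.array([1.0, 1.0]), grid, sigma)
pos0 = np.linspace(0.5, 9.5, 8)
w0 = np.full(8, 0.1)
pos_f, w_f, losses = particle_gradient_flow(y, grid, sigma, pos0, w0)
```

Over-parameterization (more particles than true sources) is what lets the flow avoid bad local minima: redundant particles either merge onto true sources or have their weights driven toward zero.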

    Algorithms for Reconstruction of Undersampled Atomic Force Microscopy Images


    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are now tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework in problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications that extend far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    Listening to Distances and Hearing Shapes: Inverse Problems in Room Acoustics and Beyond

    A central theme of this thesis is using echoes to achieve useful, interesting, and sometimes surprising results. One should have no doubts about the echoes' constructive potential; it is, after all, demonstrated masterfully by Nature. Just think about the bat's intriguing ability to navigate in unknown spaces and hunt for insects by listening to echoes of its calls, or about similar (albeit less well-known) abilities of toothed whales, some birds, shrews, and ultimately people. We show that, perhaps contrary to conventional wisdom, multipath propagation resulting from echoes is our friend. When we think about it the right way, it reveals essential geometric information about the sources-channel-receivers system. The key idea is to think of echoes as being more than just delayed and attenuated peaks in 1D impulse responses; they are actually additional sources with their corresponding 3D locations. This transformation allows us to forget about the abstract room, and to replace it by more familiar point sets. We can then engage the powerful machinery of Euclidean distance geometry. A problem that always arises is that we do not know a priori the matching between the peaks and the points in space; solving the inverse problem is achieved by echo sorting, a tool we developed for learning correct labelings of echoes. This has applications beyond acoustics, whenever one deals with waves and reflections, or more generally, time-of-flight measurements. Equipped with this perspective, we first address the "Can one hear the shape of a room?" question, and we answer it with a qualified "yes". Even a single impulse response uniquely describes a convex polyhedral room, whereas a more practical algorithm to reconstruct the room's geometry uses only first-order echoes and a few microphones. Next, we show how different problems of localization benefit from echoes. The first one is multiple indoor sound source localization.
Assuming the room is known, we show that discretizing the Helmholtz equation yields a system of sparse reconstruction problems linked by a common sparsity pattern. By exploiting the full bandwidth of the sources, we show that it is possible to localize multiple unknown sound sources using only a single microphone. We then look at indoor localization with known pulses from the geometric echo perspective introduced previously. Echo sorting enables localization in non-convex rooms without a line-of-sight path, and localization with a single omni-directional sensor, which is impossible without echoes. A closely related problem is microphone position calibration; we show that echoes can help even without assuming that the room is known. Using echoes, we can localize arbitrary numbers of microphones at unknown locations in an unknown room using only one source at an unknown location, for example a finger snap, and get the room's geometry as a byproduct. Our study of source localization outgrew its initial form factor when we looked at source localization with spherical microphone arrays. Spherical signals appear well beyond spherical microphone arrays; for example, any signal defined on Earth's surface lives on a sphere. This resulted in the first slight departure from the main theme: we develop the theory and algorithms for sampling sparse signals on the sphere using finite rate-of-innovation principles and apply them to various signal processing problems on the sphere.
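The distance-geometry test at the heart of echo sorting can be sketched as follows: a candidate assignment of echoes to image sources is accepted only if the resulting squared-distance matrix is realizable by points in 3D space, which can be checked via the rank and positive semidefiniteness of the doubly-centered Gram matrix (classical MDS). The function names and tolerances below are illustrative, not the thesis implementation.

```python
import numpy as np

def edm(X):
    """Squared Euclidean distance matrix for points stored in the rows of X."""
    G = X @ X.T
    g = np.diag(G)
    return g[:, None] + g[None, :] - 2 * G

def is_valid_point_set(D, d=3, tol=1e-6):
    """Check whether squared-distance matrix D is realizable by points in R^d:
    the doubly-centered Gram matrix must be PSD with rank at most d."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ D @ J                      # classical MDS Gram matrix
    eig = np.linalg.eigvalsh(G)               # ascending eigenvalues
    scale = max(1.0, eig[-1])
    psd_ok = eig[0] > -tol * scale            # no significant negative eigenvalue
    rank_ok = np.sum(eig > tol * scale) <= d  # at most d significant dimensions
    return psd_ok and rank_ok

# Five points in 3D, e.g. microphones plus one candidate image source
rng = np.random.default_rng(2)
X = rng.standard_normal((5, 3))
D = edm(X)
ok = is_valid_point_set(D, d=3)               # consistent echo assignment
D_bad = D.copy()
D_bad[0, 1] += 5.0
D_bad[1, 0] += 5.0                            # a wrongly sorted echo breaks realizability
bad = is_valid_point_set(D_bad, d=3)
```

Echo sorting then amounts to searching over candidate echo labelings and keeping those that pass this realizability test.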

    Pacific Symposium on Biocomputing 2023

    The Pacific Symposium on Biocomputing (PSB) 2023 is an international, multidisciplinary conference for the presentation and discussion of current research in the theory and application of computational methods in problems of biological significance. Presentations are rigorously peer reviewed and are published in an archival proceedings volume. PSB 2023 will be held on January 3-7, 2023 in Kohala Coast, Hawaii. Tutorials and workshops will be offered prior to the start of the conference. PSB 2023 will bring together top researchers from the US, the Asian Pacific nations, and around the world to exchange research results and address open issues in all aspects of computational biology. It is a forum for the presentation of work in databases, algorithms, interfaces, visualization, modeling, and other computational methods, as applied to biological problems, with emphasis on applications in data-rich areas of molecular biology. The PSB has been designed to be responsive to the need for critical mass in sub-disciplines within biocomputing. For that reason, it is the only meeting whose sessions are defined dynamically each year in response to specific proposals. PSB sessions are organized by leaders of research in biocomputing's 'hot topics.' In this way, the meeting provides an early forum for serious examination of emerging methods and approaches in this rapidly changing field.