
    Probabilistic modeling for single-photon lidar

    Lidar is an increasingly prevalent technology for depth sensing, with applications including scientific measurement and autonomous navigation systems. While conventional systems require hundreds or thousands of photon detections per pixel to form accurate depth and reflectivity images, recent results for single-photon lidar (SPL) systems using single-photon avalanche diode (SPAD) detectors have shown accurate images formed from as few as one photon detection per pixel, even when half of those detections are due to uninformative ambient light. The keys to such photon-efficient image formation are twofold: (i) a precise model of the probability distribution of photon detection times, and (ii) prior beliefs about the structure of natural scenes. Reducing the number of photons needed for accurate image formation enables faster, farther, and safer acquisition. Still, such photon-efficient systems are often limited to laboratory conditions more favorable than the real-world settings in which they would be deployed. This thesis focuses on expanding the photon detection time models to address challenging imaging scenarios and the effects of non-ideal acquisition equipment. The processing derived from these enhanced models, sometimes modified jointly with the acquisition hardware, surpasses the performance of state-of-the-art photon-counting systems. We first address the problem of high levels of ambient light, which cause traditional depth and reflectivity estimators to fail. We achieve robustness to strong ambient light through a rigorously derived window-based censoring method that separates signal and background light detections. Spatial correlations both within and between depth and reflectivity images are encoded in superpixel constructions, which fill in holes caused by the censoring. Accurate depth and reflectivity images can then be formed with an average of 2 signal photons and 50 background photons per pixel, outperforming methods previously demonstrated at a signal-to-background ratio of 1. We next approach the problem of coarse temporal resolution for photon detection time measurements, which limits the precision of depth estimates. To achieve sub-bin depth precision, we propose a subtractively-dithered lidar implementation, which uses changing synchronization delays to shift the time-quantization bin edges. We examine the generic noise model resulting from dithering Gaussian-distributed signals and introduce a generalized Gaussian approximation to the noise distribution, along with simple order-statistics-based depth estimators that take advantage of this model. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. We implement a dithered SPL system and propose a modification for non-Gaussian pulse shapes that outperforms the Gaussian assumption in practical experiments. The resulting dithered-lidar architecture could be used to design SPAD array detectors that can form precise depth estimates despite relaxed temporal quantization constraints. Finally, SPAD dead time effects have been considered a major limitation for fast data acquisition in SPL, since a commonly adopted approach for dead time mitigation is to operate in the low-flux regime where dead time effects can be ignored.
We show that the empirical distribution of detection times converges to the stationary distribution of a Markov chain and demonstrate improvements in depth estimation and histogram correction using our Markov chain model. An example simulation shows that correctly compensating for dead times in a high-flux measurement can yield a 20-fold speedup of data acquisition. The resulting accuracy at high photon flux could enable real-time applications such as autonomous navigation.
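
    The subtractive-dithering idea above can be made concrete with a small simulation. The sketch below is illustrative only: it assumes a Gaussian pulse, a uniform dither spanning one quantization bin, and uses a plain mean and median as stand-ins for the generalized-Gaussian, order-statistics-based estimators developed in the thesis; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the thesis)
bin_width = 1.0          # coarse TDC quantization bin (arbitrary time units)
true_delay = 3.37        # true photon arrival time we want to recover
pulse_sigma = 0.1        # Gaussian pulse jitter, much finer than the bin
n_detections = 2000

# Photon detection times: Gaussian pulse centered on the true delay
t = rng.normal(true_delay, pulse_sigma, n_detections)

def quantize(x, step):
    """Mid-rise uniform quantizer with the given step size."""
    return step * np.floor(x / step) + step / 2

# (a) Plain quantization: every detection collapses to the same bin center,
#     so averaging cannot resolve anything finer than the bin width.
plain = quantize(t, bin_width)

# (b) Subtractive dither: shift the bin edges by a known uniform offset for
#     each detection, then subtract the same offset from the quantized value.
d = rng.uniform(-bin_width / 2, bin_width / 2, n_detections)
dithered = quantize(t + d, bin_width) - d

# Simple stand-in estimators of the pulse arrival time
print("truth            :", true_delay)
print("plain, mean      :", plain.mean())
print("dithered, mean   :", dithered.mean())
print("dithered, median :", np.median(dithered))
```

    With plain quantization no amount of averaging recovers sub-bin precision, whereas with subtractive dither the quantization error becomes signal-independent and averages out, which is the effect the dithered-lidar architecture exploits.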

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. When transmission or storage resources are restricted, data compression is strongly desired or even necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot, so practical implementation aspects have to be taken into account. The Special Issue paper collection taken as the basis of this book touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute extremely large data arrays with rich information that can be retrieved from them for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing, with positive outcomes, are observed. We hope that readers will find our book useful and interesting.

    Efficient, concurrent Bayesian analysis of full waveform LaDAR data

    Bayesian analysis of full waveform laser detection and ranging (LaDAR) signals using reversible jump Markov chain Monte Carlo (RJMCMC) algorithms has shown higher estimation accuracy, resolution, and sensitivity for detecting weak signatures in 3D surface profiling, and can construct multiple-layer images with a varying number of surface returns. However, it is computationally expensive. Although parallel computing has the potential to reduce both the processing time and the requirement for persistent memory storage, parallelizing the serial sampling procedure in RJMCMC is a significant challenge in both the statistical and computing domains. While several strategies have been developed for Markov chain Monte Carlo (MCMC) parallelization, these are usually restricted to fixed-dimensional parameter estimation and are not obviously applicable to RJMCMC for varying-dimensional signal analysis. In the statistical domain, we propose an effective, concurrent RJMCMC algorithm, state space decomposition RJMCMC (SSD-RJMCMC), which divides the entire state space into groups and assigns to each an independent RJMCMC chain with restricted variation of model dimensions. It intrinsically has a parallel structure, a form of model-level parallelization. Applying a convergence diagnostic, we can adaptively assess the convergence of each Markov chain on the fly and so dynamically terminate chain generation. Evaluations on both synthetic and real data demonstrate that the concurrent chains have shorter convergence lengths and hence improved sampling efficiency. Parallel exploration of the candidate models, in conjunction with an error detection and correction scheme, improves the reliability of surface detection. By adaptively generating a complementary MCMC sequence for the determined model, it enhances the accuracy of surface profiling. In the computing domain, we develop a data-parallel SSD-RJMCMC (DP SSD-RJMCMCU) to achieve an efficient parallel implementation on a distributed computer cluster. Adding data-level parallelization on top of the model-level parallelization, it formalizes a task queue and introduces an automatic scheduler for dynamic task allocation. These two strategies successfully diminish the load imbalance that occurred in SSD-RJMCMC. Thanks to the coarse granularity, the processors communicate at a very low frequency. The MPI-based implementation on a Beowulf cluster demonstrates that, compared with RJMCMC, DP SSD-RJMCMCU further reduces the problem size and computational complexity. Therefore, it can achieve a superlinear speedup if the numbers of data segments and processors are chosen wisely.
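
    A toy sketch can help fix the model-level parallelization idea. Assume the waveform is a sum of Gaussian surface returns and let each concurrent chain handle a single model order (the extreme case of a group with no dimension variation), so each restricted chain degenerates to a fixed-dimensional Metropolis sampler run in its own process; the data generator, priors, proposal scales, and BIC-style penalty below are invented for illustration and are not the SSD-RJMCMC algorithm itself.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

rng = np.random.default_rng(1)

# Toy full waveform: time bins with two Gaussian surface returns plus noise.
T = np.arange(0, 100.0)

def waveform(positions, amps, width=3.0):
    return sum(a * np.exp(-0.5 * ((T - p) / width) ** 2) for p, a in zip(positions, amps))

y = waveform([30.0, 62.0], [5.0, 3.0]) + rng.normal(0, 0.5, T.size)

def log_post(positions, amps, sigma=0.5):
    """Gaussian likelihood with flat priors inside the observation window."""
    r = y - waveform(positions, amps)
    return -0.5 * np.sum((r / sigma) ** 2)

def restricted_chain(k, n_iter=4000):
    """Fixed-dimension random-walk Metropolis for exactly k returns.

    In SSD-RJMCMC each concurrent chain would also make restricted
    birth/death moves within its own group of model orders; a single-order
    group reduces to this fixed-dimensional sampler.
    """
    rng_k = np.random.default_rng(k)
    pos = rng_k.uniform(0, 100, k)
    amp = rng_k.uniform(1, 6, k)
    lp = log_post(pos, amp)
    best = (lp, pos.copy(), amp.copy())
    for _ in range(n_iter):
        prop_pos = pos + rng_k.normal(0, 1.0, k)
        prop_amp = np.abs(amp + rng_k.normal(0, 0.2, k))
        prop_lp = log_post(prop_pos, prop_amp)
        if np.log(rng_k.uniform()) < prop_lp - lp:
            pos, amp, lp = prop_pos, prop_amp, prop_lp
            if lp > best[0]:
                best = (lp, pos.copy(), amp.copy())
    return k, best

if __name__ == "__main__":
    # Model-level parallelism: one restricted chain per group of model orders.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(restricted_chain, [1, 2, 3]))
    for k, (lp, pos, amp) in results:
        # Crude BIC-style penalty (2 parameters per return) to compare groups.
        penalized = lp - 0.5 * 2 * k * np.log(T.size)
        print(f"k={k}: best log-posterior={lp:.1f}, penalized={penalized:.1f}")
```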

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe to formulate models of such systems as coupled sets of nonlinear differential equations and compile them onto recurrently connected spiking neural networks – akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in both accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case and the precision of conventional computation in the latter case.
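
    As a concrete illustration of the Delay Network idea, the sketch below builds the low-dimensional linear state space that approximates a pure delay of length theta (following the construction published in the related NEF work, to the best of my recollection), integrates it with forward Euler, and checks the output against the truly delayed input. The neuron model, the spiking compilation, and the hardware mapping are omitted, and all parameter values are illustrative.

```python
import numpy as np

def delay_network(q, theta):
    """State-space (A, B) of a degree-q Pade-style approximation of a pure
    delay of length theta, as used by the Delay Network construction."""
    A = np.zeros((q, q))
    B = np.zeros(q)
    for i in range(q):
        B[i] = (2 * i + 1) * (-1) ** i / theta
        for j in range(q):
            A[i, j] = (2 * i + 1) * (-1 if i < j else (-1) ** (i - j + 1)) / theta
    return A, B

# Illustrative settings (not from the thesis): 6D state, 0.1 s delay, 0.2 ms steps.
q, theta, dt = 6, 0.1, 2e-4
A, B = delay_network(q, theta)

steps = 5000
t = np.arange(steps) * dt
u = np.sin(2 * np.pi * 3 * t) * np.exp(-t)      # a smooth test input
x = np.zeros(q)
y = np.zeros(steps)
for k in range(steps):
    x = x + dt * (A @ x + B * u[k])             # forward-Euler integration
    # Reading out the full delay corresponds to the shifted-Legendre decoding
    # at r = 1, which reduces to summing the state variables.
    y[k] = x.sum()

delay_steps = int(theta / dt)
err = np.sqrt(np.mean((y[delay_steps:] - u[:-delay_steps]) ** 2))
print(f"RMS error between DN output and u(t - theta): {err:.3f}")
```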

    BETTII: A pathfinder for high angular resolution observations of star-forming regions in the far-infrared

    In this thesis, we study clustered star formation in nearby star clusters and discuss how high angular resolution observations in the far-infrared regime could help us understand these important regions of stellar birth. We use the increased angular resolution of the FORCAST instrument on the SOFIA airborne observatory to study 10 nearby star-forming regions, and we discuss the physical properties of sources in these regions that we can infer from radiative transfer modeling using these new observations. We discuss the design of BETTII, a pathfinder balloon-borne interferometer that will provide significantly better angular resolution in the far-infrared regime and pave the way for future space-borne observatories. We elaborate on the details of BETTII's core technique, called Double-Fourier interferometry, and on how to accurately compute the sensitivity of instruments that use this technique. Finally, we present the design and implementation of the control and attitude estimation system for the BETTII payload, which poses unique challenges as an interferometer on a balloon platform.
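
    For readers unfamiliar with Double-Fourier interferometry, the toy sketch below illustrates only the general principle and is not BETTII's actual processing; every number in it is invented. Scanning an internal delay line modulates the detected intensity into a fringe packet whose Fourier transform recovers the source spectrum, while the external delay set by the baseline geometry shifts the packet and thereby encodes spatial information.

```python
import numpy as np

sigma = np.linspace(20.0, 40.0, 400)                     # wavenumbers [1/cm]
spectrum = np.exp(-0.5 * ((sigma - 30.0) / 3.0) ** 2)    # toy far-IR source spectrum

opd = np.linspace(-0.05, 0.05, 4096)                     # internal delay-line scan [cm]
external_delay = 0.004                                   # [cm], from baseline x source offset

# Detected intensity vs. optical path difference (the fringe packet),
# integrated over the source spectrum.
dsigma = sigma[1] - sigma[0]
fringe = spectrum[None, :] * (1.0 + np.cos(2 * np.pi * sigma[None, :] * (opd[:, None] + external_delay)))
interferogram = fringe.sum(axis=1) * dsigma

# Fourier transform of the modulated part recovers the spectrum.
ac = interferogram - interferogram.mean()
spec_est = np.abs(np.fft.rfft(ac))
freqs = np.fft.rfftfreq(opd.size, d=opd[1] - opd[0])     # wavenumber axis [1/cm]

peak = freqs[np.argmax(spec_est)]
print(f"recovered spectral peak near {peak:.1f} 1/cm (true peak at 30.0 1/cm)")
```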

    Particle Physics Reference Library

    This second open access volume of the handbook series deals with detectors, large experimental facilities and data handling, for both accelerator-based and non-accelerator-based experiments. It also covers applications in medicine and the life sciences. A joint CERN-Springer initiative, the “Particle Physics Reference Library” provides revised and updated contributions based on previously published material in the well-known Landolt-Boernstein series on particle physics, accelerators and detectors (volumes 21A, B1, B2, C), which took stock of the field approximately one decade ago. Central to this new initiative is publication under full open access.

    Tracking the Temporal-Evolution of Supernova Bubbles in Numerical Simulations

    The study of low-dimensional, noisy manifolds embedded in a higher-dimensional space has been extremely useful in many applications, from the chemical analysis of multi-phase flows to simulations of galactic mergers. Building a probabilistic model of the manifolds has helped in describing their essential properties and how they vary in space. However, when the manifold evolves through time, joint spatio-temporal modelling is needed to fully comprehend its nature. We propose a first-order Markovian process that propagates the spatial probabilistic model of a manifold at a fixed time to its adjacent temporal stages. The proposed methodology is demonstrated using a particle simulation of an interacting dwarf galaxy to describe the evolution of a cavity generated by a supernova.
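
    As a minimal illustration of propagating a spatial probabilistic model through time, the sketch below fits a Gaussian mixture to a synthetic particle snapshot and warm-starts the fit at the next snapshot from the previous solution, a simple stand-in for the first-order Markovian propagation described above; the data generator, number of components, and library choice (scikit-learn) are all assumptions made for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

def snapshot(t):
    """Toy particle snapshot: a dense core plus an expanding shell standing
    in for a supernova-driven cavity (purely synthetic data)."""
    core = rng.normal(0.0, 1.0, (2000, 2))
    angles = rng.uniform(0, 2 * np.pi, 800)
    radius = 1.5 + 0.5 * t + rng.normal(0, 0.1, 800)
    shell = np.column_stack([radius * np.cos(angles), radius * np.sin(angles)])
    return np.vstack([core, shell])

# First-order Markov propagation of the spatial model: the mixture fitted at
# time t initializes (and regularizes) the fit at time t+1 via warm_start.
gmm = GaussianMixture(n_components=5, warm_start=True, random_state=0)
for t in range(5):
    X = snapshot(t)
    gmm.fit(X)
    print(f"t={t}: log-likelihood per particle = {gmm.score(X):.3f}")
```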

    Computer Science & Technology Series : XVIII Argentine Congress of Computer Science. Selected papers

    CACIC’12 was the eighteenth Congress in the CACIC series. It was organized by the School of Computer Science and Engineering at the Universidad Nacional del Sur. The Congress included 13 Workshops with 178 accepted papers, 5 Conferences, 2 invited tutorials, different meetings related to Computer Science education (professors, PhD students, curricula), and an International School with 5 courses. CACIC 2012 was organized following the traditional Congress format, with 13 Workshops covering a diversity of areas of Computer Science research. Each topic was supervised by a committee of 3-5 chairs from different universities. The call for papers attracted a total of 302 submissions. An average of 2.5 review reports were collected for each paper, for a grand total of 752 review reports that involved about 410 different reviewers. A total of 178 full papers, involving 496 authors and 83 universities, were accepted, and 27 of them were selected for this book. Red de Universidades con Carreras en Informática (RedUNCI)

    Multiplexed single molecule observation and manipulation of engineered biomolecules

    Molecular processes in organisms are often enabled by structural elements resilient to mechanical forces. For instance, the microbial and hierarchical cellulosome protein system comprises enzymes and Cohesin-Dockerin (Coh-Doc) receptor-ligand complexes that act in concert for the efficient hydrolysis of plant polysaccharides. The Coh-Doc complexes can withstand remarkably high forces to keep host cells and enzymes bound to their substrates in the extreme environmental conditions the microorganisms frequently live in. This work focuses on the investigation of the mechanical stability of such biomolecules at the single-molecule level. The highly symmetric binding interface of the Coh-Doc type I complex from Clostridium thermocellum enables two different binding conformations with comparable affinity and similar strength. I was able to show that both conformations exist in the wild-type molecules and are occupied under native conditions. I further characterized one of the strongest non-covalent protein complexes known, Coh-Doc type III from Ruminococcus flavefaciens, by elucidating the pivotal role of the adjacent xModule domain for the mechanical stabilization of the whole complex and the role of the bimodal rupture force distribution. Such large forces impair the accuracy of measured contour-length increments in unfolding studies by inducing conformational changes in poly(ethylene glycol) (PEG) linkers in aqueous buffer systems. This problem was solved by introducing elastin-like polypeptides (ELP) as surface tethers. Having a peptide backbone similar to that of unfolded proteins, ELP linkers do not alter the accuracy of the single-molecule force spectroscopy (SMFS) assay. To provide high throughput and precise comparability, I worked on a microfluidic platform for in vitro protein synthesis and immobilization. The Coh-Doc system was thereby integrated as a binding handle for multiplexed measurements of mechanostability. Employing a single AFM probe to measure multiple different molecules provides the force precision required to shed light on molecular mechanisms down to the level of single amino acids. I also applied the Coh-Doc complex to a purely protein-based single-molecule cut-and-paste assay for the bottom-up assembly of molecular systems and quick phenotyping of spatial arrangements. With this system, interactions in enzymatic synergies can be studied via defined positioning patterns at the single-molecule level. To understand and design force responses of complex systems, I complemented the investigation of protein systems with SMFS studies on DNA origami structures. The results of SMFS on DNA were compared with a simulation framework. Despite the difference in force loading rates, the results of the two methods agree well, enabling a better fundamental understanding of complex molecular superstructures.
Molecular processes in organisms are often enabled by structural elements that can withstand mechanical forces. One example is the microbial, hierarchically organized protein system of the cellulosome: enzymes and the Cohesin-Dockerin (Coh-Doc) receptor-ligand complexes cooperate in it for the efficient hydrolysis of plant polysaccharides. The Coh-Doc complexes can withstand remarkable forces so that host cells and enzymes remain bound to their substrates under the extreme environmental conditions in which these microorganisms often live. The present work investigates the influence of mechanical force on such biomolecules by means of single-molecule measurements. The high symmetry of the binding interface of the Coh-Doc type I complex from Clostridium thermocellum allows two different conformations of comparable affinity and strength; in the course of this work I was able to demonstrate both in wild-type molecules and under native conditions. One of the strongest known non-covalent receptor-ligand systems, Coh-Doc type III from Ruminococcus flavefaciens, was characterized, elucidating the central role of the adjacent xModule for the stability of the whole complex as well as the role of the bimodal rupture-force distribution. Such high forces reduce the accuracy of the contour-length increments measured in protein unfolding by inducing conformational changes of the poly(ethylene glycol) (PEG) surface tethers in aqueous buffer systems. This problem was solved with elastin-like polypeptides (ELP) as tethers: because the peptide backbone of ELPs resembles that of unfolded proteins, they do not affect the accuracy of the experiment. To optimize measurement throughput and comparability, I worked on a microfluidic platform for in vitro protein synthesis and immobilization, into which the Coh-Doc system was integrated as a binding handle for multiplexed measurements. The resulting ability to use a single AFM probe to measure different molecules provides the force precision needed to elucidate molecular mechanisms down to the level of single amino acids. Furthermore, I implemented the Coh-Doc complex in a purely protein-based ’cut and paste’ assay for the modular assembly of molecular systems, which enables rapid phenotyping of geometric arrangements and the study of interactions between enzymes through defined positioning at the single-molecule level. To better understand and ultimately design the force response of complex systems, I complemented the investigation of protein systems with that of DNA origami structures. The results of force spectroscopy on DNA were compared with computer simulations; despite the large difference in their loading rates, the two methods agree well, laying the groundwork for a better understanding of complex molecular superstructures.
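
    Comparing AFM experiments and simulations at very different loading rates is usually discussed in terms of the standard Bell-Evans picture, in which the most probable rupture force grows logarithmically with the loading rate. The sketch below evaluates that textbook relation with invented parameters; it is not the model or the parameter values used in the thesis.

```python
import numpy as np

# Bell-Evans estimate of the most probable rupture force at loading rate r:
#   F* = (kB*T / x_beta) * ln( r * x_beta / (k_off * kB*T) )
# Parameter values below are illustrative only, not measurements from the thesis.
kBT = 4.11e-21          # thermal energy at ~298 K [J]
x_beta = 0.4e-9         # distance to the transition state [m]
k_off = 1e-4            # zero-force off-rate [1/s]

def rupture_force(loading_rate):
    """Most probable rupture force [N] for a given loading rate [N/s]."""
    return (kBT / x_beta) * np.log(loading_rate * x_beta / (k_off * kBT))

for r in [1e-9, 1e-7, 1e-3, 1.0]:   # slow AFM-like to fast simulation-like rates
    print(f"loading rate {r:8.1e} N/s -> F* ~ {rupture_force(r) * 1e12:6.1f} pN")
```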