
    UWB Pulse Radar for Human Imaging and Doppler Detection Applications

    We were motivated to develop new technologies capable of identifying human life through walls. Our goal is to pinpoint multiple people at a time, which could pay dividends in military operations, disaster rescue efforts, and assisted living. Such a system requires the combination of two features in one platform: through-wall localization and vital-sign Doppler detection. Ultra-wideband (UWB) radar technology has been used because of its distinct advantages, such as ultra-low power, fine imaging resolution, good through-wall penetration, and high performance in noisy environments. Besides being widely used in imaging systems and ground-penetrating detection, UWB radar also targets Doppler sensing, precise positioning and tracking, and communications and measurement. A robust UWB pulse radar prototype has been developed and is presented here. The prototype integrates through-wall imaging and Doppler detection in one platform. Many challenges in implementing such a radar are addressed extensively in this dissertation. Two Vivaldi antenna arrays have been designed and fabricated to cover 1.5-4.5 GHz and 1.5-10 GHz, respectively. A carrier-based pulse radar transceiver has been implemented to achieve a high dynamic range of 65 dB. A 100 GSPS data acquisition module is prototyped using an off-the-shelf field-programmable gate array (FPGA) and analog-to-digital converter (ADC), based on a low-cost equivalent-time sampling scheme. Ptolemy and transient simulation tools are used to accurately emulate the linear and nonlinear components in a comprehensive simulation platform, combined with electromagnetic theory to account for through-wall effects and radar scattering. Imaging and Doppler detection examples demonstrate that such a "Biometrics-at-a-glance" capability could have a great impact on security, rescue, and biomedical applications in the future.
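    To make the equivalent-time sampling idea above concrete, the following is a minimal Python sketch with made-up rates and pulse shape (not the dissertation's actual hardware parameters): a repetitive echo is sampled at a modest real-time rate, each repetition with a small additional delay, and the records are interleaved to emulate a much higher effective sampling rate.

```python
import numpy as np

# Equivalent-time sampling sketch (illustrative parameters, not the radar's):
# a repetitive pulse is sampled at a modest real-time rate, but each repetition
# is delayed by a small, known offset; interleaving the records emulates a much
# higher effective sampling rate.

f_adc = 1e9           # real-time ADC rate: 1 GSPS (assumed)
n_interleave = 100    # 100 offset steps -> 100 GSPS effective rate
t_step = 1.0 / (f_adc * n_interleave)

def pulse(t, t0=5e-9, width=0.5e-9):
    """Toy Gaussian pulse standing in for the received radar echo."""
    return np.exp(-((t - t0) / width) ** 2)

n_samples = 64
records = []
for k in range(n_interleave):
    # k-th repetition: sample instants shifted by k * t_step
    t = np.arange(n_samples) / f_adc + k * t_step
    records.append(pulse(t))

# Interleave the low-rate records into one high effective-rate waveform
waveform = np.stack(records, axis=1).reshape(-1)
print(f"effective rate: {1 / t_step / 1e9:.0f} GSPS, samples: {waveform.size}")
```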

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this end, using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels with higher gradient densities are included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information, along with the aforementioned initial partition map, are used in a multivariate refinement procedure to fuse groups with similar characteristics, yielding the final output segmentation. Experimental results, obtained in comparison to published/state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we propose an extension of the aforementioned methodology in a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
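    A minimal sketch of the edge-driven initialization described above, assuming a vector (multichannel) Sobel gradient and a simple threshold; the function name and parameters are illustrative, not the authors' exact formulation.

```python
import numpy as np
from scipy import ndimage

def initial_region_map(image, grad_thresh=0.1):
    """Toy version of the edge-driven initialization: label connected groups of
    low-gradient pixels; high-gradient pixels are left for later refinement.

    image: float array of shape (H, W, C), values in [0, 1].
    """
    # Vector (multichannel) gradient magnitude from per-channel Sobel responses
    gx = np.stack([ndimage.sobel(image[..., c], axis=1) for c in range(image.shape[-1])], -1)
    gy = np.stack([ndimage.sobel(image[..., c], axis=0) for c in range(image.shape[-1])], -1)
    grad = np.sqrt((gx ** 2 + gy ** 2).sum(axis=-1))

    # Pixels below the gradient threshold seed the initial partition
    smooth_mask = grad < grad_thresh * grad.max()
    labels, n_regions = ndimage.label(smooth_mask)
    return labels, n_regions

# Example on a synthetic two-region color image
img = np.zeros((64, 64, 3))
img[:, 32:] = [0.2, 0.6, 0.9]
labels, n = initial_region_map(img)
print("initial regions:", n)
```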

    Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources

    Over the last few years, there has been a remarkable growth in the field of digitization of 3D buildings and urban environments. The substantial improvement of both scanning hardware and reconstruction algorithms has led to the development of representations of buildings and cities that can be remotely transmitted and inspected in real time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In this thesis, we conceptualize cities as a collection of individual buildings; hence, we focus on processing one structure at a time rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and the choice of the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families:
    - Time-of-flight (terrestrial and aerial LiDAR).
    - Photogrammetry (street-level, satellite, and aerial imagery).
    - Human-edited vector data (cadastre and other map sources).
    Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort. Plane- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capturing process is done by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A more inexpensive option is street-level imagery: a dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions. Another advantage of this approach is that it captures high-quality color data, although the resulting geometric information is usually lacking. In this thesis, we analyze in depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. To this end, we work with multiple data sources and combine them when possible to produce models that can be inspected in real time. Our research has focused on the following contributions:
    - Effective and feature-preserving simplification of massive point clouds.
    - Normal estimation algorithms explicitly designed for LiDAR data.
    - Low-stretch panoramic representation for point clouds.
    - Semantic analysis of street-level imagery for improved multi-view stereo reconstruction.
    - Color improvement through heuristic techniques and the registration of LiDAR and imagery data.
    - Efficient and faithful visualization of massive point clouds using image-based techniques.
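    As a toy illustration of why massive LiDAR point clouds with over-represented planar regions call for simplification, the sketch below applies a plain voxel-grid decimation; the thesis' feature-preserving method is more elaborate, and the function name and parameters here are assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Baseline voxel-grid decimation of a point cloud: keep the centroid of the
    points falling in each occupied voxel. This only illustrates the basic idea
    of thinning over-represented planar regions.

    points: (N, 3) float array of LiDAR samples.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

# Example: a dense, nearly planar patch collapses to a sparse, even sampling
cloud = np.random.rand(100_000, 3) * [10.0, 10.0, 0.01]
print(voxel_downsample(cloud, voxel_size=0.25).shape)
```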

    From Polar to Reed-Muller Codes: Unified Scaling, Non-standard Channels, and a Proven Conjecture

    The year 2016, in which I am writing these words, marks the centenary of Claude Shannon, the father of information theory. In his landmark 1948 paper "A Mathematical Theory of Communication", Shannon established the largest rate at which reliable communication is possible, and he referred to it as the channel capacity. Since then, researchers have focused on the design of practical coding schemes that could approach such a limit. The road to channel capacity has been almost 70 years long and, after many ideas, occasional detours, and some rediscoveries, it has culminated in the description of low-complexity and provably capacity-achieving coding schemes, namely, polar codes and iterative codes based on sparse graphs. However, next-generation communication systems require an unprecedented performance improvement and the number of transmission settings relevant in applications is rapidly increasing. Hence, although Shannon's limit seems finally close at hand, new challenges are just around the corner. In this thesis, we trace a road that goes from polar to Reed-Muller codes and, by doing so, we investigate three main topics: unified scaling, non-standard channels, and capacity via symmetry. First, we consider unified scaling. A coding scheme is capacity-achieving when, for any rate smaller than capacity, the error probability tends to 0 as the block length grows large. However, the practitioner is often interested in more specific questions such as, "How much do we need to increase the block length in order to halve the gap between rate and capacity?". We focus our analysis on polar codes and develop a unified framework to rigorously analyze the scaling of the main parameters, i.e., block length, rate, error probability, and channel quality. Furthermore, in light of the recent success of a list decoding algorithm for polar codes, we provide scaling results on the performance of list decoders. Next, we deal with non-standard channels. When we say that a coding scheme achieves capacity, we typically consider binary memoryless symmetric channels. However, practical transmission scenarios often involve more complicated settings. For example, the downlink of a cellular system is modeled as a broadcast channel, and the communication on fiber links is inherently asymmetric. We propose provably optimal low-complexity solutions for these settings. In particular, we present a polar coding scheme that achieves the best known rate region for the broadcast channel, and we describe three paradigms to achieve the capacity of asymmetric channels. To do so, we develop general coding "primitives", such as the chaining construction that has already proved to be useful in a variety of communication problems. Finally, we show how to achieve capacity via symmetry. In the early days of coding theory, a popular paradigm consisted in exploiting the structure of algebraic codes to devise practical decoding algorithms. However, proving the optimality of such coding schemes remained an elusive goal. In particular, the conjecture that Reed-Muller codes achieve capacity dates back to the 1960s. We solve this open problem by showing that Reed-Muller codes and, in general, codes with sufficient symmetry are capacity-achieving over erasure channels under optimal MAP decoding. As the proof does not rely on the precise structure of the codes, we are able to show that symmetry alone guarantees optimal performance.
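    The polarization phenomenon behind polar codes can be illustrated in a few lines: for a binary erasure channel, the erasure probabilities of the synthesized channels follow a simple recursion, and the most reliable indices are chosen as information positions. This is a standard textbook computation; the erasure probability and rate below are chosen arbitrarily for illustration.

```python
import numpy as np

def polarize_bec(eps=0.5, n_levels=10):
    """Erasure probabilities of the 2**n_levels synthetic channels obtained by
    polarizing a binary erasure channel BEC(eps). Standard recursion: a 'minus'
    channel erases with probability 2e - e**2, a 'plus' channel with e**2.
    """
    z = np.array([eps])
    for _ in range(n_levels):
        z = np.concatenate([2 * z - z ** 2, z ** 2])
    return z

z = polarize_bec(eps=0.5, n_levels=10)    # block length N = 1024
rate = 0.4
n_info = int(rate * z.size)
info_set = np.argsort(z)[:n_info]         # most reliable synthetic channels
print(f"worst selected erasure prob: {z[info_set].max():.3e}")
print(f"fraction of nearly perfect channels (Z < 1e-3): {(z < 1e-3).mean():.2f}")
```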

    Signals on Networks: Random Asynchronous and Multirate Processing, and Uncertainty Principles

    The processing of signals defined on graphs has been of interest for many years, and finds applications in a diverse set of fields such as sensor networks, social and economic networks, and biological networks. In graph signal processing applications, signals are not defined as functions on a uniform time-domain grid but as vectors indexed by the vertices of a graph, where the underlying graph is assumed to model the irregular signal domain. Although analysis of such networked models is not new (it can be traced back to the consensus problem studied more than four decades ago), such models have recently been studied from the viewpoint of signal processing, in which the analysis is based on a "graph operator" whose eigenvectors serve as a Fourier basis for the graph of interest. With the help of the graph Fourier basis, a number of topics from classical signal processing (such as sampling, reconstruction, filtering, etc.) are extended to the case of graphs. The main contribution of this thesis is to provide new directions in the field of graph signal processing and further extensions of topics in classical signal processing. The first part of this thesis focuses on a random and asynchronous variant of "graph shift," i.e., localized communication between neighboring nodes. Since the dynamical behavior of randomized asynchronous updates is very different from that of the standard graph shift (i.e., state-space models), this part of the thesis focuses on the convergence and stability behavior of such random asynchronous recursions. Although non-random variants of asynchronous state recursions (possibly with non-linear updates) are well-studied problems with early results dating back to the late 1960s, this thesis considers convergence (and stability) in the statistical mean-squared sense and presents precise conditions for stability by drawing parallels with switching systems. It is also shown that systems exhibit unexpected behavior under randomized asynchronicity: an unstable system (in the synchronous world) may be stabilized simply by the use of randomized asynchronicity. Moreover, randomized asynchronicity may result in a lower total computational complexity in certain parameter settings. The thesis presents applications of the random asynchronous model in the context of graph signal processing, including autonomous clustering of a network of agents and a node-asynchronous communication protocol that implements a given rational filter on the graph.
    The second part of the thesis focuses on extensions of the following topics in classical signal processing to the case of graphs: multirate processing and filter banks, discrete uncertainty principles, and energy compaction filters for optimal filter design. The thesis also considers an application to heat diffusion over networks. Multirate systems and filter banks find many applications in signal processing theory and implementations. Despite the possibility of extending 2-channel filter banks to bipartite graphs, this thesis shows that this relation cannot be generalized to M-channel systems on M-partite graphs. As a result, the extension of classical multirate theory to graphs is nontrivial, and such extensions cannot be obtained without certain mathematical restrictions on the graph. The thesis provides the necessary conditions on the graph such that fundamental building blocks of multirate processing remain valid in the graph domain. In particular, it is shown that when the underlying graph satisfies a condition called the M-block cyclic property, classical multirate theory can be extended to graphs. The uncertainty principle is an essential mathematical concept in science and engineering, and uncertainty principles generally state that a signal cannot have an arbitrarily "short" description in the original basis and in the Fourier basis simultaneously. Based on the fact that graph signal processing proposes two different bases (i.e., the vertex and the graph Fourier domains) to represent graph signals, this thesis shows that the total number of nonzero elements of a graph signal and of its representation in the graph Fourier domain is lower bounded by a quantity that depends on the underlying graph. The thesis also presents the necessary and sufficient condition for the existence of 2-sparse and 3-sparse eigenvectors of a connected graph. When such eigenvectors exist, the uncertainty bound is very low, tight, and independent of the global structure of the graph. The thesis also considers the classical spectral concentration problem. In the context of polynomial graph filters, the problem reduces to the polynomial concentration problem studied more generally by Slepian in the 1970s. The thesis studies the asymptotic behavior of the optimal solution in the case of narrow bandwidth. Different examples of graphs are also compared in order to show that the maximum energy compaction and the optimal filter depend heavily on the graph spectrum. In the last part, the thesis considers the estimation of the starting time of a heat diffusion process from noisy measurements, when a single point source with unknown starting time is located on a known vertex of the graph. In particular, the Cramér-Rao lower bound for the estimation problem is derived, and it is shown that graphs with higher connectivity have a larger lower bound, making the estimation more difficult.
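    The graph Fourier basis viewpoint summarized above can be made concrete with a short sketch that uses the combinatorial Laplacian of a small path graph as the graph operator (the thesis also considers more general shift operators; the signal and filter here are illustrative only).

```python
import numpy as np

# Graph Fourier transform sketch using the combinatorial Laplacian as the
# graph operator on a 5-node undirected path graph.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian
eigvals, U = np.linalg.eigh(L)            # columns of U form the graph Fourier basis

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # a smooth signal indexed by the vertices
x_hat = U.T @ x                           # graph Fourier transform
print("spectral coefficients:", np.round(x_hat, 3))

# A low-pass graph filter: keep only the two lowest graph frequencies
h = np.where(np.arange(5) < 2, 1.0, 0.0)
x_filtered = U @ (h * x_hat)
print("filtered signal:", np.round(x_filtered, 3))
```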

    Characterization and Control in Large Hilbert Spaces

    Computational devices built on and exploiting quantum phenomena have the potential to revolutionize our understanding of computational complexity by solving certain problems faster than the best known classical algorithms. Unfortunately, unlike the digital computers quantum information processing devices hope to replace, quantum information is fragile by nature and lacks the inherent robustness of digital logic. Indeed, whatever we can do to control the evolution, nature can also do in some random and unknown fashion, ruining the computation. This thesis explores the task of building a classical control architecture to control a large quantum system and how to characterize the behaviour of the system to determine the level of control reached. Both these tasks appear to require an exponential amount of resources as the size of the system grows. The inability to efficiently control and characterize large-scale quantum systems would certainly militate against their potential computational usefulness, making them important problems to solve. The solutions presented in this thesis are all tested for their practical usefulness by implementing them in either liquid- or solid-state nuclear magnetic resonance.

    Temporal pattern recognition in multiparameter ICU data

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (leaves 210-219). Intensive Care Unit (ICU) patients are physiologically fragile and require vigilant monitoring and support. The myriad of data gathered from biosensors and clinical information systems has created a challenge for clinicians to assimilate and interpret such large volumes of data. Physiologic measurements in the ICU are inherently noisy and multidimensional, and can readily fluctuate in response to therapeutic interventions as well as evolving pathophysiologic states. ICU patient monitoring systems may potentially improve the efficiency, accuracy, and timeliness of clinical decision-making in intensive care. However, the aforementioned characteristics of ICU data can pose a significant signal processing and pattern recognition challenge, often leading to false and clinically irrelevant alarms. We have developed a temporal database of several thousand ICU patient records to facilitate research in advanced monitoring systems. The MIMIC-II database includes high-resolution physiologic waveforms such as ECG, blood pressure waveforms, vital sign trends, laboratory data, fluid balance, therapy profiles, and clinical progress notes over each patient's ICU stay. We quantitatively and qualitatively characterize the MIMIC-II database and include examples of clinical studies that can be supported by its unique attributes. We also introduce a novel algorithm for identifying "similar" temporal patterns that may illuminate hidden information in physiologic time series. The discovery of multi-parameter temporal patterns that are predictive of physiologic instability may aid clinicians in optimizing care. In this thesis, we introduce a novel temporal similarity metric based on a transformation of time series data into an intuitive symbolic representation. The symbolic transform is based on a wavelet decomposition to characterize time series dynamics at multiple time scales. The symbolic transformation allows us to utilize classical information retrieval algorithms based on a vector-space model. Our algorithm is capable of assessing the similarity between multi-dimensional time series and is computationally efficient. We utilized our algorithm to identify similar physiologic patterns in hemodynamic time series from ICU patients. The results of this thesis demonstrate that statistical similarities between different patient time series may have meaningful physiologic interpretations in the detection of impending hemodynamic deterioration. Thus, our framework may be of potential use in clinical decision-support systems. As a generalized time series similarity metric, the algorithms that are described have applications in several other domains as well. by Mohammed Saeed. Ph.D.
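    The sketch below is a loose illustration of the kind of wavelet-based symbolic transform plus vector-space similarity described above, not the thesis' actual algorithm: Haar detail coefficients at several scales are quantized into symbols, and two series are compared through the cosine similarity of their symbol histograms. All names, bin counts, and the toy heart-rate data are assumptions.

```python
import numpy as np

def haar_symbols(x, n_bins=4):
    """Toy symbolic transform: Haar detail coefficients at each scale are
    quantized into integer symbols (loosely inspired by the wavelet-based
    symbolic representation described in the abstract). Assumes a
    power-of-two length series.
    """
    x = np.asarray(x, dtype=float)
    symbols, level = [], 0
    while x.size >= 2:
        avg = (x[0::2] + x[1::2]) / 2.0
        detail = (x[0::2] - x[1::2]) / 2.0
        # Quantize this scale's detail coefficients into n_bins symbols
        bins = np.quantile(detail, np.linspace(0, 1, n_bins + 1)[1:-1])
        symbols.extend((level * n_bins + np.digitize(detail, bins)).tolist())
        x, level = avg, level + 1
    return symbols

def cosine_similarity(a, b, vocab_size):
    """Vector-space similarity between two symbol sequences via term counts."""
    va = np.bincount(a, minlength=vocab_size).astype(float)
    vb = np.bincount(b, minlength=vocab_size).astype(float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))

t = np.linspace(0, 1, 64)
hr_stable = 70 + np.sin(2 * np.pi * 3 * t)               # toy heart-rate trends
hr_drifting = 70 + 10 * t + np.sin(2 * np.pi * 3 * t)
vocab = 4 * 6                                             # n_bins * levels for length 64
print(cosine_similarity(haar_symbols(hr_stable), haar_symbols(hr_drifting), vocab))
```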

    A comprehensive system for non-intrusive load monitoring and diagnostics

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 603-612). Energy monitoring and smart grid applications have rapidly developed into a multi-billion dollar market. The continued growth and utility of monitoring technologies is predicated upon the ability to economically extract actionable information from acquired data streams. One of the largest roadblocks to effective analytics arises from the disparities of scale inherent in all aspects of data collection and processing. Managing these multifaceted dynamic range issues is crucial to the success of load monitoring and smart grid technology. This thesis presents NilmDB, a comprehensive framework for energy monitoring applications. The NilmDB management system is a network-enabled database that supports efficient storage, retrieval, and processing of vast, timestamped data sets. It allows a flexible and powerful separation between on-site, high-bandwidth processing operations and off-site, low-bandwidth control and visualization. Specific analysis can be performed as data is acquired, or retroactively as needed, using short filter scripts written in Python and transferred to the monitor. The NilmDB framework is used to implement a spectral envelope preprocessor, an integral part of many non-intrusive load monitoring workflows that extracts relevant harmonic information and provides significant data reduction. A robust approach to spectral envelope calculation is presented using a 4-parameter sinusoid fit. A new physically-windowed sensor architecture for improving the dynamic range of non-intrusive data acquisition is also presented and demonstrated. The hardware architecture utilizes digital techniques and physical cancellation to track a large-scale main signal while maintaining the ability to capture small-scale variations. by James Paris. Ph.D.
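    The 4-parameter sinusoid fit mentioned above can be sketched as an iterative least-squares problem in the two quadrature amplitudes, the offset, and the frequency, in the spirit of an IEEE-1057 style fit; this is a generic sketch with made-up data, not the thesis' implementation.

```python
import numpy as np

def sine_fit_4param(t, y, f0, n_iter=5):
    """Least-squares 4-parameter sinusoid fit: y ~ A*cos(w t) + B*sin(w t) + C,
    with the frequency refined by linearization around the current estimate.
    Generic sketch, not the thesis' exact algorithm.
    """
    w = 2 * np.pi * f0
    # Initial 3-parameter fit at the frequency guess
    D = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    A, B, C = np.linalg.lstsq(D, y, rcond=None)[0]
    for _ in range(n_iter):
        # Add a column for the frequency perturbation and re-solve
        D = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t),
                             -A * t * np.sin(w * t) + B * t * np.cos(w * t)])
        A, B, C, dw = np.linalg.lstsq(D, y, rcond=None)[0]
        w += dw
    amplitude = np.hypot(A, B)
    return amplitude, w / (2 * np.pi), C

# Example: recover a ~60 Hz component from a noisy waveform (made-up data)
fs = 8000.0
t = np.arange(0, 0.1, 1 / fs)
y = 3.0 * np.sin(2 * np.pi * 59.8 * t + 0.4) + 0.5 + 0.05 * np.random.randn(t.size)
print(sine_fit_4param(t, y, f0=60.0))
```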