41 research outputs found
Feature-Based Probabilistic Data Association for Video-Based Multi-Object Tracking
This work proposes a feature-based probabilistic data association and tracking approach (FBPDATA) for multi-object tracking. FBPDATA is based on re-identification and tracking of individual video image points (feature points) and aims at solving the problems of partial, split (fragmented), bloated, or missed detections, which arise from sensory or algorithmic restrictions, the limited field of view of the sensors, and occlusion situations.
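Probabilistic data association of this kind weighs each candidate detection by its likelihood under the track's predicted state. A minimal single-track sketch under a Gaussian measurement model (the function, the clutter weight `p_false`, and all numbers are illustrative, not FBPDATA itself):

```python
import numpy as np

def association_probabilities(pred, cov, measurements, p_false=0.1):
    """Posterior probability that each candidate measurement (one per row)
    belongs to a track whose predicted position is Gaussian with mean `pred`
    and covariance `cov`; a flat clutter weight `p_false` stands in for the
    'none of these' hypothesis (returned as the last entry)."""
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    d = measurements - pred
    # Gaussian likelihood of each measurement given the predicted position.
    lik = norm * np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, inv, d))
    weights = np.append(lik, p_false)
    return weights / weights.sum()
```

A measurement close to the prediction dominates the association weights, while distant candidates fall below the clutter hypothesis.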
Fusion of Imaging and Inertial Sensors for Navigation
The motivation of this research is to address the limitations of satellite-based navigation by fusing imaging and inertial systems. The research begins by rigorously describing the imaging and navigation problem and developing practical models of the sensors, then presenting a transformation technique to detect features within an image. Given a set of features, a statistical feature projection technique is developed which utilizes inertial measurements to predict vectors in the feature space between images. This coupling of the imaging and inertial sensors at a deep level is then used to aid the statistical feature matching function. The feature matches and inertial measurements are then used to estimate the navigation trajectory using an extended Kalman filter. After accomplishing a proper calibration, the image-aided inertial navigation algorithm is then tested using a combination of simulation and ground tests with both tactical- and consumer-grade inertial sensors. While limitations of the Kalman filter are identified, the experimental results demonstrate a navigation performance improvement of at least two orders of magnitude over the respective inertial-only solutions.
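The extended Kalman filter mentioned above linearizes a nonlinear measurement model about the predicted state. A toy one-dimensional sketch with a constant-velocity state and a range-like measurement (the state, the models, and all tuning values are illustrative stand-ins, not the thesis's image-aided filter):

```python
import numpy as np

def ekf_step(x, P, z, dt, q, r, d=10.0):
    """One extended-Kalman-filter predict/update cycle for a 1-D
    constant-velocity state x = [position, velocity], observed through a
    nonlinear range measurement z = sqrt(position**2 + d**2) taken from a
    sensor offset d to the side."""
    # Predict with a constant-velocity motion model (linear, so F is exact).
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: linearize the range measurement about the predicted state.
    rng = np.hypot(x[0], d)
    H = np.array([[x[0] / rng, 0.0]])   # Jacobian of the measurement at x
    S = H @ P @ H.T + r                 # innovation covariance (1x1)
    K = P @ H.T / S                     # Kalman gain
    x = x + (K * (z - rng)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

With a zero innovation the update leaves the state at its prediction while still shrinking the position variance, which is the behavior a consistent filter must exhibit.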
Hidden Markov Models
Hidden Markov Models (HMMs), although known for decades, have attracted renewed interest in recent years and are still under active development. This book presents theoretical issues and a variety of HMM applications in speech recognition and synthesis, medicine, neurosciences, computational biology, bioinformatics, seismology, environmental protection and engineering. I hope that readers will find this book useful and helpful for their own research.
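As a flavor of the theory such a book covers, the forward algorithm computes the likelihood of an observation sequence under an HMM by propagating state probabilities one step at a time. A minimal numpy sketch with an invented two-state, two-symbol model:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Likelihood of an observation sequence under an HMM via the forward
    algorithm. pi: initial state distribution (n,), A: state-transition
    matrix (n, n), B: emission matrix (n, m), obs: observation indices."""
    alpha = pi * B[:, obs[0]]          # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate states, weight by emission
    return float(alpha.sum())

# Toy model: all numbers are invented for the example.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
p = forward(pi, A, B, [0, 1, 0])
```

The recursion sums over all hidden-state paths in O(n²T) time instead of enumerating the exponentially many paths explicitly.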
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
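Sparse coding as described above seeks a representation with few active dictionary elements. One standard way to compute it is the iterative shrinkage-thresholding algorithm (ISTA) for the l1-penalized least-squares (lasso) objective; a self-contained numpy sketch with a synthetic dictionary (ISTA is a generic method used here for illustration, not necessarily the monograph's algorithm of choice):

```python
import numpy as np

def ista(D, x, lam, n_iter=500):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by iterative
    shrinkage-thresholding. D: dictionary (d, k) with unit-norm atoms."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - (D.T @ (D @ a - x)) / L    # gradient step on the data term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return a

# Synthetic example: a signal built from 2 of 20 atoms is recovered sparsely.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 20))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 11]
a = ista(D, x, lam=0.05)
```

The soft-thresholding step is what drives most coefficients exactly to zero, yielding the compact representations the monograph discusses.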
Time diversity solutions to cope with lost packets
A dissertation submitted to the Departamento de Engenharia Electrotécnica of the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engenharia Electrotécnica e de Computadores.
Modern broadband wireless systems require high throughputs and can also have very high
Quality-of-Service (QoS) requirements, namely small error rates and short delays. A high spectral efficiency is needed to meet these requirements. Lost packets, either due to errors or collisions, are usually discarded and need to be retransmitted, leading to performance degradation.
An alternative to simple retransmission that can improve both power and spectral
efficiency is to combine the signals associated with different transmission attempts.
This thesis analyses two time diversity approaches to cope with lost packets that are
relatively similar at the physical layer but handle different causes of packet loss. The first is a low-complexity Diversity-Combining (DC) Automatic Repeat reQuest (ARQ) scheme employed in a Time Division Multiple Access (TDMA) architecture, adapted for channels dedicated to a single user. The second is a Network-assisted Diversity Multiple Access (NDMA) scheme, a multi-packet detection approach able to separate multiple mobile terminals transmitting simultaneously in one slot using temporal diversity. This thesis combines these techniques with Single Carrier with Frequency-Domain Equalization (SC-FDE) systems, which are widely recognized as the best candidates for the uplink of future broadband wireless systems.
It proposes a new NDMA scheme capable of handling more Mobile Terminals (MTs)
than the user-separation capacity of the receiver. This thesis also proposes a set of analytical tools that can be used to analyse and optimize the use of these two systems. These tools are then employed to compare both approaches in terms of error rate, throughput and delay, taking implementation complexity into consideration.
Finally, it is shown that both approaches represent viable solutions for future broadband wireless communications, complementing each other.
Fundação para a Ciência e Tecnologia - PhD grant SFRH/BD/41515/2007; CTS multi-annual funding project PEst-OE/EEI/UI0066/2011; IT pluri-annual funding project PEst-OE/EEI/LA0008/2011; U-BOAT project PTDC/EEA-TEL/67066/2006; MPSat project PTDC/EEA-TEL/099074/2008; OPPORTUNISTIC-CR project PTDC/EEA-TEL/115981/200
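The diversity-combining idea above, i.e. reusing failed transmission attempts instead of discarding them, can be illustrated with maximal-ratio combining of repeated noisy copies of a BPSK packet (the channel gains, noise level, and packet length are invented for the example; the thesis's DC-ARQ and NDMA receivers are considerably more sophisticated):

```python
import numpy as np

def mrc_combine(copies, gains):
    """Maximal-ratio combining: weight each received copy of the packet by
    its (conjugated) channel gain, sum, and take a hard BPSK decision."""
    combined = sum(np.conj(g) * y for g, y in zip(gains, copies))
    return np.where(np.real(combined) >= 0, 1, -1)

# Three transmission attempts of the same BPSK packet over channels with
# different gains.
rng = np.random.default_rng(1)
bits = rng.choice([-1, 1], size=1000)
gains = [0.9, 0.7, 0.5]
sigma = 0.8
copies = [g * bits + sigma * rng.standard_normal(1000) for g in gains]

single = np.where(copies[0] >= 0, 1, -1)   # decision from the first attempt only
combined = mrc_combine(copies, gains)      # decision after combining all three
```

Combining the attempts raises the effective SNR, so the combined decision makes noticeably fewer errors than decoding any single attempt.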
Opportunistic communications in large uncoordinated networks
The growing number of wireless devices offering high-data-rate services limits the coexistence of wireless systems sharing the same resources in a given geographical area because of inter-system interference. Therefore, interference management plays a key role in permitting the coexistence of several heterogeneous communication services. However, classical interference-management strategies require side information, giving rise to the need for inter-system coordination and cooperation, which is not always practical.
Opportunistic communications offer a potential solution to the problem of inter-system interference management. The basic principle of opportunistic communications is to efficiently and robustly exploit the resources available in a wireless network and adapt the transmitted signals to the state of the network to avoid inter-system interference. Therefore, opportunistic communications depend on inferring the available network resources that can be safely exploited without inducing interference in coexisting communication nodes. Once the available network resources are identified, the most prominent opportunistic communication techniques consist in designing scenario-adapted precoding/decoding strategies to exploit the so-called null space. Despite this, classical solutions in the literature suffer from two main drawbacks: the lack of robustness to detection errors and the need for intra-system cooperation.
This thesis focuses on the design of a null space-based opportunistic communication scheme that addresses the drawbacks exhibited by existing methodologies under the assumption that opportunistic nodes do not cooperate. For this purpose, a generalized detection error model independent of the null-space identification mechanism is introduced that allows the design of solutions that exhibit minimal inter-system interference in the worst case. These solutions respond to a maximum signal-to-interference ratio (SIR) criterion, which is optimal under non-cooperative conditions. The proposed methodology allows the design of a family of orthonormal waveforms that perform a spreading of the modulated symbols within the detected null space, which is key to minimizing the induced interference density. The proposed solutions are invariant within the inferred null space, allowing the removal of the feedback link without giving up coherent waveform detection.
In the absence of coordination, the waveform design relies solely on locally sensed network state information, inducing a mismatch between the null spaces identified by the transmitter and receiver that may worsen system performance. Although the proposed solution is robust to this mismatch, the design of enhanced receivers using active subspace detection schemes is also studied.
When the total number of network resources increases arbitrarily, the proposed solutions tend to be linear combinations of complex exponentials, providing an interpretation in the frequency domain. This asymptotic behavior allows us to adapt the proposed solution to frequency-selective channels by means of a cyclic prefix and to study an efficient modulation similar to the time division multiplexing scheme but using circulant waveforms.
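The null-space transmission principle can be sketched numerically: estimate the subspace occupied by incumbent transmissions from sensed snapshots, then precode the opportunistic symbols onto its orthogonal complement. A toy numpy sketch (the dimensions and the SVD-based detector are illustrative; the thesis's waveform design and detection-error model are more elaborate):

```python
import numpy as np

def null_space_precoder(sensed, tol=1e-8):
    """Estimate the subspace occupied by incumbent signals from sensed
    snapshots (rows) via an SVD and return an orthonormal basis of its
    orthogonal complement -- the 'null space' left free for opportunistic use."""
    _, s, Vt = np.linalg.svd(sensed)
    rank = int((s > tol).sum())
    return Vt[rank:].T           # columns span the unoccupied subspace

# Toy setup: incumbents occupy a 3-dimensional subspace of an 8-dimensional
# signal space.
rng = np.random.default_rng(2)
basis_occ = rng.standard_normal((3, 8))          # incumbent signal space
snapshots = rng.standard_normal((50, 3)) @ basis_occ
W = null_space_precoder(snapshots)
symbols = rng.standard_normal(W.shape[1])
tx = W @ symbols                 # opportunistic transmit vector
```

Because the precoder's columns are orthonormal and orthogonal to the sensed subspace, the transmit vector induces (numerically) zero interference on the incumbents, and a receiver that knows the same null space recovers the symbols by a simple projection.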
Finally, the impact of the use of multiple antennas in opportunistic null-space-based communications is studied. The analysis reveals that the structure of the antenna clusters does not affect the opportunistic communication, since the proposed waveform mimics the behavior of a single-antenna transmitter. On the other hand, the number of sensors employed translates into an improvement in terms of SIR.
Improved methods for single-particle cryogenic electron microscopy
Biological macromolecules such as enzymes are nanoscale machines. This is true in a concrete sense: if the atomic structure of a biological macromolecule can be obtained, the theories of mechanics and intermolecular forces can be applied to explain how the machine works in terms that engineers would understand, including motors, ratchets, gates and transducers. Nevertheless, biological macromolecules are complex, fragile and extremely small, so obtaining their structures is a challenging experimental endeavor. Single-particle cryogenic electron microscopy (cryo-EM) is a technique for determining the 3D structure of a biological macromolecule from a large set of 2D electron micrographs of individual structurally identical particles. To obtain such images, a solution of the macromolecules must be prepared in the frozen-hydrated state, embedded in a thin electron-transparent glassy film of water. This specimen must then be imaged with a very short exposure to avoid radiation damage. A powerful computer must then be used to sort, align, and average the 2D particle images to back-calculate the 3D structure. At its best, cryo-EM can determine the structures of biological macromolecules to atomic resolution. In practice, this goal is usually not achieved. Cryo-EM has become significantly more powerful in the past few years due to improvements in equipment and methodology. Several of the most significant advances originated in the labs of David Agard and Yifan Cheng at UCSF. When I began my PhD with Yifan, the spirit in the lab was that cryo-EM could keep getting better and better: with enough engineering, determining the 3D structure of an arbitrary biological macromolecule would be as routine an experiment as gel electrophoresis or DNA sequencing. Inspired, I took on projects in the lab that I thought would move the field closer to that goal.
In the first chapter of this thesis, I describe work I did supporting a project initiated by David Agard and his long-time scientific programmer Shawn Zheng. They developed and implemented an algorithm, MotionCor2, for correcting the complex, anisotropic movements that occur when a frozen-hydrated specimen interacts with the high-energy electron beam. My role was to benchmark MotionCor2 on a panel of real-world 3D reconstruction tasks. I was able to show that MotionCor2 restored the highest-resolution details in the images, ultimately yielding significantly better structures than simpler algorithms. For me, this project highlighted the importance of benchmarking an algorithm for use in routine real-world conditions with the right metrics. In chapter 1, I include the manuscript for the MotionCor2 study, formatted to highlight my contributions, which were moved to the supplement in the original publication in Nature Methods. One of the major remaining issues with cryo-EM is sample preparation: preparing the thin freestanding films of frozen-hydrated particles necessarily exposes those particles to air-water interfaces. Many fragile macromolecular complexes denature when exposed to such interfaces, preventing structure determination with cryo-EM. In chapters 2 and 3, I describe my efforts to develop a simple, robust approach to stabilizing fragile macromolecular complexes during the vitrification process. In chapter 2, I develop a method for coating EM grids with an electron-transparent and functionalizable graphene-oxide (GO) support film. I demonstrate that such GO grids are compatible with high-resolution structure determination. This work was published in the Journal of Structural Biology in 2018. In chapter 3, I extend this work by functionalizing GO grids with nucleic acids, enabling routine structure determination of uncrosslinked chromatin specimens.
In ongoing work, I used nucleic acid grids to solve high-resolution structures of a highly fragile specimen, the snf2h-nucleosome complex, and analyzed the conformational heterogeneity of the nucleosome substrate. These results were made possible by the nucleic acid grid, as the other major approach for stabilizing chromatin specimens, chemical crosslinking, did not work for this specimen.
Perhaps the most fundamental problem with single-particle cryo-EM is the radiation sensitivity of frozen-hydrated macromolecules. To image biological matter with electrons is to destroy it, so obtaining images of undamaged specimens requires very short, highly undersampled exposures. The resultant images are extremely noisy and low-contrast, with most particles barely distinguishable from the background. In chapter 4, I describe a novel computational approach to generating contrast in cryo-EM. Using a recently described machine learning strategy for training a parameterized denoising algorithm, I developed a computer program, restore, that denoises cryo-EM images, greatly enhancing their contrast and interpretability. This program leverages recent advances in computer vision and deep learning which have not yet been widely used in cryo-EM image processing algorithms. To characterize the performance of the algorithm on real-world data, I extended conventional metrics for image resolution to measure how an arbitrary transformation affects images at different spatial frequencies. These novel metrics are general and may be useful for characterizing other nonlinear reconstruction algorithms in cryo-EM and medical imaging. Finally, I showed that denoised cryo-EM images maintain the high-resolution information required for accurate 3D reconstruction. Denoising can be applied to conventional cryo-EM images and can be reversed whenever necessary. I have made the software for the restore program publicly available and have submitted a manuscript for peer-reviewed publication.
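The frequency-resolved comparison described in chapter 4 is in the spirit of Fourier ring correlation (FRC), which correlates two images ring by ring in the Fourier domain. A minimal numpy sketch (the ring count and binning are illustrative simplifications of the metrics actually used):

```python
import numpy as np

def frc(img1, img2, n_rings=16):
    """Fourier ring correlation between two equal-size square images: the
    normalized cross-correlation of their Fourier transforms, evaluated
    over concentric rings of spatial frequency."""
    F1 = np.fft.fftshift(np.fft.fft2(img1))
    F2 = np.fft.fftshift(np.fft.fft2(img2))
    n = img1.shape[0]
    y, x = np.indices((n, n)) - n // 2
    radius = np.hypot(x, y)
    edges = np.linspace(0.0, n / 2, n_rings + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = (radius >= lo) & (radius < hi)
        num = (F1[ring] * np.conj(F2[ring])).sum()
        den = np.sqrt((np.abs(F1[ring]) ** 2).sum() * (np.abs(F2[ring]) ** 2).sum())
        curve.append(float((num / den).real) if den > 0 else 0.0)
    return np.array(curve)
```

An image correlated with itself gives 1.0 in every ring, while two independent noise images decorrelate across all frequencies; applying the metric before and after a transformation shows which spatial frequencies the transformation preserves.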