10 research outputs found

    Joint signal detection and channel estimation in rank-deficient MIMO systems

    The evolution of the thriving family of 802.11 standards has encouraged the development of technologies for wireless local area networks (WLANs). To meet the ever-growing need for very high data rate communications, multiple-antenna (MIMO) systems are a viable solution: they increase the transmission rate without requiring additional power or bandwidth. However, industry remains reluctant to increase the number of antennas on laptops and wireless accessories. Moreover, indoors, rank deficiency of the channel matrix can occur because of the scattering nature of the propagation paths; outdoors, the same phenomenon is caused by long transmission distances. Motivated by these considerations, this project studies the viability of wideband wireless transceivers capable of regularizing the rank deficiency of the wireless channel. The goal is to develop techniques that can separate M co-channel signals, even with a single antenna, and provide an accurate channel estimate. The solutions described in this document aim to overcome the difficulties that the medium poses to wideband wireless transceivers. The outcome of this study is a transceiver algorithm suited to rank-deficient MIMO systems.
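    As a rough illustration of the problem this thesis addresses (and not the transceiver algorithm it develops), the Python/NumPy sketch below uses made-up dimensions, a rank-1 2x2 channel, pilot-based ridge-regularized least-squares channel estimation, and an MMSE-style linear detector. The regularization keeps the matrix inversions numerically stable, but it cannot recover the spatial dimensions lost to rank deficiency, which is exactly what motivates the more elaborate transceiver design studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the thesis):
M, N, P, T = 2, 2, 8, 64        # tx antennas, rx antennas, pilot length, data length

# Rank-deficient channel: an outer product of two vectors has rank 1 < min(M, N)
H = np.outer(rng.standard_normal(N) + 1j * rng.standard_normal(N),
             rng.standard_normal(M) + 1j * rng.standard_normal(M))

def qpsk(bits):
    # Map pairs of bits onto a unit-energy QPSK constellation
    return ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

S_pilot = qpsk(rng.integers(0, 2, 2 * M * P)).reshape(M, P)   # known pilot symbols
S_data  = qpsk(rng.integers(0, 2, 2 * M * T)).reshape(M, T)   # unknown data symbols

noise = lambda shape: 0.05 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
Y_pilot = H @ S_pilot + noise((N, P))
Y_data  = H @ S_data  + noise((N, T))

# Ridge-regularized least-squares channel estimate: Y S^H (S S^H + lam I)^-1
lam = 1e-2
H_hat = Y_pilot @ S_pilot.conj().T @ np.linalg.inv(
    S_pilot @ S_pilot.conj().T + lam * np.eye(M))

# MMSE-style linear detection; plain zero-forcing would blow up because H_hat
# is nearly singular. Regularization stabilizes the inverse but does not restore
# the spatial dimensions lost to rank deficiency.
G = np.linalg.inv(H_hat.conj().T @ H_hat + lam * np.eye(M)) @ H_hat.conj().T
S_est = G @ Y_data
```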

    Applications of compressive sensing to direction of arrival estimation

    Direction of Arrival (DOA) estimation of plane waves impinging on an array of sensors is one of the most important tasks in array signal processing and has attracted tremendous research interest over the past several decades. The estimated DOAs are used in applications such as localization of transmitting sources, massive MIMO and 5G networks, tracking and surveillance in radar, and many others. The major objective in DOA estimation is to develop approaches that reduce the hardware complexity, in terms of receiver cost and power consumption, while providing a desired level of estimation accuracy and robustness in the presence of multiple sources and/or multiple paths. Compressive sensing (CS) is a sampling methodology that merges signal acquisition and compression and allows a signal to be sampled at a rate below the conventional Nyquist bound: signals can be acquired at sub-Nyquist rates without loss of information, provided they possess a sufficiently sparse representation in some domain and the measurement strategy is suitably chosen. CS has recently been applied to DOA estimation, leveraging the fact that a superposition of planar wavefronts corresponds to a sparse angular power spectrum. This dissertation investigates the application of compressive sensing to the DOA estimation problem with the goal of reducing the hardware complexity and/or achieving high resolution and a high level of robustness. Many CS-based DOA estimation algorithms have been proposed in recent years, offering low numerical complexity while being insensitive to source correlation and allowing arbitrary array geometries. Moreover, CS has also been applied in the spatial domain with the main goal of reducing the complexity of the measurement process, using fewer RF chains and storing less measured data without losing any significant information. In the first part of the work we investigate the model mismatch problem for CS-based DOA estimation with off-grid sources. A very common approach for applying the CS framework is to construct a finite dictionary by sampling the angular domain with a predefined grid. Since the true source directions are almost surely not located exactly on these grid points, a model mismatch arises that deteriorates the performance of the estimators. We take an analytical approach to investigate the effect of such grid offsets on the recovered spectra, showing that each off-grid source can be well approximated by the two neighboring points on the grid. Based on this, we propose a simple and efficient scheme to estimate the grid offset for a single source or multiple well-separated sources, and we discuss a numerical procedure for the joint estimation of the grid offsets of closely spaced sources. In the second part of the thesis we study the design of compressive antenna arrays for DOA estimation, which aim to provide a larger aperture with reduced hardware complexity, as well as reconfigurability, by linearly combining the antenna outputs into a smaller number of receiver channels. We present a basic receiver architecture of such a compressive array and introduce a generic system model that accommodates different options for the hardware implementation. We then discuss the design of the analog combining network that performs the receiver channel reduction. Our numerical simulations demonstrate the superiority of the proposed optimized compressive arrays over sparse arrays of the same complexity and over compressive arrays with randomly chosen combining kernels. Finally, we consider two further applications of sparse recovery and compressive arrays: CS-based time-delay estimation and compressive channel sounding. We show that the proposed off-grid sparse recovery and compressive array approaches yield significant improvements over conventional methods in both applications.
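    The following sketch (Python/NumPy, with illustrative parameters rather than the dissertation's) shows the basic on-grid CS formulation of the problem: a steering-vector dictionary over a 1-degree angular grid for a uniform linear array, a random analog combining matrix standing in for the compressive array front end, and orthogonal matching pursuit as a generic sparse recovery solver. One source is deliberately placed off the grid, the situation the grid-offset analysis in the first part of the thesis addresses; the actual offset estimator and the optimized combining weights are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (assumptions, not the dissertation's exact parameters):
N = 16                                          # physical antennas (ULA, lambda/2 spacing)
M = 6                                           # receiver channels after analog combining
grid = np.deg2rad(np.arange(-90, 90.5, 1.0))    # 1-degree angular dictionary

def steering(theta):
    # ULA steering vectors for angle(s) theta in radians, shape (N, len(theta))
    n = np.arange(N)[:, None]
    return np.exp(1j * np.pi * n * np.sin(np.atleast_1d(theta))[None, :])

A = steering(grid)                              # N x G on-grid dictionary
Phi = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * N)

# Two far-field sources; the second lies off the grid on purpose
true_doas = np.deg2rad([-20.0, 33.3])
x = steering(true_doas) @ (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = Phi @ x + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

def omp(y, D, k):
    # Orthogonal matching pursuit: greedily pick k atoms of D that explain y
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.conj().T @ r))))
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        r = y - Ds @ coef
    return np.array(support), coef

support, coef = omp(y, Phi @ A, k=2)
print("estimated DOAs (deg):", np.rad2deg(grid[support]))
# An off-grid source leaks onto its neighbouring grid points; interpolating between
# the two neighbours (as analysed in the thesis) refines the estimate beyond the grid.
```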

    Deep learning for fast and robust medical image reconstruction and analysis

    Medical imaging is an indispensable component of modern medical research as well as clinical practice. Nevertheless, imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) are costly and less accessible to the majority of the world. To make medical devices more accessible, affordable and efficient, it is crucial to re-calibrate our current imaging paradigm for smarter imaging. In particular, as medical imaging techniques have highly structured forms in the way they acquire data, they provide us with an opportunity to optimise the imaging techniques holistically by leveraging data. The central theme of this thesis is to explore different opportunities where we can exploit data and deep learning to improve the way we extract information for better, faster and smarter imaging. This thesis explores three distinct problems. The first problem is the time-consuming nature of dynamic MR data acquisition and reconstruction. We propose deep learning methods for accelerated dynamic MR image reconstruction, resulting in up to a 10-fold reduction in imaging time. The second problem is the redundancy in our current imaging pipeline. Traditionally, the imaging pipeline has treated acquisition, reconstruction and analysis as separate steps. However, we argue that one can approach them holistically and optimise the entire pipeline jointly for a specific target goal. To this end, we propose deep learning approaches for obtaining high-fidelity cardiac MR segmentation directly from significantly undersampled data, greatly exceeding the undersampling limit for image reconstruction. The final part of this thesis tackles the problem of interpretability of deep learning algorithms. We propose attention models that can implicitly focus on salient regions in an image to improve accuracy for ultrasound scan plane detection and CT segmentation. More crucially, these models can provide explainability, which is a crucial stepping stone for the harmonisation of smart imaging and current clinical practice.
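    One common building block in learned reconstruction from undersampled k-space, shown here as a generic illustration rather than the thesis's specific architecture, is a data-consistency step that re-imposes the acquired k-space samples on the network output. A minimal NumPy sketch with a toy image, a random sampling mask, and the CNN itself omitted:

```python
import numpy as np

def data_consistency(x_cnn, k_acquired, mask, lam=None):
    """Re-impose the measured k-space samples on a network reconstruction.

    x_cnn      : complex image produced by the de-aliasing network
    k_acquired : undersampled k-space, zeros where not sampled
    mask       : boolean array, True at acquired k-space locations
    lam        : optional noise weighting; None means hard replacement
    """
    k_cnn = np.fft.fft2(x_cnn, norm="ortho")
    if lam is None:
        k_out = np.where(mask, k_acquired, k_cnn)                       # noiseless case
    else:
        k_out = np.where(mask, (k_cnn + lam * k_acquired) / (1 + lam), k_cnn)
    return np.fft.ifft2(k_out, norm="ortho")

# Toy usage: a zero-filled reconstruction passed through one data-consistency step
rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))           # stand-in for anatomy
mask = rng.random((64, 64)) < 0.25              # roughly 4x undersampling
k_full = np.fft.fft2(image, norm="ortho")
k_under = k_full * mask
x_zero_filled = np.fft.ifft2(k_under, norm="ortho")       # aliased input to the CNN
x_dc = data_consistency(x_zero_filled, k_under, mask)      # a CNN output would go here
```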

    Uncertainty in correlation-driven operational modal parameter estimation.

    Due to the practical advantages over traditional input-output testing, operational or output-only modal analysis is receiving increased attention when the modal parameters of large civil engineering structures are of interest. However, as a consequence of the random nature of ambient loading and the unknown relationship between excitation and response, the identified operational modal parameters are inevitably corrupted by errors. Whether the estimated modal data is used to update a finite element model or different sets of modal parameters are used as a damage indicator, it is desirable to know the extent of the error in the modal data, either for more accurate response predictions or to assess whether changes in the modal data are indicative of damage or just the result of the random error inherent in the identification process. In this thesis, two techniques are investigated to estimate the error in the modal parameters identified from response data only: a perturbation-based and a bootstrap-based method. The perturbation method, applicable exclusively to the correlation-driven stochastic subspace identification algorithm (SSI/Cov), is a two-stage procedure. It operates on correlation functions estimated from a single set of response measurements; in a first step, the perturbations to these correlation function estimates need to be determined, and a robust, data-driven method is developed for this purpose. The next step consists of propagating these perturbations through the algorithm, resulting in an estimate of the sensitivities of the modal data to these perturbations. Combining the sensitivities with the perturbations yields an estimate of both the random and bias errors in the SSI/Cov-identified modal parameters. The bootstrap technique involves creating pseudo time series by resampling from the only available set of response measurements. With this additional data at hand, a modal identification is performed for each set of data and the errors in the modal parameters are determined by sample statistics. However, the bootstrap itself introduces errors in the computed sample statistics. Three bootstrapping schemes are investigated in relation to operational modal analysis and an automated, optimal block length selection is implemented to minimise the error introduced by the bootstrap. As opposed to the perturbation method, the bootstrap technique is more versatile and is not restricted to correlation-driven operational modal analysis. Its applicability to the data-driven stochastic subspace identification algorithm (SSI/Data) for error prediction of the SSI/Data-identified modal data is explored. The performance of the two techniques is assessed by simulation on simple systems. Monte-Carlo-type error estimates are used as a benchmark against which the errors predicted by both techniques from a single response history are validated. Both techniques are assessed in terms of their accuracy and stability in predicting the uncertainty in the operational modal parameters, and their computational efficiency is compared. Finally, the performance of the bootstrap and the perturbation-theoretic method is investigated under hostile ambient excitation conditions, such as non-stationarity and the presence of deterministic components, and the limitations of both methods are clearly exposed.
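    For readers unfamiliar with the bootstrap idea, the sketch below (Python/NumPy, on a synthetic AR(2) "ambient response" and with a crude periodogram-peak frequency estimator) shows a moving-block bootstrap producing an uncertainty estimate for a modal frequency identified from a single record. The thesis works with correlation functions, full SSI identification and an automated optimal block length selection, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic ambient response: an AR(2) resonator driven by white noise
# (a stand-in for a single measured acceleration record; assumption, not thesis data)
fs, f0, damping, T = 100.0, 3.0, 0.02, 60.0
w0 = 2 * np.pi * f0 / fs
r = np.exp(-damping * w0)
n = int(T * fs)
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = 2 * r * np.cos(w0) * x[t - 1] - r**2 * x[t - 2] + e[t]

def peak_frequency(sig):
    # Dominant frequency from the periodogram peak (deliberately simple estimator)
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig)))) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1 / fs)
    return freqs[np.argmax(spec[1:]) + 1]

def moving_block_bootstrap(sig, block_len, n_boot, estimator):
    # Resample overlapping blocks of the record and re-apply the estimator
    n_blocks = int(np.ceil(len(sig) / block_len))
    max_start = len(sig) - block_len
    estimates = []
    for _ in range(n_boot):
        starts = rng.integers(0, max_start + 1, n_blocks)
        resampled = np.concatenate([sig[s:s + block_len] for s in starts])[:len(sig)]
        estimates.append(estimator(resampled))
    return np.array(estimates)

f_hat = peak_frequency(x)
boot = moving_block_bootstrap(x, block_len=500, n_boot=200, estimator=peak_frequency)
print(f"f_hat = {f_hat:.3f} Hz, bootstrap std = {boot.std(ddof=1):.4f} Hz")
```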

    Principled methods for mixtures processing

    This document is my thesis for obtaining the habilitation à diriger des recherches, the French diploma that is required to fully supervise Ph.D. students. It summarizes the research I did over the last 15 years and also presents the short-term research directions and applications I want to investigate. Regarding my past research, I first describe the work I did on probabilistic audio modeling, including the separation of Gaussian and α-stable stochastic processes. Then, I mention my work on deep learning applied to audio, which rapidly turned into a large effort for community service. Finally, I present my contributions in machine learning, with some works on hardware compressed sensing and probabilistic generative models. My research programme involves a theoretical part that revolves around probabilistic machine learning, and an applied part that concerns the processing of time series arising in both audio and the life sciences.

    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference


    Smart Monitoring and Control in the Future Internet of Things

    The Internet of Things (IoT) and related technologies have the promise of realizing pervasive and smart applications which, in turn, have the potential of improving the quality of life of people living in a connected world. According to the IoT vision, all things can cooperate amongst themselves and be managed from anywhere via the Internet, allowing tight integration between the physical and cyber worlds and thus improving efficiency, promoting usability, and opening up new application opportunities. Nowadays, IoT technologies have successfully been exploited in several domains, providing both social and economic benefits. The realization of the full potential of the next generation of the Internet of Things still needs further research efforts concerning, for instance, the identification of new architectures, methodologies, and infrastructures dealing with distributed and decentralized IoT systems; the integration of IoT with cognitive and social capabilities; the enhancement of the sensing–analysis–control cycle; the integration of consciousness and awareness in IoT environments; and the design of new algorithms and techniques for managing IoT big data. This Special Issue is devoted to advancements in technologies, methodologies, and applications for IoT, together with emerging standards and research topics which would lead to the realization of the future Internet of Things.

    The NASTRAN theoretical manual

    Designed to accommodate additions and modifications, this commentary on NASTRAN describes the problem-solving capabilities of the program in a narrative fashion and presents developments of the analytical and numerical procedures that underlie the program. Seventeen major sections and numerous subsections cover: the organizational aspects of the program, utility matrix routines, static structural analysis, heat transfer, dynamic structural analysis, computer graphics, special structural modeling techniques, error analysis, interaction between structures and fluids, and aeroelastic analysis.

    Fuelling the zero-emissions road freight of the future: routing of mobile fuellers

    The future of zero-emissions road freight is closely tied to the sufficient availability of new and clean fuel options such as electricity and hydrogen. In goods distribution using Electric Commercial Vehicles (ECVs) and Hydrogen Fuel Cell Vehicles (HFCVs), a major challenge in the transition period will be their limited autonomy and the scarce and unevenly distributed refuelling stations. One viable solution to facilitate and speed up the adoption of ECVs/HFCVs in logistics is to bring the fuel to the point where it is needed (instead of diverting delivery vehicles to refuelling stations) using "Mobile Fuellers (MFs)". These are mobile battery swapping/recharging vans or mobile hydrogen fuellers that can travel to a running ECV/HFCV to provide the fuel it requires to complete its delivery route, at an agreed rendezvous time and location. In this presentation, new vehicle routing models are presented for a third-party company that provides MF services. In the proposed problem variant, the MF provider receives the routing plans of multiple customer companies and has to design routes for a fleet of capacitated MFs that must synchronise their routes with the running vehicles to deliver the required amount of fuel on the fly. The presentation discusses and compares several mathematical models based on different business models and collaborative logistics scenarios.
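    As a toy illustration of the spatio-temporal synchronisation constraint at the heart of the problem (and not one of the mathematical models discussed in the presentation), the Python snippet below checks the earliest stop on an ECV's planned route that a mobile fueller can reach before the vehicle arrives; all coordinates, times, and the function name are made up for the example.

```python
import math

def earliest_rendezvous(ecv_stops, mf_pos, mf_free_at, mf_speed):
    """Return the first ECV stop the mobile fueller can reach in time, if any.

    ecv_stops  : list of ((x, y), arrival_time) along the customer vehicle's route
    mf_pos     : (x, y) where the mobile fueller becomes available
    mf_free_at : time at which it becomes available
    mf_speed   : travel speed (distance units per time unit)
    """
    for (x, y), t_arrive in ecv_stops:
        travel = math.dist(mf_pos, (x, y)) / mf_speed
        if mf_free_at + travel <= t_arrive:
            return (x, y), t_arrive          # feasible rendezvous point and time
    return None                              # no synchronised meeting is possible

# Toy usage with made-up coordinates and times
route = [((0, 0), 8.0), ((5, 5), 9.0), ((10, 0), 10.5)]
print(earliest_rendezvous(route, mf_pos=(12, 2), mf_free_at=8.0, mf_speed=4.0))
```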