
    Advanced DSP for coherent optical fiber communication

    In this paper, we provide an overview of recent progress on advanced digital signal processing (DSP) techniques for high-capacity long-haul coherent optical fiber transmission systems. Not only must the linear impairments of optical transmission links be compensated, but the nonlinear impairments also require suitable mitigation algorithms, because they have become major limiting factors in long-haul, large-capacity transmission. Besides time-domain equalization (TDE), frequency-domain equalization (FDE) provides similar performance with much lower computational complexity. Advanced DSP also plays an important role in the realization of space-division multiplexing (SDM). SDM techniques have recently been developed to enhance system capacity by at least one order of magnitude, and some impressive results have been reported that exceed the nonlinear Shannon limit of the single-mode fiber (SMF). SDM introduces the space dimension to optical fiber communication: few-mode fibers (FMF) and multi-core fibers (MCF) have been manufactured for novel multiplexing techniques such as mode-division multiplexing (MDM) and multi-core multiplexing (MCM). Each mode or core can be considered an independent degree of freedom, but signals suffer serious coupling during propagation. Multi-input multi-output (MIMO) DSP can equalize this coupling and makes SDM transmission feasible. Machine learning (ML) has also attracted worldwide attention and has been explored for advanced DSP. In this paper, we first introduce the principle and scheme of coherent detection to explain why DSP techniques can compensate for transmission impairments. We then discuss the corresponding DSP technologies, including nonlinearity compensation, FDE, SDM and ML; relevant techniques are analyzed, and representative results and experimental verifications are presented. Finally, a brief conclusion and perspective are provided.
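    As a rough illustration of the frequency-domain equalization idea mentioned above, the sketch below applies a one-shot MMSE equalizer to a single-polarization toy signal distorted by a known short channel response; the channel taps and noise level are hypothetical, not taken from the paper.

        import numpy as np

        def fde_equalize(rx, h, noise_var=1e-3):
            """One-shot frequency-domain MMSE equalization: divide the received
            block by the channel frequency response derived from the impulse
            response h (practical receivers do this block-wise with overlap)."""
            n = len(rx)
            H = np.fft.fft(h, n)                           # channel frequency response
            W = np.conj(H) / (np.abs(H) ** 2 + noise_var)  # MMSE equalizer taps
            return np.fft.ifft(np.fft.fft(rx) * W)

        # Toy usage: a QPSK block passed through a circular (cyclic-prefix-style) channel.
        rng = np.random.default_rng(0)
        sym = (rng.choice([-1, 1], 1024) + 1j * rng.choice([-1, 1], 1024)) / np.sqrt(2)
        h = np.array([1.0, 0.25 + 0.1j, -0.05])            # hypothetical channel taps
        rx = np.fft.ifft(np.fft.fft(sym) * np.fft.fft(h, 1024))
        rx += 0.01 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
        eq = fde_equalize(rx, h)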

    Distributed scene reconstruction from multiple mobile platforms

    Recent research on mobile robotics has produced new designs that provide household robots with omnidirectional motion. The image sensor embedded in these devices motivates the application of 3D vision techniques for navigation and mapping purposes. In addition, distributed cheap-sensing systems acting as a unitary entity have recently emerged as an efficient alternative to expensive mobile equipment. In this work we present an implementation of a visual reconstruction method, structure from motion (SfM), on a low-budget, omnidirectional mobile platform, and extend this method to distributed 3D scene reconstruction with several instances of such a platform. Our approach overcomes the challenges posed by the platform. The unprecedented levels of noise produced by the image compression typical of the platform are handled by our feature filtering methods, which ensure suitable feature matching populations for epipolar geometry estimation by means of a strict quality-based feature selection. The robust pose estimation algorithms implemented, along with a novel feature tracking system, enable our incremental SfM approach to deal with the ill-conditioned inter-image configurations caused by the omnidirectional motion. The feature tracking system efficiently manages the feature scarcity produced by noise and outputs quality feature tracks, which allow robust 3D mapping of a given scene even if, due to noise, their length is shorter than what is usually assumed necessary for stable 3D reconstruction. The distributed reconstruction from multiple instances of SfM is attained by applying loop-closing techniques. Our multiple-reconstruction system merges individual 3D structures and resolves the global scale problem with minimal overlaps, whereas in the literature 3D mapping is obtained by overlapping stretches of sequences. The performance of this system is demonstrated in the two-session case. The management of noise, the stability against ill-conditioned configurations and the robustness of our SfM system are validated in a number of experiments and compared with state-of-the-art approaches. Possible future research areas are also discussed.
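    A minimal two-view sketch of the feature matching, epipolar geometry estimation and robust pose recovery steps described above, using OpenCV; the image paths and camera intrinsics K are placeholders, and the thesis's own feature filtering and tracking methods are not reproduced here.

        import cv2
        import numpy as np

        # Hypothetical inputs: two grayscale frames and the camera intrinsic matrix K.
        img1 = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)
        K = np.array([[700.0, 0.0, 320.0],
                      [0.0, 700.0, 240.0],
                      [0.0, 0.0, 1.0]])

        # Detect and match ORB features between the two views.
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Robust essential-matrix estimation (RANSAC) and relative pose recovery.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())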

    Advanced DSP Techniques for High-Capacity and Energy-Efficient Optical Fiber Communications

    The rapid proliferation of the Internet has been driving communication networks closer and closer to their limits, while available bandwidth is disappearing under an ever-increasing network load. Over the past decade, optical fiber communication technology has increased the per-fiber data rate from 10 Tb/s to beyond 10 Pb/s. The major leap came with the maturity of coherent detection and advanced digital signal processing (DSP). DSP has played a critical role in mitigating channel impairments, enabling advanced modulation formats for spectrally efficient transmission and realizing bandwidth flexibility. This book explores novel, advanced DSP techniques that enable multi-Tb/s-per-channel optical transmission to address pressing bandwidth and power-efficiency demands, and it also presents state-of-the-art advances and future perspectives of DSP.

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals must be collected. In practice, several heart monitoring methods, such as electrocardiogram (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so that they can be used extensively, even at home. Artificial Intelligence (AI) can help design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research. The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., the power of the noise is greater than that of the signal). It is worth mentioning that ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. These noises typically arise from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure. Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the final goal of this step is to develop a robust algorithm that can estimate noise, even when SNR is negative, using the BSS method and remove it based on an adaptive filter. The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) for extracting the fetal ECG (FECG). These methods need to be calibrated to generalize well; in other words, for each new subject, a calibration with a trusted device is required, which is difficult, time-consuming and susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks, to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it does not need calibration, is not sensitive to the heart rate variability of mother and fetus, and can handle low signal-to-noise ratio (SNR) conditions. Third, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure is cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use.
Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes those issues and, unlike other solutions, uses only the PPG signal. Using only PPG for blood pressure is more convenient since it requires a single sensor on the finger, and its acquisition is more resilient to motion-induced errors. The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for fetal ECG (FECG), where few publicly available FECG datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning concepts to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is trained to detect ECG anomalies in adults; this model is later trained to detect anomalies in FECG. We select only the most influential samples from the training set, which leads to training with the least effort. Because of physician shortages and rural geography, pregnant women's ability to get prenatal care might be improved through remote monitoring, especially when access to prenatal care is limited. Increased compliance with prenatal treatment and linked care amongst various providers are two possible benefits of remote monitoring. If recorded signals are transmitted correctly, maternal and fetal remote monitoring can be effective. Therefore, the last objective is to design a compression algorithm that can compress signals (like ECG) with a higher ratio than the state of the art and perform decompression quickly without distortion. The proposed compression is fast thanks to the time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression, owing to the B-spline properties. Moreover, a stochastic optimization is designed to retain signal quality and avoid distorting the signal for diagnostic purposes while achieving a high compression ratio. In summary, the components for creating an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a mix of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Then, compression can be employed for transmitting the signals. The trained CycleGAN model can be used to extract FECG from MECG, and the model trained with active transfer learning can detect anomalies in both MECG and FECG. Simultaneously, maternal BP is retrieved from the PPG signal. This information can be used for monitoring the cardiac status of mother and fetus, and also for filling in reports such as the partogram.
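    As a rough, self-contained illustration of the time-domain B-spline compression idea mentioned above (not the thesis's stochastic optimization), the sketch below fits a smoothing cubic B-spline to a synthetic ECG-like trace with SciPy and reconstructs the signal directly from the much smaller knot and coefficient set; the sampling rate, test signal and smoothing factor are all hypothetical.

        import numpy as np
        from scipy.interpolate import splev, splrep

        # Hypothetical 1 kHz trace standing in for an ECG segment.
        fs = 1000
        t = np.arange(0, 2.0, 1.0 / fs)
        sig = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)

        # Fit a cubic smoothing B-spline; knots and coefficients are the compressed form.
        tck = splrep(t, sig, k=3, s=0.01)
        knots, coeffs, degree = tck
        n_params = len(knots) + len(coeffs)
        print(f"samples: {len(sig)}, spline parameters: {n_params}, "
              f"ratio ~ {len(sig) / n_params:.1f}x")

        # Reconstruction (usable directly for visualization) straight from the spline.
        recon = splev(t, tck)
        print("max abs reconstruction error:", float(np.max(np.abs(recon - sig))))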

    Adaptive Background Modeling with Temporal Feature Update for Dynamic Foreground Object Removal

    In the study of computer vision, background modeling is a fundamental and critical task in many conventional applications. This thesis presents an introduction to background modeling and various computer vision techniques for estimating the background model, with the goal of removing dynamic objects from a video sequence. The process of estimating the background model under temporal changes, in the absence of moving foreground objects, is called adaptive background modeling. In this thesis, three adaptive background modeling approaches are presented for the purpose of developing teacher removal algorithms. First, an adaptive background modeling algorithm based on linear adaptive prediction is presented. Second, an adaptive background modeling algorithm based on statistical dispersion is presented. Third, a novel adaptive background modeling algorithm based on low-rank and sparsity constraints is presented. The design and implementation of these algorithms are discussed in detail, and the experimental results produced by each algorithm are presented. Lastly, the results of this research are generalized and potential future research is discussed.
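    As a simple baseline for the adaptive background modeling task described above (not one of the three algorithms developed in the thesis), the sketch below estimates the background as the running temporal median of a frame buffer; pixels belonging to moving foreground objects rarely dominate a pixel's history, so they are suppressed in the estimate. Buffer size and threshold are hypothetical.

        import numpy as np

        class MedianBackgroundModel:
            """Temporal-median background estimator over a sliding buffer of frames."""

            def __init__(self, buffer_size=50):
                self.buffer_size = buffer_size
                self.frames = []

            def update(self, frame):
                """Add a new grayscale frame (2-D uint8 array) to the buffer."""
                self.frames.append(frame.astype(np.uint8))
                if len(self.frames) > self.buffer_size:
                    self.frames.pop(0)

            def background(self):
                """Per-pixel median across the buffered frames."""
                return np.median(np.stack(self.frames, axis=0), axis=0).astype(np.uint8)

            def foreground_mask(self, frame, threshold=25):
                """Pixels that differ from the current background by more than threshold."""
                diff = np.abs(frame.astype(np.int16) - self.background().astype(np.int16))
                return (diff > threshold).astype(np.uint8) * 255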

    Tecnologias coerentes para redes ópticas flexíveis

    Next-generation networks enable a broad range of innovative services with the best delivery by utilizing very dense wired/wireless networks. However, the development of future networks will require several breakthroughs in optical networks, such as high-performance optical transceivers to support very-high-capacity optical networks, as well as optimization of the network concept, ensuring a dramatic reduction of the cost per bit. At the same time, all of the optical network segments (metro, access, long-haul) need new technology options to support high capacity, spectral efficiency and data-rate flexibility. Coherent detection offers an opportunity by providing very high sensitivity and supporting high spectral efficiency, and coherent technology can further be combined with polarization multiplexing. Despite the increased cost and complexity, the migration to dual-polarization coherent transceivers must be considered, as it doubles the spectral efficiency. These dual-polarization systems require an additional digital signal processing (DSP) subsystem for polarization demultiplexing. This work seeks to provide and characterize cost-effective novel coherent transceivers for the development of a new generation of practical, flexible and high-capacity transceivers for optical metro-access networks and data center interconnects. In this regard, different polarization demultiplexing (PolDemux) algorithms, including an adaptive Stokes-space approach, are considered. Furthermore, low-complexity, modulation-format-agnostic DSP techniques based on adaptive Stokes PolDemux for flexible and customizable coherent optical systems are proposed. The performance of the adaptive Stokes algorithm in an ultra-dense wavelength division multiplexing (U-DWDM) system is experimentally evaluated, in offline and real-time operation, over a hybrid optical-wireless link. In addition, the efficiency of this PolDemux algorithm in a flexible optical metro link based on a Nyquist-pulse-shaped U-DWDM system and hybrid optical signals is assessed. Moreover, it is of great importance to find a transmission technology that enables the Stokes PolDemux to be applied to long-haul transmission systems and data center interconnects. This work therefore also proposes a solution based on digital multi-subcarrier multiplexing, which improves the performance of long-haul optical systems without substantially increasing their complexity and cost.
    Future telecommunication networks will enable a broad range of innovative services with better performance. However, the development of future networks will require several advances in optical fiber networks, such as high-performance optical transceivers capable of supporting very-high-capacity links, and the optimization of the network structure, enabling a drastic reduction of the cost per transported bit. At the same time, all optical network segments (metro, access and long-haul) need new technology options to support higher capacity, higher spectral efficiency and flexibility. In this context, coherent detection emerges as an opportunity, providing high sensitivity and high spectral efficiency. Coherent detection technology can further be combined with polarization multiplexing. Despite a potential increase in cost and complexity, the migration to dual-polarization coherent transceivers should be considered, as it doubles the spectral efficiency. These dual-polarization systems require an additional digital signal processing (DSP) subsystem for polarization demultiplexing. This work seeks to provide and characterize new low-cost coherent transceivers for the development of a new generation of more practical, flexible and high-capacity transceivers for optical interconnects in future access and metro networks. Different polarization demultiplexing algorithms are therefore analyzed, including an adaptive approach based on the Stokes space. In addition, low-complexity, modulation-format-independent DSP techniques based on adaptive Stokes demultiplexing are proposed for flexible coherent optical systems. In this context, the performance of the adaptive Stokes-space polarization demultiplexing algorithm is experimentally evaluated in a U-DWDM system, both offline and in real time, over a hybrid optical path combining a fiber-based transmission system and a free-space one. The efficiency of the polarization demultiplexing algorithm was also analyzed in a flexible U-DWDM optical access network with Nyquist pulse shaping. This work further examined the application of the Stokes-space polarization demultiplexing technique to long-haul systems, and a solution based on digital multi-subcarrier multiplexing was proposed, demonstrating an improvement in the performance of long-haul optical systems without significantly increasing their complexity and cost. Programa Doutoral em Engenharia Eletrotécnica
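    A partial sketch of the Stokes-space PolDemux idea referred to above, assuming hypothetical dual-polarization field samples ex and ey: the samples are mapped to Stokes vectors, and the normal of the best-fit plane through them (from which the demultiplexing matrix is subsequently built, a step omitted here) is estimated as the smallest-eigenvalue eigenvector of their second-moment matrix.

        import numpy as np

        def stokes_vectors(ex, ey):
            """Map complex field samples of the two received polarizations to
            (S1, S2, S3) Stokes vectors, one row per sample."""
            s1 = np.abs(ex) ** 2 - np.abs(ey) ** 2
            s2 = 2.0 * np.real(ex * np.conj(ey))
            s3 = -2.0 * np.imag(ex * np.conj(ey))  # sign convention varies in the literature
            return np.stack([s1, s2, s3], axis=1)

        def polarization_plane_normal(stokes):
            """Normal of the least-squares plane through the origin that fits the
            Stokes samples: the eigenvector of S^T S with the smallest eigenvalue."""
            second_moment = stokes.T @ stokes / len(stokes)
            eigvals, eigvecs = np.linalg.eigh(second_moment)  # ascending eigenvalues
            return eigvecs[:, 0]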

    Optical Delay Interferometers and their Application for Self-coherent Detection

    Self-coherent receivers are promising candidates for the reception of 100 Gbit/s data rates in optical networks. They consist of multiple optical delay interferometers (DIs) with high-speed photodiodes attached to the outputs. Digital signal processing (DSP) of the photocurrents makes it possible to receive coherently modulated optical signals. Especially promising for 100 Gbit/s networks is the PolMUX DQPSK format, whose self-coherent reception is described in detail.
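    A toy baseband model of the delay-interferometer reception hinted at above: interfering the field with a one-symbol-delayed copy of itself yields beat terms whose real and imaginary parts recover the differentially encoded DQPSK phase. The symbol sequence is hypothetical, and PolMUX, optical noise and photodiode details are omitted.

        import numpy as np

        rng = np.random.default_rng(1)

        # Differentially encode random DQPSK data: phase increments of 0, 90, 180, 270 degrees.
        data = rng.integers(0, 4, 1000)
        phase = np.cumsum(data) * (np.pi / 2)
        field = np.exp(1j * phase)                 # transmitted optical field (unit power)

        # Delay-interferometer detection: beat the field against a one-symbol-delayed copy.
        beat = field[1:] * np.conj(field[:-1])     # E(t) * conj(E(t - T))
        i_out = np.real(beat)                      # "I" delay interferometer output
        q_out = np.imag(beat)                      # "Q" delay interferometer output

        # Decide the transmitted phase increment from the two outputs.
        detected = (np.round(np.arctan2(q_out, i_out) / (np.pi / 2)) % 4).astype(int)
        print("symbol errors:", np.count_nonzero(detected != data[1:]))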

    PEDESTRIAN DETECTION BY LASER SCANNING AND DEPTH IMAGERY
