
    Spectrum sensing for cognitive radio and radar systems

    The use of the radio frequency spectrum is increasing at a rapid rate. Reliable and efficient operation in a crowded radio spectrum requires innovative solutions and techniques. Future wireless communication and radar systems should be aware of their surrounding radio environment in order to adapt their operation to the prevailing situation. Spectrum sensing techniques such as detection, waveform recognition, and specific emitter identification are key sources of information for characterizing the surrounding radio environment and extracting valuable information, and consequently for adjusting transceiver parameters to facilitate flexible, efficient, and reliable operation. In this thesis, spectrum sensing algorithms for cognitive radios and radar intercept receivers are proposed. Single-user and collaborative cyclostationarity-based detection algorithms are developed: multicycle detectors, and robust nonparametric spatial sign cyclic correlation based detectors in both fixed-sample-size and sequential forms. Asymptotic distributions of the test statistics under the null hypothesis are established. For collaborative detection, a censoring scheme in which only informative test statistics are transmitted to the fusion center is proposed. The proposed detectors and methods have the following benefits: employing cyclostationarity enables distinction among different systems, collaboration mitigates the effects of shadowing and multipath fading, using multiple strong cyclic frequencies improves performance, robust detection provides reliable performance in heavy-tailed non-Gaussian noise, sequential detection reduces the average detection time, and censoring improves energy efficiency. In addition, a radar waveform recognition system for classifying common pulse compression waveforms is developed. The proposed supervised classification system assigns an intercepted radar pulse to one of eight classes based on its pulse compression waveform: linear frequency modulation, Costas frequency codes, binary codes, and the Frank, P1, P2, P3, and P4 polyphase codes. A robust M-estimation based method for radar emitter identification is proposed as well: a common modulation profile is estimated from a group of intercepted pulses and used to identify the radar emitter. The M-estimation based approach provides robustness against preprocessing errors and deviations from the assumed noise model.
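
    As a rough illustration of the spatial sign cyclic correlation idea described above (a sketch only, not the thesis's exact detectors), the Python snippet below normalizes each sample to unit modulus and measures the cyclic correlation magnitude at a candidate cyclic frequency; the function names, the chosen lag, and the toy BPSK-like signal in heavy-tailed Student-t noise are illustrative assumptions.

```python
import numpy as np

def spatial_sign(x):
    """Map each complex sample to unit modulus (zeros stay zero).
    This nonparametric normalization suppresses impulsive, heavy-tailed noise."""
    out = np.zeros_like(x)
    nz = x != 0
    out[nz] = x[nz] / np.abs(x[nz])
    return out

def cyclic_correlation_stat(x, alpha, fs, lag):
    """Magnitude of the estimated cyclic correlation of x at cyclic frequency
    alpha (Hz) and a fixed lag (samples), after spatial-sign normalization.
    Large values suggest a cyclostationary signal is present at that alpha."""
    s = spatial_sign(np.asarray(x, dtype=complex))
    n = np.arange(len(s) - lag)
    prod = s[n + lag] * np.conj(s[n])
    return np.abs(np.mean(prod * np.exp(-2j * np.pi * alpha * n / fs)))

# Toy check: rectangular-pulse BPSK at 1 kHz symbol rate, 8 kHz sampling,
# in heavy-tailed Student-t noise; lag = half a symbol period.
fs, f_sym, n_samp = 8000, 1000, 4096
sps = fs // f_sym
symbols = np.random.choice([-1.0, 1.0], size=n_samp // sps)
sig = np.repeat(symbols, sps) + 0.5 * np.random.standard_t(df=3, size=n_samp)
print("stat at the symbol-rate cyclic frequency:", cyclic_correlation_stat(sig, f_sym, fs, sps // 2))
print("stat at an unrelated cyclic frequency:   ", cyclic_correlation_stat(sig, 1234.5, fs, sps // 2))
```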

    Real-Time Machine Learning for Quickest Detection

    Safety-critical Cyber-Physical Systems (CPS) require real-time machine learning for control and decision making. One promising solution is to use deep learning to discover useful patterns for event detection from heterogeneous data. However, deep learning algorithms encounter challenges in CPS with assurability requirements: 1) decision explainability, 2) real-time and quickest event detection, and 3) time-efficient incremental learning. To address these obstacles, I developed a real-time Machine Learning Framework for Quickest Detection (MLQD). To be specific, I first propose the zero-bias neural network, which removes decision bias and preferences from regular neural networks and provides an interpretable decision process. Second, I characterize the latent space of the zero-bias neural network and present a method to mathematically convert a Deep Neural Network (DNN) classifier into a performance-assured binary abnormality detector. In this way, I can seamlessly integrate the deep neural networks' data processing capability with Quickest Detection (QD) and provide a real-time sequential event detection paradigm. Third, after discovering that a critical factor impeding the incremental learning of neural networks is concept interference (confusion) in latent space, I prove that, to minimize interference, the concept representation vectors (class fingerprints) within the latent space need to be organized orthogonally, and I devise a new incremental learning strategy based on these findings, enabling deep neural networks in CPS to evolve efficiently without retraining. All my algorithms are evaluated on real-world applications: ADS-B (Automatic Dependent Surveillance-Broadcast) signal identification and spoofing detection in the aviation communication system. Finally, I discuss current trends in MLQD and conclude this dissertation by presenting future research directions and applications. In summary, the innovations of this dissertation are as follows: i) I propose the zero-bias neural network, which provides transparent latent space characteristics, and apply it to the wireless device identification problem. ii) I discover and prove the orthogonal memory organization mechanism in artificial neural networks and apply this mechanism to time-efficient incremental learning. iii) I discover and mathematically prove the converging point theorem, with which the latent space topological characteristics can be predicted and the topological maturity of neural networks estimated. iv) I bridge the gap between machine learning and quickest detection with assurable performance.
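
    The link between a neural abnormality score and quickest detection can be pictured with a classical CUSUM test. The sketch below is a generic Page-style detector over a scalar score stream (for instance, one minus the maximum class-fingerprint similarity of a zero-bias network); it is an illustrative stand-in under a Gaussian score assumption, not the dissertation's performance-assured construction, and the function names, parameters, and thresholds are invented for the example.

```python
import numpy as np

def cusum_quickest_detector(scores, mu0, mu1, sigma, threshold):
    """Page's CUSUM test on a stream of scalar abnormality scores, assuming
    Gaussian scores with mean mu0 before the change and mu1 after it.
    Returns the first index at which the statistic crosses the threshold,
    or None if no alarm is raised."""
    llr_gain = (mu1 - mu0) / sigma ** 2
    g = 0.0
    for k, s in enumerate(scores):
        llr = llr_gain * (s - (mu0 + mu1) / 2.0)  # per-sample log-likelihood ratio
        g = max(0.0, g + llr)                     # reset at zero, accumulate evidence
        if g > threshold:
            return k
    return None

# Toy stream: 200 normal scores, then 100 abnormal scores with a raised mean.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.5, 1.0, 100)])
print("alarm raised at sample:", cusum_quickest_detector(stream, 0.0, 1.5, 1.0, 8.0))
```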

    Resource management in sensing services with audio applications

    Middleware abstractions, or services, that can bridge the gap between increasingly pervasive sensors and sophisticated inference applications already exist, but they lack the resource-awareness needed to support high data-rate sensing modalities such as audio and video. This work therefore investigates the resource management problem in sensing services, with application to audio sensing. First, a modular, data-centric architecture is proposed as the framework within which optimal resource management is studied. Next, the guided-processing principle is proposed to achieve an optimized trade-off between resource (energy) consumption and (inference) performance. On cascade-based systems, empirical results show that, for the same energy consumption, the proposed approach significantly improves detection performance (up to 1.7x and 4x reductions in false-alarm and miss rate, respectively) compared to the duty-cycling approach. Furthermore, the guided-processing approach generalizes to graph-based systems. Resource efficiency in the multiple-application setting is achieved through the feature-sharing principle; once applied, the method yields a system that achieves 9x resource savings and a 1.43x improvement in detection performance in an example application. Based on these encouraging results, a prototype audio sensing service is built for demonstration. An interference-robust audio classification technique that works with limited training data would prove valuable within the service, so a novel algorithm with the desired properties is proposed. The technique combines the AI-gram time-frequency representation with multidimensional dynamic time warping, and it outperforms the state-of-the-art prominent-region-based approach across a wide range of synthetic interference types (both stationary and transient) and signal-to-interference ratios, as well as on field recordings (with areas under the receiver operating characteristic and precision-recall curves of 91% and 87%, respectively).
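
    A minimal sketch of the guided-processing idea on a two-stage cascade follows: a cheap, always-on energy gate decides when the expensive back-end stage runs, so costly computation is spent only on promising frames. The energy threshold, the spectral-flatness stand-in for the expensive classifier, and the synthetic frame stream are assumptions made for illustration, not the detectors or figures reported in the thesis.

```python
import numpy as np

def cheap_stage(frame, energy_thresh):
    """Low-cost, always-on front end: a short-term energy gate."""
    return float(np.mean(frame ** 2)) > energy_thresh

def expensive_stage(frame):
    """Stand-in for the costly back-end classifier: a spectral-flatness
    heuristic that flags tonal (low-flatness) content."""
    spec = np.abs(np.fft.rfft(frame)) + 1e-12
    flatness = np.exp(np.mean(np.log(spec))) / np.mean(spec)
    return flatness < 0.3

def guided_cascade(frames, energy_thresh=0.01):
    """Run the expensive stage only on frames the cheap stage passes,
    trading a small front-end cost for large back-end savings."""
    detections, expensive_calls = [], 0
    for frame in frames:
        hit = False
        if cheap_stage(frame, energy_thresh):
            expensive_calls += 1
            hit = expensive_stage(frame)
        detections.append(hit)
    return detections, expensive_calls

# Toy stream: 95 near-silent frames plus 5 tonal frames of 512 samples each.
rng = np.random.default_rng(1)
frames = [rng.normal(0, 0.01, 512) for _ in range(95)]
t = np.arange(512)
frames += [np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 0.01, 512) for _ in range(5)]
dets, calls = guided_cascade(frames)
print(f"detections: {sum(dets)}, expensive-stage invocations: {calls} of {len(frames)} frames")
```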

    IoT solutions for traffic characterization in smart cities

    Agencies managing road traffic need to make informed decisions in order to determine which road sections carry the highest risk of traffic-related impacts. In this context, it is recognized that implementing an Advanced Traffic Management System (ATMS) may not only improve network efficiency but also minimize other traffic externalities. Novel software and hardware solutions, capable of providing improved information about traffic dynamics, can play an essential role in the knowledge we have and in the way an ATMS is executed. In particular, the availability of geo-referenced data is increasing quickly, from nomadic devices as well as from social media and sensor monitoring networks. One of the challenges is to exploit and improve the potential of each source of information, and then to combine multiple sources into an aggregate model. This dissertation describes the architecture and implementation of an accurate, high-frequency vehicle tracker with integrated real-time engine statistics, enhanced with an autonomous inertial model, as well as a mobile application for data collection from embedded sensors and positioning, the development of all the necessary infrastructure, and a web application that combines these data and provides analysis and visualization tools. With these tools, users can adopt more sustainable choices and use less congested roads, contributing to reduced traffic congestion, time savings, improved traffic throughput, and a positive environmental impact.

    Real-time sound synthesis on a multi-processor platform

    Real-time sound synthesis means that the calculation and output of each sound sample for a channel of audio information must be completed within a sample period. At a broadcasting-standard sampling rate of 32,000 Hz, the maximum period available is 31.25 μsec. Such requirements demand a large amount of data processing power. An effective solution to this problem is a multi-processor platform: a parallel and distributed processing system. The suitability of the MIDI [Musical Instrument Digital Interface] standard, published in 1983, as a controller for real-time applications is examined. Many musicians have expressed doubts about the decade-old standard's ability to support real-time performance. These doubts are investigated by measuring timing in various musical gestures and comparing the results with the subjective characteristics of human perception. The implementation and optimisation of real-time additive synthesis programs on a multi-transputer network are described. A prototype 81-polyphonic-note organ configuration was implemented. By devising and deploying monitoring processes, the network's performance was measured and enhanced, leading to more efficient usage: an 88-note configuration. Since 88 simultaneous notes are rarely necessary in most performances, a scheduling program for dynamic note allocation was then introduced to achieve further efficiency gains. To exploit calculation redundancies still further, a multi-sampling-rate approach was applied as a final step towards optimal performance. The theory underlying sound granulation, as a means of constructing complex sounds from grains, and the real-time implementation of this technique are outlined. The idea of sound granulation is quite similar to the quantum-wave notion of "acoustic quanta". Despite its conceptual simplicity, the signal processing requirements set tough demands, providing a challenge for this audio synthesis engine. Three issues arising from the results of the implementations above are discussed: the efficiency of the applications implemented, provisions for new processors, and an optimal network architecture for sound synthesis.
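
    To make the per-sample budget concrete, the sketch below renders an additive (sum-of-sinusoids) note offline and prints the 31.25 μsec budget implied by the 32,000 Hz rate. It only illustrates the arithmetic and the structure of additive synthesis; it is not the transputer implementation, and the partial frequencies and amplitudes are arbitrary choices.

```python
import numpy as np

SAMPLE_RATE = 32_000                   # broadcast-standard rate cited above
SAMPLE_PERIOD_US = 1e6 / SAMPLE_RATE   # 31.25 microseconds per output sample

def additive_synth(freqs, amps, duration_s, fs=SAMPLE_RATE):
    """Additive synthesis: the output is a sum of sinusoidal partials with
    fixed frequencies and amplitudes. In a real-time system, the whole sum
    for each sample must complete within one sample period."""
    t = np.arange(int(duration_s * fs)) / fs
    out = np.zeros_like(t)
    for f, a in zip(freqs, amps):
        out += a * np.sin(2 * np.pi * f * t)
    return out / max(1.0, sum(amps))   # crude normalization to avoid clipping

# An organ-like note: a 220 Hz fundamental plus three harmonics.
note = additive_synth([220, 440, 660, 880], [1.0, 0.5, 0.3, 0.2], duration_s=0.5)
print(f"per-sample budget: {SAMPLE_PERIOD_US:.2f} us, samples rendered: {len(note)}")
```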

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010

    Proceedings of the 2018 Canadian Society for Mechanical Engineering (CSME) International Congress

    Published proceedings of the 2018 Canadian Society for Mechanical Engineering (CSME) International Congress, hosted by York University, 27-30 May 2018

    Digital neuromorphic auditory systems

    This dissertation presents several digital neuromorphic auditory systems. Neuromorphic systems can run in real time at a lower computing cost and with lower power consumption than widely available general-purpose computers. The auditory systems presented here are considered neuromorphic because they are modelled after computational models of the mammalian auditory pathway and can run on digital hardware, specifically a field-programmable gate array (FPGA). The models introduced fall into three parts: a cochlear model, an auditory pitch model, and a functional primary auditory cortical (A1) model. The cochlear model is the primary interface for an input sound signal and transmits the 2D time-frequency representation of the sound to the pitch model as well as to the A1 model. The pitch model extracts pitch information from the sound signal in the form of a fundamental frequency. The A1 model extracts timbre information in the form of the time-frequency envelope of the sound signal. Since these computational auditory models must be implemented on FPGAs, which possess fewer computational resources than general-purpose computers, the algorithms in the models are optimised so that they fit on a single FPGA; the optimisation includes using simplified, hardware-implementable signal processing algorithms. Computational resource information for each model on the FPGA is reported to establish the minimum resources required to run it, including the number of logic modules and registers utilised and the power consumption. Similarity comparisons are also made between the output responses of the computational auditory models in software and in hardware, using pure tones, chirp signals, frequency-modulated signals, moving ripple signals, and musical signals as input. The limitations of the models' responses to musical signals at multiple intensity levels are also presented, along with the use of an automatic gain control algorithm to alleviate them. With real-world musical signals as input, the responses of the models are also tested using classifiers: the response of the auditory pitch model is used for the classification of monophonic musical notes, and the response of the A1 model is used for the classification of musical instruments from their respective monophonic signals. Classification accuracy results are shown for model output responses in both software and hardware. With the hardware-implementable auditory pitch model, the classification accuracy is 100% for musical notes from the 4th and 5th octaves, covering 24 classes of notes. With the hardware-implementable auditory timbre model, the classification accuracy is 92% for 12 classes of musical instruments. Also presented is the difference in memory requirements of the model output responses in software and hardware: the pitch and timbre responses used for the classification exercises require 24 and 2 times less memory space, respectively, in hardware than in software.
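
    As a software-only illustration of the kind of fundamental-frequency extraction the pitch model performs, the sketch below estimates f0 from the autocorrelation peak of a monophonic tone. The pitch search range, the synthetic test tone, and the function name are assumptions made for the example; this is not the dissertation's FPGA-based model.

```python
import numpy as np

def estimate_f0_autocorr(x, fs, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a monophonic signal from the
    largest autocorrelation peak within a plausible pitch range."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags only
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max + 1]))
    return fs / lag

# Toy check: a C4 note (261.6 Hz) with two harmonics, 100 ms at 16 kHz.
fs = 16_000
t = np.arange(int(0.1 * fs)) / fs
tone = (np.sin(2 * np.pi * 261.6 * t)
        + 0.4 * np.sin(2 * np.pi * 523.2 * t)
        + 0.2 * np.sin(2 * np.pi * 784.8 * t))
print(f"estimated fundamental frequency: {estimate_f0_autocorr(tone, fs):.1f} Hz")
```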