31 research outputs found

    The University Defence Research Collaboration In Signal Processing

    Get PDF
    This chapter describes the development of algorithms for the automatic detection of anomalies in multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can be dynamic, reflecting evolution in both network structure (new nodes appearing and existing nodes disappearing) and inter-node relations. The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure that the models of normality are not corrupted by unreliable and ambiguous data. The context of each node's activity in a complex network offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.
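
    As an illustration of the "model of normality" idea above, the following is a minimal sketch, not the UDRC algorithms themselves: per-node activity is assumed to be already summarised as a feature vector, a Gaussian model of normal behaviour is fitted, and the detection threshold is chosen from a false-positive budget. All data and parameters are illustrative.

```python
# Minimal sketch of anomaly detection against a learned model of normality.
# This is NOT the UDRC algorithm described above; it only illustrates scoring
# node activity against normal behaviour and trading the detection threshold
# against the false-positive rate. Feature extraction is assumed to be done.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-node activity features collected during "normal" operation.
normal_activity = rng.normal(loc=0.0, scale=1.0, size=(500, 3))

# Model of normality: empirical mean and covariance of normal behaviour.
mu = normal_activity.mean(axis=0)
cov = np.cov(normal_activity, rowvar=False)
cov_inv = np.linalg.inv(cov)

def anomaly_score(x):
    """Mahalanobis distance of an activity vector from the normal model."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold set from the empirical distribution of normal scores, so that
# roughly 1% of genuinely normal behaviour is flagged (false-positive budget).
normal_scores = np.array([anomaly_score(x) for x in normal_activity])
threshold = np.quantile(normal_scores, 0.99)

new_observation = np.array([4.0, -3.5, 2.0])   # e.g. a node's current activity
is_anomalous = anomaly_score(new_observation) > threshold
print(is_anomalous, anomaly_score(new_observation), threshold)
```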

    Convolutional Neural Networks - Generalizability and Interpretations

    Get PDF

    The University Defence Research Collaboration In Signal Processing: 2013-2018

    Get PDF
    Signal processing is an enabling technology crucial to all areas of defence and security. It is called for whenever humans and autonomous systems are required to interpret data (i.e. the signal) output from sensors. This leads to the production of the intelligence on which military outcomes depend. Signal processing should be timely, accurate and suited to the decisions to be made. When performed well it is critical, battle-winning and probably the most important weapon which you've never heard of. With the plethora of sensors and data sources that are emerging in the future network-enabled battlespace, sensing is becoming ubiquitous. This makes signal processing more complicated but also brings great opportunities. The second phase of the University Defence Research Collaboration in Signal Processing was set up to meet these complex problems head-on while taking advantage of the opportunities. Its unique structure combines two multi-disciplinary academic consortia, in which many researchers can approach different aspects of a problem, with baked-in industrial collaboration enabling early commercial exploitation. This phase of the UDRC will have been running for 5 years by the time it completes in March 2018, with remarkable results. This book aims to present those accomplishments and advances in a style accessible to stakeholders, collaborators and exploiters.

    Modern GPR Target Recognition Methods

    Full text link
    Traditional GPR target recognition methods include pre-processing the data by removal of noisy signatures, dewowing (high-pass filtering to remove low-frequency noise), filtering, deconvolution and migration (correction of the effect of survey geometry), and can rely on the simulation of GPR responses. These techniques usually suffer from loss of information, an inability to adapt from prior results, and poor performance in the presence of strong clutter and noise. To address these challenges, several advanced processing methods have been developed over the past decade to enhance GPR target recognition. In this chapter, we provide an overview of these modern GPR processing techniques. In particular, we focus on the following methods: adaptive receive processing of range profiles depending on the target environment; adoption of learning-based methods so that the radar utilizes the results from prior measurements; application of methods that exploit the fact that the target scene is sparse in some domain or dictionary; application of advanced classification techniques; and convolutional coding, which provides succinct and representative features of the targets. We describe each of these techniques, or their combinations, through a representative application of landmine detection.
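
    As a rough illustration of the sparsity-based processing mentioned above (not the chapter's actual algorithms), the sketch below assumes a GPR range profile that is sparse in a hypothetical dictionary of shifted pulse responses and recovers the reflectors with Orthogonal Matching Pursuit. The dictionary, target positions and noise level are all illustrative assumptions.

```python
# Minimal sketch of sparse recovery for a GPR-like range profile: the scene is
# assumed sparse in a dictionary of shifted, decaying pulses, and a few strong
# reflectors are recovered from a noisy measurement with OMP. Everything here
# (dictionary shape, reflector positions, noise) is a made-up illustration.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)

n_samples, n_atoms = 128, 256
t = np.arange(n_samples)
# Hypothetical dictionary: each column is a Gaussian pulse at a different delay.
dictionary = np.stack(
    [np.exp(-0.5 * ((t - k * n_samples / n_atoms) / 2.0) ** 2) for k in range(n_atoms)],
    axis=1,
)

# Sparse scene: two buried reflectors at unknown delays.
true_coefs = np.zeros(n_atoms)
true_coefs[[60, 180]] = [1.0, 0.6]
measurement = dictionary @ true_coefs + 0.02 * rng.standard_normal(n_samples)

# Recover the sparse reflectivity profile (reflector count assumed known here).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False)
omp.fit(dictionary, measurement)
recovered = omp.coef_

print("recovered reflector indices:", np.nonzero(recovered)[0])
```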

    Semi-supervised and unsupervised kernel-based novelty detection with application to remote sensing images

    Get PDF
    The main challenge of new information technologies is to retrieve intelligible information from the large volume of digital data gathered every day. Among the variety of existing data sources, the satellites continuously observing the surface of the Earth are key to the monitoring of our environment. The new generation of satellite sensors is tremendously increasing the possibilities for applications, but also the need for efficient processing methodologies that extract information relevant to the users' needs in an automatic or semi-automatic way. This is where machine learning comes into play, transforming complex data into simplified products such as maps of land-cover changes or classes by learning from data examples annotated by experts. These annotations, also called labels, may be difficult or costly to obtain since they are established on the basis of ground surveys. For example, it is extremely difficult to access a region recently flooded or affected by wildfires; in these situations, the detection of changes has to be done with annotations from unaffected regions only. Similarly, it is difficult to have information on all the land-cover classes present in an image when only a single class of interest is to be detected. These challenging situations are called novelty detection or one-class classification in machine learning. Here, the learning phase has to rely on a very limited set of annotations, but can exploit the large set of unlabeled pixels available in the images. This setting, called semi-supervised learning, significantly improves the detection. In this Thesis we address the development of methods for novelty detection and one-class classification with little or no labeled information. The proposed methodologies build upon kernel methods, which provide a principled yet flexible framework for learning from data with potentially non-linear feature relations. The Thesis is divided into two parts, each making a different assumption on the data structure and both addressing unsupervised (automatic) and semi-supervised (semi-automatic) learning settings. The first part assumes the data to be formed by arbitrarily shaped and overlapping clusters and studies the use of kernel machines, such as Support Vector Machines or Gaussian Processes. An emphasis is put on robustness to noise and outliers and on the automatic retrieval of parameters. Experiments on multi-temporal multispectral images for change detection are carried out using only information from unchanged regions, or none at all. The second part assumes high-dimensional data to lie on multiple low-dimensional structures, called manifolds. We propose a method seeking a sparse and low-rank representation of the data mapped into a non-linear feature space. This representation allows us to build a graph, which is cut into several groups using spectral clustering. For the semi-supervised case where a few labels of one class of interest are available, we study several approaches incorporating the graph information: the class labels can be propagated on the graph, used to constrain spectral clustering, or used to train a one-class classifier regularized by the given graph. Experiments on the unsupervised and one-class classification of hyperspectral images demonstrate the effectiveness of the proposed approaches.
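
    In the spirit of the novelty-detection setting described above, though not the thesis' exact methods, the sketch below trains a kernel One-Class SVM only on pixels from unchanged regions and flags changed pixels as novelties. The "difference features" are synthetic placeholders.

```python
# Minimal sketch of kernel-based one-class novelty detection for change
# detection: train only on unchanged pixels, flag everything that deviates.
# The features and class structure here are synthetic, illustrative assumptions.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)

# Hypothetical multi-temporal difference features for unchanged pixels (training).
unchanged = rng.normal(0.0, 0.3, size=(1000, 4))
# Test pixels: a mix of unchanged pixels and genuinely changed (novel) ones.
test = np.vstack([rng.normal(0.0, 0.3, size=(50, 4)),
                  rng.normal(2.0, 0.3, size=(50, 4))])

# The RBF kernel handles non-linear feature relations; nu bounds the fraction
# of training pixels treated as outliers (a robustness knob).
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(unchanged)
labels = detector.predict(test)          # +1 = no change, -1 = novelty/change

print("flagged as changed:", int((labels == -1).sum()), "of", len(test))
```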

    Design of large polyphase filters in the Quadratic Residue Number System

    Full text link

    Algoritmos de procesado de señal basados en Non-negative Matrix Factorization aplicados a la separación, detección y clasificación de sibilancias en señales de audio respiratorias monocanal [Signal processing algorithms based on Non-negative Matrix Factorization applied to the separation, detection and classification of wheezes in single-channel respiratory audio signals]

    Get PDF
    Auscultation is the first clinical examination that a physician performs to evaluate the condition of the respiratory system, because it is a non-invasive, low-cost, easy-to-perform and safe method for the patient. However, the diagnosis derived from auscultation remains subjective, conditioned by the skill, experience and training of each physician in listening to and interpreting respiratory audio signals. As a result, a high percentage of misdiagnoses occur, endangering the health of patients and increasing the costs borne by health centres. This Thesis proposes new methods based on Non-negative Matrix Factorization applied to the separation, detection and classification of wheezing sounds, in order to provide the physician with a complementary source of information that helps to improve the reliability of the diagnosis. Tesis Univ. Jaén. Departamento INGENIERÍA DE TELECOMUNICACIÓN.
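
    As a minimal sketch of the NMF idea described above, not the thesis' algorithms, the example below factorises the magnitude spectrogram of a synthetic single-channel respiratory mixture and regroups the components into wheeze-like and noise-like estimates via a simple soft mask. The signal, component count and grouping heuristic are all illustrative assumptions.

```python
# Minimal sketch of NMF-based wheeze separation on a synthetic recording:
# factorise the magnitude spectrogram into spectral bases and activations,
# treat the narrowband (peaky) bases as "wheeze-like" and build a soft mask.
# All parameters and the toy signal are illustrative assumptions.
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
fs = 4000
t = np.arange(0, 4.0, 1.0 / fs)

# Toy single-channel mixture: a 400 Hz tonal "wheeze" over broadband breath noise.
wheeze = 0.5 * np.sin(2 * np.pi * 400 * t) * (t > 1.0) * (t < 3.0)
breath = 0.3 * rng.standard_normal(t.size)
mixture = wheeze + breath

# Magnitude spectrogram (non-negative), the input required by NMF.
freqs, frames, Z = stft(mixture, fs=fs, nperseg=256)
V = np.abs(Z)

# Factorise V ~= W @ H with a small number of components.
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)          # spectral bases  (freq x components)
H = model.components_               # activations     (components x frames)

# Heuristic grouping: bases with peaky spectra are treated as wheeze-like.
peakiness = W.max(axis=0) / (W.sum(axis=0) + 1e-12)
wheeze_idx = peakiness > np.median(peakiness)

# Soft (Wiener-like) mask built from each group's reconstruction.
V_wheeze = W[:, wheeze_idx] @ H[wheeze_idx]
V_rest = W[:, ~wheeze_idx] @ H[~wheeze_idx]
mask = V_wheeze / (V_wheeze + V_rest + 1e-12)
print("estimated wheeze energy fraction:", float((mask * V).sum() / V.sum()))
```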

    Temperature aware power optimization for multicore floating-point units

    Full text link

    Sensor Signal and Information Processing II

    Get PDF
    In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.
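
    As a small illustration of the "learn new information whenever unseen data are captured" idea above, and not any method from the book, the sketch below incrementally updates an online classifier for a hypothetical smart sensor as new labelled batches arrive. The feature layout, classes and data are invented for the example.

```python
# Minimal sketch of incremental (online) learning for a hypothetical smart
# sensor: the model is updated batch by batch instead of retrained from scratch.
# Features, classes and the data generator are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(4)
classes = np.array([0, 1])       # e.g. "nominal" vs "fault" sensor readings
clf = SGDClassifier()            # linear model trained by stochastic updates

def new_batch(n=64):
    """Simulated sensor readings: 5 features, fault readings shifted upward."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(0.0, 1.0, size=(n, 5)) + y[:, None] * 1.5
    return X, y

# The model keeps improving as fresh batches arrive from the sensor.
for _ in range(20):
    X, y = new_batch()
    clf.partial_fit(X, y, classes=classes)

X_test, y_test = new_batch(500)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```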