191 research outputs found
Survey of FPGA applications in the period 2000 – 2015 (Technical Report)
Romoth J, Porrmann M, Rückert U. Survey of FPGA applications in the period 2000–2015 (Technical Report); 2017. Since their introduction, FPGAs have appeared in an ever wider range of application fields. Their key advantage is the combination of software-like flexibility with performance otherwise reserved for dedicated hardware. Nevertheless, every application field imposes special requirements on the computational architecture used. This paper provides an overview of the different topics FPGAs have been used for in the last 15 years of research, and of why they have been chosen over other processing units such as CPUs.
Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey
The Internet of Underwater Things (IoUT) is an emerging communication
ecosystem developed for connecting underwater objects in maritime and
underwater environments. The IoUT technology is intricately linked with
intelligent boats and ships, smart shores and oceans, automatic marine
transportations, positioning and navigation, underwater exploration, disaster
prediction and prevention, as well as with intelligent monitoring and security.
The IoUT has an influence at various scales, ranging from a small scientific
observatory through a midsized harbor to global oceanic trade. The
network architecture of IoUT is intrinsically heterogeneous and should be
sufficiently resilient to operate in harsh environments. This creates major
challenges in terms of underwater communications, whilst relying on limited
energy resources. Additionally, the volume, velocity, and variety of data
produced by sensors, hydrophones, and cameras in IoUT is enormous, giving rise
to the concept of Big Marine Data (BMD), which has its own processing
challenges. Hence, conventional data processing techniques will falter, and
bespoke Machine Learning (ML) solutions have to be employed for automatically
learning the specific BMD behavior and features, facilitating knowledge
extraction and decision support. The motivation of this paper is to
comprehensively survey the IoUT, BMD, and their synthesis. It also explores
the nexus of BMD with ML. We set out from underwater data collection
and then discuss the family of IoUT data communication techniques with an
emphasis on the state-of-the-art research challenges. We then review the suite
of ML solutions suitable for BMD handling and analytics. We treat the subject
deductively from an educational perspective, critically appraising the material
surveyed.
Comment: 54 pages, 11 figures, 19 tables; IEEE Communications Surveys & Tutorials, peer-reviewed academic journal.
Sensor Signal and Information Processing II
In the current age of information explosion, newly invented technological sensors and software are now tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework in problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.
Wrist-based Phonocardiogram Diagnosis Leveraging Machine Learning
With the tremendous growth of technology and the fast pace of life, the need for instant information has become an everyday necessity, all the more so in emergency cases where every minute counts towards saving lives. mHealth has become the adopted approach for quick diagnosis using mobile devices. However, it has been challenging due to the required high quality of data, high computational load, and high power consumption. The aim of this research is to diagnose heart conditions based on phonocardiogram (PCG) analysis using machine learning techniques under the assumption of limited processing power, so that the method can later be encapsulated in a mobile device. The diagnosis of the PCG is performed using two techniques:
1. Parametric estimation with multivariate classification, in particular a discriminant function, which is explored at length using different numbers of descriptive features. Feature extraction is performed using the Wavelet Transform (Filter Bank).
2. Artificial Neural Networks, specifically pattern recognition, which also uses a decomposed version of the PCG obtained with the Wavelet Transform (Filter Bank).
The results showed a 97.33% successful diagnosis rate with the first technique on a PCG with a 19 dB signal-to-noise ratio, when the signal was decomposed into four sub-bands using a Filter Bank of the second order. Each sub-band was described using two features, the signal's mean and covariance, giving eight features in total to describe an approximately one-minute PCG sample. Additionally, different Filter Bank orders and numbers of features are explored and compared.
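The decomposition-plus-features pipeline can be sketched as follows. This is a minimal illustration, not the thesis implementation: it uses a Haar wavelet packet of depth two to obtain the four sub-bands (the thesis uses a second-order filter bank), and computes the per-sub-band mean and variance (the univariate analogue of the mean and covariance features). The function names are hypothetical.

```python
import numpy as np

def haar_split(x):
    # One level of a Haar analysis filter bank:
    # low-pass (approximation) and high-pass (detail), each downsampled by 2.
    x = x[: len(x) // 2 * 2]
    low = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)
    return low, high

def pcg_features(x):
    # Depth-2 wavelet packet -> four sub-bands, then two features per
    # sub-band (mean and variance), i.e. eight features per PCG sample.
    low, high = haar_split(x)
    subbands = [*haar_split(low), *haar_split(high)]
    return np.array([f(b) for b in subbands for f in (np.mean, np.var)])
```

The resulting eight-dimensional feature vector would then be fed to the classifier (discriminant function or neural network) in either technique.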
With the second technique, the diagnosis resulted in a 100% successful classification with an 83.3% trust level. The results are assessed, and new improvements are recommended and discussed as part of future work.
Reports on industrial information technology. Vol. 12
The 12th volume of Reports on Industrial Information Technology presents selected results of research carried out at the Institute of Industrial Information Technology during the last two years. These results have contributed to many cooperative projects with partners from academia and industry, and cover current research interests including signal and image processing, pattern recognition, distributed systems, powerline communications, automotive applications, and robotics.
A survey on artificial intelligence-based acoustic source identification
The concept of Acoustic Source Identification (ASI), which refers to the process of identifying noise sources, has attracted increasing attention in recent years. ASI technology can be used for surveillance, monitoring, and maintenance applications in a wide range of sectors, such as defence, manufacturing, healthcare, and agriculture. Acoustic signature analysis and pattern recognition remain the core technologies for noise source identification. Manual identification of acoustic signatures, however, has become increasingly challenging as dataset sizes grow. As a result, the use of Artificial Intelligence (AI) techniques for identifying noise sources has become increasingly relevant and useful. In this paper, we provide a comprehensive review of AI-based acoustic source identification techniques. We analyze the strengths and weaknesses of AI-based ASI processes and the associated methods proposed in the literature. Additionally, we present a detailed survey of ASI applications in machinery, underwater applications, environment/event source recognition, healthcare, and other fields. We also highlight relevant research directions.
Time-frequency analysis based on split spectrum applied to audio and ultrasonic signals
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Signal processing is a large subject with applications integral to a number of technological fields such as communication, audio, Voice over IP (VoIP), pattern recognition, sonar, radar, ultrasound and medical imaging. Techniques exist for the analysis, modelling, extraction, recognition and synthesis of signals of interest. The focus of this thesis is signal processing for acoustics (both sonic and ultrasonic). In the applications examined, signals of interest are usually incomplete, distorted and/or noisy. Therefore, reconstructing the signal, reducing noise and removing any distortion or interference are the main goals of the signal processing techniques presented. The primary aim is to study and develop an advanced time-frequency signal processing technique for acoustic applications in order to enhance the quality of the signals. In the first part of the thesis, a technique is presented that models and maintains the correlation between the temporal and spectral parameters of audio signals. A novel Packet Loss Concealment (PLC) method is developed with applications to VoIP, audio broadcasting, and streaming. The problem of modelling the time-varying frequency spectrum in the context of PLC is addressed, and a novel solution is proposed for tracking and using the temporal motion of spectral flow to reconstruct the signal. The proposed method utilises a Time-Frequency Motion (TFM) matrix representation of the audio signal, in which each frequency is tagged with a motion vector estimate obtained by cross-correlating the movement of spectral energy within sub-bands across time frames. The missing packets are estimated from the TFM matrix using extrapolation or interpolation algorithms and then inverse-transformed to the time domain to reconstruct the signal.
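As a rough illustration of the extrapolation step, the sketch below conceals a lost STFT frame by linearly extrapolating each frequency bin's magnitude and phase from the two preceding frames. It is deliberately simplified: the TFM method additionally tracks per-frequency motion vectors via cross-correlation of spectral energy across sub-bands, which is omitted here, and the function name is hypothetical.

```python
import numpy as np

def conceal_lost_frame(prev2, prev1):
    """Estimate a missing STFT frame from the two frames before it.

    Simplified stand-in for the TFM-based extrapolation: each bin's
    magnitude and phase are linearly extrapolated from the two
    preceding frames, with magnitudes clipped at zero.
    """
    mag = np.maximum(2 * np.abs(prev1) - np.abs(prev2), 0.0)
    phase = 2 * np.angle(prev1) - np.angle(prev2)
    return mag * np.exp(1j * phase)
```

The concealed frame would then be inverse-transformed and overlap-added into the output stream in place of the missing packet.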
The proposed method is compared with conventional approaches using the objective Perceptual Evaluation of Speech Quality (PESQ) measure and subjective Mean Opinion Scores (MOS) over a range of packet loss from 5% to 20%. The evaluation results demonstrate that the proposed algorithm substantially improves performance, by an average of 2.85% and 5.9% in terms of PESQ and MOS respectively. In the second part of the thesis, the proposed method is extended and modified to address the challenge of excessive coherent noise in ultrasonic signals gathered during Guided Wave Testing (GWT), an advanced non-destructive testing technique used across several branches of industry to inspect large structures for defects where structural integrity is of concern. In such systems, signal interpretation can often be challenging due to the multi-modal and dispersive propagation of Ultrasonic Guided Waves (UGWs), which hampers the ability to detect defects in a given structure. The Split-Spectrum Processing (SSP) method applied to such signals has been studied and reviewed quantitatively to measure the enhancement in terms of Signal-to-Noise Ratio (SNR) and spatial resolution. In this thesis, the influence of the SSP filter bank parameters on these signals is studied and optimised to improve SNR and spatial resolution considerably. The proposed method is compared analytically and experimentally with conventional approaches; the proposed SSP algorithm substantially improves SNR by an average of 30 dB. The conclusions reached in this thesis will contribute to the progression of the GWT technique through a considerable improvement in defect detection capability.
Centre for Electronic Systems Research (CESR) of Brunel University London, The National Structural Integrity Research Centre (NSIRC) and TWI Ltd
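The SSP idea discussed above can be sketched as follows: the received spectrum is split into overlapping Gaussian sub-bands, each band is brought back to the time domain, and the outputs are recombined nonlinearly, here by the pointwise minimum of magnitudes, one common SSP recombination rule. The band centres, bandwidth, and band count below are illustrative assumptions, not the optimised filter bank parameters of the thesis.

```python
import numpy as np

def split_spectrum_min(x, n_bands=8, rel_bw=0.15):
    """Split-Spectrum Processing sketch with minimum recombination.

    Splits the spectrum of x into overlapping Gaussian bands,
    reconstructs each band in the time domain, and keeps, at every
    sample, the smallest magnitude across bands (a coherent echo
    survives in all bands; band-dependent noise does not).
    """
    n = len(x)
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n)                  # normalised frequency, 0..0.5
    centres = np.linspace(0.05, 0.45, n_bands)  # illustrative band centres
    bands = np.array([
        np.fft.irfft(spec * np.exp(-0.5 * ((freqs - fc) / (rel_bw * fc)) ** 2), n=n)
        for fc in centres
    ])
    # pointwise minimum of magnitudes, sign taken from the first band
    return np.min(np.abs(bands), axis=0) * np.sign(bands[0])
```

In a GWT setting, the band parameters would be tuned to the excitation frequency of the guided wave; the minimum rule is only one of several recombination strategies (polarity thresholding and averaging are common alternatives).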
COBE's search for structure in the Big Bang
The launch of the Cosmic Background Explorer (COBE) and the definition of the Earth Observing System (EOS) are two of the major events at NASA-Goddard. The three experiments carried by COBE (the Differential Microwave Radiometer (DMR), the Far Infrared Absolute Spectrophotometer (FIRAS), and the Diffuse Infrared Background Experiment (DIRBE)) are very important in measuring the big bang. DMR measures the isotropy of the cosmic background radiation (the direction of the radiation). FIRAS looks at the spectrum over the whole sky, searching for deviations, and DIRBE operates in the infrared part of the spectrum, gathering evidence of the earliest galaxy formation. By special techniques, the radiation coming from the solar system will be distinguished from that of extragalactic origin. Unique graphics will be used to represent the temperature of the emitting material. A cosmic event of such importance will be modeled that it will affect cosmological theory for generations to come. EOS will monitor changes in the Earth's geophysics during a whole solar cycle.