217 research outputs found
Efficient ECG Compression and QRS Detection for E-Health Applications
Current medical screening and diagnostic procedures have shifted toward recording longer electrocardiogram (ECG) signals, which have traditionally been processed on personal computers (PCs) with high-speed multi-core processors and large memories. Battery-driven devices are now commonly used for the same purpose, so exploring highly efficient, low-power alternatives for local ECG signal collection and processing is essential for efficient and convenient clinical use. Several ECG compression methods have been reported in the literature, with limited discussion of how the compressed and reconstructed ECG signals perform in terms of QRS complex detection accuracy. This paper proposes and evaluates different compression methods based not only on the compression ratio (CR) and percentage root-mean-square difference (PRD), but also on the accuracy of QRS detection. We developed a lossy method (Method III) and compared it to current lossless and lossy ECG compression methods (Method I and Method II, respectively). The proposed lossy compression method (Method III) achieves a CR of 4.5× and a PRD of 0.53, as well as an overall sensitivity of 99.78% and positive predictivity of 99.92% (when coupled with an existing QRS detection algorithm) on the MIT-BIH Arrhythmia database, and an overall sensitivity of 99.90% and positive predictivity of 99.84% on the QT database. This work was made possible by NPRP grant #7-684-1-127 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
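The two fidelity metrics named in the abstract, CR and PRD, have standard definitions that can be sketched as follows. This is a minimal illustration with a synthetic waveform, not real MIT-BIH data, and the helper names are my own:

```python
import numpy as np

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """CR = size of the original representation / size of the compressed one."""
    return original_bits / compressed_bits

def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Percentage root-mean-square difference between the original and the
    reconstructed signal (standard, non-normalized form)."""
    num = np.sum((original - reconstructed) ** 2)
    den = np.sum(original ** 2)
    return 100.0 * np.sqrt(num / den)

# Toy example: a synthetic "ECG-like" signal sampled at 360 Hz
# (the MIT-BIH sampling rate), with a small pretend reconstruction error.
t = np.linspace(0, 1, 360)
x = np.sin(2 * np.pi * 5 * t)
x_hat = x + 0.001 * np.random.randn(len(x))
print(prd(x, x_hat))          # small PRD -> high reconstruction fidelity
print(compression_ratio(11 * len(x), int(11 * len(x) / 4.5)))  # -> 4.5
```

A lossless method would give PRD exactly 0; the trade-off evaluated in the paper is between raising CR and keeping PRD (and QRS detectability) acceptable.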
A system based on compression techniques for handwritten digit recognition
Master's in Electronics and Telecommunications Engineering

The Recognition of Handwritten Digits is a human-acquired ability. With
little effort, a human can properly recognize, in milliseconds, a sequence of
handwritten digits. With the help of a computer, the task of handwriting
recognition can be easily automated, improving and speeding up a significant
number of processes. Postal mail sorting, bank check verification
and handwritten digit data entry are among a wide group of
applications that can be performed in a more effective and automated way.
In recent years, a number of techniques and methods have been
proposed to automate the handwritten digit recognition mechanism. However,
solving this challenging image recognition problem usually requires
complex and computationally demanding machine learning techniques,
such as deep learning. This dissertation introduces a novel
solution to the problem of handwritten digit recognition, using metrics of
similarity between digit images. The metrics are computed based on data
compression, namely through the use of Finite Context Models.
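The compression-based similarity idea described above can be sketched with the Normalized Compression Distance, using zlib as a stand-in compressor (the dissertation itself uses Finite Context Models); the toy byte-string "bitmaps" and labels below are hypothetical:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: smaller when x and y share structure
    that a compressor can exploit when they are concatenated."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(query: bytes, references: dict) -> str:
    """Assign the label of the reference most similar to the query."""
    return min(references, key=lambda label: ncd(query, references[label]))

# Hypothetical 'images': flattened binary bitmaps of digits as byte strings.
refs = {"0": b"00111100" * 8, "1": b"00011000" * 8}
print(classify(b"00111100" * 8, refs))
```

The appeal of this approach is that no feature extraction or training loop is needed: the compressor's model of the reference implicitly captures the structure shared with the query.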
Graph based transforms for block-based predictive transform coding
Orthogonal transforms are a key component of the encoding and decoding process in many state-of-the-art compression systems. The transform in block-based predictive transform coding (PTC) is essential for improving coding performance, as it decorrelates the signal in the form of transform coefficients. Recently, the Graph-Based Transform (GBT) has been shown to attain promising results for data decorrelation and energy compaction, especially for block-based PTC. However, in order to reconstruct a frame with the GBT in block-based PTC, extra information needs to be signalled in the bitstream, which may lead to an increased overhead. Additionally, the same graph must be available at the reconstruction stage to compute the inverse GBT of each block.
In this thesis, we propose a novel class of GBTs to enhance transform performance. These GBTs adopt several methods to address the issue of the availability of the same graph at the decoder when reconstructing video frames. Our methods to predict the graph can be categorized into two types: non-learning-based approaches and deep learning (DL) based prediction. In the first type, our method uses reference samples and template-based strategies to reconstruct the same graph. In the second, we learn the graphs so that the information needed to compute the inverse transform is common knowledge between the compression and reconstruction processes. Finally, we train our model online to avoid dependence on the amount, quality, and relevance of offline training data.
Our evaluation is based on all the classes of HEVC test sequences, from class A to class F/Screen content, covering varied resolutions and characteristics. Our experimental results show that the proposed transforms outperform non-trainable transforms, such as the DCT and DCT/DST commonly employed in current video codecs, in terms of compression and reconstruction quality.
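The basic GBT machinery referred to above can be sketched generically: the transform basis is the eigenvector matrix of the graph Laplacian built from a block's adjacency weights. This is a minimal illustration for a uniform path graph (a well-known special case in which the GBT coincides with the DCT-II basis), not the learned GBTs proposed in the thesis; the block values are hypothetical:

```python
import numpy as np

def gbt_basis(W: np.ndarray) -> np.ndarray:
    """Graph-Based Transform: the eigenvectors of the graph Laplacian
    L = D - W, ordered by ascending eigenvalue, form the basis columns."""
    L = np.diag(W.sum(axis=1)) - W
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs

# Path graph over a 4-sample block with uniform edge weights.
N = 4
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

U = gbt_basis(W)
block = np.array([3.0, 3.1, 2.9, 3.0])   # hypothetical residual block
coeffs = U.T @ block                      # forward GBT
reconstructed = U @ coeffs                # inverse GBT
print(np.allclose(reconstructed, block))  # -> True (orthogonal transform)
```

The decoder-side problem described in the abstract is visible here: computing the inverse transform requires U, hence the same W, which must either be signalled or predicted at the reconstruction stage.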
Compression-based pattern recognition: an example of biometrics using ECG
The amount of data being collected by sensors and smart devices that
people use in their daily lives has been increasing at higher rates than
ever before. This enables the use of biomedical signals in several
applications, with the aid of pattern recognition algorithms. In this thesis,
we investigate the use of compression-based methods to perform
classification of one-dimensional signals. To test those methods, we use,
as a testbed example, electrocardiographic (ECG) signals and the task of
biometric identification.
First and foremost, we introduce the notion of Kolmogorov complexity
and how it relates to compression methods. Then, we explain how
these methods can be useful for pattern recognition, by exploring different
compression-based measures, namely, the Normalized Relative Compression,
a measure based on the relative similarity between strings. For this purpose,
we present finite-context models and explain the theory behind a generalized
version of those models, called the extended-alphabet finite-context models,
a novel contribution.
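The Normalized Relative Compression measure mentioned above can be sketched with a plain order-k finite-context model (not the extended-alphabet variant, which is the thesis's contribution): C(x||y) is the number of bits needed to encode x under a model trained exclusively on y, and NRC divides that by the cost of encoding x symbol-by-symbol at log2 of the alphabet size. The function names and toy strings are my own:

```python
import math
from collections import defaultdict

def fcm_bits(x: str, y: str, k: int = 2, alpha: float = 1.0) -> float:
    """Bits needed to encode x with an order-k finite-context model
    trained exclusively on y (a minimal sketch of relative compression)."""
    sigma = set(x + y)
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(y) - k):
        counts[y[i:i + k]][y[i + k]] += 1
    bits = 0.0
    for i in range(len(x) - k):
        ctx, sym = x[i:i + k], x[i + k]
        total = sum(counts[ctx].values())
        # Laplace-style smoothing so unseen context/symbol pairs stay finite.
        p = (counts[ctx][sym] + alpha) / (total + alpha * len(sigma))
        bits += -math.log2(p)
    return bits

def nrc(x: str, y: str, k: int = 2) -> float:
    """Normalized Relative Compression: C(x||y) / (|x| * log2 |sigma|)."""
    sigma = set(x + y)
    return fcm_bits(x, y, k) / (len(x) * math.log2(len(sigma)))

a = "abababababababab"
print(nrc(a, a) < nrc(a, "ccddccddccddccdd"))  # -> True: similarity lowers NRC
```

Low NRC means y's model predicts x well, which is what makes the measure usable for classification: a query signal is assigned to the reference whose model compresses it best.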
Since the testbed application for the methods presented in the thesis is
based on ECG signals, we explain what constitutes such a signal and the
methods that should be used before data compression can be applied to
them, such as filtering and quantization.
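The quantization step mentioned above can be sketched as a uniform quantizer that maps a filtered, real-valued signal onto a small integer alphabet suitable for the compression models. This is a generic sketch, not the thesis's exact scheme, and the sample values are hypothetical:

```python
import numpy as np

def uniform_quantize(x: np.ndarray, n_bits: int):
    """Uniformly quantize a signal to 2**n_bits levels, returning the
    integer symbols and the dequantized (reconstructed) signal."""
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (2 ** n_bits - 1)
    symbols = np.round((x - lo) / step).astype(int)
    return symbols, symbols * step + lo

# Hypothetical filtered ECG segment (values in millivolts).
x = np.array([-0.1, 0.0, 0.8, 1.2, 0.3, -0.2])
symbols, x_hat = uniform_quantize(x, n_bits=4)
print(symbols)  # integers in [0, 15], ready for a finite-context model
print(np.max(np.abs(x - x_hat)) <= (x.max() - x.min()) / 15 / 2)  # -> True
```

Quantization bounds the alphabet size the context models must handle, trading a controlled amount of reconstruction error (at most half a step per sample) for much simpler symbol statistics.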
Finally, we explore the application of biometric identification using the ECG
signal in more depth, performing some tests on the acquisition of signals
and benchmarking different proposals based on compression methods,
namely, non-fiducial ones. We also highlight the advantages of this
approach over machine learning methods, namely, low computational
costs and no need for any kind of feature extraction, making this
approach easily transferable into different applications and signals.A quantidade de dados recolhidos por sensores e dispositivos inteligentes
que as pessoas utilizam no seu dia a dia tem aumentado a taxas mais
elevadas do que nunca. Isso possibilita a utilização de sinais biomédicos
em diversas aplicações práticas, com o auxílio de algoritmos de reconhecimento
de padrões. Nesta tese, investigamos o uso de métodos baseados
em compressão para realizar classificação de sinais unidimensionais. Para
testar esses métodos, utilizamos, como aplicação de exemplo, o problema
de identificação biométrica através de sinais eletrocardiográficos (ECG).
Em primeiro lugar, introduzimos a noção de complexidade de Kolmogorov
e a forma como a mesma se relaciona com os métodos de compressão. De
seguida, explicamos como esses métodos são úteis para reconhecimento de
padrões, explorando diferentes medidas baseadas em compressão, nomeadamente,
a compressão relativa normalizada (NRC), uma medida baseada
na similaridade relativa entre strings. Para isso, apresentamos os modelos
de contexto finito e explicaremos a teoria por detrás de uma versão generalizada
desses modelos, chamados de modelos de contexto finito de alfabeto
estendido (xaFCM), uma nova contribuição.
Uma vez que a aplicação de exemplo para os métodos apresentados na tese
é baseada em sinais de ECG, explicamos também o que constitui tal sinal
e os métodos que devem ser utilizados antes que a compressão de dados
possa ser aplicada aos mesmos, tais como filtragem e quantização.
Por fim, exploramos com maior profundidade a aplicação da identificação
biométrica utilizando o sinal de ECG, realizando alguns testes relativos à
aquisição de sinais e comparando diferentes propostas baseadas em métodos
de compressão, nomeadamente os não fiduciais. Destacamos também as
vantagens de tal abordagem, alternativa aos métodos de aprendizagem computacional, nomeadamente, baixo custo computacional bem como não exigir tipo de extração de atributos, tornando esta abordagem mais facilmente
transponível para diferentes aplicações e sinais.Programa Doutoral em Informátic