Towards Personalized Healthcare in Cardiac Population: The Development of a Wearable ECG Monitoring System, an ECG Lossy Compression Schema, and a ResNet-Based AF Detector
Cardiovascular diseases (CVDs) are the number one cause of death worldwide.
There is growing evidence that atrial fibrillation (AF) is strongly
associated with various CVDs; this heart arrhythmia is usually diagnosed
using electrocardiography (ECG), a risk-free, non-intrusive, and
cost-efficient tool. Continuous and remote monitoring of a subject's ECG
unlocks the potential for early diagnosis and timely treatment of AF before
any life-threatening condition develops, ultimately reducing CVD-associated
mortality. In this manuscript, the design and implementation of a personalized
healthcare system embodying a wearable ECG device, a mobile application, and a
back-end server are presented. This system continuously monitors the users' ECG
information to provide personalized health warnings and feedback. Users can
communicate with their paired health advisors through the system for remote
diagnosis, intervention, etc. The implemented wearable ECG devices were
evaluated and showed excellent intra-consistency (CVRMS=5.5%), acceptable
inter-consistency (CVRMS=12.1%), and negligible RR-interval errors (ARE<1.4%).
To extend the battery life of the wearable devices, a lossy compression
schema that exploits the quasi-periodic nature of ECG signals was proposed.
Compared with recognized schemata, it outperformed them in both compression
efficiency and distortion, achieving at least twice the compression ratio (CR)
at a given PRD or RMSE for ECG signals from the MIT-BIH database. To enable
automated AF diagnosis/screening in the proposed system, a ResNet-based AF
detector was developed. On ECG records from the 2017 PhysioNet/CinC
Challenge, this detector obtained an average testing F1=85.10% and a best
testing F1=87.31%, outperforming the state-of-the-art.
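The efficiency and distortion metrics quoted above (CR, PRD, RMSE) are standard quantities; as a minimal illustrative sketch, they can be computed as follows (the toy signal values below are made up, not MIT-BIH data):

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    # CR: how many times smaller the compressed stream is
    return original_bits / compressed_bits

def prd(x, x_rec):
    # Percent root-mean-square difference between the original
    # and the reconstructed signal
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def rmse(x, x_rec):
    # Root-mean-square error of the reconstruction
    return np.sqrt(np.mean((x - x_rec) ** 2))

# Toy 4-sample "signal" and its lossy reconstruction
x = np.array([1.0, 2.0, 3.0, 4.0])
x_rec = np.array([1.1, 1.9, 3.0, 4.2])

print(compression_ratio(8 * 4000, 8 * 2000))  # 2.0
print(round(prd(x, x_rec), 2))                # 4.47
print(round(rmse(x, x_rec), 3))               # 0.122
```

"At least twice the CR at a given PRD" means that, for the same distortion level as a competing schema, the proposed one needs at most half the bits.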
Post-stack seismic data compression with multidimensional deep autoencoders
Seismic data are surveys of the Earth's subsurface that aim to represent the
geophysical characteristics of the region where they were acquired so that they can be
interpreted. These data can occupy hundreds of gigabytes of storage, motivating their
compression. In this work, we approach the problem of three-dimensional post-stack
seismic data compression using models based on deep autoencoders. A deep autoencoder is a
neural network that can represent most of the information in seismic data at a lower
cost than its original representation. To the best of our knowledge, this is the
first work to deal with seismic compression using deep learning. Four compression methods
for post-stack data are proposed: two based on a bi-dimensional compression, named
2D-based Seismic Data Compression (2DSC) and 2D-based Seismic Data Compression using
Multi-resolution (2DSC-MR), and two based on three-dimensional compression, named
3D-based Seismic Data Compression (3DSC) and 3D-based Seismic Data Compression
using Vector Quantization (3DSC-VQ). The 2DSC is our simplest method for seismic
compression, in which the volume is compressed through its bi-dimensional sections. The
2DSC-MR extends the previous method by introducing the data compression in multiple
resolutions. The 3DSC extends the 2DSC method by allowing the seismic data compression
by using the three-dimensional volume instead of 2D slices. This method considers the
similarity between sections to compress a whole volume with the cost of a single section.
The 3DSC-VQ uses vector quantization aiming to extract more information from the
seismic volumes in the encoding part. Our main goal is to compress the seismic data at
low bit rates, attaining a high quality reconstruction. Experiments show that our methods
can compress seismic data yielding PSNR values over 40 dB and bit rates below 1.0 bpv.
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
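The reported figures (PSNR above 40 dB at under 1.0 bpv) rest on two standard measurements. A minimal sketch of how they might be computed for a seismic volume follows; the peak definition and the random toy volume are assumptions for illustration, not the work's exact protocol:

```python
import numpy as np

def psnr(volume, volume_rec):
    # Peak signal-to-noise ratio in dB; here the "peak" is taken as the
    # maximum absolute amplitude of the original volume, a common choice
    # for seismic data whose samples are not bounded to [0, 255]
    mse = np.mean((volume - volume_rec) ** 2)
    peak = np.max(np.abs(volume))
    return 10.0 * np.log10(peak ** 2 / mse)

def bits_per_voxel(compressed_bits, volume_shape):
    # bpv: size of the compressed stream divided by the voxel count
    return compressed_bits / np.prod(volume_shape)

# Toy 8x8x8 volume and a reconstruction with small additive error
rng = np.random.default_rng(0)
vol = rng.normal(size=(8, 8, 8))
vol_rec = vol + rng.normal(scale=0.01, size=vol.shape)

print(psnr(vol, vol_rec) > 40)           # small error -> high PSNR
print(bits_per_voxel(409.6, (8, 8, 8)))  # 0.8
```

Together the two numbers trace a rate-distortion point: bpv measures the rate, PSNR the reconstruction quality at that rate.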
Aligned and Non-Aligned Double JPEG Detection Using Convolutional Neural Networks
Due to the wide diffusion of JPEG coding standard, the image forensic
community has devoted significant attention to the development of double JPEG
(DJPEG) compression detectors through the years. The ability of detecting
whether an image has been compressed twice provides paramount information
toward image authenticity assessment. Given the trend recently gained by
convolutional neural networks (CNN) in many computer vision tasks, in this
paper we propose to use CNNs for aligned and non-aligned double JPEG
compression detection. In particular, we explore the capability of CNNs to
capture DJPEG artifacts directly from images. Results show that the proposed
CNN-based detectors achieve good performance even with small size images (i.e.,
64x64), outperforming state-of-the-art solutions, especially in the non-aligned
case. Moreover, good results are also achieved in the commonly recognized
challenging case in which the first quality factor is larger than the second
one.

Comment: Submitted to Journal of Visual Communication and Image Representation
(first submission: March 20, 2017; second submission: August 2, 2017).
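The aligned double-quantization artifact that such CNNs pick up can be simulated without any image data. The sketch below assumes Laplacian-distributed DCT coefficients and scalar quantization steps q1 > q2 (illustrative values, not the paper's setup):

```python
import numpy as np

def double_quantize(coeffs, q1, q2):
    # First compression: quantize and dequantize with step q1,
    # then re-quantize with step q2 (the aligned DJPEG model)
    once = np.round(coeffs / q1) * q1
    return np.round(once / q2).astype(int)

def empty_bins(values, lo=-20, hi=20):
    # Count integer histogram bins in [lo, hi] that no coefficient hits
    return sum(1 for v in range(lo, hi + 1) if not np.any(values == v))

# DCT coefficients of natural images are roughly Laplacian-distributed
rng = np.random.default_rng(1)
coeffs = rng.laplace(scale=20.0, size=100_000)

single = np.round(coeffs / 3).astype(int)     # compressed once (step 3)
double = double_quantize(coeffs, q1=5, q2=3)  # compressed twice (5 then 3)

# Double quantization leaves periodic gaps in the coefficient histogram,
# the statistical trace that DJPEG detectors learn to recognize
print(empty_bins(single), empty_bins(double))  # prints: 0 16
```

A singly compressed signal populates every bin, while the doubly compressed one shows periodically empty bins; a CNN fed with decompressed pixels can learn this periodic signature directly, without hand-crafted histogram features.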