Semantic Compression for Edge-Assisted Systems
A novel semantic approach to data selection and compression is presented for
the dynamic adaptation of IoT data processing and transmission within "wireless
islands", where a set of sensing devices (sensors) are interconnected through
one-hop wireless links to a computational resource via a local access point.
The core of the proposed technique is a cooperative framework where local
classifiers at the mobile nodes are dynamically crafted and updated based on
the current state of the observed system, the global processing objective and
the characteristics of the sensors and data streams. The edge processor plays a
key role by establishing a link between content and operations within the
distributed system. The local classifiers are designed to filter the data
streams and provide only the needed information to the global classifier at the
edge processor, thus minimizing bandwidth usage. However, the higher the
accuracy of these local classifiers, the more energy is needed to run them at
the individual sensors. A formulation of the optimization problem for
the dynamic construction of the classifiers under bandwidth and energy
constraints is proposed and demonstrated on a synthetic example. Comment: Presented at the Information Theory and Applications Workshop (ITA),
February 17, 201
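The bandwidth/energy trade-off the abstract describes can be illustrated with a toy sketch (not the paper's actual formulation): a hypothetical threshold classifier at each sensor forwards only the readings it deems informative, while every classifier evaluation costs energy.

```python
import random

random.seed(0)

def local_classifier(reading, threshold):
    """Hypothetical local filter: forward a reading only when it crosses a
    decision threshold (a stand-in for the paper's dynamically crafted
    classifiers)."""
    return abs(reading) > threshold

def transmit_round(readings, threshold, energy_per_eval=1.0):
    """Filter a batch of sensor readings; return the forwarded subset and
    the energy spent evaluating the classifier on every sample."""
    forwarded = [r for r in readings if local_classifier(r, threshold)]
    energy = energy_per_eval * len(readings)  # each evaluation costs energy
    return forwarded, energy

readings = [random.gauss(0, 1) for _ in range(100)]
sent, energy = transmit_round(readings, threshold=1.5)
# A higher threshold saves bandwidth (fewer forwarded samples) at the risk
# of discarding information the global classifier at the edge needs.
print(len(sent), energy)
```

In the paper's setting the threshold would be replaced by a classifier chosen by the edge processor to balance exactly this accuracy/energy/bandwidth trade-off.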
Deeply-Supervised CNN for Prostate Segmentation
Prostate segmentation from Magnetic Resonance (MR) images plays an important
role in image-guided intervention. However, the lack of a clear boundary,
particularly at the apex and base, and the large variation in shape and texture
across patients make the task very challenging. To
overcome these problems, in this paper, we propose a deeply supervised
convolutional neural network (CNN) utilizing the convolutional information to
accurately segment the prostate from MR images. The proposed model can
effectively detect the prostate region with additional deeply supervised layers
compared with other approaches. Since some information is discarded after each
convolution, it is necessary to pass the features extracted from early stages
to later stages. The experimental results show that significant segmentation
accuracy improvement has been achieved by our proposed method compared to other
reported approaches. Comment: Due to a crucial sign error in equation
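Deep supervision, in its generic form, attaches auxiliary losses to intermediate layers and adds them to the main objective. A minimal sketch with illustrative loss values and weights (not taken from the paper):

```python
def deeply_supervised_loss(main_loss, aux_losses, weights):
    """Generic deeply-supervised objective: the main loss plus weighted
    auxiliary losses attached to intermediate layers, so early stages
    receive a direct gradient signal."""
    assert len(aux_losses) == len(weights)
    return main_loss + sum(w * l for w, l in zip(weights, aux_losses))

# e.g. three auxiliary outputs taken from early, middle, and late stages
total = deeply_supervised_loss(0.40, [0.9, 0.7, 0.5], [0.1, 0.2, 0.3])
print(round(total, 2))  # 0.78
```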
Efficient video indexing for monitoring disease activity and progression in the upper gastrointestinal tract
Endoscopy is a routine imaging technique used for both diagnosis and
minimally invasive surgical treatment. While the endoscopy video contains a
wealth of information, tools to capture this information for the purpose of
clinical reporting are rather poor. To date, endoscopists do not have access
to tools that enable them to browse the video data in an efficient and
user-friendly manner. Fast and reliable video retrieval methods could, for
example, allow them to review data from previous exams and therefore improve
their ability to monitor disease progression. Deep learning provides new
avenues of compressing and indexing video in an extremely efficient manner. In
this study, we propose to use an autoencoder for efficient video compression
and fast retrieval of video images. To boost the accuracy of video image
retrieval and to address data variability like multi-modality and view-point
changes, we propose the integration of a Siamese network. We demonstrate that
our approach is competitive in retrieving images from three large-scale videos
of three different patients, queried against samples from their previous
diagnoses. Quantitative validation shows that the combined approach yields an
overall improvement of 5% and 8% over classical and variational autoencoders,
respectively. Comment: Accepted at IEEE International Symposium on Biomedical Imaging
(ISBI), 201
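Retrieval in a learned latent space typically reduces to nearest-neighbor search over embeddings. A toy sketch under that assumption, with hypothetical 3-D codes standing in for autoencoder embeddings of video frames:

```python
import math

def cosine(u, v):
    """Cosine similarity between two latent vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query, index, top_k=2):
    """Rank indexed frame embeddings by similarity to the query embedding."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy 3-D embeddings standing in for compressed codes of video frames.
index = {
    "frame_a": [1.0, 0.1, 0.0],
    "frame_b": [0.0, 1.0, 0.2],
    "frame_c": [0.9, 0.2, 0.1],
}
print(retrieve([1.0, 0.0, 0.0], index))  # ['frame_a', 'frame_c']
```

A Siamese network, as proposed in the abstract, would be trained so that embeddings of the same anatomy under view-point or modality changes land close together under exactly this kind of similarity.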
Learning midlevel image features for natural scene and texture classification
This paper deals with coding of natural scenes in order to extract semantic information. We present a new scheme to project natural scenes onto a basis in which each dimension encodes statistically independent information. Basis extraction is performed by independent component analysis (ICA) applied to image patches culled from natural scenes. The study of the resulting coding units (coding filters) extracted from well-chosen categories of images shows that they adapt and respond selectively to discriminant features in natural scenes. Given this basis, we define global and local image signatures relying on the maximal activity of filters on the input image. Locally, the construction of the signature takes into account the spatial distribution of the maximal responses within the image. We propose a criterion to reduce the size of the space of representation for faster computation. The proposed approach is tested in the context of texture classification (111 classes), as well as natural scene classification (11 categories, 2037 images). Using a common protocol, the other commonly used descriptors reach at most 47.7% accuracy on average, while our method obtains performance of up to 63.8%. We show that this advantage does not depend on the size of the signature and demonstrate the efficiency of the proposed criterion to select ICA filters and reduce the dimension.
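The global signature described above, the maximal activity of each filter over the image, can be sketched as follows; the patches and filters here are tiny hypothetical stand-ins for ICA-derived coding units:

```python
def signature(patches, filters):
    """Global max-activity signature: for each filter (e.g. an ICA basis
    vector), record its maximal response over all image patches."""
    def response(patch, filt):
        # Linear filter response: dot product of patch and filter.
        return sum(p * f for p, f in zip(patch, filt))
    return [max(response(p, f) for p in patches) for f in filters]

# Toy 4-pixel patches and two hypothetical coding filters.
patches = [[1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
filters = [[1, 0, 0, 0], [0, 0, 1, 1]]
print(signature(patches, filters))  # [1, 1]
```

The local variant in the paper additionally records where in the image each maximal response occurs.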
Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications
In the era when the market segment of Internet of Things (IoT) tops the chart
in various business reports, it is widely envisioned that the field of
medicine stands to gain a large benefit from the explosion of wearables and
internet-connected sensors that surround us to acquire and communicate
unprecedented data on symptoms, medication, food intake, and daily-life
activities impacting one's health and wellness. However, IoT-driven healthcare
would have to overcome many barriers, such as: 1) There is an increasing demand
for data storage on cloud servers where the analysis of the medical big data
becomes increasingly complex, 2) The data, when communicated, are vulnerable to
security and privacy issues, 3) The communication of the continuously collected
data is not only costly but also energy-hungry, and 4) Operating and
maintaining the sensors directly from the cloud servers are non-trivial tasks.
This book chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog
Computing is a service-oriented intermediate layer in IoT, providing the
interfaces between the sensors and cloud servers for facilitating connectivity,
data transfer, and queryable local database. The centerpiece of Fog computing
is a low-power, intelligent, wireless, embedded computing node that carries out
signal conditioning and data analytics on raw data collected from wearables or
other medical sensors and offers efficient means to serve telehealth
interventions. We implemented and tested a Fog Computing system using the
Intel Edison and Raspberry Pi that allows acquisition, computing, storage and
communication of the various medical data such as pathological speech data of
individuals with speech disorders, Phonocardiogram (PCG) signal for heart rate
estimation, and Electrocardiogram (ECG)-based Q, R, S detection. Comment: 29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area
Network, Body Sensor Network, Edge Computing, Fog Computing, Medical
Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment,
Wearable Devices, Chapter in Handbook of Large-Scale Distributed Computing in
Smart Healthcare (2017), Springer
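The on-node analytics the chapter describes (e.g. heart-rate estimation from PCG/ECG) can be illustrated with a crude sketch: the fog node processes the raw signal locally and sends only a summary value upstream. The threshold-crossing detector here is an illustrative stand-in for a real peak-detection algorithm:

```python
def heart_rate_bpm(samples, fs, threshold):
    """Crude on-node analytics: count rising threshold crossings (a
    stand-in for PCG/ECG peak detection) and convert to beats per minute,
    so only a single number needs to be sent to the cloud."""
    peaks = 0
    above = False
    for s in samples:
        if s > threshold and not above:
            peaks += 1
            above = True
        elif s <= threshold:
            above = False
    duration_s = len(samples) / fs
    return 60.0 * peaks / duration_s

# Synthetic 10-second signal at 50 Hz with a spike every 50 samples (~60 bpm).
fs = 50
signal = [1.0 if i % 50 == 0 else 0.0 for i in range(10 * fs)]
print(heart_rate_bpm(signal, fs, threshold=0.5))  # 60.0
```

Sending one beats-per-minute value instead of the raw waveform is exactly the bandwidth and energy saving that motivates the fog layer.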