JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution
Recent years have witnessed rapid growth in deep-network-based services and applications. A practical and critical problem has thus emerged: how to deploy deep neural network models so that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data-center servers, incurring high latency because a significant amount of data must be transferred from the network edge to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware
execution framework, which decouples a deep neural network so that a part of it
will run at edge devices and the other part inside the conventional cloud,
while only a minimal amount of data is transferred between them. Though the idea seems straightforward, we face challenges including i) how to
find the best partition of a deep structure; ii) how to deploy the component at
an edge device that only has limited computation power; and iii) how to
minimize the overall execution latency. Our answers to these questions are a
set of strategies in JALAD, including 1) a normalization-based in-layer data compression strategy that jointly considers compression rate and model accuracy; 2) a latency-aware deep decoupling strategy that minimizes the overall execution latency; and 3) an edge-cloud structure adaptation strategy that dynamically changes the decoupling under different network conditions.
Experiments demonstrate that our solution significantly reduces execution latency: it speeds up overall inference execution while keeping the model accuracy loss within a guaranteed bound.

Comment: conference paper; copyright transferred to IEEE
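To make the decoupling idea concrete, here is a minimal sketch of an exhaustive latency-aware split-point search of the kind the abstract describes, assuming per-layer edge/cloud latencies and (compressed) activation sizes have already been profiled. All names and numbers are illustrative assumptions, not JALAD's actual implementation.

```python
# Hypothetical sketch of a latency-aware DNN split-point search in the
# spirit of JALAD; profiling numbers below are made up for illustration.

def best_split(edge_ms, cloud_ms, out_kb, input_kb, bandwidth_kb_per_s):
    """Pick the split k (layers 0..k-1 on the edge, the rest in the cloud)
    minimizing edge compute + uplink transfer + cloud compute."""
    n = len(edge_ms)
    best_k, best_t = 0, float("inf")
    for k in range(n + 1):
        # Data crossing the network: the raw input if everything runs in
        # the cloud, otherwise the (compressed) output of edge layer k-1.
        transfer_kb = input_kb if k == 0 else out_kb[k - 1]
        t = (sum(edge_ms[:k])                              # edge compute
             + transfer_kb / bandwidth_kb_per_s * 1000.0   # uplink, in ms
             + sum(cloud_ms[k:]))                          # cloud compute
        if t < best_t:
            best_k, best_t = k, t
    return best_k, best_t

# Toy per-layer profile (milliseconds and kilobytes, all assumed):
edge_ms  = [5.0, 8.0, 60.0, 120.0]   # slower edge device
cloud_ms = [1.0, 1.0, 2.0, 3.0]      # faster data-center GPU
out_kb   = [600, 150, 40, 4]         # activations shrink deeper in the net
k, t = best_split(edge_ms, cloud_ms, out_kb,
                  input_kb=1200, bandwidth_kb_per_s=2000)
print(f"run layers 0..{k-1} on the edge, rest in the cloud: ~{t:.0f} ms")
```

With these toy numbers the search lands on a mid-network split (k=2, about 93 ms): shipping raw inputs saturates the uplink, while running the heavy later layers on the weak edge device wastes compute, so the minimum sits where the activations are small.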
AdaCompress: Adaptive Compression for Online Computer Vision Services
With the growth of computer-vision-based applications and services, an enormous number of images are uploaded to cloud servers that host such computer vision algorithms, usually in the form of deep learning models. JPEG has been the de facto compression and encapsulation method for uploading images, owing to its wide adoption. However, the standard JPEG
configuration does not always perform well for compressing images that are to
be processed by a deep learning model: for example, the standard JPEG quality level incurs a 50% size overhead (compared with the best quality-level selection) on ImageNet at the same inference accuracy for popular computer vision models such as InceptionNet and ResNet. Even knowing this, designing a better JPEG
configuration for online computer vision services is still extremely
challenging: 1) Cloud-based computer vision models are usually a black box to
end-users; thus it is difficult to design JPEG configuration without knowing
their model structures. 2) The appropriate JPEG configuration varies across users and inputs. In this paper, we propose a reinforcement-learning-based JPEG
configuration framework. In particular, we design an agent that adaptively
chooses the compression level according to the input image's features and
backend deep learning models. We then train the agent via reinforcement learning against different deep learning cloud services, which act as the interactive training environment and feed back a reward that jointly considers accuracy and data size. In our real-world
evaluation on Amazon Rekognition, Face++ and Baidu Vision, our approach can
reduce image sizes by 1/2 to 1/3 while the overall classification accuracy decreases only slightly.

Comment: ACM Multimedia
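As a rough illustration of the idea, the sketch below pairs a per-image JPEG quality choice with a reward trading classification correctness against upload size. The epsilon-greedy bandit and the exact reward shape are assumptions made for brevity; the paper trains a richer RL agent conditioned on image features, and `cloud_predict` is a hypothetical stand-in for a black-box service such as Amazon Rekognition.

```python
# Simplified, assumption-laden sketch of an adaptive JPEG quality selector
# in the spirit of AdaCompress (not the paper's actual agent).
import io
import random
from PIL import Image  # pip install pillow

QUALITIES = [95, 75, 55, 35]
q_value = {q: 0.0 for q in QUALITIES}   # running value estimate per action
counts = {q: 0 for q in QUALITIES}

def compress(img: Image.Image, quality: int) -> bytes:
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def reward(correct: bool, size: int, ref_size: int, lam: float = 1.0) -> float:
    # Accuracy term minus a size penalty normalized by the best-quality size.
    return float(correct) - lam * size / ref_size

def choose(eps: float = 0.1) -> int:
    if random.random() < eps:
        return random.choice(QUALITIES)        # explore
    return max(QUALITIES, key=q_value.get)     # exploit

def update(q: int, r: float) -> None:
    counts[q] += 1
    q_value[q] += (r - q_value[q]) / counts[q]  # incremental mean

# Per image (cloud_predict is a stand-in for the black-box service; img and
# label come from your own data):
#   q = choose()
#   data = compress(img, q)
#   r = reward(cloud_predict(data) == label, len(data),
#              len(compress(img, 95)))
#   update(q, r)
```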
Interpretable and Efficient Beamforming-Based Deep Learning for Single Snapshot DOA Estimation
We introduce an interpretable deep learning approach for direction of arrival
(DOA) estimation with a single snapshot. Classical subspace-based methods like
MUSIC and ESPRIT use spatial smoothing on uniform linear arrays for single-snapshot DOA estimation, but this reduces the array aperture and is inapplicable to sparse arrays. Single-snapshot methods such as compressive sensing and the iterative adaptive approach (IAA) suffer from high computational cost and slow convergence, hampering real-time use. Recent deep
learning DOA methods offer promising accuracy and speed. However, the practical
deployment of deep networks is hindered by their black-box nature. To address
this, we propose a deep-MPDR network that translates the minimum power distortionless response (MPDR)-type beamformer into a deep learning model, enhancing generalization and efficiency. Comprehensive experiments on both simulated and real-world datasets demonstrate its advantages in inference time and accuracy over conventional methods. Moreover, it excels in efficiency, generalizability, and interpretability compared with other deep learning DOA estimation networks.

Comment: 10 pages, 10 figures
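For reference, the classical single-snapshot MPDR beamformer that the network builds on can be sketched in a few lines. The diagonal loading `delta` (needed to invert the rank-one single-snapshot covariance), the angular grid, and the half-wavelength array spacing are illustrative assumptions, not the paper's settings.

```python
# Classical MPDR pseudo-spectrum from one snapshot on a uniform linear array.
import numpy as np

def mpdr_spectrum(x, d=0.5, n_grid=361, delta=1e-2):
    """x: (M,) complex snapshot; d: element spacing in wavelengths."""
    M = x.shape[0]
    R = np.outer(x, x.conj()) + delta * np.eye(M)   # loaded rank-one covariance
    R_inv = np.linalg.inv(R)
    m = np.arange(M)
    thetas = np.linspace(-90.0, 90.0, n_grid)
    P = np.empty(n_grid)
    for i, th in enumerate(thetas):
        a = np.exp(2j * np.pi * d * m * np.sin(np.deg2rad(th)))  # steering vector
        P[i] = 1.0 / np.real(a.conj() @ R_inv @ a)  # MPDR output power
    return thetas, P

# Two sources at -20 and +35 degrees, a single noisy snapshot:
rng = np.random.default_rng(0)
M = 8
steer = lambda th: np.exp(2j * np.pi * 0.5 * np.arange(M)
                          * np.sin(np.deg2rad(th)))
x = (steer(-20) + 0.8 * steer(35)
     + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)))
thetas, P = mpdr_spectrum(x)
print(thetas[P.argmax()])   # global peak lands near the stronger source, -20
```

The full spectrum shows peaks near both true directions; the deep-MPDR network replaces parts of this fixed pipeline with learned components while keeping the beamforming structure interpretable.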
Identifying the degree of luminescence signal bleaching in fluvial sediments from the Inner Mongolian reaches of the Yellow River
The partial bleaching of the luminescence signal prior to deposition results in age overestimation and can be a problem in delineating fluvial evolution within an OSL chronological framework. The Inner Mongolian reaches of the Yellow River are characterised by a high sediment load and complex sediment sources. To assess the degree of incomplete bleaching in this type of environment, the residual doses and luminescence signal characteristics of different particle-size fractions from 14 modern fluvial sediment samples were investigated. Furthermore, 26 OSL ages derived from drilling cores were compared with 11 radiocarbon ages. Our results show that the residual equivalent doses range principally between 0.16 and 0.49 Gy for silt grains and between 0.35 and 3.72 Gy for sand grains in the modern samples. This suggests that medium-grained quartz was well bleached prior to deposition and is preferable to coarse-grained quartz when dating fluvial sediments in this region. The results also show that the De values of the coarse-grained fractions correlate more strongly with distance downstream. In addition, the comparison of OSL and radiocarbon ages from drilling cores gives further confidence that bleaching of these sediments prior to deposition was sufficient. We therefore conclude that the studied fluvial samples were well bleached prior to deposition.
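As a back-of-the-envelope reading of these numbers: since an OSL age is the equivalent dose divided by the environmental dose rate, the reported residual doses translate into age overestimates as sketched below, assuming a typical dose rate of 3.0 Gy/ka (an illustrative value, not one taken from the study).

```python
# Rough age-offset check: age = equivalent dose / environmental dose rate.
DOSE_RATE = 3.0  # Gy/ka, assumed typical value for illustration only
for label, residual_gy in [("silt, upper bound", 0.49),
                           ("sand, upper bound", 3.72)]:
    offset_years = residual_gy / DOSE_RATE * 1000.0
    print(f"{label}: ~{offset_years:.0f} years of age overestimation")
# -> silt ~163 years, sand ~1240 years: roughly why medium (silt-sized)
#    quartz grains are preferred over coarse grains in this setting.
```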
Bacteroides and NAFLD: pathophysiology and therapy
Non-alcoholic fatty liver disease (NAFLD) is a prevalent chronic liver condition observed globally, with the potential to progress to non-alcoholic steatohepatitis (NASH), cirrhosis, and even hepatocellular carcinoma. To date, the US Food and Drug Administration (FDA) has not approved any drug for the treatment of NAFLD. NAFLD is characterized by histopathological abnormalities in the liver, such as lipid accumulation, steatosis, hepatocyte ballooning degeneration, and inflammation. Dysbiosis of the gut microbiota and its metabolites contributes significantly to the initiation and progression of NAFLD. Bacteroides, a candidate probiotic, has shown strong potential in preventing the onset and progression of NAFLD; however, the precise mechanism by which it acts remains uncertain. In this review, we explore the current understanding of the role of Bacteroides and its metabolites in the treatment of NAFLD, focusing on their ability to reduce liver inflammation, mitigate hepatic steatosis, and enhance intestinal barrier function. Additionally, we summarize how Bacteroides alleviates pathological changes by restoring metabolic balance, alleviating insulin resistance, regulating cytokines, and promoting tight-junction integrity. A deeper understanding of the mechanisms through which Bacteroides is involved in the pathogenesis of NAFLD should aid the development of innovative drugs targeting NAFLD.