5 research outputs found
ENROUTE: An Entropy Aware Routing Scheme for Information-Centric Networks (ICN)
With the exponential growth of end users and web data, the internet is undergoing a paradigm shift from a user-centric model to a content-centric one, popularly known as information-centric networks (ICN). Current ICN research revolves around three key issues: (i) content request searching, (ii) content routing, and (iii) in-network caching to deliver the requested content to the end user. These mechanisms improve the user experience by lowering download delay and providing higher throughput. Existing research has mainly focused on on-path congestion or the expected delivery time of a content to determine an optimized path towards its custodian. However, such approaches ignore the cumulative effect of the link-state parameters and the state of the caches, and consequently degrade delay performance. To overcome this shortfall, we consider both the congestion of a link and the state of on-path caches to determine the best possible routes. We introduce a generic metric, entropy, to quantify the effects of link congestion and the state of on-path caches. We then develop a novel entropy-based algorithm, ENROUTE, for searching a content request triggered by any user, routing the content, and caching it for delivery to the user. The entropy value of an intra-domain node indicates how many popular contents are already cached in the node, which in turn signifies how enriched that node is with popular content. The entropy of a link, in turn, indicates how congested the link is with content traffic. To reduce delay, we enhance the entropy of caches in nodes and use low-entropy paths for downloading contents.
We evaluate the performance of our proposed ENROUTE algorithm against state-of-the-art schemes for various network parameters and observe improvements of 29–52% in delay, 12–39% in hit rate, and 4–39% in throughput.
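The abstract does not give the exact entropy formulas, but the idea of scoring a path by combining link congestion (to be minimized) with the popular-content richness of on-path caches (to be maximized) can be sketched as follows. The Shannon-entropy cache score, the log-based congestion mapping, and their additive combination are illustrative assumptions, not the paper's actual definitions.

```python
import math

def cache_entropy(popularities):
    """Shannon entropy of the popularity distribution of contents
    cached at a node; here, a higher value is read as the node being
    richer in popular content (illustrative interpretation)."""
    total = sum(popularities)
    probs = [p / total for p in popularities if p > 0]
    return -sum(p * math.log2(p) for p in probs)

def link_entropy(utilization):
    """Map link utilization in [0, 1) to a congestion 'entropy':
    more traffic -> higher entropy (hypothetical monotone mapping)."""
    return -math.log2(1 - utilization)

def path_cost(link_utils, node_caches):
    """Combine link entropies (penalty) with on-path cache
    entropies (bonus) for one candidate path; lower is better."""
    links = sum(link_entropy(u) for u in link_utils)
    caches = sum(cache_entropy(c) for c in node_caches)
    return links - caches

# Two candidate paths: B has less congestion and richer on-path caches.
a = path_cost([0.9, 0.8], [[1, 1, 1], [2, 1]])
b = path_cost([0.3, 0.2], [[5, 4, 3], [6, 2]])
print(b < a)  # True: the low-entropy path B is preferred
```

A routing scheme in this spirit would evaluate `path_cost` for each candidate route towards a custodian and forward the request along the minimum-cost one.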
Why is the video analytics accuracy fluctuating, and what can we do about it?
It is a common practice to think of a video as a sequence of images (frames),
and re-use deep neural network models that are trained only on images for
similar analytics tasks on videos. In this paper, we show that this leap of
faith that deep learning models that work well on images will also work well on
videos is actually flawed. We show that even when a video camera is viewing a
scene that is not changing in any human-perceptible way, and we control for
external factors like video compression and environment (lighting), the
accuracy of video analytics applications fluctuates noticeably. These
fluctuations occur because successive frames produced by the video camera may
look similar visually, but these frames are perceived quite differently by the
video analytics applications. We observed that the root cause for these
fluctuations is the dynamic camera parameter changes that a video camera
automatically makes in order to capture and produce a visually pleasing video.
The camera inadvertently acts as an unintentional adversary because these
slight changes in the image pixel values in consecutive frames, as we show,
have a noticeably adverse impact on the accuracy of insights from video
analytics tasks that re-use image-trained deep learning models. To address this
inadvertent adversarial effect from the camera, we explore the use of transfer
learning techniques to improve learning in video analytics tasks through the
transfer of knowledge from learning on image analytics tasks. In particular, we
show that our newly trained Yolov5 model reduces fluctuation in object
detection across frames, which leads to better tracking of objects (40% fewer
mistakes in tracking). Our paper also provides new directions and techniques to
mitigate the camera's adversarial effect on deep learning models used for video
analytics applications.
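The flip-flopping behavior described above can be illustrated with a toy model. The `confidence` function below is a hypothetical stand-in for an image-trained detector whose output drifts with a global image statistic; the +/-2% brightness wobble stands in for automatic camera-parameter adjustments such as auto-exposure. None of this is the paper's actual pipeline, only a sketch of the failure mode.

```python
THRESHOLD = 0.5  # typical confidence cutoff for accepting a detection

def confidence(brightness):
    """Hypothetical detector whose confidence drifts slightly
    with frame brightness, hovering near the decision threshold."""
    return 0.5 + 0.3 * (brightness - 0.5)

# A static scene: auto-exposure nudges brightness by +/-2% from frame
# to frame, even though nothing in the scene visibly changes.
brightness_per_frame = [0.5 + 0.02 * ((-1) ** i) for i in range(10)]
detections = [confidence(b) >= THRESHOLD for b in brightness_per_frame]
flips = sum(d1 != d2 for d1, d2 in zip(detections, detections[1:]))
print(flips)  # 9: the detection flips at every frame boundary
```

Because the confidence sits near the threshold, an imperceptible parameter change is enough to toggle the detection, which is exactly what destabilizes downstream tracking.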
CamTuner: Reinforcement-Learning based System for Camera Parameter Tuning to enhance Analytics
Video analytics systems critically rely on video cameras, which capture
high-quality video frames, to achieve high analytics accuracy. Although modern
video cameras expose tens of configurable parameter settings that end-users can
adjust, surveillance camera deployments today typically use a fixed set of
settings because the end-users lack the skill or understanding to reconfigure
these parameters.
In this paper, we first show that in a typical surveillance camera
deployment, environmental condition changes can significantly affect the
accuracy of analytics units such as person detection, face detection and face
recognition, and how such adverse impact can be mitigated by dynamically
adjusting camera settings. We then propose CAMTUNER, a framework that can be
easily applied to an existing video analytics pipeline (VAP) to enable
automatic and dynamic adaptation of complex camera settings to changing
environmental conditions, and autonomously optimize the accuracy of analytics
units (AUs) in the VAP. CAMTUNER is based on SARSA reinforcement learning (RL)
and it incorporates two novel components: a lightweight analytics quality
estimator and a virtual camera. CAMTUNER is implemented in a system with AXIS
surveillance cameras and several VAPs (with various AUs) that processed
day-long customer videos captured at airport entrances. Our evaluations show
that CAMTUNER can adapt quickly to changing environments. We compared CAMTUNER
with two alternative approaches where either static camera settings were used,
or a strawman approach where camera settings were manually changed every hour
(based on human perception of quality). We observed that for the face detection
and person detection AUs, CAMTUNER is able to achieve up to 13.8% and 9.2%
higher accuracy, respectively, compared to the best of the two approaches
(average improvement of 8% for both AUs).