Evolving Large-Scale Data Stream Analytics based on Scalable PANFIS
Many distributed machine learning frameworks have recently been built to
speed up the large-scale data learning process. However, most distributed
machine learning used in these frameworks still uses an offline algorithm model
which cannot cope with the data stream problems. In fact, large-scale data are
mostly generated by the non-stationary data stream where its pattern evolves
over time. To address this problem, we propose a novel Evolving Large-scale
Data Stream Analytics framework based on a Scalable Parsimonious Network based
on Fuzzy Inference System (Scalable PANFIS), where the PANFIS evolving
algorithm is distributed over the worker nodes in the cloud to learn
large-scale data streams. The Scalable PANFIS framework incorporates an active
learning (AL) strategy and two model fusion methods. The AL strategy accelerates the
distributed learning process to generate an initial evolving large-scale data
stream model (initial model), whereas the two model fusion methods aggregate an
initial model to generate the final model. The final model represents the
update of current large-scale data knowledge which can be used to infer future
data. Extensive experiments on this framework are validated by measuring the
accuracy and running time of four combinations of Scalable PANFIS and other
Spark-based built-in algorithms. The results indicate that Scalable PANFIS with
AL trains almost two times faster than Scalable PANFIS without AL. The results
also show that the rule-merging and voting mechanisms generally yield similar
accuracy among the Scalable PANFIS algorithms, and both generally outperform the
Spark-based algorithms in accuracy. In terms of running time, Scalable PANFIS
outperforms all Spark-based algorithms when classifying numerous benchmark
datasets.
Comment: 20 pages, 5 figures
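The voting-based model fusion step can be illustrated with a minimal sketch: each worker node's model predicts a label per sample, and the fused model takes a majority vote. The worker outputs below are hypothetical, and the paper's exact voting mechanism may differ.

```python
from collections import Counter

def majority_vote(worker_predictions):
    """Fuse per-worker class predictions for one sample by majority vote."""
    return Counter(worker_predictions).most_common(1)[0][0]

def fuse_by_voting(all_predictions):
    """all_predictions: one prediction list per worker, aligned by sample.
    Returns one fused label per sample."""
    return [majority_vote(sample_preds) for sample_preds in zip(*all_predictions)]

# Three hypothetical worker models classifying the same five samples:
worker_outputs = [
    [0, 1, 1, 0, 2],
    [0, 1, 0, 0, 2],
    [1, 1, 1, 0, 2],
]
print(fuse_by_voting(worker_outputs))  # → [0, 1, 1, 0, 2]
```

Rule merging, the other fusion method named in the abstract, would instead combine the workers' fuzzy rule bases into a single rule set rather than vote at prediction time.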
Impliance: A Next Generation Information Management Appliance
ably successful in building a large market and adapting to the changes of the
last three decades, its impact on the broader market of information management
is surprisingly limited. If we were to design an information management system
from scratch, based upon today's requirements and hardware capabilities, would
it look anything like today's database systems?" In this paper, we introduce
Impliance, a next-generation information management system consisting of
hardware and software components integrated to form an easy-to-administer
appliance that can store, retrieve, and analyze all types of structured,
semi-structured, and unstructured information. We first summarize the trends
that will shape information management for the foreseeable future. Those trends
imply three major requirements for Impliance: (1) to be able to store, manage,
and uniformly query all data, not just structured records; (2) to be able to
scale out as the volume of this data grows; and (3) to be simple and robust in
operation. We then describe four key ideas that are uniquely combined in
Impliance to address these requirements, namely the ideas of: (a) integrating
software and off-the-shelf hardware into a generic information appliance; (b)
automatically discovering, organizing, and managing all data - unstructured as
well as structured - in a uniform way; (c) achieving scale-out by exploiting
simple, massive parallel processing, and (d) virtualizing compute and storage
resources to unify, simplify, and streamline the management of Impliance.
Impliance is an ambitious, long-term effort to define simpler, more robust, and
more scalable information systems for tomorrow's enterprises.
Comment: This article is published under a Creative Commons License Agreement
(http://creativecommons.org/licenses/by/2.5/). You may copy, distribute,
display, and perform the work, make derivative works, and make commercial use
of the work, but you must attribute the work to the author and CIDR 2007,
3rd Biennial Conference on Innovative Data Systems Research (CIDR), January
7-10, 2007, Asilomar, California, USA.
DESIGN FRAMEWORK FOR INTERNET OF THINGS BASED NEXT GENERATION VIDEO SURVEILLANCE
Modern artificial intelligence and machine learning open up a new era for video
surveillance systems. Next-generation video surveillance in the Internet of Things (IoT)
environment is an emerging research area because of high bandwidth demands, big-data
generation, resource-constrained video surveillance nodes, and high energy consumption
for real-time applications. In this thesis, the opportunities and functional requirements
that a next-generation video surveillance system should achieve with the power of video
analytics, artificial intelligence, and machine learning are discussed. This thesis also
proposes a new video surveillance system architecture that introduces fog computing into
the IoT-based system and describes the facilities and benefits of the proposed system,
which can meet the forthcoming requirements of surveillance. The challenges and issues
faced by video surveillance in the IoT environment are also examined, and a fog-cloud
integrated architecture is evaluated to identify and eliminate those issues.
The focus of this thesis is to evaluate the IoT-based video surveillance system. To this end,
two case studies were performed to demonstrate the value of energy- and bandwidth-efficient
video surveillance systems. In the first case study, an IoT-based, power-efficient color frame
transmission and generation algorithm for video surveillance applications is presented. The
conventional approach is to transmit all R, G, and B components of every frame. Using the
proposed technique, instead of sending all components, one color frame is sent first, followed
by a series of gray-scale frames. After a certain number of gray-scale frames, another color
frame is sent, followed by the same number of gray-scale frames. This process is repeated
throughout the video surveillance stream. In the decoder, color information is extracted
from the color frame and then used to colorize the gray-scale frames. In the second case
study, a bandwidth-efficient, low-complexity frame reproduction technique, also applicable
to IoT-based video surveillance applications, is presented. Using this technique, only pixel
intensities that differ substantially from the corresponding pixels of the previous frame are
sent. If a pixel's intensity is the same as or close to that of the previous frame, the
information is not transferred. To this end, a bit stream is created for every frame with a
predefined protocol. On the cloud side, the frame information can be reproduced by applying
the reverse protocol to the bit stream.
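The second case study's difference-based scheme can be sketched as follows. Frames are flattened lists of pixel intensities, and the threshold value is an assumption for illustration, not a figure from the thesis.

```python
THRESHOLD = 25  # assumed intensity-difference cutoff; the thesis's protocol may differ

def encode_frame(prev, curr, threshold=THRESHOLD):
    """Emit (pixel_index, new_intensity) only where the change exceeds the cutoff."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr))
            if abs(c - p) > threshold]

def decode_frame(prev, updates):
    """Cloud-side reproduction: apply the received updates to the previous frame."""
    frame = list(prev)
    for i, value in updates:
        frame[i] = value
    return frame

prev = [10, 200, 50, 50]   # previous frame, flattened pixel intensities
curr = [12, 100, 50, 90]   # current frame
stream = encode_frame(prev, curr)
print(stream)                      # → [(1, 100), (3, 90)]
print(decode_frame(prev, stream))  # → [10, 100, 50, 90]
```

Here only two of four pixels are transmitted; unchanged or near-unchanged pixels are carried over from the previous frame on the cloud side.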
Experimental results of the two case studies show that the proposed IoT-based approach
gives better results than traditional techniques in terms of both energy efficiency and
quality of the video, and can therefore enable sensor nodes in IoT to perform more
operations under energy constraints.
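The first case study's interleaved color/gray-scale transmission can likewise be sketched. For brevity each frame is a single (r, g, b) pixel, the gray-frame count is an assumed parameter, and a simple luminance-scaling colorization stands in for whatever method the thesis actually uses.

```python
N_GRAY = 4  # assumed number of gray-scale frames per color key frame (hypothetical)

def encode_stream(frames, n_gray=N_GRAY):
    """Send a full (r, g, b) frame only every n_gray + 1 frames; otherwise
    transmit a single BT.601 luminance value instead of three channels."""
    for i, (r, g, b) in enumerate(frames):
        if i % (n_gray + 1) == 0:
            yield ('color', (r, g, b))
        else:
            yield ('gray', round(0.299 * r + 0.587 * g + 0.114 * b))

def decode_stream(stream):
    """Colorize each gray-scale frame by scaling the last color key frame's
    channels to match the received luminance (a simple stand-in scheme)."""
    last_color = None
    for kind, payload in stream:
        if kind == 'color':
            last_color = payload
            yield payload
        else:
            r, g, b = last_color
            luma = 0.299 * r + 0.587 * g + 0.114 * b
            scale = payload / luma if luma else 0.0
            yield tuple(round(c * scale) for c in (r, g, b))
```

For a static scene the decoder reproduces the key frame's colors almost exactly from one transmitted value per frame; a real system would apply the same idea per pixel across full frames.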
Edge AI for Internet of Energy: Challenges and Perspectives
The digital landscape of the Internet of Energy (IoE) is on the brink of a
revolutionary transformation with the integration of edge Artificial
Intelligence (AI). This comprehensive review elucidates the promise and
potential that edge AI holds for reshaping the IoE ecosystem. Commencing with a
meticulously curated research methodology, the article delves into the myriad
of edge AI techniques specifically tailored for IoE. The myriad benefits,
spanning from reduced latency and real-time analytics to the pivotal aspects of
information security, scalability, and cost-efficiency, underscore the
indispensability of edge AI in modern IoE frameworks. As the narrative
progresses, readers are acquainted with pragmatic applications and techniques,
highlighting on-device computation, secure private inference methods, and the
avant-garde paradigms of AI training on the edge. A critical analysis follows,
offering a deep dive into the present challenges including security concerns,
computational hurdles, and standardization issues. However, as the horizon of
technology ever expands, the review culminates in a forward-looking
perspective, envisaging the future symbiosis of 5G networks, federated edge AI,
deep reinforcement learning, and more, painting a vibrant panorama of what the
future holds. For anyone invested in the domains of IoE and AI, this review
offers both a foundation and a visionary lens, bridging present realities
with future possibilities.
Serving Deep Learning Model in Relational Databases
Serving deep learning (DL) models on relational data has become a critical
requirement across diverse commercial and scientific domains, sparking growing
interest recently. In this visionary paper, we embark on a comprehensive
exploration of representative architectures to address the requirement. We
highlight three pivotal paradigms: the state-of-the-art DL-Centric architecture
offloads DL computations to dedicated DL frameworks; the potential UDF-Centric
architecture encapsulates one or more tensor computations into User Defined
Functions (UDFs) within the database system; and the potential Relation-Centric
architecture aims to represent a large-scale tensor computation through
relational operators. While each of these architectures
demonstrates promise in specific use scenarios, we identify urgent requirements
for seamless integration of these architectures and the middle ground between
these architectures. We delve into the gaps that impede the integration and
explore innovative strategies to close them. We present a pathway to establish
a novel database system for enabling a broad class of data-intensive DL
inference applications.
Comment: Authors are ordered alphabetically; Jia Zou is the corresponding
author.
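As a minimal illustration of the UDF-Centric idea described above, a model can be registered as a SQL function so inference runs inside the query. Here a made-up linear scorer inside SQLite stands in for a real DL model and tensor runtime; the table, column names, and weights are all hypothetical.

```python
import sqlite3

# Hypothetical linear model standing in for a DL scorer; weights are invented.
WEIGHTS, BIAS = (0.4, -0.2), 0.1

def predict(x1, x2):
    """Score one row; a real UDF-Centric system would invoke a tensor runtime."""
    return WEIGHTS[0] * x1 + WEIGHTS[1] * x2 + BIAS

conn = sqlite3.connect(":memory:")
conn.create_function("predict", 2, predict)  # register the model as a scalar UDF
conn.execute("CREATE TABLE readings (x1 REAL, x2 REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", [(1.0, 2.0), (3.0, 1.0)])

# Inference is now expressed as an ordinary relational query:
for row in conn.execute("SELECT x1, x2, predict(x1, x2) FROM readings"):
    print(row)
```

The Relation-Centric paradigm would go one step further and express the tensor computation itself (e.g. the matrix multiply) with relational operators instead of an opaque UDF.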