Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges
If the last decade viewed computational services as a utility, then surely
this decade has transformed computation into a commodity. Computation
is now progressively integrated into physical networks in
a seamless way that enables cyber-physical systems (CPS) and the
Internet of Things (IoT) to meet their latency requirements. Similar to
the concepts of "platform as a service" and "software as a service", both
cloudlets and fog computing have found their own use cases. Edge
devices (that we call end or user devices for disambiguation) play the
role of personal computers, dedicated to a user and to a set of correlated
applications. In this new scenario, the boundaries between
the network node, the sensor, and the actuator are blurring, driven
primarily by the computation power of IoT nodes such as single-board
computers and smartphones. The larger volumes of data generated in these
networks need clever, scalable, and possibly decentralized
computing solutions that can scale independently as required. Any
node can be seen as part of a graph, with the capacity to serve as a
computing or network router node, or both. Complex applications can
possibly be distributed over this graph or network of nodes to improve
overall performance, such as the amount of data processed over time.
In this paper, we identify this new computing paradigm, which we call
Social Dispersed Computing, analyzing key themes in it that include
a new outlook on its relation to agent-based applications. We architect
this new paradigm by providing supportive application examples that
include next generation electrical energy distribution networks, next
generation mobility services for transportation, and applications for
distributed analysis and identification of non-recurring traffic congestion
in cities. The paper analyzes the existing computing paradigms
(e.g., cloud, fog, edge, mobile edge, social, etc.), solving the ambiguity
of their definitions; and analyzes and discusses the relevant foundational
software technologies, the remaining challenges, and research
opportunities.
Garcia Valls, MS.; Dubey, A.; Botti, V. (2018). Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges. Journal of Systems Architecture. 91:83-102. https://doi.org/10.1016/j.sysarc.2018.05.007
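The abstract's view that any node in the graph may serve as a computing node, a network router, or both, and that application components can be distributed over this graph, can be illustrated with a toy placement sketch. All node names, roles, and the greedy least-loaded policy below are illustrative assumptions, not the paper's method:

```python
# Toy model of a node graph where each node can take the "compute"
# and/or "router" role; application components are placed greedily
# on the least-loaded node capable of the required role.
# Names and the placement policy are illustrative assumptions.
nodes = {
    "phone":   {"roles": {"compute"}, "load": 0.0},
    "sbc":     {"roles": {"compute", "router"}, "load": 0.0},
    "gateway": {"roles": {"router"}, "load": 0.0},
}

def place(component_cost, required_role="compute"):
    # Pick the least-loaded node that can serve the required role.
    capable = [n for n, info in nodes.items() if required_role in info["roles"]]
    best = min(capable, key=lambda n: nodes[n]["load"])
    nodes[best]["load"] += component_cost
    return best

placements = [place(cost) for cost in (0.4, 0.3, 0.2)]
print(placements)  # → ['phone', 'sbc', 'sbc'] under this toy policy
```

A real dispersed-computing scheduler would also account for link latency and data locality between nodes, which this sketch ignores.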
Scatter-gather based approach in scaling complex event processing systems for stateful operators
With the introduction of the Internet of Things (IoT), scalable Complex Event
Processing (CEP) and stream processing on memory-, CPU-, and bandwidth-constrained
infrastructure have become essential. While several related works focus on
replication of CEP engines to enhance scalability, they do not provide the expected
performance when scaling stateful queries over event streams that do not have predefined
partitions. Most CEP systems provide scalability for stateless queries or
for stateful queries where the event streams can be partitioned based on one or
more event attributes. Such systems can only scale up to the predefined number of
partitions, limiting the number of events they can process. Meanwhile, some CEP
systems do not support cloud-native and microservice features such as startup times in
milliseconds.
In this research, we address the scalability of CEP systems for stateful
operators such as windows, joins, and patterns by scaling data processing nodes and
connecting them as a directed acyclic graph. This enables us to scale the processing
and working memory using a scatter-and-gather based approach. We tested the
proposed technique by implementing it with a set of Siddhi CEP engines running on
Docker containers managed by the Kubernetes container orchestration system. The tests
were carried out at a fixed data rate, on uniform-capacity nodes, to understand the
processing capacity of the deployment. As we scaled the nodes, in all cases the
proposed system scaled almost linearly, producing zero errors for
patterns, 0.1% errors for windows, and 6.6% errors for joins. By reordering events, the
error rates of window and join queries were reduced to 0.03% and 1%, at the cost of
54 ms and 260 ms of added delay, respectively.
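The scatter-gather idea for a stateful operator can be sketched in miniature: events without predefined partitions are scattered round-robin to workers, each worker keeps only a partial aggregate, and a gather step merges the partials into the final windowed result. The class and names below are a minimal illustration, not the paper's Siddhi/Kubernetes implementation:

```python
from collections import defaultdict

class ScatterGatherWindowSum:
    """Minimal sketch of scatter-gather scaling for a stateful
    tumbling-window sum. Structure and names are illustrative,
    not taken from the paper's implementation."""

    def __init__(self, num_workers, window_size):
        self.num_workers = num_workers
        self.window_size = window_size
        # Each worker holds partial state: window_id -> [sum, count]
        self.partials = [defaultdict(lambda: [0, 0]) for _ in range(num_workers)]
        self.next_worker = 0  # round-robin scatter: no key-based partitioning

    def scatter(self, timestamp, value):
        # Spread events round-robin so working memory grows across
        # workers rather than on a single node.
        window_id = timestamp // self.window_size
        state = self.partials[self.next_worker][window_id]
        state[0] += value
        state[1] += 1
        self.next_worker = (self.next_worker + 1) % self.num_workers

    def gather(self, window_id):
        # Merge the workers' partial aggregates into the final result.
        total = sum(p[window_id][0] for p in self.partials)
        count = sum(p[window_id][1] for p in self.partials)
        return total, count

sg = ScatterGatherWindowSum(num_workers=3, window_size=10)
for t, v in [(0, 5), (2, 7), (4, 1), (11, 3)]:
    sg.scatter(t, v)
print(sg.gather(0))  # window [0, 10): (13, 3)
print(sg.gather(1))  # window [10, 20): (3, 1)
```

Because a sum decomposes into mergeable partials, gather stays cheap; operators like joins need the gather node to do real matching work, which is consistent with the higher error and delay the abstract reports for joins.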
Siddhi-CEP - high performance complex event processing engine
Complex Event Processing (CEP) is one of the most rapidly emerging fields in data processing. Processing
high volumes of events to derive higher-level events is a vital part of several business applications, including
business activity monitoring, financial transaction pattern analysis, and raw RFID feed filtering. The task of
CEP is to identify meaningful patterns, relationships, and data abstractions among unrelated events, and to
fire an immediate response such as an alert message.
In this paper, we address the need for a scalable, generic complex event processing engine, designed
with a focus on high performance to process events efficiently, with the added advantage of a
permissive open-source license. The implementation and design of its different features have been carried out
along with testing and profiling, in order to be certain about the performance Siddhi CEP can provide.
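The core CEP task described above, matching a pattern over an event stream and firing an alert, can be sketched in a few lines. This is a hypothetical toy matcher, not Siddhi's actual API (Siddhi queries are written declaratively and compiled by the engine):

```python
def detect_pattern(events, first_cond, second_cond):
    """Toy CEP pattern matcher: fire an alert when an event matching
    `first_cond` is later followed by one matching `second_cond`.
    Illustrative only; engines like Siddhi compile declarative
    queries into optimized matching structures."""
    waiting = False
    alerts = []
    for event in events:
        if not waiting and first_cond(event):
            waiting = True           # first half of the pattern seen
        elif waiting and second_cond(event):
            alerts.append(event)     # pattern completed: fire the alert
            waiting = False
    return alerts

# Stream of (sensor, value) readings: alert when a high temperature
# reading is eventually followed by a pressure spike.
stream = [("temp", 80), ("temp", 120), ("pressure", 5), ("pressure", 30)]
alerts = detect_pattern(
    stream,
    first_cond=lambda e: e[0] == "temp" and e[1] > 100,
    second_cond=lambda e: e[0] == "pressure" and e[1] > 20,
)
print(alerts)  # → [('pressure', 30)]
```

This single pass over the stream, holding only a small amount of match state, reflects why CEP engines can react with immediate responses rather than batch queries.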