MOBANA: A distributed stream-based information system for public transit
Abstract:
Public transit generates a wide range of diverse data, including static data and high-velocity data streams from sensors. Integrating and processing this big real-time data is a challenge in developing analytical systems for public transit. We propose MOBANA (MOBility ANAlyzer), a distributed stream-based system that provides real-time information to a wide range of users for monitoring and analyzing the performance of public transit. To do so, MOBANA integrates the diverse data sources of public transit and converts them into standard, exchangeable data formats. To manage such diverse data, we propose a layered architecture in which each layer handles a specific kind of data. MOBANA is designed to be efficient: e.g., it identifies the real-time position of vehicles by adjusting planned positions with real-time data only as needed, thus reducing network load. MOBANA is implemented with a Distributed Stream Processing Engine (DSPE) and a Distributed Messaging System (DMS), which provide scalable, efficient, and reliable real-time processing and analytics. MOBANA was deployed as a pilot in Pavia and tested with real data.
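The position-estimation idea described above can be sketched as follows. This is a hedged illustration, not MOBANA's actual implementation: the class name, route representation (kilometers along the route), and threshold are all assumptions made for the example. The point is that a GPS update only needs to be re-broadcast when it deviates enough from the planned position, which reduces network load.

```python
from dataclasses import dataclass, field

@dataclass
class VehicleTracker:
    """Estimates a vehicle's position from the planned timetable and
    corrects it with real-time data only when the deviation is large,
    so most GPS updates never need to be pushed over the network."""
    planned_positions: dict          # timestamp -> planned km along route
    threshold_km: float = 0.5        # re-broadcast only beyond this deviation
    offset_km: float = 0.0           # last broadcast deviation from plan

    def estimate(self, t):
        # Planned position adjusted by the last known deviation.
        return self.planned_positions[t] + self.offset_km

    def ingest_gps(self, t, observed_km):
        """Return True if this update changes the estimate enough that it
        should be re-broadcast to subscribers; otherwise it is dropped."""
        deviation = observed_km - self.planned_positions[t]
        if abs(deviation - self.offset_km) > self.threshold_km:
            self.offset_km = deviation
            return True
        return False
```

For a vehicle planned at km 1.2 at t=60, an observation at km 1.3 is close enough to the current estimate to be dropped, while an observation at km 2.0 triggers a re-broadcast and updates the estimate.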
Real-Time Data Processing With Lambda Architecture
Data has evolved immensely in recent years in type, volume, and velocity, and several frameworks exist to handle big data applications. This project focuses on the Lambda Architecture proposed by Marz and its application to real-time data processing. The architecture unites the benefits of batch and stream processing: data can be processed historically with high precision and involved algorithms, without loss of short-term information, alerts, and insights. The Lambda Architecture serves a wide range of use cases and workloads while withstanding hardware and human errors. Its layered design promotes loose coupling and flexibility in the system, a significant benefit that allows understanding the trade-offs and applying various tools and technologies across the layers. Improvements in the underlying tools have advanced the approach to building the LA. The project demonstrates a simplified architecture for the LA that is maintainable.
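The three layers of the Lambda Architecture described above can be sketched in a few lines. This is a minimal illustration under an assumed word-count workload; the names (master_dataset, batch_view, speed_view) are conventional but the code is not from the project. The key property it shows is that a query merges the precisely recomputed batch view with the incremental real-time view, so recent data is never lost.

```python
from collections import Counter

master_dataset = ["a b", "a c"]          # immutable, append-only record log
batch_view = Counter()                   # periodically recomputed from scratch
speed_view = Counter()                   # incremental view over recent data

def run_batch_layer():
    """Batch layer: recompute the batch view over the full dataset with
    arbitrarily involved logic (here, a simple word count)."""
    batch_view.clear()
    for record in master_dataset:
        batch_view.update(record.split())
    speed_view.clear()                   # recent data is now in the batch view

def ingest(record):
    """New data lands in both the master dataset and the speed layer."""
    master_dataset.append(record)
    speed_view.update(record.split())

def query(word):
    """Serving layer: merge the batch and real-time views."""
    return batch_view[word] + speed_view[word]
```

Because the master dataset is append-only and the batch view is recomputed from it, a bug or hardware fault in either processing layer can be corrected by recomputation, which is the fault-tolerance property the abstract refers to.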
A Configurable Transport Layer for CAF
The message-driven nature of actors lays a foundation for developing scalable
and distributed software. While the actor itself has been thoroughly modeled,
the message passing layer lacks a common definition. Properties and guarantees
of message exchange often shift with implementations and contexts. This adds
complexity to the development process, limits portability, and removes
transparency from distributed actor systems.
In this work, we examine actor communication, focusing on the implementation
and runtime costs of reliable and ordered delivery. Both guarantees are often
based on TCP for remote messaging, which mixes network transport with the
semantics of messaging. However, the choice of transport may follow different
constraints and is often governed by deployment. As a first step towards
re-architecting actor-to-actor communication, we decouple the messaging
guarantees from the transport protocol. We validate our approach by redesigning
the network stack of the C++ Actor Framework (CAF) so that an arbitrary
transport protocol can be combined with additional functions for remote
messaging. An evaluation quantifies the cost of composability and the
impact of individual layers on the entire stack.
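The decoupling described above can be illustrated with a minimal sketch. This is not CAF's actual API (CAF is C++; the class names here are invented for the example): it shows only the principle that an ordering guarantee can be layered over any datagram-like transport via sequence numbers, instead of being inherited from TCP.

```python
class LossyTransport:
    """Stand-in for a datagram transport that gives no ordering
    guarantee; reordering is simulated by delivering newest-first."""
    def __init__(self):
        self.inbox = []

    def send(self, packet):
        self.inbox.insert(0, packet)     # newest packet jumps the queue

    def receive_all(self):
        pkts, self.inbox = self.inbox, []
        return pkts

class OrderingLayer:
    """Adds ordered delivery on top of an arbitrary transport by
    tagging messages with sequence numbers, independent of the
    transport's own behavior."""
    def __init__(self, transport):
        self.transport = transport
        self.next_seq = 0

    def send(self, msg):
        self.transport.send((self.next_seq, msg))
        self.next_seq += 1

    def receive_all(self):
        # Re-establish sender order regardless of delivery order.
        return [m for _, m in sorted(self.transport.receive_all())]
```

Because the ordering logic lives in its own layer, the underlying transport could be swapped (e.g., for one that also drops packets and needs a retransmission layer) without touching the messaging semantics, which is the composability the abstract evaluates.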
Architecture for Analysis of Streaming Data
While several attempts have been made to construct a scalable and flexible
architecture for analysis of streaming data, no general model to tackle this
task exists. Thus, our goal is to build a scalable and maintainable
architecture for performing analytics on streaming data.
To reach this goal, we introduce a 7-layered architecture consisting of
microservices and publish-subscribe software. Our study shows that this
architecture yields a good balance between scalability and maintainability due
to high cohesion and low coupling of the solution, as well as asynchronous
communication between the layers.
This architecture can help practitioners improve their analytics
solutions. It is also of interest to academics, as it is a building block
for a general architecture for processing streaming data.
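The low coupling between layers that the abstract attributes to publish-subscribe communication can be sketched with a minimal in-process broker. The broker and topic names here are illustrative assumptions, not the study's implementation; the point is that layers share only topic names, never direct references to each other.

```python
from collections import defaultdict, deque

class Broker:
    """Minimal topic-based publish-subscribe hub. Publishing layers and
    subscribing layers are decoupled: each knows only the topic name."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> subscriber queues

    def subscribe(self, topic):
        queue = deque()
        self.subscribers[topic].append(queue)
        return queue

    def publish(self, topic, event):
        # Fan out asynchronously: the publisher does not block on, or
        # even know about, the consumers.
        for queue in self.subscribers[topic]:
            queue.append(event)

broker = Broker()
# E.g., an aggregation layer subscribes to raw events from ingestion.
raw_events = broker.subscribe("raw-events")
broker.publish("raw-events", {"sensor": 7, "value": 3.2})
```

A new layer (say, anomaly detection) can be added by subscribing to the same topic, without modifying the ingestion layer at all, which is how this style of architecture earns its maintainability.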
Minimum Distortion Variance Concatenated Block Codes for Embedded Source Transmission
Some state-of-the-art multimedia source encoders produce embedded source
bit streams such that, upon reliable reception of only a fraction of the
total bit stream, the decoder is able to reconstruct the source up to a
basic quality. Reliable reception of later source bits gradually improves
the reconstruction quality. Examples include the scalable extensions of
H.264/AVC and progressive image coders such as JPEG2000. To provide
efficient protection for embedded source bit streams, a concatenated
block coding scheme using a minimum mean distortion criterion was
considered in the past. Although the original design was shown to achieve
better mean distortion characteristics than previous studies, the
proposed coding structure led to dramatic quality fluctuations. In this
paper, a modification of the original design is first
presented and then the second order statistics of the distortion is taken into
account in the optimization. More specifically, an extension scheme is proposed
using a minimum distortion variance optimization criterion. This robust system
design is tested for an image transmission scenario. Numerical results show
that the proposed extension achieves significantly lower variance than the
original design, while showing similar mean distortion performance using both
convolutional codes and low density parity check codes.
Comment: 6 pages, 4 figures, in Proc. of the International Conference on
Computing, Networking and Communications, ICNC 2014, Hawaii, US.
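The difference between the two design criteria can be illustrated numerically. This is not the paper's optimization algorithm; the candidate designs and their distortion outcomes are hypothetical numbers chosen only to show why a minimum-variance criterion selects a design with stable quality even when its mean distortion is slightly worse.

```python
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical distortion outcomes over equally likely channel states.
# The first design has the lower mean but fluctuates dramatically; the
# second trades a little mean distortion for stable quality.
candidates = {
    "min-mean design": [1.0, 9.0, 1.0, 9.0],   # mean 5.0, high variance
    "min-var design":  [5.0, 5.5, 5.0, 5.5],   # mean 5.25, low variance
}

best = min(candidates, key=lambda name: variance(candidates[name]))
```

Under a minimum distortion variance criterion, the second design wins despite its slightly higher mean, mirroring the abstract's finding of significantly lower variance at similar mean distortion.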