Deterministic method of data sequence processing
A data management system can be separated out as a distinct component of typical data processing systems. Unfortunately, relational data management systems are not efficient enough to handle on-line signal processing tasks in a monitoring system. The main current of research into database management system models for monitoring systems centres on the data stream model. However, these systems are non-deterministic. This paper presents methods of data stream processing developed for signal processing tasks in medical database management systems, as well as theorems of a data sequence (stream) algebra with formal proofs. A direct link between some of the introduced operators and the Beatty and Fraenkel theorems has been proved.
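The Beatty connection mentioned above rests on Beatty's theorem: for irrational r, s > 1 with 1/r + 1/s = 1, the sequences ⌊k·r⌋ and ⌊k·s⌋ partition the positive integers. A minimal sketch illustrating this with the golden ratio (the choice of r = φ here is an illustration, not taken from the paper):

```python
import math

def beatty(r, n):
    """First n terms of the Beatty sequence floor(k*r), k = 1..n."""
    return [math.floor(k * r) for k in range(1, n + 1)]

# Golden ratio phi is irrational and satisfies 1/phi + 1/phi**2 = 1,
# so the two Beatty sequences below partition the positive integers.
phi = (1 + math.sqrt(5)) / 2
a = set(beatty(phi, 50))
b = set(beatty(phi ** 2, 30))

# Disjoint, and together they cover an initial segment of the integers.
assert a.isdisjoint(b)
assert set(range(1, 50)) <= (a | b)
```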
Real-time Scheduling for Data Stream Management Systems
Quality-aware management of data streams is gaining more and more importance as the amount of data produced by streams grows continuously. The resources required for data stream processing depend on different factors and are limited by the environment of the data stream management system (DSMS). Thus, with a potentially unbounded amount of stream data and limited processing resources, some of the data stream processing tasks (originating from different users) may not be satisfactorily answered, and therefore users should be enabled to negotiate a certain quality for the execution of their stream processing tasks. After the negotiation process, it is the responsibility of the Data Stream Management System to meet the quality constraints by using adequate resource reservation and scheduling techniques. Within this paper, we consider different aspects of real-time scheduling for operations within a DSMS. We propose a scheduling concept which enables us to meet certain time-dependent quality-of-service requirements for user-given processing tasks. Furthermore, we describe the implementation of our scheduling concept within a real-time capable data stream management system, and we give experimental results for it.
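The paper's concrete scheduling algorithm is not spelled out in the abstract; a common baseline for deadline-driven operator scheduling of this kind is earliest-deadline-first (EDF). The sketch below is a hypothetical illustration of that idea, with task names, costs, and deadlines invented for the example:

```python
import heapq

class Task:
    """A stream operator task with a (hypothetical) deadline and run cost."""
    def __init__(self, name, deadline, cost):
        self.name, self.deadline, self.cost = name, deadline, cost
    def __lt__(self, other):          # the heap orders tasks by deadline
        return self.deadline < other.deadline

def edf_schedule(tasks):
    """Run tasks earliest-deadline-first; flag whether each met its deadline."""
    heap = list(tasks)
    heapq.heapify(heap)
    clock, order = 0, []
    while heap:
        t = heapq.heappop(heap)
        clock += t.cost
        order.append((t.name, clock <= t.deadline))
    return order

print(edf_schedule([Task("join", 10, 4), Task("filter", 3, 2), Task("agg", 8, 3)]))
# [('filter', True), ('agg', True), ('join', True)]
```

With these costs all three operators meet their deadlines; tightening any deadline below the accumulated clock value flips its flag to False.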
Temporal Stream Algebra
Data stream management systems (DSMS) have so far focused on event queries and hardly consider combined queries over both data from event streams and data from a database. However, applications like emergency management require combined data stream and database queries. Further requirements are the simultaneous use of multiple timestamps with different time lines and semantics, expressive temporal relations between multiple timestamps, and flexible negation, grouping and aggregation which can be controlled, i.e. started and stopped, by events and are not limited to fixed-size time windows. Current DSMS hardly address these requirements. This article proposes Temporal Stream Algebra (TSA) to meet the aforementioned requirements. Temporal streams are a common abstraction of data streams and database relations; the operators of TSA are generalizations of the usual operators of Relational Algebra. An in-depth analysis of temporal relations guarantees that valid TSA expressions are non-blocking, i.e. can be evaluated incrementally. In this respect TSA differs significantly from previous algebraic approaches, which use specialized operators to prevent blocking expressions on a "syntactical" level.
Integrating database and data stream systems
Traditionally, database systems are viewed as passive data storage: finite data sets are stored and retrieved when needed. But applications such as sensor networks, network monitoring, and retail transactions produce infinite data sets. A new class of system, the Data Stream Management System (DSMS), is under research and development to deal with such infinite data sets. In a DSMS, a data stream is a continuous source of sequential data. In object-oriented languages like C++ and Java, the concept of a stream already exists: a stream is viewed as a channel into which data is inserted at one end and retrieved from the other. To the database world, the stream is a relatively new concept. In a DSMS, data is processed on-line; by its very nature, the data fed to an application through a data stream can be lost, as it is never stored. This makes a data stream non-persistent. Database systems, by contrast, are persistent, which is the basis of my hypothesis: a Data Stream Management System and a Database System can be combined under the same concepts, and a data stream can be made persistent. In this project, I have used an embedded database as middleware to cache the data that is fed to an application through a data stream. The embedded database is directly linked to the application that requires access to the stored data and is faster than a conventional database management system. Storing the streaming data in an embedded database makes the data stream persistent. In the system developed, the embedded database also stores the history of data from the database system, so any query run against the embedded database generates a combined result from data streams and database systems. An application has been developed, using the Active Collection Framework as a test bed, to prove the concept.
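The caching idea above can be sketched in a few lines. SQLite stands in for the embedded database here (the project's actual embedded database and schema are not named in the abstract, so both are assumptions):

```python
import sqlite3

# SQLite as a stand-in embedded database, linked directly into the
# application process; table name and columns are illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE stream_cache (ts INTEGER, sensor TEXT, value REAL)")

def on_stream_tuple(ts, sensor, value):
    """Persist each stream tuple, making the otherwise transient stream durable."""
    db.execute("INSERT INTO stream_cache VALUES (?, ?, ?)", (ts, sensor, value))

# Simulate a few tuples arriving on the stream.
for i, v in enumerate([20.5, 21.0, 19.8]):
    on_stream_tuple(i, "temp0", v)

# Queries against the cache now see the streamed data as ordinary rows,
# alongside whatever historical data the application has loaded into it.
rows = db.execute("SELECT COUNT(*), AVG(value) FROM stream_cache").fetchone()
print(rows)
```

An in-process connection like this avoids the round-trip cost of a conventional client-server DBMS, which is the speed argument the abstract makes for an embedded database.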
Using Dedicated and Opportunistic Networks in Synergy for a Cost-effective Distributed Stream Processing Platform
This paper presents a case for exploiting the synergy of dedicated and opportunistic network resources in a distributed hosting platform for data stream processing applications. Our previous studies have demonstrated the benefits of combining dedicated reliable resources with opportunistic resources in the case of high-throughput computing applications, where timely allocation of the processing units is the primary concern. Since distributed stream processing applications demand a large volume of data transmission between the processing sites at a consistent rate, adequate control over the network resources is important here to assure a steady flow of processing. In this paper, we propose a system model for a hybrid hosting platform in which stream processing servers installed at distributed sites are interconnected with a combination of dedicated links and the public Internet. Decentralized algorithms have been developed for allocation of the two classes of network resources among the competing tasks, with an objective towards higher task throughput and better utilization of expensive dedicated resources. Results from an extensive simulation study show that, with proper management, systems exploiting the synergy of dedicated and opportunistic resources yield considerably higher task throughput and thus a higher return on investment than systems solely using expensive dedicated resources.
VStorm: Video Traffic Management By Distributed Data Stream Processing Systems
Recently published work has shown that Quality of Experience (QoE) has become one of the major concerns in the area of large-scale multimedia Internet services. Due to the significant increase in video traffic and the continuously growing demand for video quality of experience, the need for a management platform specifically designed for multimedia traffic can no longer be ignored. Recent studies have proposed advanced solutions that can potentially burden the video streaming framework. On the other hand, we have also witnessed the emergence of large-scale distributed stream processing systems. These systems provide real-time results for nearly all types of data streams and computation at massive scale. In this project, we explore the possibility of deploying distributed stream processing (DSP) systems in a large-scale multimedia network with dynamically changing Internet resources. In this paper, we use a popular distributed stream processing system, Apache Storm, to implement multiple frameworks proposed in recently published work. Simulation results on our frameworks show that implementing complex stream control strategies in DSPs can be efficient and flexible.
ASIC implemented MicroBlaze-based Coprocessor for Data Stream Management Systems
Indiana University-Purdue University Indianapolis (IUPUI). The drastic increase in Internet usage demands the processing of data in real time with higher efficiency than ever before. The Symbiote Coprocessor Unit (SCU), developed by Dr. Pranav Vaidya, is a hardware accelerator with the potential to provide a data processing speedup of up to 150x compared with traditional data stream processors. However, the SCU implementation is very complex and fixed, and uses an outdated host interface, which limits future improvement. Mr. Tareq S. Alqaisi, an MSECE graduate from IUPUI, worked on curbing these limitations. In his architecture, he used a Xilinx MicroBlaze microcontroller to reduce the complexity of the SCU, along with a few other modifications. The objective of this study is to make the SCU suitable for mass production while reducing its power consumption and delay. To accomplish this, the execution unit of the SCU has been implemented in an application-specific integrated circuit, and modules such as the ACG/OCG, sequential comparator, and D-word multiplier/divider are integrated into the design. Furthermore, techniques such as operand isolation, buffer insertion, cell swapping, and cell resizing are also applied. As a result, the new design attains 67.9435 µW of dynamic power, compared to 74.0012 µW before power optimization, along with a small increase in static power, and a clock period of 39.47 ns as opposed to 52.26 ns before timing optimization.
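The reported figures translate into the following relative improvements, computed directly from the numbers in the abstract:

```python
# Figures quoted in the abstract.
dyn_before, dyn_after = 74.0012, 67.9435   # dynamic power, microwatts
clk_before, clk_after = 52.26, 39.47       # clock period, nanoseconds

power_saving = (dyn_before - dyn_after) / dyn_before
speedup = clk_before / clk_after           # ratio of achievable clock rates

print(f"dynamic power reduced by {power_saving:.1%}")   # ~8.2%
print(f"clock period shortened by {speedup:.2f}x")      # ~1.32x
```

So the optimizations buy roughly an 8% dynamic-power reduction together with about a 32% higher maximum clock rate, at the cost of the small static-power increase the abstract notes.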