A Survey on the Evolution of Stream Processing Systems
Stream processing has been an active research field for more than 20 years,
but it is now witnessing its prime time due to recent successful efforts by the
research community and numerous worldwide open-source communities. This survey
provides a comprehensive overview of fundamental aspects of stream processing
systems and their evolution in the functional areas of out-of-order data
management, state management, fault tolerance, high availability, load
management, elasticity, and reconfiguration. We review noteworthy past research
findings, outline the similarities and differences between early ('00-'10) and
modern ('11-'18) streaming systems, and discuss recent trends and open
problems.
Comment: 34 pages, 15 figures, 5 tables
Online failure prediction in air traffic control systems
This thesis introduces a novel approach to online failure prediction for mission-critical distributed systems whose distinctive features are that it is black-box, non-intrusive, and online. The approach combines Complex Event Processing (CEP) and Hidden Markov Models (HMM) to analyze symptoms of failures that may occur in the form of anomalous conditions of performance metrics identified for this purpose. The thesis presents an architecture named CASPER, based on CEP and HMM, that relies solely on information sniffed from the communication network of a mission-critical system to predict anomalies that can lead to software failures. An instance of CASPER has been implemented, trained, and tuned to monitor a real Air Traffic Control (ATC) system developed by Selex ES, a Finmeccanica company. An extensive experimental evaluation of CASPER is presented. The results show (i) a very low percentage of false positives under both normal and stress conditions, and (ii) a sufficiently long failure prediction time that allows the system to apply appropriate recovery procedures.
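The CEP-plus-HMM idea above can be illustrated with a minimal sketch: a discrete HMM scores a window of metric observations (normal vs. anomalous "symptom" readings), and a low likelihood under a healthy-leaning model flags a possible impending failure. All parameters and traces below are made-up illustrations, not CASPER's trained model.

```python
import math

def hmm_log_likelihood(obs, init, trans, emit):
    """Scaled forward algorithm: log P(obs | model) for a discrete HMM."""
    n = len(init)
    # alpha[i] holds the (rescaled) forward probability of being in state i
    alpha = [init[i] * emit[i][obs[0]] for i in range(n)]
    log_p = 0.0
    for t in range(1, len(obs) + 1):
        s = sum(alpha)
        log_p += math.log(s)
        alpha = [a / s for a in alpha]
        if t < len(obs):
            alpha = [sum(alpha[j] * trans[j][i] for j in range(n)) * emit[i][obs[t]]
                     for i in range(n)]
    return log_p

# Illustrative model (made-up numbers, not CASPER's trained parameters).
# Hidden states: 0 = healthy, 1 = degrading.
# Observations:  0 = normal metric reading, 1 = anomalous reading (a symptom).
init  = [0.95, 0.05]
trans = [[0.9, 0.1],
         [0.2, 0.8]]
emit  = [[0.9, 0.1],   # healthy mostly emits normal readings
         [0.3, 0.7]]   # degrading mostly emits symptoms

normal_trace  = [0, 0, 0, 1, 0, 0, 0, 0]
symptom_trace = [0, 1, 1, 1, 0, 1, 1, 1]

# A symptom-heavy window scores lower than a normal one; a monitor could
# threshold this score to raise a failure-prediction alert.
score_ok  = hmm_log_likelihood(normal_trace, init, trans, emit)
score_bad = hmm_log_likelihood(symptom_trace, init, trans, emit)
```

In a real deployment the observation symbols would come from CEP rules over sniffed network traffic, and the model parameters from a training phase on labeled traces.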
Building Efficient and Cost-Effective Cloud-Based Big Data Management Systems
In today’s big data world, data is being produced in massive volumes, at great velocity
and from a variety of different sources such as mobile devices, sensors, a plethora
of small devices hooked to the internet (Internet of Things), social networks, communication
networks and many others. Interactive querying and large-scale analytics are being
increasingly used to derive value out of this big data. A large portion of this data is
stored and processed in the Cloud due to the several advantages it provides, such
as scalability, elasticity, availability, low cost of ownership, and overall economies
of scale. There is thus a growing need for large-scale cloud-based data management
systems that can support real-time ingestion, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics
can grow linearly with the time and resources required. Reducing the cost of data analytics
in the Cloud thus remains a primary challenge. In my dissertation research, I have
focused on building efficient and cost-effective cloud-based data management systems for
different application domains that are predominant in cloud computing environments.
In the first part of my dissertation, I address the problem of reducing the cost of
transactional workloads on relational databases to support database-as-a-service in the
Cloud. The primary challenges in supporting such workloads include choosing how to
partition the data across a large number of machines, minimizing the number of distributed
transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction
processing system that uses workload-aware data placement and replication to minimize
the number of distributed transactions. SWORD incorporates a suite of novel techniques
that significantly reduce the overheads incurred both during the initial placement of data
and during query execution at runtime.
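The core placement idea can be sketched with a toy greedy heuristic (purely illustrative, not SWORD's actual algorithm): place the items each transaction touches onto the same partition where possible, so fewer transactions span machines.

```python
# Toy workload-aware placement sketch (illustrative, not SWORD's algorithm):
# co-locate items touched by the same transaction to cut distributed txns.
from collections import defaultdict

def greedy_place(transactions, n_parts, capacity):
    """Greedily assign each item to the partition already holding most of
    its co-accessed items, subject to a simple capacity limit."""
    placement, load = {}, [0] * n_parts
    for txn in transactions:            # each txn is a set of item ids
        votes = defaultdict(int)
        for item in txn:                # count where this txn's items already live
            if item in placement:
                votes[placement[item]] += 1
        target = max(range(n_parts), key=lambda p: (votes[p], -load[p]))
        for item in txn:
            if item in placement:
                continue
            if load[target] < capacity:
                placement[item] = target
            else:                       # spill to the least-loaded partition
                placement[item] = min(range(n_parts), key=lambda p: load[p])
            load[placement[item]] += 1
    return placement

def distributed_count(transactions, placement):
    """Number of transactions whose items span more than one partition."""
    return sum(1 for txn in transactions
               if len({placement[i] for i in txn}) > 1)

txns = [{1, 2}, {2, 3}, {4, 5}]
pl = greedy_place(txns, n_parts=2, capacity=3)
```

SWORD itself additionally handles replication, availability, and incremental repartitioning; this sketch only conveys the co-location objective.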
In the second part of my dissertation, I focus on sampling-based progressive analytics
as a means to reduce the cost of data analytics in the relational domain. Sampling has
been traditionally used by data scientists to get progressive answers to complex analytical
tasks over large volumes of data. Typically, this involves manually extracting samples
of increasing data size (progressive samples) for exploratory querying. This provides the
data scientists with user control, repeatable semantics, and result provenance. However,
such solutions result in tedious workflows that preclude the reuse of work across samples.
On the other hand, existing approximate query processing systems report early results,
but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive
data-parallel computation framework, NOW!, that provides support for progressive
analytics over big data. In particular, NOW! enables progressive relational (SQL) query
support in the Cloud using unique progress semantics that allow efficient and deterministic
query processing over samples, providing meaningful early results and provenance
to data scientists. NOW! delivers early results using significantly fewer
resources, thereby substantially reducing the cost incurred during such analytics.
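The flavor of deterministic progressive answers can be sketched as follows: if the sample order is fixed up front, each progress point yields a repeatable early estimate that refines toward the exact answer. This is only a toy illustration of the idea, not NOW!'s query processor.

```python
# Toy sketch of progressive aggregation with repeatable early results
# (illustrative only; NOW! implements this for relational queries).
import random

def progressive_mean(data, checkpoints, seed=42):
    """Return (rows_seen, running_mean) at each checkpoint.

    A fixed seed fixes the sample order, so reruns produce identical
    early answers: a simple stand-in for deterministic progress semantics.
    """
    rng = random.Random(seed)
    order = data[:]
    rng.shuffle(order)                  # one fixed progressive sample order
    results, total = [], 0.0
    for i, x in enumerate(order, 1):
        total += x
        if i in checkpoints:
            results.append((i, total / i))   # early estimate at this point
    return results

estimates = progressive_mean(list(range(100)), {10, 50, 100})
```

At the final checkpoint the estimate equals the exact mean; earlier checkpoints trade accuracy for far less work, which is the cost-saving lever described above.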
Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics
on large-scale graph-structured data in the Cloud. The system is based on the
key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in
the graph; examples include ego network analysis, motif counting in biological networks,
finding social circles in social networks, personalized recommendations, link prediction,
etc. These tasks are not well served by existing vertex-centric graph processing frameworks,
whose computation and execution models limit the user program to directly accessing
the state of a single vertex, resulting in high execution overheads. Further, the lack of
support for extracting the relevant portions of the graph that are of interest to an analysis
task and loading them into distributed memory leads to poor scalability. NSCALE allows
users to write programs at the level of neighborhoods or subgraphs rather than at the level
of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient
distributed execution of these neighborhood-centric complex analysis tasks over large-scale
graphs, while minimizing resource consumption and communication cost, thereby
substantially reducing the overall cost of graph data analytics in the Cloud.
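The neighborhood-centric programming model can be illustrated with a small sketch: the user-level task operates on an extracted ego network (a vertex plus its 1-hop neighborhood subgraph) rather than on a single vertex's state. Function names here are illustrative, not NSCALE's API.

```python
# Illustrative neighborhood-centric task (names are hypothetical, not
# NSCALE's API): count triangles at a vertex via its ego network.

def ego_network(adj, v):
    """Vertices and edges of v's 1-hop neighborhood subgraph."""
    nodes = {v} | adj[v]
    edges = {(a, b) for a in nodes for b in adj[a]
             if b in nodes and a < b}          # undirected, deduplicated
    return nodes, edges

def triangles_at(adj, v):
    """A triangle at v is an edge between two of v's neighbors."""
    _, edges = ego_network(adj, v)
    return sum(1 for a, b in edges if a in adj[v] and b in adj[v])

# Tiny undirected graph as an adjacency map.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
```

In a vertex-centric framework this task would require multiple message-passing rounds to assemble each neighborhood; expressing it directly over extracted subgraphs is the efficiency argument made above.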
The results of our extensive experimental evaluation of these prototypes with several
real-world data sets and applications validate the effectiveness of our techniques,
which provide orders-of-magnitude reductions in the overheads of distributed data querying
and analysis in the Cloud.
Aeronautical Engineering: A special bibliography with indexes, supplement 67, February 1976
This bibliography lists 341 reports, articles, and other documents introduced into the NASA scientific and technical information system in January 1976.
Structures Division 1994 Annual Report
The NASA Lewis Research Center Structures Division is an international leader and pioneer in developing new structural analysis, life prediction, and failure analysis related to rotating machinery and, more specifically, to hot section components in air-breathing aircraft engines and spacecraft propulsion systems. The research consists of both deterministic and probabilistic methodology. Studies include, but are not limited to, high-cycle and low-cycle fatigue as well as material creep. Studies of structural failure are at both the micro- and macrolevels. Nondestructive evaluation methods related to structural reliability are developed, applied, and evaluated. Materials from which structural components are made, studied, and tested are monolithics and metal-matrix, polymer-matrix, and ceramic-matrix composites. Aeroelastic models are developed and used to determine the cyclic loading and life of fan and turbine blades. Life models are developed and tested for bearings, seals, and other mechanical components, such as magnetic suspensions. Results of these studies are published in NASA technical papers and reference publications as well as in technical society journal articles. The results of the work of the Structures Division and the bibliography of its publications for calendar year 1994 are presented.
Ono: an open platform for social robotics
In recent times, the focal point of research in robotics has shifted from industrial robots toward robots that interact with humans in an intuitive and safe manner. This evolution has resulted in the subfield of social robotics, which pertains to robots that function in a human environment and that can communicate with humans in an intuitive way, e.g. with facial expressions. Social robots have the potential to impact many different aspects of our lives, but one particularly promising application is the use of robots in therapy, such as the treatment of children with autism. Unfortunately, many of the existing social robots are neither suited for practical use in therapy nor for large-scale studies, mainly because they are expensive, one-of-a-kind robots that are hard to modify to suit a specific need. We created Ono, a social robotics platform, to tackle these issues. Ono is composed entirely of off-the-shelf components and cheap materials, and can be built at a local FabLab at a fraction of the cost of other robots. Ono is also entirely open source, and its modular design further encourages modification and reuse of parts of the platform.