11 research outputs found

    How to Measure Scalability of Distributed Stream Processing Engines?

    Scalability is promoted as a key quality feature of modern big data stream processing engines. However, even though research has made considerable efforts to provide precise definitions and corresponding metrics for the term scalability, experimental scalability evaluations and benchmarks of stream processing engines apply different and inconsistent metrics. With this paper, we aim to establish general metrics for the scalability of stream processing engines. Derived from common definitions of scalability in cloud computing, we propose two metrics: a load capacity function and a resource demand function. Both metrics relate provisioned resources to load intensities while requiring specific service level objectives to be fulfilled. We show how these metrics can be employed for scalability benchmarking and discuss their advantages in comparison to other metrics used for stream processing engines and other software systems.
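
    To make the two metrics concrete, the following sketch (in Python, with an invented meets_slos benchmark oracle and a toy capacity model standing in for real benchmark runs) shows how a load capacity function and a resource demand function could be derived empirically; the search loops, step sizes, and monotonicity assumption are illustrative, not the paper's benchmarking method.

        # Hypothetical sketch: deriving the two scalability metrics
        # empirically. meets_slos() is an invented stand-in for a real
        # benchmark run that reports whether all service level objectives
        # (SLOs) were met with `resources` instances under `load`.

        def meets_slos(resources: int, load: int) -> bool:
            # Toy capacity model: each added instance is assumed to
            # contribute slightly less usable capacity (diminishing returns).
            capacity = sum(50_000 * 0.9 ** i for i in range(resources))
            return load <= capacity

        def load_capacity(resources: int, max_load: int, step: int = 10_000) -> int:
            """Load capacity function: highest load intensity (e.g. messages/s)
            that `resources` instances handle while fulfilling all SLOs."""
            capacity = 0
            for load in range(step, max_load + 1, step):
                if not meets_slos(resources, load):
                    break  # assumes SLO fulfillment is monotone in load
                capacity = load
            return capacity

        def resource_demand(load: int, max_resources: int) -> int | None:
            """Resource demand function: fewest instances that fulfill all
            SLOs under `load`, or None if no tested amount suffices."""
            for resources in range(1, max_resources + 1):
                if meets_slos(resources, load):
                    return resources
            return None

        print(load_capacity(4, 500_000))     # -> 170000 under the toy model
        print(resource_demand(120_000, 16))  # -> 3 under the toy model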

    Prospects for Scalability: Relationships and Uncertainty in Responsive Regulation

    Ian Ayres’s and John Braithwaite’s book Responsive Regulation: Transcending the Regulatory Debate (1992) gave us many significant insights. The book has transcended its own time. At the same time, on the 20th anniversary of its publication, two things about Responsive Regulation are striking. The first is the direct, personal relationship on which the regulatory interaction is premised. The second is the boundedness and manageability of the regulatory project. At least in the prudential regulation of global financial institutions in the wake of the recent financial crisis (though surely elsewhere too), neither of these features can be taken for granted. This brief essay seeks to open a preliminary conversation about Responsive Regulation in terms of its scalability. It considers whether, as a practical matter, Responsive Regulation can be scaled up to more diffuse, multiparty, logistically complex contexts, such as financial regulation. As a matter of representation, it asks whether, by projecting outward from its focal object, the responsive relationship, Responsive Regulation distorts our image of regulation in other contexts (or even in Responsive Regulation’s own home environment). The essay closes by arguing that in order to incorporate Responsive Regulation’s considerable discursive and relational benefits into regulatory environments such as global financial regulation, it needs to be buttressed by additional regulatory technologies.

    Efficient Processing of Geospatial mHealth Data Using a Scalable Crowdsensing Platform

    Smart sensors and smartphones are becoming increasingly prevalent. Both can be used to gather environmental data (e.g., noise). Importantly, these devices can be connected to each other as well as to the Internet to collect large amounts of sensor data, which leads to many new opportunities. In particular, mobile crowdsensing techniques can be used to capture phenomena of common interest. Especially valuable insights can be gained if the collected data are additionally related to the time and place of the measurements. However, many technical solutions still use monolithic backends that are not capable of processing crowdsensing data in a flexible, efficient, and scalable manner. In this work, an architectural design was conceived with the goal of managing geospatial data in challenging crowdsensing healthcare scenarios. It is shown how the proposed approach can be used to provide users with an interactive map of environmental noise, allowing tinnitus patients and other health-conscious people to avoid locations with harmful sound levels. Technically, the approach combines cloud-native applications with Big Data and stream processing concepts. In general, the presented architectural design shall serve as a foundation for implementing practical and scalable crowdsensing platforms for various healthcare scenarios beyond the addressed use case.
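
    As a rough illustration of the per-key aggregation such a platform would perform (not the paper's actual implementation), the sketch below buckets crowdsensed noise samples into grid cells and keeps a running mean per cell, which an interactive noise map could render; the NoiseSample shape, grid resolution, and 85 dB(A) threshold are assumptions.

        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass
        class NoiseSample:
            lat: float    # WGS84 latitude of the measurement
            lon: float    # WGS84 longitude of the measurement
            db_a: float   # measured sound level in dB(A)

        def grid_cell(lat: float, lon: float, res: float = 0.01) -> tuple[int, int]:
            # Bucket coordinates into a fixed-resolution grid; a real
            # platform might use geohashes or a spatial index instead.
            return (int(lat / res), int(lon / res))

        class NoiseMap:
            """Running mean of sound levels per grid cell: the kind of
            keyed state a stream processor maintains incrementally."""
            def __init__(self) -> None:
                self.count: dict[tuple[int, int], int] = defaultdict(int)
                self.mean: dict[tuple[int, int], float] = defaultdict(float)

            def update(self, s: NoiseSample) -> None:
                cell = grid_cell(s.lat, s.lon)
                self.count[cell] += 1
                # incremental mean avoids storing every raw sample
                self.mean[cell] += (s.db_a - self.mean[cell]) / self.count[cell]

            def harmful_cells(self, threshold: float = 85.0) -> list[tuple[int, int]]:
                # Cells whose average exceeds a harmful level, e.g. to warn
                # tinnitus patients away from them on the interactive map.
                return [c for c, m in self.mean.items() if m >= threshold]

        nm = NoiseMap()
        nm.update(NoiseSample(48.7758, 9.1829, 88.0))
        nm.update(NoiseSample(48.7759, 9.1830, 90.0))
        print(nm.harmful_cells())  # both samples fall into one loud cell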

    Elastic Highly Available Cloud Computing

    High availability and elasticity are two key technical features of cloud computing services. Elasticity is a feature of cloud computing whereby the provisioning of resources is closely tied to runtime demand. High availability assures that cloud applications are resilient to failures. Existing cloud solutions focus on providing both features at the level of the virtual resource, through virtual machines, by managing their restart, addition, and removal as needed. These existing solutions tie applications to a specific design, which is not suitable for many applications, especially virtualized telecommunication applications that are required to meet carrier-grade standards. Carrier-grade applications typically rely on the underlying platform to manage their availability by monitoring heartbeats, executing recoveries, and attempting repairs to bring the system back to normal. Migrating such applications to the cloud can be particularly challenging, especially if the elasticity policies target the application only, without considering the underlying platform that contributes to its high availability (HA). In this thesis, a Network Function Virtualization (NFV) framework is introduced, and the challenges and requirements of its use in mobile networks are discussed. In particular, an architecture for the NFV framework entities in the virtual environment is proposed. In order to reduce signaling traffic congestion and achieve better performance, a criterion for bundling multiple functions of a virtualized evolved packet core in a single physical device or a group of adjacent devices is proposed. The analysis shows that the proposed grouping can reduce the network control traffic by 70 percent. Moreover, a comprehensive framework for the elasticity of highly available applications is proposed that considers both the elastic deployment of the platform and the HA placement of the application’s components. The approach is applied to an IP Multimedia Subsystem (IMS) application, demonstrating how, within a matter of seconds, the IMS application can be scaled up while maintaining its HA status.
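
    The point that elasticity must respect the application's HA placement can be pictured with a small, purely hypothetical sketch: when scaling up, a new replica of a protection group is only placed on a host that does not already run a replica of the same group (anti-affinity), so a single host failure cannot take out both the active and the standby instance. The function and group names are invented for illustration.

        from collections import defaultdict

        def place_replica(group: str,
                          placements: dict[str, set[str]],
                          hosts: list[str]) -> str:
            """Pick a host for a new replica of `group` that holds no other
            replica of the same group (anti-affinity), so one host failure
            cannot take down a whole redundant pair."""
            for host in hosts:
                if group not in placements[host]:
                    placements[host].add(group)
                    return host
            raise RuntimeError("scale-up blocked: would violate HA placement")

        placements: dict[str, set[str]] = defaultdict(set)
        hosts = ["host-a", "host-b", "host-c"]
        print(place_replica("ims-cscf", placements, hosts))  # host-a
        print(place_replica("ims-cscf", placements, hosts))  # host-b, not host-a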

    Quantifying cloud performance and dependability: Taxonomy, metric design, and emerging challenges

    In only a decade, cloud computing has emerged from a pursuit of service-driven information and communication technology (ICT) to become a significant fraction of the ICT market. Responding to the growth of the market, many alternative cloud services and their underlying systems are currently vying for the attention of cloud users and providers. To make informed choices between competing cloud service providers, permit the cost-benefit analysis of cloud-based systems, and enable system DevOps to evaluate and tune the performance of these complex ecosystems, appropriate performance metrics, benchmarks, tools, and methodologies are necessary. This requires re-examining old system properties and considering new system properties, possibly leading to the re-design of classic benchmarking metrics such as expressing performance as throughput and latency (response time). In this work, we address these requirements by focusing on four system properties: (i) elasticity of the cloud service, to accommodate large variations in the amount of service requested, (ii) performance isolation between the tenants of shared cloud systems and the resulting performance variability, (iii) availability of cloud services and systems, and (iv) the operational risk of running a production system in a cloud environment. Focusing on key metrics for each of these properties, we review the state of the art, then select or propose new metrics together with measurement approaches. We see the presented metrics as a foundation toward upcoming, future industry-standard cloud benchmarks.
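
    As an illustration of the kind of elasticity metric at stake, the sketch below computes average under- and over-provisioning over a demand/supply trace; this particular formulation is a common one from the elasticity-benchmarking literature, used here for illustration rather than taken verbatim from this paper.

        def provisioning_accuracy(demand: list[float], supply: list[float]) -> tuple[float, float]:
            """Average relative under- and over-provisioning (in percent)
            across a trace of demanded vs. supplied resource units."""
            assert len(demand) == len(supply) and demand
            under = sum(max(d - s, 0.0) / d for d, s in zip(demand, supply))
            over = sum(max(s - d, 0.0) / d for d, s in zip(demand, supply))
            n = len(demand)
            return 100.0 * under / n, 100.0 * over / n

        demand = [2, 4, 8, 8, 4, 2]    # resource units the load requires
        supply = [2, 2, 8, 10, 10, 2]  # resource units actually provisioned
        under, over = provisioning_accuracy(demand, supply)
        print(f"under={under:.1f}%  over={over:.1f}%")  # a perfectly elastic system scores 0/0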

    The Advanced Framework for Evaluating Remote Agents (AFERA): A Framework for Digital Forensic Practitioners

    Digital forensics experts need a dependable method for evaluating evidence-gathering tools. Limited research and resources challenge this process, and the lack of multi-endpoint data validation hinders reliability in distributed digital forensics. A framework was designed to evaluate distributed agent-based forensic tools while enabling practitioners to self-evaluate and demonstrate evidence reliability as required by the courts. Grounded in Design Science, the framework features guidelines, data, criteria, and checklists. Expert review enhances its quality and practicality.

    A framework for the characterization and analysis of software systems scalability

    The term scalability appears frequently in computing literature, but it is a term that is poorly defined and poorly understood. It is an important attribute of computer systems that is frequently asserted but rarely validated in any meaningful, systematic way. The lack of a consistent, uniform, and systematic treatment of scalability makes it difficult to identify and avoid scalability problems, to clearly and objectively describe the scalability of software systems, to evaluate claims of scalability, and to compare claims from different sources. This thesis provides a definition of scalability and describes a systematic framework for the characterization and analysis of software systems' scalability. The framework comprises a goal-oriented approach for describing, modeling, and reasoning about scalability requirements, and an analysis technique that captures the dependency relationships underlying typical notions of scalability. The framework is validated against a real-world data analysis system and is used to recast a number of examples taken from the computing literature and from industry in order to demonstrate its use across different application domains and system designs.
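
    One way to picture the framework's core idea, that a scalability claim relates a scaling variable to bounded quality goals over a stated range, is the hypothetical sketch below; the names, sampling strategy, and toy latency model are assumptions, not the thesis's notation.

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class ScalabilityGoal:
            """A scalability claim made checkable: over a stated range of a
            scaling variable, a dependent quality must stay within a bound."""
            scaling_variable: str            # e.g. "concurrent users"
            scaling_range: range             # e.g. range(1_000, 100_000)
            quality: str                     # e.g. "p99 latency (ms)"
            bound: float                     # e.g. 200.0
            observe: Callable[[int], float]  # measures the quality at a scale point

            def violations(self) -> list[int]:
                # Sample ~10 points across the range and report where the
                # observed quality breaks the bound, i.e. where the
                # scalability claim stops holding.
                points = list(self.scaling_range)
                sampled = points[:: max(1, len(points) // 10)]
                return [x for x in sampled if self.observe(x) > self.bound]

        goal = ScalabilityGoal("concurrent users", range(1_000, 100_000),
                               "p99 latency (ms)", 200.0,
                               observe=lambda users: 0.0025 * users)  # toy model
        print(goal.violations())  # scale points where the claim breaks down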

    Intensional Cyberforensics

    This work focuses on the application of intensional logic to cyberforensic analysis; its benefits and difficulties are compared with the finite-state-automata approach. This work extends the use of the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in which evidence is modeled by multidimensional hierarchical contexts, and proofs or disproofs of claims are undertaken in an eductive manner of evaluation. This approach is a practical, context-aware improvement over the finite-state-automata (FSA) approach seen in previous work. As a base implementation language model, we use in this approach a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic expressions. We also augment the work with credibility factors surrounding digital evidence and witness accounts, which have not been previously modeled. The Forensic Lucid programming language, used for this intensional cyberforensic analysis, is formally presented through its syntax and operational semantics. In large part, the language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, and JOOIP, bound by the underlying intensional programming paradigm.
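
    As a loose illustration of evidence modeled as hierarchical contexts with credibility factors (sketched here in Python rather than Forensic Lucid, with invented names and an invented combination rule), nested contexts carry observations whose credibilities attenuate down the hierarchy:

        from dataclasses import dataclass, field

        @dataclass
        class Context:
            """A node in a hierarchical evidential context, e.g.
            case -> machine -> log, holding observations at its level."""
            name: str
            credibility: float = 1.0  # 0.0 (worthless) .. 1.0 (certain)
            observations: dict[str, bool] = field(default_factory=dict)
            children: list["Context"] = field(default_factory=list)

            def weight_of(self, claim: str, inherited: float = 1.0) -> float:
                # Credibilities attenuate multiplicatively down the tree;
                # a claim is supported with the strength of its best witness.
                w = self.credibility * inherited
                best = w if self.observations.get(claim) else 0.0
                for child in self.children:
                    best = max(best, child.weight_of(claim, w))
                return best

        witness = Context("witness account", 0.6, {"printer was on": True})
        syslog = Context("device log", 0.9, {"printer was on": True})
        case = Context("investigation", 1.0, children=[witness, syslog])
        print(case.weight_of("printer was on"))  # 0.9: the strongest evidence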

    Quality Goal Oriented Architectural Design and Traceability for Evolvable Software Systems

    Today, software systems are frequently faced with demands for change, for example due to changing business processes or technologies. The software, and especially its architecture, has to cope with these frequent changes to remain usable in the long term. During software evolution, changes can lead to a deterioration of the structure of software architectures, called architectural erosion, which hampers or even inhibits further changes because of inconsistencies or a lack of program comprehension. To support changes and avoid erosion, quality goals such as evolvability, performance, or usability, as well as the traceability of design decisions, have to be considered during architectural design; however, this is often neglected. Existing design methods do not sufficiently support the transition from quality goals to appropriate architectural solutions because there is still a gap between requirements engineering and architectural design methods. In particular, support is lacking for the goal of evolvability and for the traceability of design decisions through explicit model dependencies. This thesis presents a new concept called the Goal Solution Scheme, which maps quality goals via architectural principles to solution instruments through explicit dependencies. It thus helps to select appropriate architectural solutions according to their influence on quality goals. The scheme is discussed especially with regard to evolvability, and it is embedded in a goal-oriented architectural design method that enhances and integrates established methods and concepts from requirements engineering as well as architectural design. This is supplemented by a traceability concept that combines a rule-based approach with information retrieval techniques, enabling the (semi-)automated establishment of traceability links with specific link types and attributes for rich semantics, with high precision and recall. The feasibility of the design approach has been evaluated in a case study of a software platform for mobile service robots. A prototype tool suite called EMFTrace was implemented as an extensible platform based on Eclipse technology to show the practicability of the thesis's concepts. It integrates design models from external CASE tools into a joint model repository by means of XML technology, applies rules for link establishment, and provides validation capabilities for rules and links.
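
    A minimal sketch of the Goal Solution Scheme's idea, explicit dependencies leading from quality goals via architectural principles to solution instruments, might look as follows; the concrete goals, principles, and instruments listed are illustrative guesses, not the thesis's catalog.

        # Hypothetical, simplified Goal Solution Scheme: explicit dependency
        # links lead from a quality goal via architectural principles to
        # concrete solution instruments, so solutions can be selected (and
        # traced back) according to the goals they serve.
        GOAL_TO_PRINCIPLES = {
            "evolvability": ["loose coupling", "information hiding"],
            "performance": ["resource pooling"],
        }
        PRINCIPLE_TO_INSTRUMENTS = {
            "loose coupling": ["message bus", "dependency injection"],
            "information hiding": ["facade pattern"],
            "resource pooling": ["connection pool"],
        }

        def solutions_for(goal: str) -> list[str]:
            """All solution instruments reachable from a quality goal."""
            return [instrument
                    for principle in GOAL_TO_PRINCIPLES.get(goal, [])
                    for instrument in PRINCIPLE_TO_INSTRUMENTS.get(principle, [])]

        print(solutions_for("evolvability"))
        # ['message bus', 'dependency injection', 'facade pattern']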