CloudHealth: A Model-Driven Approach to Watch the Health of Cloud Services
Cloud systems are complex and large systems where services provided by
different operators must coexist and eventually cooperate. In such a complex
environment, controlling the health of both the whole environment and the individual services is extremely important for reacting to misbehaviours, unexpected events, and failures in a timely and effective way. Although there are solutions to
monitor cloud systems at different granularity levels, how to relate the many
KPIs that can be collected about the health of the system and how health
information can be properly reported to operators are open questions. This
paper reports the early results we achieved in the challenge of monitoring the
health of cloud systems. In particular, we present CloudHealth, a model-based
health monitoring approach that can be used by operators to watch specific
quality attributes. The CloudHealth Monitoring Model describes how to
operationalize high level monitoring goals by dividing them into subgoals,
deriving metrics for the subgoals, and using probes to collect the metrics. We
use the CloudHealth Monitoring Model to control the probes that must be
deployed on the target system, the KPIs that are dynamically collected, and the
visualization of the data in dashboards.
Comment: 8 pages, 2 figures, 1 table
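The goal-decomposition idea described in the abstract above (high-level goals split into subgoals, subgoals mapped to metrics, metrics collected by probes) can be sketched roughly as follows. All class names, metric names, and probe identifiers below are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of a goal/subgoal/metric/probe decomposition, in the spirit
# of the CloudHealth Monitoring Model. Everything named here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    probe: str  # identifier of the probe that collects this metric

@dataclass
class Goal:
    name: str
    metrics: list[Metric] = field(default_factory=list)
    subgoals: list["Goal"] = field(default_factory=list)

    def required_probes(self) -> set[str]:
        """Collect every probe needed to monitor this goal tree."""
        probes = {m.probe for m in self.metrics}
        for sub in self.subgoals:
            probes |= sub.required_probes()
        return probes

# Example: a hypothetical "service health" goal decomposed into two subgoals.
health = Goal("service health", subgoals=[
    Goal("availability", metrics=[Metric("uptime_ratio", "heartbeat-probe")]),
    Goal("responsiveness", metrics=[Metric("p99_latency_ms", "latency-probe")]),
])
print(sorted(health.required_probes()))  # ['heartbeat-probe', 'latency-probe']
```

Walking the goal tree like this is one simple way to decide which probes must be deployed on the target system for a chosen monitoring goal.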
A framework for selecting workflow tools in the context of composite information systems
When an organization faces the need of integrating some workflow-related activities in its information system, it becomes necessary to have at hand some well-defined informational model to be used as a framework for determining the selection criteria onto which the requirements of the organization can be mapped. Some proposals exist that provide such a framework, remarkably the WfMC reference model, but they are designed to be applicable when workflow tools are selected independently from other software, and departing from a set of well-known requirements. Often this is not the case: workflow facilities are needed as a part of the procurement of a larger, composite information system and therefore the general goals of the system have to be analyzed, assigned to its individual components and further detailed. We propose in this paper the MULTSEC method, in charge of analyzing the initial goals of the system, determining the types of components that form the system architecture, building quality models for each type and then mapping the goals into detailed requirements which can be measured using quality criteria. We develop in some detail the quality model (compliant with the ISO/IEC 9126-1 quality standard) for the workflow type of tools; we show how the quality model can be used to refine and clarify the requirements in order to guarantee a highly reliable selection result; and we use it to evaluate two particular workflow solutions available in the market (kept anonymous in the paper). We develop our proposal using a particular selection experience we have recently been involved in, namely the procurement of a document management subsystem to be integrated in an academic data management information system for our university.
Peer reviewed. Postprint (author's final draft)
Microservice Architecture Reconstruction and Visualization Techniques: A Review
Microservice system solutions are driving digital transformation; however,
fundamental tools and system perspectives are missing to better observe,
understand, and manage these systems, their properties, and their dependencies.
Microservice architecture leads towards decentralization, which brings many
advantages to system operation but also challenges to development.
Microservice systems often lack a system-centric perspective that
would help engineers better cope with system evolution and quality assessment.
In this work, we explored microservice-specific architecture reconstruction
based on static analysis. Such reconstruction typically results in system
models to visualize selected system-centric perspectives. Conventional models
involve 2D methods; however, these methods are limited in utility when services
proliferate. We considered various architectural perspectives relevant to
microservices and assessed the relevancy of the traditional method, comparing
it to alternative data visualization using 3D space. As a representative of the
3D method, we considered a 3D graph model presented in augmented reality. To
begin testing the feasibility of deriving such perspectives from microservice
systems, we developed and implemented prototype tools for software architecture
reconstruction and visualization of compared perspectives. Using these
prototypes, we performed a small user study with software practitioners to
highlight the potential and limitations of these innovative visualizations
for common practitioner reasoning and tasks.
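As a rough illustration of the kind of system-centric model that static architecture reconstruction produces, the sketch below builds a toy service-dependency graph and computes fan-in per service. The service names and call edges are invented for illustration and are not taken from the study:

```python
# Toy service-dependency graph of the kind a static analyzer might extract
# from a microservice codebase. All services and edges are hypothetical.
from collections import defaultdict

calls = [  # (caller, callee) pairs found by static analysis (invented)
    ("frontend", "orders"),
    ("frontend", "users"),
    ("orders", "payments"),
    ("orders", "users"),
]

graph = defaultdict(set)
for caller, callee in calls:
    graph[caller].add(callee)

# Fan-in per service: how many distinct services depend on it -- a simple
# proxy for how hard a service is to evolve independently.
fan_in = defaultdict(int)
for callees in graph.values():
    for callee in callees:
        fan_in[callee] += 1

print(sorted(fan_in.items()))  # [('orders', 1), ('payments', 1), ('users', 2)]
```

A 2D or 3D visualization layer would then render such a graph; the point of the sketch is only the underlying system-centric model, not any particular rendering.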
Hacker Combat: A Competitive Sport from Programmatic Dueling & Cyberwarfare
Human history has included competitive activities of many different forms, and
sports have offered many benefits beyond entertainment. At the time of this
article, no competitive ecosystem exists for cyber security beyond conventional
capture-the-flag competitions and the like. This paper introduces a competitive
framework founded on computer science and hacking. The proposed competitive
landscape encompasses the ideas underlying information security, software
engineering, and cyber warfare. We also demonstrate the opportunity to rank,
score, and categorize actionable skill levels into tiers of capability.
Physiological metrics collected from participants during gameplay are analyzed.
These analyses shed light on the intricacies of competitive play and of its
analysis, which we use to build a case for an organized competitive ecosystem.
Using previous player behavior from gameplay, we also demonstrate the
generation of an artificial agent designed to play at a competitive level.
Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure
This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain virtual network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit to the highest degree all the principles applicable to a physical network (isolation, reproducibility, manageability, ...).

Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need has been recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view made of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project.

Other important aspects when defining a new architecture are the user requirements. It is crucial that the resulting architecture fits the demands that users may have. Since this deliverable has been produced at the same time as the contact process with users, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can make behave as software routers or end nodes, onto which they can download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
Postprint (published version)
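The slice concept described above, a user-visible virtual network mapped onto a logical partition of physical resources, can be illustrated with a toy placement check. All hostnames, node names, and the `validate` helper are hypothetical and not part of the FEDERICA tooling:

```python
# Toy sketch of a slice: the user sees virtual nodes and links, while the
# infrastructure maps each virtual node onto a VM on some physical host.
# Every identifier below is invented for illustration.

slice_request = {
    "nodes": ["vr1", "vr2", "end-host"],            # virtual routers / end nodes
    "links": [("vr1", "vr2"), ("vr2", "end-host")],
}

# A hypothetical placement of virtual nodes onto physical points of presence.
placement = {
    "vr1": "pop-a.example",
    "vr2": "pop-b.example",
    "end-host": "pop-b.example",
}

def validate(request, placement):
    """Every virtual node must be placed; links may only join placed nodes."""
    placed = set(placement)
    assert set(request["nodes"]) <= placed, "unplaced virtual node"
    for a, b in request["links"]:
        assert a in placed and b in placed, "link endpoint not placed"
    return True

print(validate(slice_request, placement))  # True
```

The isolation, reproducibility, and manageability principles mentioned in the abstract would be enforced at the layer that realizes this mapping, which the sketch deliberately leaves out.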
A Quality Model for Actionable Analytics in Rapid Software Development
Background: Accessing relevant data on the product, process, and usage
perspectives of software as well as integrating and analyzing such data is
crucial for getting reliable and timely actionable insights aimed at
continuously managing software quality in Rapid Software Development (RSD). In
this context, several software analytics tools have been developed in recent
years. However, there is a lack of explainable software analytics that software
practitioners trust. Aims: We aimed to create a quality model (called
Q-Rapids quality model) for actionable analytics in RSD, implementing it, and
evaluating its understandability and relevance. Method: We performed workshops
at four companies in order to determine relevant metrics as well as product and
process factors. We also elicited how these metrics and factors are used and
interpreted by practitioners when making decisions in RSD. We specified the
Q-Rapids quality model by comparing and integrating the results of the four
workshops. Then we implemented the Q-Rapids tool to support the usage of the
Q-Rapids quality model as well as the gathering, integration, and analysis of
the required data. Afterwards we installed the Q-Rapids tool in the four
companies and performed semi-structured interviews with eight product owners to
evaluate the understandability and relevance of the Q-Rapids quality model.
Results: The participants of the evaluation perceived the metrics as well as
the product and process factors of the Q-Rapids quality model as
understandable. Also, they considered the Q-Rapids quality model relevant for
identifying product and process deficiencies (e.g., blocking code situations).
Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model
enables detecting problems that take more time to find manually and adds
transparency among the perspectives of system, process, and usage.
Comment: This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online.
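As a generic illustration of how a quality model can aggregate raw metrics into higher-level product or process factors (the actual Q-Rapids model is specified in the paper; the metric names and weights below are invented), one common scheme is a weighted average of normalized metrics:

```python
# Illustrative metric-to-factor aggregation in the spirit of, but not copied
# from, the Q-Rapids quality model. Metric names and weights are invented.

def factor_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalized metric values, each assumed in [0, 1]."""
    total = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total

code_quality = factor_score(
    metrics={"test_coverage": 0.8, "duplication_ok": 0.6, "complexity_ok": 0.9},
    weights={"test_coverage": 0.5, "duplication_ok": 0.2, "complexity_ok": 0.3},
)
print(round(code_quality, 2))  # 0.5*0.8 + 0.2*0.6 + 0.3*0.9 = 0.79
```

Thresholds on such factor scores are what make the analytics actionable, e.g. flagging a product for review when a factor drops below an agreed value.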