13 research outputs found

    Agile Testing for Blockchain Development QA

    Agile testing has evolved into a commonly employed practice across most development disciplines. It has been around as long as the Agile Manifesto and has developed all the hallmarks of a mature set of practices: tools, metrics, techniques, and so on. But its overlap with blockchain has yet to reach the maturity of either field. QA for blockchain development has not been standardized in the way that QA for web development and other areas of software development has, including newer ones such as cloud-native development. Agile testing leans heavily on automation, Artificial Intelligence (AI), and Machine Learning (ML), and can benefit from collective or separate advances in these three technologies. But these technologies, regardless of their influence on blockchain development and its QA, cannot by themselves bridge the two. Blockchain development QA suffers from a significant lack of standardization and of a unified set of good practices, which hinders its ability to adopt agile testing practices into the existing paradigm. However, as blockchain development is adopted by agile teams and its QA becomes more standardized, we may see more overlap between agile testing and blockchain development QA.

    Multi-layered simulations at the heart of workflow enactment on clouds

    Scientific workflow systems face new challenges when supporting Cloud computing, as information on the state of the underlying infrastructures is much less detailed than before. Thus, organising virtual infrastructures in a way that not only supports workflow execution but also optimises it for several service-level objectives (e.g. maximum energy consumption limit, cost, reliability, availability) becomes reliant on good Cloud modelling and prediction information. While simulators have successfully aided research on such workflow management systems, the currently available Cloud-related simulation toolkits suffer from several issues (e.g. scalability and narrow scope) that hinder their applicability. To address these issues, this article introduces techniques for unifying two existing simulation toolkits, first analysing the problems with the current simulators and then illustrating the problems faced by workflow systems. We use for this purpose the example of the ASKALON environment, a scientific workflow composition and execution tool for cloud and grid environments. We illustrate the advantages of a workflow system with a directly integrated simulation back-end, and show that the unification of the selected simulators does not affect the overall workflow execution simulation performance. Copyright © 2015 John Wiley & Sons, Ltd.
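The core service such a simulation back-end provides can be sketched, in highly simplified form, as a discrete-event simulation that dispatches workflow tasks to virtual machines as their dependencies complete. This is an illustrative toy model only: the identical-VM assumption, the earliest-free-VM policy, and all names below are assumptions of this sketch, not features of ASKALON or the unified toolkits.

```python
import heapq

def simulate_workflow(tasks, deps, runtimes, num_vms):
    """Toy discrete-event simulation of a workflow on identical VMs.

    tasks: list of task ids; deps: dict task -> set of prerequisite tasks;
    runtimes: dict task -> execution time; num_vms: number of identical VMs.
    Returns the simulated makespan.
    """
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    ready = [t for t in tasks if not remaining[t]]
    launched = set(ready)
    vm_free = [0.0] * num_vms          # earliest free time per VM
    events = []                        # (finish_time, task) min-heap
    finish = {}
    clock = 0.0
    while ready or events:
        # launch every ready task on the earliest-free VM
        while ready:
            t = ready.pop()
            vm = min(range(num_vms), key=vm_free.__getitem__)
            start = max(vm_free[vm], clock)
            vm_free[vm] = start + runtimes[t]
            heapq.heappush(events, (vm_free[vm], t))
        # advance to the next completion and release its successors
        clock, done = heapq.heappop(events)
        finish[done] = clock
        for t in tasks:
            if t not in launched and done in remaining[t]:
                remaining[t].discard(done)
                if not remaining[t]:
                    ready.append(t)
                    launched.add(t)
    return max(finish.values())
```

A prediction component in a real toolkit would replace the fixed `runtimes` with modelled estimates; that is exactly where the article's "good Cloud modelling and prediction information" would plug in.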

    Journal of Telecommunications and Information Technology, 2018, nr 1

    This paper proposes a fuzzy Manhattan distance-based similarity for gang formation of resources (FMDSGR) method with priority task scheduling in cloud computing. The proposed work decides which processor is to execute the current task in order to achieve efficient resource utilization and effective task scheduling. FMDSGR groups resources into gangs based on the similarity of their characteristics so that the resources can be used effectively. Tasks are then scheduled by priority within a gang of processors using gang-based priority scheduling (GPS). This mainly reduces the cost of deciding which processor is to execute the current task. Performance has been evaluated in terms of makespan, scheduling length ratio, speedup, efficiency and load balancing. The CloudSim toolkit is used for simulation and for demonstrating experimental results in cloud computing environments.
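The paper's exact fuzzy membership functions are not reproduced here, but the general idea — forming gangs by a similarity derived from the Manhattan distance between resource characteristic vectors, then dispatching tasks by priority within a gang — can be sketched as follows. The similarity threshold, the assumption that traits are pre-scaled to [0, 1], and the least-loaded dispatch rule are illustrative choices of this sketch, not details from the paper.

```python
def manhattan_similarity(a, b):
    # Traits are assumed pre-scaled to [0, 1], so each per-trait distance
    # is in [0, 1] and the mean distance maps to a similarity in [0, 1].
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def form_gangs(resources, threshold=0.8):
    """Group resources into gangs: a resource joins the first gang whose
    representative (first member) it is sufficiently similar to."""
    gangs = []
    for name, traits in resources.items():
        for gang in gangs:
            if manhattan_similarity(traits, gang[0][1]) >= threshold:
                gang.append((name, traits))
                break
        else:
            gangs.append([(name, traits)])
    return gangs

def schedule(tasks, gang):
    """Priority scheduling within one gang: tasks are (id, priority, length);
    higher-priority tasks are dispatched first, each to the least-loaded
    processor. Returns the dispatch plan and the gang's makespan."""
    loads = {name: 0.0 for name, _ in gang}
    plan = []
    for tid, _prio, length in sorted(tasks, key=lambda t: -t[1]):
        proc = min(loads, key=loads.get)
        loads[proc] += length
        plan.append((tid, proc))
    return plan, max(loads.values())
```

Restricting the dispatch decision to one gang (rather than all processors) is what shrinks the per-task scheduling cost the abstract refers to.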

    From security to assurance in the cloud: a survey

    The cloud computing paradigm has become a mainstream solution for the deployment of business processes and applications. In the public cloud vision, infrastructure, platform, and software services are provisioned to tenants (i.e., customers and service providers) on a pay-as-you-go basis. Cloud tenants can use cloud resources at lower prices, and with higher performance and flexibility, than traditional on-premises resources, without having to care about infrastructure management. Still, cloud tenants remain concerned with the cloud's level of service and the nonfunctional properties their applications can count on. In the last few years, the research community has been focusing on the nonfunctional aspects of the cloud paradigm, among which cloud security stands out. Several approaches to security have been described and summarized in general surveys on cloud security techniques. The survey in this article focuses on the interface between cloud security and cloud security assurance. First, we provide an overview of the state of the art on cloud security. Then, we introduce the notion of cloud security assurance and analyze its growing impact on cloud security approaches. Finally, we present some recommendations for the development of next-generation cloud security and assurance solutions.

    Técnicas de altas prestaciones aplicadas al diseño de infraestructuras ferroviarias complejas

    In this work we focus on the design of overhead air switches. The design of railway infrastructure is an important problem in the railway world: non-optimal designs cause limitations on train speed and, more importantly, malfunctions and breakages. Most railway companies have regulations for the design of these elements. Those regulations have been defined by experience, but, as far as we know, there are no software tools that assist with the task of designing and testing optimal solutions for overhead switches. The aim of this thesis is the design, implementation, and evaluation of a simulator that facilitates the exploration of the entire solution space, looking for the set of optimal solutions in the shortest time and at the lowest possible cost. Simulators are frequently used in the world of rail infrastructure, but many of them only simulate scenarios predefined by the users, analyzing whether or not the proposed design is feasible. Throughout this thesis, we propose a framework for a complete simulator able to propose, simulate and evaluate multiple solutions. This framework is based on four pillars: a compromise between simulation accuracy and complexity; automatic generation of possible solutions (automatic exploration of the solution space); consideration of all the actors involved in the design process (standards, additional restrictions, etc.); and, finally, the expert's knowledge and the integration of optimization metrics. Once the framework is defined, several deployment proposals are presented. The first runs on a single node, launching one thread per CPU available in the system; the whole simulator is designed around this parallelism paradigm. The second is designed to be deployed on a cluster with several nodes, using MPI for that purpose. Finally, a cloud computing-based deployment is employed, provisioning more or fewer virtual machines according to the needs of the scenario to be simulated. After implementing each of the approaches, we evaluate the performance of each of them, comparing time and cost. Two examples of real scenarios are used.
    Official Doctoral Programme in Computer Science and Technology. Committee — President: José Daniel García Sánchez; Secretary: Antonio García Dopico; Member: Juan Carlos Díaz Martí
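The single-node, one-worker-per-CPU exploration described above can be sketched as follows. The two-parameter design space and the cost function are purely illustrative stand-ins for the physical simulation of one overhead-switch design; a thread pool is used here for simplicity, whereas a real compute-bound simulator would use processes or MPI ranks to get true parallelism.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product
import os

def evaluate(design):
    # Hypothetical cost function standing in for the simulation of one
    # overhead-switch design (span, height); lower is better.
    span, height = design
    return (span - 3.0) ** 2 + (height - 1.5) ** 2

def explore(spans, heights):
    # Automatic generation of the solution space as a Cartesian product,
    # evaluated with one worker per available CPU.
    candidates = list(product(spans, heights))
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        costs = list(pool.map(evaluate, candidates))
    cost, design = min(zip(costs, candidates))
    return design, cost
```

The cluster variant would partition `candidates` across MPI ranks and reduce the per-rank minima, and the cloud variant would size the pool of virtual machines to the number of candidates.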

    Aallon tutkimustietosäilö: Tutkimustiedon hallinta, julkaisu ja jakaminen tutkimustietokeskeisen tieteen ajassa

    All fields of science are becoming data intensive. The decreasing price of computing and the evolution of data collection methods have created novel research opportunities. This new data-intensive paradigm puts a new kind of premium on research data, since it forms, more than ever before, the lifeblood of research. As a result, the demand for publishing research data has increased from both funding bodies and the research community. These factors combined present novel challenges for research data management and publication. This thesis sheds light on the current status of research data management, sharing and publishing. The primary contribution of the thesis is an examination of existing technical solutions to these research data challenges; in addition, requirements for successful research data solutions are proposed. The secondary contribution is a study of the cultural atmosphere surrounding research data management, sharing and publishing. Technical solutions for the three research data challenges were found mainly within the open source community. Platforms such as Dataverse, Invenio, the Hydra Project and CKAN support sharing and publishing data, while tools such as iRODS can be used to manage research data. These solutions serve their purpose, but there is no good integrated solution that would address all three research data challenges. The lack of holistic solutions, combined with the lack of culture and knowledge around research data management, results in limited research data publishing and sharing. Future work should, in addition to building an integrated solution for sharing, publishing and managing research data, aim to make the culture around research data management more open.

    Automating Industrial Event Stream Analytics: Methods, Models, and Tools

    Industrial event streams are an important cornerstone of Industrial Internet of Things (IIoT) applications. For instance, in the manufacturing domain, such streams are typically produced by distributed industrial assets at high frequency on the shop floor. To add business value and extract the full potential of the data (e.g. through predictive quality assessment or maintenance), industrial event stream analytics is an essential building block. One major challenge is the distribution of the required technical and domain knowledge across several roles, which makes the realization of analytics projects time-consuming and error-prone. For instance, accessing industrial data sources requires a high level of technical skill due to the large heterogeneity of protocols and formats. To reduce the technical overhead of current approaches, several problems must be addressed. The goal is to enable so-called "citizen technologists" to evaluate event streams through a self-service approach. This requires new methods and models that cover the entire data analytics cycle. This thesis answers the research question of how citizen technologists can be enabled to independently perform industrial event stream analytics. The first step is to investigate how the technical complexity of modeling and connecting industrial data sources can be reduced. Subsequently, it is analyzed how event streams can be automatically adapted, directly at the edge, to meet the requirements of data consumers and the infrastructure. Finally, this thesis examines how machine learning models for industrial event streams can be trained in an automated way to evaluate previously integrated data. The main research contributions of this work are: 1. a semantics-based adapter model to describe industrial data sources and to automatically generate adapter instances on edge nodes; 2. an extension for publish-subscribe systems that dynamically reduces event streams while considering the requirements of downstream algorithms; 3. a novel AutoML approach that enables citizen data scientists to train and deploy supervised ML models for industrial event streams. The developed approaches are fully implemented in various high-quality software artifacts, which have been integrated into a large open-source project, enabling rapid adoption of the novel concepts in real-world environments. For the evaluation, two user studies investigating usability were performed, as well as performance and accuracy tests of the individual components.
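The second contribution — dynamically reducing event streams at the edge according to the requirements of downstream algorithms — can be illustrated with a minimal, hypothetical reducer (not the thesis artifact) that combines a downstream rate limit with a change-tolerance filter:

```python
class EdgeReducer:
    """Illustrative edge-side stream reducer: forward an event only if the
    downstream consumer's minimum update interval has elapsed, or the value
    changed beyond the consumer's tolerance. Parameters are assumptions of
    this sketch, not the thesis's extension."""

    def __init__(self, min_interval, tolerance):
        self.min_interval = min_interval  # seconds the consumer can tolerate between updates
        self.tolerance = tolerance        # value change the consumer must not miss
        self.last_time = None
        self.last_value = None

    def offer(self, timestamp, value):
        if (self.last_time is None
                or timestamp - self.last_time >= self.min_interval
                or abs(value - self.last_value) > self.tolerance):
            self.last_time, self.last_value = timestamp, value
            return True   # forward downstream
        return False      # drop at the edge
```

In a publish-subscribe setting, each subscription would carry its own `min_interval` and `tolerance` requirements, and the broker-side extension would instantiate one such reducer per subscriber.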