11 research outputs found

    Development of monitoring tools for web systems (Verkkojärjestelmien seurantatyökalujen kehittäminen)

    With the increasing number of software services provided to users and the more demanding requirements placed on them, monitoring of services is becoming increasingly important. Web service monitoring is the process of confirming system functionality by studying its various attributes, such as availability, reliability, and performance. Monitoring services helps software developers, maintainers, and owners, as it allows for increased reliability and robustness and enables performance analysis. This thesis focuses on web service monitoring and the tools used for it. Its specific goals are to identify the different categories that monitoring services fall into and to showcase a custom web service monitoring tool and its further development. The subject is important to the case company LogiNets, which has specific monitoring requirements that need to be fulfilled. These goals were pursued by reviewing the literature on different types of monitoring tools and then conducting a case study of monitoring tool development. The case study concerned adding new functionality to LogiNets's in-house web service monitoring tool, Agent. The literature review succeeded in identifying categories of monitoring tools both by their location relative to the monitored service and by the quality-of-service requirements they fulfill. The review did not, however, uncover significant research on existing commercial monitoring tools, and thus provided little help for the case study. The case study was more successful: the new functionality was added, and similar extensions are planned for the future.
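    The availability and latency checks such a tool performs can be pictured with a short probe. Below is a minimal sketch, assuming a hypothetical health-check URL and latency threshold; it is not LogiNets's Agent, only an illustration of an external monitoring check of the kind the review categorizes.

```python
# Minimal external availability/latency probe, illustrating the kind of
# check a web service monitoring tool performs. The target URL and the
# latency threshold are illustrative assumptions, not details of Agent.
import time
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 5.0, slow_ms: float = 1000.0) -> dict:
    """Issue one HTTP GET and classify the result."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.URLError as exc:
        return {"url": url, "up": False, "error": str(exc.reason)}
    elapsed_ms = (time.monotonic() - start) * 1000.0
    return {
        "url": url,
        "up": 200 <= status < 400,
        "status": status,
        "latency_ms": round(elapsed_ms, 1),
        "slow": elapsed_ms > slow_ms,
    }

if __name__ == "__main__":
    print(probe("https://example.com/health"))
```

    A probe like this runs outside the monitored service, corresponding to the location-based category of tools the review distinguishes from in-process monitoring.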

    Analysis of technical implementations of security processes for cloud computing services

    Development of an automated log-analysis system for detecting anomalies and security threats in a computer system // Bachelor's qualification thesis // Taras Volodymyrovych Mykytiuk // Ternopil Ivan Puluj National Technical University, Faculty of Computer Information Systems and Software Engineering, Department of Cybersecurity, group SB-41 // Ternopil, 2023 // 52 pages, 25 figures, 3 listings.

    The cloud computing paradigm has become the primary solution for deploying business processes and applications. In the public cloud vision, infrastructure, platform, and software services are provided to tenants (i.e., customers and service providers) on a pay-per-use basis. Cloud tenants can use cloud resources at lower prices and with higher performance and flexibility than traditional on-premises resources, without having to worry about infrastructure management. Nevertheless, cloud tenants remain concerned about cloud service levels and the non-functional properties their applications can count on. In recent years, the research community has focused on the non-functional aspects of the cloud paradigm, among which cloud security stands out. The research in this thesis focuses on the interface between cloud security and cloud security assurance processes. First, an overview of the state of the art in cloud security is provided. Then, the notion of cloud security assurance is introduced and its growing impact analyzed. The thesis concludes with a number of recommendations regarding security when using cloud computing.

    Contents: Introduction; Chapter 1, Analysis of the problem of requirements formation in distributed teams (selection criteria; identifying security characteristics of cloud computing; chapter conclusions); Chapter 2, Analysis of publications according to the classification (vulnerabilities, threats, and attacks at the application, client-to-client, and provider-client/client-provider levels; cloud service security: encryption, signatures, access control, authentication, trusted computing, IDS/IPS, and a summary of cloud security techniques; security assurance: testing, monitoring, attestation, cloud audit/compliance, service-level agreements (SLA), and a summary of assurance methods; summary of the literature review); Chapter 3, Life safety and occupational health fundamentals (occupational safety and its relevance in IT; harmful effects of noise and vibration and protection against them); Conclusion; References.

    Security Policy Monitoring of BPMN-based Service Compositions

    Service composition is a key concept of service-oriented architecture that allows loosely coupled services offered and operated by different service providers to be combined. Such environments are expected to respond dynamically to changes that may occur at runtime, including changes in the environment and in the individual services themselves. It is therefore crucial to monitor these loosely coupled services throughout their lifetime. In this paper, we present a novel framework for monitoring services at runtime and ensuring that they behave as promised. In particular, we focus on monitoring non-functional properties that are specified within an agreed security contract. The novelty of our work lies in the way monitoring information from multiple dynamic services can be combined to automate the monitoring of business processes and proactively report compliance violations. The framework enables monitoring of both atomic and composite services and provides a user-friendly interface for specifying the monitoring policy. We present an information-service case study using a real composite service to demonstrate how compliance monitoring is achieved. The automatic transformation of the security policy into monitoring rules makes our framework more flexible and accurate than existing techniques.
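    To make the policy-to-rules idea concrete, here is a minimal sketch of rule-based compliance monitoring over an event stream. The Event and Rule shapes and the encryption rule are illustrative assumptions, not the paper's BPMN machinery or its contract language.

```python
# Simplified sketch of policy-driven compliance monitoring over events
# emitted by composed services. The rule and event formats are assumptions
# for illustration; the paper derives its rules from a security contract.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    service: str      # which constituent service emitted the event
    name: str         # e.g. "invoke", "reply"
    attributes: dict  # observed non-functional properties

@dataclass
class Rule:
    description: str
    applies_to: Callable[[Event], bool]
    check: Callable[[Event], bool]

def monitor(events, rules):
    """Yield a violation report for every rule an event breaks."""
    for event in events:
        for rule in rules:
            if rule.applies_to(event) and not rule.check(event):
                yield f"VIOLATION [{event.service}]: {rule.description}"

rules = [
    Rule("messages must be encrypted",
         applies_to=lambda e: e.name == "invoke",
         check=lambda e: e.attributes.get("encrypted", False)),
]
trace = [Event("payment", "invoke", {"encrypted": False})]
for report in monitor(trace, rules):
    print(report)
```

    Because each rule only inspects single events, rules derived for different constituent services can be pooled into one rule set, which is the gist of combining monitoring information across a composition.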

    From security to assurance in the cloud: a survey

    The cloud computing paradigm has become a mainstream solution for the deployment of business processes and applications. In the public cloud vision, infrastructure, platform, and software services are provisioned to tenants (i.e., customers and service providers) on a pay-as-you-go basis. Cloud tenants can use cloud resources at lower prices, and with higher performance and flexibility, than traditional on-premises resources, without having to care about infrastructure management. Still, cloud tenants remain concerned with the cloud's level of service and the nonfunctional properties their applications can count on. In the last few years, the research community has been focusing on the nonfunctional aspects of the cloud paradigm, among which cloud security stands out. Several approaches to security have been described and summarized in general surveys on cloud security techniques. The survey in this article focuses on the interface between cloud security and cloud security assurance. First, we provide an overview of the state of the art on cloud security. Then, we introduce the notion of cloud security assurance and analyze its growing impact on cloud security approaches. Finally, we present some recommendations for the development of next-generation cloud security and assurance solutions.

    SLA Establishment Decisions: Minimizing the Risk of SLA Violations

    This thesis presents an approach for service providers to select an SLA portfolio that minimizes the risk of SLA violations, subject to constraints on expected profit and available resources. The problem is addressed by applying decision theory and risk measures, in particular by adapting Harry Markowitz's concept of portfolio selection and the semi-variance. To capture a decision maker's attitude towards risk, utility theory and the concept of risk aversion are used.
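    As a sketch of the underlying risk measure: the semi-variance penalizes only downside deviations of portfolio profit, which is why it suits violation risk better than plain variance. The notation below is assumed for illustration and follows the thesis only in spirit.

```latex
% Semi-variance of portfolio profit X relative to its expected value,
% penalizing only downside deviations (notation assumed for illustration):
\[
  \mathrm{SV}(X) \;=\; \mathbb{E}\!\left[\big(\min\{X - \mathbb{E}[X],\, 0\}\big)^{2}\right]
\]
% SLA portfolio selection as a constrained minimization over candidate
% portfolios P, with profit floor p_min and resource capacity c:
\[
  \min_{P}\; \mathrm{SV}\!\left(\textstyle\sum_{i \in P} X_i\right)
  \quad \text{s.t.} \quad
  \mathbb{E}\!\left[\textstyle\sum_{i \in P} X_i\right] \ge p_{\min},
  \qquad
  \textstyle\sum_{i \in P} r_i \le c
\]
```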

    Multiparty session types for dynamic verification of distributed systems

    In large-scale distributed systems, each application is realised through interactions among distributed components. To guarantee safe communication (no deadlocks or communication mismatches) we need programming languages and tools that structure, manage, and policy-check these interactions. Multiparty session types (MPST), a typing discipline for structured interactions between communicating processes, offer a promising approach. To date, however, session type applications have been limited to static verification, which is not always feasible and is often restrictive in terms of the programming API and the policies that can be specified. This thesis investigates the design and implementation of a runtime verification framework that ensures conformance between programs and specifications. Specifications are written in Scribble, a protocol description language formally founded on MPST. The central idea of the approach is a dynamic monitor, which takes the form of a communicating finite state machine automatically generated from Scribble specifications, together with a communication runtime stipulating a message format. We extend and apply Scribble-based runtime verification in several ways. First, we implement a Python library equipped with session primitives and a verification runtime, and integrate it into a large cyber-infrastructure project for oceanography. Second, we examine multiple communication patterns, which reveal and motivate two novel extensions: asynchronous interrupts for the verification of exception-handling behaviours, and time constraints for the enforcement of real-time protocols. Third, we apply the verification framework to actor programming by augmenting an actor library in Python with protocol annotations. For both implementations, measurements show that Scribble-based dynamic checking incurs minimal overhead while allowing expressive specifications. Finally, we explore a static analysis of Scribble specifications to efficiently compute a safe global state from which a monitored system of interacting processes can be recovered after a failure, and we provide an implementation of a verification framework for recovery in Erlang. Benchmarks show that our recovery strategy outperforms Erlang's built-in static recovery strategy on a number of use cases.
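    The dynamic-monitor idea can be pictured in a few lines: a finite state machine rejects any message the protocol does not permit in the current state. The toy request/response protocol below is invented for illustration, not generated from Scribble.

```python
# Sketch of a dynamic monitor as a finite state machine, in the spirit of
# the Scribble-generated monitors described above. The protocol (a
# request/response exchange with an error branch) is an invented example.
class ProtocolViolation(Exception):
    pass

class FSMMonitor:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # (state, label) -> next state
        self.state = start
        self.accepting = accepting

    def step(self, label):
        """Check one observed message against the protocol."""
        key = (self.state, label)
        if key not in self.transitions:
            raise ProtocolViolation(
                f"unexpected '{label}' in state {self.state}")
        self.state = self.transitions[key]

    def at_end(self):
        return self.state in self.accepting

monitor = FSMMonitor(
    transitions={(0, "request"): 1, (1, "response"): 2, (1, "error"): 2},
    start=0,
    accepting={2},
)
for msg in ["request", "response"]:
    monitor.step(msg)   # raises ProtocolViolation on any deviation
assert monitor.at_end()
```

    Interposing such a monitor between a process and the network is what lets the framework check conformance of unmodified programs at runtime.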

    Enhancing coverage adequacy of service compositions after runtime adaptation

    Runtime monitoring (or monitoring for short) is a key quality assurance technique for self-adaptive service compositions. Monitoring passively observes the runtime behaviour of service compositions. Coverage criteria are extensively used for assessing the adequacy (or thoroughness) of software testing: they specify requirements that software tests must satisfy. The importance of coverage criteria in software testing has motivated researchers to adapt them to the monitoring of service compositions. However, the passive nature of monitoring and the adaptive nature of service compositions can negatively influence the adequacy of monitoring, thereby limiting confidence in the quality of the service composition. To enhance the coverage adequacy of self-adaptive service compositions at runtime, this thesis investigates how to combine runtime monitoring and online testing, where online testing means testing a service composition in parallel to its actual usage and operation. First, we introduce an approach for determining valid execution traces of service compositions at runtime. The approach considers execution traces from both monitoring and (online) testing, and it accounts for modifications in both the workflow and the constituent services of a service composition. Second, we define coverage criteria for service compositions. The criteria consider the execution plans of a service composition for coverage assessment and address the coverage of both individual abstract services and the overall service composition. Third, we introduce online test-case prioritization techniques to achieve faster coverage of a service composition. The techniques exploit the coverage of a service composition achieved by both monitoring and online testing, the execution time of test cases, and the usage model of the service composition. Fourth, we introduce PROSA, a framework for monitoring and online testing of services and service compositions that provides technical support for the aforementioned contributions. We evaluate the contributions of this thesis using service compositions frequently employed in service-oriented computing research.
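    One plausible reading of the prioritization contribution is a cost-aware greedy scheme: always run the test that adds the most not-yet-covered elements per unit of execution time, counting elements already covered by monitoring as covered. The sketch below is a generic additional-coverage heuristic under that assumption, not PROSA's exact algorithm; the test names and coverage elements are invented.

```python
# Illustrative greedy prioritization of online test cases: repeatedly pick
# the test with the best ratio of not-yet-covered elements to execution
# time. A generic cost-cognizant heuristic, not PROSA's exact algorithm.
def prioritize(tests, already_covered):
    """tests: list of (name, covered_elements, exec_time_seconds)."""
    covered = set(already_covered)  # e.g. elements observed by monitoring
    remaining = list(tests)
    order = []
    while remaining:
        def gain_rate(t):
            _, elems, cost = t
            return len(set(elems) - covered) / cost
        best = max(remaining, key=gain_rate)
        if gain_rate(best) == 0:
            break  # no remaining test adds coverage
        order.append(best[0])
        covered |= set(best[1])
        remaining.remove(best)
    return order

tests = [
    ("t1", {"planA.s1", "planA.s2"}, 2.0),
    ("t2", {"planA.s2", "planB.s1"}, 1.0),
    ("t3", {"planB.s2"}, 4.0),
]
# monitoring already observed planA.s1, so t2 gives the best gain rate
print(prioritize(tests, already_covered={"planA.s1"}))
```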

    Trust and Reputation Management: a Probabilistic Approach

    Software architectures of large-scale systems are perceptibly shifting towards open and distributed computing. Web services have emerged as autonomous and self-contained business applications that are published, found, and used over the web. These web services exist in an environment in which they interact with each other to achieve their goals. Two challenging tasks that govern these interactions have gained the attention of a large research community: web service selection and composition. The explosion of the number of published web services has contributed to the growth of large pools of similarly functional services. While this is vital for a competitive and healthy marketplace, it complicates both tasks. Service consumers resort to the non-functional characteristics of available service providers to decide which service to interact with. Therefore, to optimize both tasks and maximize the gain of all involved agents, it is essential to be able to model and predict the quality of these agents. In this thesis, we propose trust and reputation models based on probabilistic approaches to address the web service selection and composition problems. These approaches consider the trustworthiness of a web service to be strongly tied to the outcomes of various quality-of-service metrics such as response time, throughput, and reliability. We represent these outcomes by a multinomial distribution whose parameters are learned using Bayesian inference, which, given a likelihood function and a prior probability, derives the posterior probability. Since the likelihood in this case is multinomial, a commonly used prior is the Dirichlet distribution. To overcome several limitations of the Dirichlet, we apply two alternative priors: the generalized Dirichlet and the Beta-Liouville distributions. Using these distributions, the learned parameters represent the probabilities that a web service belongs to each of the considered quality classes. These probabilities are then used to compute the trustworthiness of the evaluated web services, thus assisting consumers in the service selection process. Furthermore, after exploring the correlations among various quality metrics using real data sets, we introduce a hybrid trust model that captures these correlations using both the Dirichlet and generalized Dirichlet distributions; given their covariance structures, the former performs better when modeling negative correlations, while the latter yields better modeling of positive correlations. To handle composite services, we propose trust approaches using Bayesian networks and mixture models of three distributions: the multinomial Dirichlet, the multinomial generalized Dirichlet, and the multinomial Beta-Liouville. Specifically, we employ a Bayesian network classifier with a Beta-Liouville prior to enable the classification of the QoS of composite services given the QoS of their constituents. In addition, we extend the previous models to function in online settings: we present a generalized-Dirichlet power steady model that predicts compositional time series, and we extend the Bayesian network model with the Voting EM algorithm, which enables the estimation of the network's parameters after each interaction with a composite web service. Furthermore, we propose an algorithm to estimate the reputation of web services, and we extend it by leveraging clustering and outlier detection techniques to deal with malicious feedback and the strategic behaviours commonly exhibited by web services. Alternatively, we suggest two data fusion methods for reputation feedback aggregation, namely covariance intersection and ellipsoidal intersection. These methods handle the dependency between the information that propagates through networks of interacting agents, and they avoid overconfident estimates caused by redundant information. Finally, we present a reputation model for agent-based web services grouped into communities of homogeneous functionality. We exploit clustering and anomaly detection techniques to analyze and identify the quality trends provided by each service, enabling the master of each community to allocate incoming requests to the web service that best fulfills the quality requirements of the service consumers. We evaluate the effectiveness of the proposed approaches using both simulated and real data.
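    The baseline multinomial-Dirichlet model admits a compact sketch: by conjugacy, the posterior update is a per-class addition of observed counts to the prior, and the posterior mean gives the class probabilities from which a trust score can be derived. The quality classes, prior, and class weights below are illustrative assumptions; the thesis's generalized Dirichlet and Beta-Liouville priors refine exactly this step.

```python
# Minimal conjugate multinomial-Dirichlet update for QoS trust: observed
# outcomes per quality class update a Dirichlet prior, and the posterior
# mean gives class probabilities. Classes and weights are assumptions.
def dirichlet_posterior_mean(prior, counts):
    """prior, counts: per-class concentration parameters / observations."""
    posterior = [a + n for a, n in zip(prior, counts)]
    total = sum(posterior)
    return [p / total for p in posterior]

# quality classes: [good, average, poor]; uniform (uninformative) prior
prior = [1.0, 1.0, 1.0]
counts = [40, 8, 2]        # observed interaction outcomes per class
probs = dirichlet_posterior_mean(prior, counts)
weights = [1.0, 0.5, 0.0]  # assumed payoff of each quality class
trust = sum(w * p for w, p in zip(weights, probs))
print(probs, round(trust, 3))
```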

    TOOL-ASSISTED VALIDATION AND VERIFICATION TECHNIQUES FOR STATE-BASED FORMAL METHODS

    To tackle the growing complexity of developing modern software systems, which are increasingly embedded, distributed, and safety-critical in nature, formal methods (FMs) have been affirmed as an effective approach to ensure the quality and correctness of the design, permitting errors to be discovered at the early stages of system development. Among the many FMs available, some can be described as state-based, since they describe systems using the notions of state and transitions between states. State-based FMs are sometimes preferred because they produce specifications that are more intuitive, the notions of state and transition being close to the notions of program state and program execution that are familiar to any developer. Moreover, state-based FMs are usually executable and can be simulated, providing an abstraction of the execution of the system under development. The aim of this thesis is to provide tool-assisted techniques that ease the adoption of state-based FMs. In particular, we address four main goals. 1) Identifying a process for the development of an integrated framework around a formal method. The adoption of a formal method is often prevented by the lack of tools that support the user in the different development activities, such as model editing, validation, and verification. Moreover, even when tools are available, they have usually been developed to target only one aspect of the system development process. A well-engineered process that helps in developing concrete notations and tools for an FM can therefore make FMs practically applicable. 2) Promoting the integration of different FMs. Having a single formal notation for the different formal activities during system development is preferable to having a different notation for each activity, and such a notation should be high-level: working with high-level notations is decidedly easier than working with low-level ones, and the resulting specifications are usually more readable. This goal can be seen as a sub-goal of the first: a framework around a formal method should also make it possible to integrate other formal methods that better address particular formal activities. 3) Helping the user write correct specifications. The basic assumption of any formal technique is that the specification, representing the desired properties of the system or the model of the system, is correct; if the specification is not correct, all verification activities based on it produce meaningless results. Validation techniques should therefore ensure that the specification reflects the intended requirements; besides traditional simulation (user-guided or scenario-based), model review techniques that check for common quality attributes any specification should have are a viable solution. 4) Reducing the distance between the formal specification and the actual implementation of the system. Several FMs work on a formal description of the system that is assumed to reflect the actual implementation; in practice, however, the two may not conform. One solution is to obtain the implementation from the formal specification through refinement steps and to prove that the refinement steps are correct. A different viable solution is to link the implementation with its formal specification and check, during program execution, whether they conform.
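    That last idea, checking conformance between a running program and its state-based specification, can be sketched in a few lines. The traffic-light specification and the monitored class below are invented examples, not the thesis's tooling.

```python
# Sketch of runtime conformance checking: the implementation is linked to
# a state-based specification, and every transition the program performs
# is checked against it. The traffic-light spec is an invented example.
ALLOWED = {  # abstract specification: state -> permitted next states
    "red": {"green"},
    "green": {"yellow"},
    "yellow": {"red"},
}

class ConformanceError(Exception):
    pass

class MonitoredLight:
    def __init__(self):
        self.state = "red"

    def switch(self, new_state):
        # runtime conformance check against the specification
        if new_state not in ALLOWED[self.state]:
            raise ConformanceError(
                f"{self.state} -> {new_state} not allowed by the spec")
        self.state = new_state

light = MonitoredLight()
for s in ["green", "yellow", "red"]:
    light.switch(s)
print("observed trace conforms to the specification")
```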