13 research outputs found

    A Distributed Security Architecture for Large Scale Systems

    This thesis describes the research leading from the conception, through development, to the practical implementation of a comprehensive security architecture for use within, and as a value-added enhancement to, the ISO Open Systems Interconnection (OSI) model. The Comprehensive Security System (CSS) is arranged primarily as an Application Layer service but can allow any of the ISO-recommended security facilities to be provided at any layer of the model. It is suitable as an 'add-on' service to existing arrangements or can be fully integrated into new applications. For large-scale, distributed processing operations, a network of security management centres (SMCs) is suggested, which can help to ensure that system misuse is minimised and that flexible operation is provided in an efficient manner. The background to the OSI standards is covered in detail, followed by an introduction to security in open systems. A survey of existing techniques in formal analysis and verification is then presented. The architecture of the CSS is described in terms of a conceptual model using agents and protocols, followed by an extension of the CSS concept to a large-scale network controlled by SMCs. A new approach to formal security analysis is described, based on two main methodologies. Firstly, every function within the system is built from layers of provably secure sequences of finite state machines, using a recursive function to monitor and constrain the system to the desired state at all times. Secondly, the correctness of the protocols generated by the sequences to exchange security information and control data between agents in a distributed environment is analysed in terms of a modified temporal Hoare logic. This is based on ideas concerning the validity of beliefs about the global state of a system as a result of actions performed by entities within the system, including the notion of timeliness. The two fundamental problems in number theory upon which the assumptions about the security of the finite state machine model rest are described, together with a comprehensive survey of the latest progress in this area. Having assumed that the two problems will remain computationally intractable for the foreseeable future, the method is then applied to the formal analysis of some of the components of the Comprehensive Security System. A practical implementation of the CSS has been achieved as a demonstration system for a network of IBM Personal Computers connected via an Ethernet LAN, which fully meets the aims and objectives set out in Chapter 1. This implementation is described, and finally some comments are made on the possible future of research into security aspects of distributed systems. IBM (United Kingdom) Laboratories, Hursley Park, Winchester, U.K.
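    The abstract describes constraining the system to a desired state through layered finite state machines and a recursive monitoring function. As a loose, hedged illustration of that general idea only, and not the thesis's formal construction, the following Python toy sketch refuses any transition that is not explicitly permitted; all state names, event names, and the transition table are hypothetical.

```python
# Hedged toy sketch of constraining a system to allowed states with a finite
# state machine and a recursive monitor. NOT the thesis's formal construction;
# states, events, and transitions below are hypothetical.
ALLOWED_TRANSITIONS = {
    ("idle", "authenticate"): "authenticated",
    ("authenticated", "request_service"): "serving",
    ("serving", "release"): "idle",
}

def monitor(state: str, events: list[str]) -> str:
    """Recursively apply events, rejecting any transition not explicitly allowed."""
    if not events:
        return state
    head, *rest = events
    next_state = ALLOWED_TRANSITIONS.get((state, head))
    if next_state is None:
        raise ValueError(f"transition {head!r} from state {state!r} is not permitted")
    return monitor(next_state, rest)

print(monitor("idle", ["authenticate", "request_service", "release"]))  # -> "idle"
```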

    Adopting Automated Bug Assignment in Practice: A Longitudinal Case Study at Ericsson

    The continuous inflow of bug reports is a considerable challenge in large development projects. Inspired by contemporary work on mining software repositories, we designed a prototype bug assignment solution based on machine learning in 2011-2016. The prototype evolved into an internal Ericsson product, TRR, in 2017-2018. TRR's first bug assignment without human intervention happened in April 2019. Our study evaluates the adoption of TRR within its industrial context at Ericsson. Moreover, we investigate 1) how TRR performs in the field, 2) what value TRR provides to Ericsson, and 3) how TRR has influenced the ways of working. We conduct an industrial case study combining interviews with TRR stakeholders, minutes from sprint planning meetings, and bug tracking data. The data analysis includes thematic analysis, descriptive statistics, and Bayesian causal analysis. TRR is now an incorporated part of the bug assignment process. Considering the abstraction levels of the telecommunications stack, high-level modules were more positive, while low-level modules experienced some drawbacks. On average, TRR automatically assigns 30% of the incoming bug reports with an accuracy of 75%. Auto-routed TRs are resolved around 21% faster within Ericsson, and TRR has saved highly seasoned engineers many hours of work. Indirect effects of adopting TRR include process improvements, process awareness, increased communication, and higher job satisfaction. TRR has saved time at Ericsson, but the adoption of automated bug assignment was more intricate compared to similar endeavors reported from other companies. We primarily attribute the difference to the very large size of the organization and the complex products. Key facilitators in the successful adoption include a gradual introduction, product champions, and careful stakeholder analysis. Comment: Under review.
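    The abstract reports that TRR auto-assigns about 30% of incoming reports at 75% accuracy, which suggests a confidence-gated classifier. As a hedged sketch only, not TRR's actual design, the following Python example shows how a confidence threshold can trade assignment coverage against accuracy; scikit-learn, the toy data, the 0.6 threshold, and the function names are assumptions.

```python
# Hedged sketch of confidence-thresholded automatic bug assignment (NOT TRR's
# actual design). Reports the model is confident about are auto-routed; the
# rest are deferred to human triage.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical bug reports and the teams that resolved them.
train_texts = [
    "crash in baseband scheduler under load",
    "kernel panic in radio driver after handover",
    "web dashboard button misaligned in Firefox",
    "login page shows wrong error message",
]
train_teams = ["baseband", "baseband", "frontend", "frontend"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_teams)

CONFIDENCE_THRESHOLD = 0.6  # hypothetical cut-off; tuned on validation data in practice

def assign_or_defer(report_text: str) -> str:
    """Return a team name if the prediction is confident enough, else defer."""
    probabilities = model.predict_proba([report_text])[0]
    best = probabilities.argmax()
    if probabilities[best] >= CONFIDENCE_THRESHOLD:
        return str(model.classes_[best])
    return "MANUAL_TRIAGE"

print(assign_or_defer("radio driver crashes after handover"))
```

    Raising the threshold lowers the share of auto-routed reports but tends to increase the accuracy of those that are routed, which is one plausible way to arrive at a coverage/accuracy trade-off like the 30%/75% figures reported above.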

    Management for Bachelors

    The textbook contains an educational module that covers the content of the main regulatory disciplines for training specialists in the field 6.030601 “Management” within the knowledge branch 03.06 “Management and Administration” at the “Bachelor” educational and qualification level. The content of the disciplines fully conforms to the curricula approved by the scientific and methodological commission on management and agrees with the logical and structural scheme of the educational process. The textbook covers almost all aspects of bachelor training. Each chapter contains questions for self-control and a list of recommended literature. The chapters draw on the results of fundamental and applied scientific research into the evaluation, forecasting, and management of the economic potential of complex industrial systems.

    Teaching and learning introductory programming: a model-based approach

    The dissertation identifies and discusses the impact of a model-based approach to teaching and learning introductory object-oriented programming, both for practitioners and for computer science education research. Learning to program is notoriously difficult. This dissertation investigates ways to teach introductory object-oriented programming at the university level. It focuses on a model-based approach, describes and argues for this approach, and investigates several of its aspects. It gives an overview of the research in teaching introductory programming in an objects-first way. The dissertation also investigates ways for university teachers to share and document best practices in teaching introductory object-oriented programming through pedagogical patterns. The dissertation addresses both traditional young full-time students and experienced programmers (although not experienced in object-orientation) participating in part-time education. It examines whether the same success factors for learning programming apply to a model-based approach as to introductory programming courses in general for full-time students, and gives a general overview of research on success factors for introductory programming. Some factors are the same; for example, students' math competence is positively correlated with their success. The dissertation examines how experienced programmers link a model-based programming course to their professional practices. The general answer is that the part-time students do not need a direct link to their specific work practice; they expect to create the link themselves, but the teacher must be aware of the conditions facing the part-time students in industry. Furthermore, the dissertation addresses interaction patterns for part-time students learning model-based introductory programming in a net-based environment. A previously prepared solution to an exercise is found to mediate the interaction in three different ways. Design patterns have had a major impact on the quality of object-oriented software. Inspired by this, researchers have suggested pedagogical patterns for sharing best practices in teaching introductory object-oriented programming. It was expected that university teachers' knowledge of pedagogical patterns was limited, but this research proved that to be wrong; about half of the teachers know pedagogical patterns. One of the problems this dissertation identifies is the lack of a structuring principle for pedagogical patterns; potential users have problems identifying the correct patterns to apply. An alternative structuring principle based on a constructivist learning theory is suggested and analysed.

    Analysis and Transformation of Configurable Systems

    Static analysis tools and transformation engines for source code belong to the standard equipment of a software developer. Their use significantly simplifies a developer's everyday work of maintaining and evolving software systems and, hence, accounts for much of a developer's programming efficiency and productivity. This is also beneficial from a financial point of view, as programming errors are detected and avoided early in the development process; thus, the use of static analysis tools considerably reduces the overall software-development costs. In practice, software systems are often developed as configurable systems to account for different requirements of application scenarios and use cases. To implement configurable systems, developers often use compile-time implementation techniques, such as preprocessors with #ifdef directives. Configuration options control the inclusion and exclusion of #ifdef-annotated source code, and their selection/deselection serves as input for generating tailor-made system variants on demand. Existing configurable systems, such as the Linux kernel, often provide thousands of configuration options, forming a huge configuration space with billions of system variants. Unfortunately, existing tool support cannot handle the myriads of system variants that can typically be derived from a configurable system. Analysis and transformation tools are not prepared for variability in source code and, hence, may process it incorrectly, resulting in incomplete and often broken tool support. We challenge the way configurable systems are analyzed and transformed by introducing variability-aware static analysis tools and a variability-aware transformation engine for configurable systems' development. The main idea of such tool support is to exploit commonalities between system variants, reducing the effort of analyzing and transforming a configurable system. In particular, we develop novel analysis approaches for analyzing the myriads of system variants and compare them to state-of-the-art analysis approaches (namely sampling). The comparison shows that variability-aware analysis is complete (with respect to covering the whole configuration space), efficient (it outperforms some of the sampling heuristics), and scales even to large software systems. We demonstrate that variability-aware analysis is practical even for non-trivial case studies, such as the Linux kernel. On top of variability-aware analysis, we develop a transformation engine for C, which respects the variability induced by the preprocessor. The engine provides three common refactorings (rename identifier, extract function, and inline function) and overcomes the shortcomings (completeness, use of heuristics, and scalability issues) of existing engines, while still being semantics-preserving with respect to all variants and being fast, providing an instantaneous user experience. To validate semantics preservation, we extend a standard testing approach for refactoring engines with variability and show in real-world case studies the effectiveness and scalability of our engine. In the end, our analysis and transformation techniques show that configurable systems can be analyzed and transformed efficiently (even for large-scale systems), providing the same guarantees for configurable systems as for standard systems in terms of detecting and avoiding programming errors.
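    To make the #ifdef-based variability concrete, here is a hedged Python toy model, not the dissertation's tooling: code fragments carry presence conditions over configuration options, one variant is derived per configuration, and brute-force enumeration of all 2^n configurations illustrates the blow-up that variability-aware analysis avoids by reasoning over presence conditions once. Option names and fragments are hypothetical.

```python
# Hedged toy model of #ifdef-style variability (not the dissertation's engine).
# Each fragment carries a presence condition over boolean configuration options;
# a variant is the set of fragments whose conditions hold for one configuration.
from itertools import product

OPTIONS = ["SMP", "DEBUG", "NET"]  # hypothetical configuration options

# (code fragment, presence condition); a None condition means "always included".
FRAGMENTS = [
    ("init_core();", None),
    ("init_secondary_cpus();", lambda cfg: cfg["SMP"]),
    ("enable_tracing();", lambda cfg: cfg["DEBUG"]),
    ("start_network_stack();", lambda cfg: cfg["NET"] and not cfg["DEBUG"]),
]

def derive_variant(cfg):
    """Return the source lines included for one configuration."""
    return [code for code, cond in FRAGMENTS if cond is None or cond(cfg)]

# Enumerating every configuration visits 2**n variants, which is exactly the
# exponential effort that variability-aware analysis sidesteps.
all_configs = [dict(zip(OPTIONS, values))
               for values in product([False, True], repeat=len(OPTIONS))]
print(len(all_configs), "variants for", len(OPTIONS), "options")
print(derive_variant({"SMP": True, "DEBUG": False, "NET": True}))
```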

    Safety and Reliability - Safe Societies in a Changing World

    The contributions cover a wide range of methodologies and application areas for safety and reliability that contribute to safe societies in a changing world. These methodologies and applications include: foundations of risk and reliability assessment and management; mathematical methods in reliability and safety; risk assessment; risk management; system reliability; uncertainty analysis; digitalization and big data; prognostics and system health management; occupational safety; accident and incident modeling; maintenance modeling and applications; simulation for safety and reliability analysis; dynamic risk and barrier management; organizational factors and safety culture; human factors and human reliability; resilience engineering; structural reliability; natural hazards; security; and economic analysis in risk management.

    Continuous Rationale Management

    Continuous Software Engineering (CSE) is a software life cycle model open to frequent changes in requirements or technology. During CSE, software developers continuously make decisions on the requirements and design of the software or the development process. They establish essential decision knowledge, which they need to document and share so that it supports the evolution and changes of the software. The management of decision knowledge is called rationale management. Rationale management provides an opportunity to support the change process during CSE. However, rationale management is not well integrated into CSE. The overall goal of this dissertation is to provide workflows and tool support for continuous rationale management. The dissertation contributes an interview study with practitioners from industry, which investigates rationale management problems, current practices, and features to support continuous rationale management that are beneficial for practitioners. Problems of rationale management in practice are threefold: First, documenting decision knowledge is intrusive in the development process and an additional effort. Second, the large amount of distributed decision knowledge documentation is difficult to access and use. Third, the documented knowledge can be of low quality, e.g., outdated, which impedes its use. The dissertation contributes a systematic mapping study on recommendation and classification approaches to treat the rationale management problems. The major contribution of this dissertation is a validated approach for continuous rationale management consisting of the ConRat life cycle model extension and the comprehensive ConDec tool support. To reduce intrusiveness and additional effort, ConRat integrates rationale management activities into existing workflows, such as requirements elicitation, development, and meetings. ConDec integrates into standard development tools instead of providing a separate tool. ConDec enables lightweight capturing and use of decision knowledge from various artifacts and reduces the developers' effort through automatic text classification, recommendation, and nudging mechanisms for rationale management. To enable access and use of distributed decision knowledge documentation, ConRat defines a knowledge model of decision knowledge and other artifacts. ConDec instantiates the model as a knowledge graph and offers interactive knowledge views with useful tailoring, e.g., transitive linking. To operationalize high quality, ConRat introduces the rationale backlog, the definition of done for knowledge documentation, and metrics for intra-rationale completeness and decision coverage of requirements and code. ConDec implements these agile concepts for rationale management and a knowledge dashboard. ConDec also supports consistent changes through change impact analysis. The dissertation shows the feasibility, effectiveness, and user acceptance of ConRat and ConDec in six case study projects in an industrial setting. In addition, it comprehensively analyses the rationale documentation created in the projects. The validation indicates that ConRat and ConDec benefit CSE projects. Based on the dissertation, continuous rationale management should become a standard part of CSE, like automated testing or continuous integration.
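    As a hedged, conceptual illustration of what a decision knowledge graph with transitive linking and a decision-coverage check might look like, and explicitly not ConDec's actual data model, consider this small Python sketch; every element name and the coverage heuristic are hypothetical.

```python
# Hedged toy sketch of a decision knowledge graph (NOT ConDec's actual model).
# Nodes are knowledge elements (requirements, issues, decisions, code files);
# "transitive linking" here means collecting elements reachable over several hops.
from collections import deque

# Hypothetical knowledge elements and links between them.
LINKS = {
    "REQ-1: export reports": ["ISSUE-7: which export format?"],
    "ISSUE-7: which export format?": ["DECISION-3: use CSV export"],
    "DECISION-3: use CSV export": ["csv_exporter.py"],
    "csv_exporter.py": [],
}

def transitively_linked(start, links):
    """Collect every element reachable from `start` (a transitive view)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(links.get(node, []))
    return seen - {start}

# One way to read "decision coverage" of a requirement: is at least one
# DECISION element transitively linked to it?
reachable = transitively_linked("REQ-1: export reports", LINKS)
print(any(node.startswith("DECISION") for node in reachable), sorted(reachable))
```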

    Time Localization of Abrupt Changes in Cutting Process using Hilbert Huang Transform

    The cutting process is an extremely dynamic process influenced by different phenomena such as chip formation, dynamic responses, and the condition of machining system elements. Different phenomena in the cutting zone leave signatures in different frequency bands of the signal acquired during process monitoring. The time localization of the signal's frequency content is therefore very important. An emerging technique for simultaneous analysis of a signal in the time and frequency domains that can be used for time localization of frequency content is the Hilbert-Huang Transform (HHT). It is based on empirical mode decomposition (EMD) of the signal into intrinsic mode functions (IMFs), which are simple oscillatory modes. The IMFs obtained using EMD can be processed with the Hilbert transform, and the instantaneous frequency of the signal can be computed. This paper presents a methodology for the time localization of a cutting process stop during intermittent turning. A cutting process stop leads to abrupt changes in the acquired signal that are correlated with a certain frequency band. The frequency band related to the abrupt changes is localized in time using HHT. The potential and limitations of applying HHT in machining process monitoring are shown.
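    As a hedged illustration of the HHT pipeline outlined above (EMD into IMFs, then the Hilbert transform for instantaneous quantities), the following Python sketch localizes an abrupt stop in a synthetic signal. The PyEMD package, the sampling rate, the synthetic signal, and the envelope-drop heuristic are assumptions for illustration, not the paper's setup.

```python
# Hedged sketch of an HHT-style analysis: EMD into IMFs, then the Hilbert
# transform for instantaneous amplitude and frequency. Uses the third-party
# PyEMD package (pip install EMD-signal) plus NumPy/SciPy; all parameters
# and the synthetic signal below are assumptions.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD

fs = 1000.0                                    # hypothetical sampling rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
# Hypothetical monitoring signal: a 50 Hz component that stops abruptly at
# t = 0.6 s, standing in for a cutting process stop during intermittent turning.
signal = np.sin(2 * np.pi * 50 * t) * (t < 0.6) + 0.1 * np.random.randn(t.size)

imfs = EMD().emd(signal)                       # empirical mode decomposition
imf = imfs[0]                                  # highest-frequency oscillatory mode

analytic = hilbert(imf)                        # analytic signal via Hilbert transform
amplitude = np.abs(analytic)                   # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency [Hz]

# Crude localization heuristic: the largest drop in the IMF's envelope marks
# the abrupt change in the frequency band carried by this mode.
change_index = int(np.argmax(-np.diff(amplitude)))
print(f"abrupt change detected near t = {t[change_index]:.3f} s")
```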